diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzkoos" "b/data_all_eng_slimpj/shuffled/split2/finalzzkoos" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzkoos" @@ -0,0 +1,5 @@ +{"text":"\\section{}\n\n\\section{Introduction}\nElectron impact excitation of nitrogen molecules plays an \nimportant role in atmospheric emission of planets and satellites \nsuch as the Earth, Titan and Triton. \nFor example, excitation of the ${a}^{1} \\Pi_g$ state \nand subsequent transitions to the ground ${X}^{1} \\Sigma_g^+$ \nstate are responsible for the far ultraviolet emissions of the \nLyman-Birge-Hopfield system which is prominent in \nthe airglow of the Earth's atmosphere\\cite{SpaceScienceRev.58.1}. \nRecently, Khakoo et al.\\cite{2005PhRvA..71f2703K} measured \ndifferential cross sections (DCSs) \nof electron impact excitation of N$_2$ molecule from the ground \n${X}^1 \\Sigma^{+}_{g}$ state to the 8 lowest excited electronic \nstates of \n$A^{3} \\Sigma_{u}^{+}$, $B^{3} \\Pi_{g}$, $W^{3} \\Delta_{u}$, \n${B'}^{3} \\Sigma_{u}^{-}$, ${a'}^{1} \\Sigma_{u}^{-}$, \n$a^{1} \\Pi_{g}$, $w^{1} \\Delta_{u}$ \nand $C^{3} \\Pi_{u}$ states. \nBased on their differential cross section data, \nJohnson et al.\\cite{2005JGRA..11011311J} \nderived integral cross sections (ICSs) \nfor these electron impact excitations. \nIn general, their ICSs are smaller than the \nother experimental cross sections at low impact energies below \n30 eV. These deviations may have some significance on \nstudy of atmospheric emissions, because a mean kinetic energy of \nelectron at high altitudes is about 10 eV\\cite{Wayne2000}. \nTo shed light on this situation from a theoretical point of view, \nwe perform the ab initio R-matrix calculations of \nelectron impact excitations of N$_2$ molecule in this work. \n\nMany previous experimental measurements have been focused on \nexcitation to a specific electronic state. \nFor example, Ajello and Shemansky\\cite{JGeophysResSpacePhys.90.9845} and \nMason and Newell\\cite{JPhysB.20.3913} \nmeasured ICSs for electron impact excitation to the ${a}^1 \\Pi_g$ state, \nwhereas Poparic et al.\\cite{ChemPhys.240.283}, \nZubek\\cite{JPhysB.27.573} and \nZubek and King\\cite{JPhysB.27.2613} measured \ncross sections for the ${C}^3 \\Pi_u$ state. \nIn addition to these works, Zetner and Trajmar \\cite{Zetner1987}\nreported excitation cross sections to \nthe ${A}^{3} \\Sigma_{u}^{+}$, $B^{3} \\Pi_{g}$, \n$W^{3} \\Delta_{u}$ and $a^{1} \\Pi_{g}$ states. \nSo far, comprehensive measurements of the excitation to\nthe 8 lowest electronic states are limited to three groups of \nCartwright et al. \\cite{PhysRevA.16.1013}, \nBrunger and Teubner \\cite{PhysRevA.41.1413} and \nKhakoo et al.\\cite{2005PhRvA..71f2703K}. \nThe measurements of Brunger and Teubner \\cite{PhysRevA.41.1413} include \nexcitation DCSs for the ${E}^{3} \\Sigma_{g}^{+}$ and \n${a''}^{1} \\Sigma_{g}^{+}$ states in addition to the 8 lowest excited states. \nThe DCSs of Brunger and Teubner\\cite{PhysRevA.41.1413} and \nKhakoo et al.\\cite{2005PhRvA..71f2703K} were later converted \nto ICSs by Campbell et al.\\cite{JPhysB.34.1185} and \nJohnson et al.\\cite{2005JGRA..11011311J}, respectively. \nDetailed reviews on electron N$_2$ collisions \ncan be found in Itikawa\\cite{JPhysChemRefData.35.31} \nand Brunger and Buckman\\cite{Br02}. \n\nSeveral groups have performed theoretical calculation of \nlow energy electron collisions with N$_2$ molecule. 
\nFor example, Chung and Lin\\cite{PhysRevA.6.988} employed the Born \napproximation to calculate excitation cross sections for the 11 target states \nincluding the $A^{3} \\Sigma_{u}^{+}$, $B^{3} \\Pi_{g}$, \n$W^{3} \\Delta_{u}$, $a^{1} \\Pi_{g}$, $w^{1} \\Delta_{u}$ \nand $C^{3} \\Pi_{u}$ states. \nLater, the same group of Holley et al.\\cite{PhysRevA.24.2946}\ncalculated excitation ICSs for the $a^{1} \\Pi_{g}$ state \nusing a two-state-close-coupling method. \nFliflet et al.\\cite{JPhysB.12.3281} and \nMu-Tao and McKoy\\cite{PhysRevA.28.697} reported \ndistorted-wave cross sections for excitation of \nthe $A^{3} \\Sigma_{u}^{+}$, $B^{3} \\Pi_{g}$, \n$W^{3} \\Delta_{u}$, $w^{1} \\Delta_{u}$, $C^{3} \\Pi_{u}$, \n$E^{3} \\Sigma_{g}^{+}$, ${b'}^{1} \\Sigma_{u}^{+}$ and \n${c'}^{1} \\Sigma_{u}^{+}$ states. \nIn general, these approximate methods are expected to \nbe accurate at high impact energies above 30 eV. \nHowever, more elaborate method is required for precise comparison \nwith experiment at low energies. \nGillan et al.\\cite{JPhysB.23.L407} calculated excitation ICSs for \nthe $A^{3} \\Sigma_{u}^{+}$, $B^{3} \\Pi_{g}$ and \n$W^{3} \\Delta_{u}$ states using the fixed nuclei R-matrix method. \nThey included the 4 lowest target states in their R-matrix model, \nwith target CI wave functions containing 2-13 CSFs. \nTheir cross sections for the $A^{3} \\Sigma_{u}^{+}$ and \n$W^{3} \\Delta_{u}$ states agree well with the experimental results \nof Cartwright et al. \\cite{PhysRevA.16.1013}. \nHowever, ICSs for the $B^{3} \\Pi_{g}$ state \ndeviate considerably from the experimental cross sections. \nSubsequently, they extended their R-matrix model to include \nthe 8 lowest valence states\\cite{JPhysB.29.1531}. \nTheir target CI wave functions were much improved from \ntheir previous work by employing valence active space description, \nresulting in 68-120 CSFs per target state. \nIn their paper, the ICSs were shown for the $A^{3} \\Sigma_{u}^{+}$, \n$B^{3} \\Pi_{g}$, $W^{3} \\Delta_{u}$ and ${B'}^{3} \\Sigma_{u}^{-}$ states, \nwhile the DCSs were presented for only the $A^{3} \\Sigma_{u}^{+}$ state. \nAgreement with the ICSs of Cartwright et al.\\cite{PhysRevA.16.1013} \nis good for these 4 excited states. \nHowever, agreement is marginal at DCS level. \n\nIn this work, we study electron impact excitation of \nN$_2$ molecule by the fixed nuclei R-matrix method \nas in our previous work on electron O$_2$ \nscatterings \\cite{PhysRevA.73.052707, 2006TASHIRO-2}. \nAlthough theoretical treatment is similar to the previous work of \nGillan et al.\\cite{JPhysB.29.1531}, more target states and \npartial waves of a scattering electron are included in the present work. \nMain purpose of this work is comparison of ICSs as well as DCSs \nfor the 8 lowest excited states \nwith the experimental results of Cartwright et al. \\cite{PhysRevA.16.1013}, \nBrunger and Teubner\\cite{PhysRevA.41.1413}, \nCampbell et al.\\cite{JPhysB.34.1185}, Khakoo et al. \\cite{2005PhRvA..71f2703K} \nand Johnson et al.\\cite{2005JGRA..11011311J}. \nThis is because previous theoretical works have covered only a part of \nthese 8 excitations. \n\nIn this paper, details of the calculation are presented in section 2, \nand we discuss the results in section 3 comparing our ICSs and DCSs with \nthe previous theoretical and available experimental data. \nThen summary is given in section 4. 
\n\n\\clearpage\n\n\n\\section{Theoretical methods}\n\nThe R-matrix method itself has been described extensively in the literature \n\\cite{Bu05,Go05,Mo98} as well as \nin our previous paper\\cite{PhysRevA.73.052707}. \nThus we do not repeat general explanation of the method here. \nWe used a modified version of the polyatomic programs in the UK molecular \nR-matrix codes \\cite{Mo98}. \nThese programs utilize the gaussian type orbitals (GTO) to \nrepresent target electronic states as well as a scattering electron. \nAlthough most of the previous R-matrix works in electron N$_2$ collisions \nhave employed Slater type orbitals (STO), we select GTO mainly because of \nsimplicity of the input and availability of basis functions. \nIn the R-matrix calculations, we have included 13 target states; \n${X}^1 \\Sigma^{+}_{g}$, $A^{3} \\Sigma_{u}^{+}$, $B^{3} \\Pi_{g}$, \n$W^{3} \\Delta_{u}$, ${B'}^{3} \\Sigma_{u}^{-}$, ${a'}^{1} \\Sigma_{u}^{-}$, \n$a^{1} \\Pi_{g}$, $w^{1} \\Delta_{u}$, $C^{3} \\Pi_{u}$, \n${E}^{3} \\Sigma_{g}^{+}$, ${a''}^{1} \\Sigma_{g}^{+}$, \n$c^{1} \\Pi_{u}$ and ${c'}^{1} \\Sigma_{u}^{+}$. \nThe potential energy curves of these target electronic states are\nshown in figure \\ref{fig1} for reference. \nThese target states were represented by valence configuration interaction \nwave functions constructed by state averaged complete active space SCF \n(SA-CASSCF) orbitals. \nNote that some target states, ${E}^{3} \\Sigma_{g}^{+}$, \n${a''}^{1} \\Sigma_{g}^{+}$ and ${c'}^{1} \\Sigma_{u}^{+}$, \nare Rydberg states and cannot be described \nadequately in the present valence active space. \nInclusion of these states are intended to improve quality of \nthe R-matrix calculations by adding more target states in the \nmodel, as in our previous works \\cite{PhysRevA.73.052707,2006TASHIRO-2} \nas well as other R-matrix works \\cite{No92,Hi94}. \nTest calculation was performed with an extra $4 a_g$ orbital in the target \norbital set. However, the target excitation energies as well as the \nexcitation cross sections did not change much compared to the results \nwith valence orbital set described above. \nAlso, removal of $3 b_{1u}$ orbital from target active space \ndid not affect the result much in our calculation. \nIn this study, the SA-CASSCF orbitals were obtained by calculations with \nMOLPRO suites of programs \\cite{molpro}. \nThe target orbitals were constructed from the [5s3p1d] level of \nbasis set taken from Sarpal et al. \\cite{Sa96}. \nIn our fixed-bond R-matrix calculations, the target states were \nevaluated at the equilibrium bond length $R$ = 2.068 a$_0$ \nof the N$_2$ ${X}^1\\Sigma^{+}_{g}$ ground electronic state. \nAlthough we also performed calculations with $R$ = 2.100 a$_0$ as in \nthe previous R-matrix calculation of Gillan et al.\\cite{JPhysB.29.1531}, \nthe cross sections with $R$ = 2.068 a$_0$ and $R$ = 2.100 a$_0$ are \nalmost the same. Thus, we will only show the results with \nthe equilibrium bond length of N$_2$ in the next section. \nThe radius of the R-matrix sphere was chosen to be 10 a$_0$ in our \ncalculations.\nIn order to represent the scattering electron, we included diffuse\ngaussian functions up to $l$ = 5, with 9 functions for $l$ = 0, 7 functions \nfor $l$ = 1 - 3 and 6 functions for $l$ = 4 and 5. \nExponents of these diffuse gaussians were fitted using the GTOBAS \nprogram \\cite{Fa02} in the UK R-matrix codes. \nIn addition to these continuum orbitals, we included 8 extra virtual \norbitals, one for each symmetry. 
\n\nWe constructed the 15-electron configurations from the orbitals\nlisted in table \\ref{tab0}. \nThe CI target wave functions are composed of the valence orbitals in \ntable \\ref{tab0} with the 1$a_g$ and 1$b_{1u}$ orbitals kept doubly \noccupied. \nWe have included 3 types of configurations in the calculation. \nThe first type of configurations has the form, \n\\begin{equation}\n1a_g^2 1b_{1u}^2 \\{ 2a_g 3 a_g 1 b_{2u} 1 b_{3u} 2 b_{1u} 3 b_{1u} \n1 b_{3g} 1 b_{2g} \\}^{10} \n\\left( {}^{1} A_{g} \\right) \\{5a_{g}...39a_{g} \\}^{1} \n\\left( {}^{2}A_g \\right), \n\\end{equation}\nhere we assume that the total symmetry of this 15 electrons system is\n${}^2A_g$. \nThe first 4 electrons are always kept in the 1$a_g$ and 1$b_{1u}$\norbitals, then the next 10 electrons are distributed over the valence\norbitals with restriction of target state symmetry, ${}^{1} A_{g}$\nsymmetry of the N$_2$ ground state in this case. \nThe last electron, the scattering electron, occupies one of the\ndiffuse orbitals, of $a_{g}$ symmetry in this example. \nTo complete the wave function with the total symmetry ${}^2A_g$, \nwe also have to include configurations with the other target states \ncombined with diffuse orbitals having appropriate symmetry in the same\nway as in the example. \nThe second type of configurations has the form, \n\\begin{equation} \n1a_g^2 1b_{1u}^2 \\{ 2a_g 3 a_g 1 b_{2u} 1 b_{3u} 2 b_{1u} 3 b_{1u} \n1 b_{3g} 1 b_{2g} \\}^{10} \n\\left( {}^{1} A_{g} \\right) \\{ 4a_{g} \\}^{1} \\left( {}^{2}A_g\n\\right),\n\\end{equation}\nwhere the scattering electron occupies a bound $4a_{g}$ extra virtual \norbital, instead of the diffuse continuum orbitals in the \nexpression (1). \nAs in table \\ref{tab0}, we included one extra virtual orbital for each\nsymmetry. \nThe third type of configurations has the form, \n\\begin{equation}\n1a_g^2 1b_{1u}^2 \\{ 2a_g 3 a_g 1 b_{2u} 1 b_{3u} 2 b_{1u} 3 b_{1u} 1\nb_{3g} 1 b_{2g} \\}^{11} \n\\left( {}^{2}A_g \\right).\n\\end{equation} \nIn this case, the last 11 electrons including the scattering electron \nare distributed over the valence orbitals with the restriction of \n${}^2A_g$ symmetry. \nNote that the third type of configurations are crucial in\ndescription of N$_2^-$ resonance states, which often have dominant\ncontributions to the excitation cross sections. \nIn this way, the number of configurations generated for a specific total \nsymmetry is typically about 60000, though the final dimension of the inner \nregion Hamiltonian \nis reduced to be about 600 by using CI target contraction and \nprototype CI expansion method \\cite{Te95}. \n\nThe R-matrix calculations were performed for all 8 irreducible \nrepresentations of the D$_{2h}$ symmetry, \n$A_g$, $B_{2u}$, $B_{3u}$, $B_{1g}$, $B_{1u}$, $B_{3g}$, $B_{2g}$ \nand $A_u$, in doublet spin multiplicity of the\nelectron plus target system. \nDCSs were evaluated in the same way as \nin our previous paper\\cite{2006TASHIRO-2}. \n\n\\clearpage\n\n\\section{Results and discussion}\n\n\\subsection{Excitation energies}\n\nFigure \\ref{fig1} shows the potential energy curves of all N$_2$ target \nstates included in the present R-matrix model. \nThese curves were obtained by the same SA-CASSCF method employed \nin our R-matrix calculation. 
\nTable \\ref{tab1} compares the excitation energies of the N$_2$ target states \nfrom the present calculation with the previous R-matrix results of \nGillan et al.\\cite{JPhysB.29.1531}, multi-reference coupled cluster results of \nBen-Shlomo and Kaldor \\cite{JChemPhys.92.3680} as well as experimental values. \nSince these energies are evaluated at different inter-nuclear distance, \n2.068 $a_0$ in our case, \n2.100 $a_0$ in Gillan et al.\\cite{JPhysB.29.1531} and \n2.074 $a_0$ in Ben-Shlomo and Kaldor \\cite{JChemPhys.92.3680}, \nprecise comparison is not so meaningful. \nHowever, deviations of excitation energies from the experimental \nvalues are less than 0.8 eV in our calculation, which \nis good considering the level of calculation. \nIn terms of excitation energies, our calculation and \nthe previous R-matrix calculation of Gillan et al.\\cite{JPhysB.29.1531} \nhave similar quality. \n\nIn addition to this good agreement of target energies with experimental\nresults, N$_2^+$ energies are also well described in our SA-CASSCF\ncalculation. In our calculation, N$_2^+$ $X {}^2 \\Sigma_g^+$ and \n$A {}^2 \\Pi_u$ states are located \nat 15.63 and 17.21 eV above N$_2$ $X {}^1 \\Sigma_g^+$ state, respectively. \nCompared to the experimental values of 15.61 and 17.08 eV, \nour SA-CASSCF calculation gives good results. \nNote that the energy ordering of N$_2^+$ $X {}^2 \\Sigma_g^+$ and \n$A {}^2 \\Pi_u$ states are not well described in \nthe Hartree Fock level calculation, see Ermler and McLean \\cite{JChemPhys.73.2297} for\nexample. \n\n\n\\subsection{Integral cross sections}\n\nFigure \\ref{fig2} shows integral cross sections for electron impact \nexcitation from the N$_2$ $X^{1} \\Sigma_{g}^{+}$ state to the \n$A^{3} \\Sigma_{u}^{+}$, $B^{3} \\Pi_{g}$, $W^{3} \\Delta_{u}$ \nand ${B'}^{3} \\Sigma_{u}^{-}$ states. \nIn this figure, present results are compared with \nthe previous R-matrix calculations of Gillan et al.\\cite{JPhysB.29.1531}, \nrecent calculations of da Costa and Lima \\cite{2006IJQC..106.2664D}, \nexperimental results of Cartwright et al. \\cite{PhysRevA.16.1013}, \nCampbell et al.\\cite{JPhysB.34.1185} and recent measurements of \nJohnson et al.\\cite{2005JGRA..11011311J}. \nRenormalized values of Cartwright et al. \\cite{PhysRevA.16.1013} are \nused as recommended by Trajmar et al. \\cite{PhysRep.97.221}. \nFigure \\ref{fig3} compares the present excitation cross sections \nof the ${a'}^{1} \\Sigma_{u}^{-}$, $a^{1} \\Pi_{g}$, $w^{1} \\Delta_{u}$ \nand $C^{3} \\Pi_{u}$ states with the previous experimental results of \nCartwright et al. \\cite{PhysRevA.16.1013}, \nCampbell et al.\\cite{JPhysB.34.1185} and \nJohnson et al.\\cite{2005JGRA..11011311J}.\nFor the $a^{1} \\Pi_{g}$ state cross sections, \nthe recent calculations of \nda Costa and Lima \\cite{2006IJQC..106.2664D},\nother experimental values \nof Ajello and Shemansky \\cite{JGeophysResSpacePhys.90.9845}, \nZetner and Trajmar \\cite{Zetner1987} and \nMason and Newell \\cite{JPhysB.20.3913} are included. \nFor the $C^{3} \\Pi_{u}$ state cross sections, \nthe experimental results of Zubek \\cite{JPhysB.27.573}, \nZubek and King \\cite{JPhysB.27.2613} and \nPoparic et al. \\cite{ChemPhys.240.283} are included. \n\nOur excitation cross sections for the $A^{3} \\Sigma_{u}^{+}$ state \nhave a resonance feature at approximately 12 eV \nas in the previous R-matrix results of Gillan et al.\\cite{JPhysB.29.1531}. \nThe N$_2^-$ ${}^2 \\Pi_u$ resonance state is responsible for \nthis peak structure. 
\nThe main configuration of this resonance state is $1\\pi_u^3 1\\pi_g^2$. \nIn addition to the ${}^2 \\Pi_u$ symmetry partial cross sections, \nthe ${}^2 \\Pi_g$ symmetry contributes to the ICSs as \na smooth background component (not shown in the figure). \nCompared to the previous R-matrix cross sections, \nthe peak at 12 eV is more pronounced in our case. \nOur results are slightly larger than theirs at 12-17.5 eV. \nCompared to the recent experimental results of \nJohnson et al.\\cite{2005JGRA..11011311J}, \nour cross sections are about 50\\% larger at 12.5-20 eV, though \n50\\% smaller at 10 eV. Our calculation also overestimates \nthe results of Campbell et al.\\cite{JPhysB.34.1185}; \nhowever, the results of Cartwright et al. \\cite{PhysRevA.16.1013} \nagree well with our results except at 12.5 eV. \nThe position of the resonance peak depends rather strongly on the \ninter-nuclear distance of the N$_2$ molecule: it is 12.2 eV \nfor 2.068 $a_0$ and 11.75 eV for 2.100 $a_0$ in our calculations. \nThus, inclusion of vibrational motion may be necessary to resolve \nthis discrepancy in the resonance peak position. \n\nOur excitation cross sections for the $B^{3} \\Pi_{g}$ state \nhave a small bump at 12.8 eV, which is not evident in \nthe previous R-matrix cross sections. \nThe origin of this bump is the N$_2^-$ ${1}^2 \\Delta_g$ state, with the main\nconfiguration $3\\sigma_g^1 1\\pi_g^2$.\nApart from this bump, the ICSs are mostly composed of the ${}^2 \\Pi_g$ \nsymmetry contribution and have a shape similar to \nthe previous R-matrix results. \nThe magnitude of our ICSs is about 50\\% larger than the previous \nresults of Gillan et al.\\cite{JPhysB.29.1531}. \nRecently, da Costa and Lima \\cite{2006IJQC..106.2664D} calculated \nICSs for the $B {}^{3} \\Pi_{g}$ state using the Schwinger multichannel\nmethod with the minimal orbital basis for single configuration \ninteractions (MOB-SCI) approach. \nTheir cross sections are much larger than our results above 12 eV. \nAlso, there is a prominent peak around 10 eV in their ICSs, which \ndoes not exist in the R-matrix calculations. \nCompared to the experimental ICSs, our results agree well with the cross\nsections of Cartwright et al. \\cite{PhysRevA.16.1013}, especially above 15 eV. \nHowever, the results of Campbell et al.\\cite{JPhysB.34.1185} are \nmuch larger than ours. \nThe recent measurements of Johnson et al.\\cite{2005JGRA..11011311J} agree \nbetter with the previous R-matrix calculation of \nGillan et al.\\cite{JPhysB.29.1531}. \n\nFor the excitation cross sections of the $W^{3} \\Delta_{u}$ state, \nour results have a shape and magnitude similar to \nthe previous R-matrix results. \nMost of our ICSs are composed of the ${}^2 \\Pi_g$ symmetry \npartial cross sections. \nAgreement with the experimental cross sections of \nJohnson et al.\\cite{2005JGRA..11011311J} is good in this case. \nThe cross sections of Campbell et al.\\cite{JPhysB.34.1185} \nagree well with our results at 15 and 17.5 eV, \nbut their value at 20 eV is only about half of ours. \nThe results of Cartwright et al. \\cite{PhysRevA.16.1013} are \nabout two times larger than our cross sections. \n\nOur excitation cross sections for the ${B'}^{3} \\Sigma_{u}^{-}$ state \nare about half of the previous R-matrix cross sections of \nGillan et al.\\cite{JPhysB.29.1531}. \nApart from this difference in magnitude, the shape of the \ncross sections is similar. \nThe dominant component in these ICSs is the ${}^2 \\Pi_g$ symmetry partial \ncross section, \nalthough the ${}^2 \\Pi_u$ symmetry also makes a certain contribution \naround 18-20 eV. \nAmong the three different experimental measurements, our results agree \nwell with the results of Johnson et al.\\cite{2005JGRA..11011311J}. \nThe experimental cross sections of the other two groups \nare much larger than our results at 15 and 17.5 eV, and have \na different energy dependence compared to the present calculation. \n\nThe situation for the excitation cross sections of \nthe ${a'}^{1} \\Sigma_{u}^{-}$ state is similar to the case \nof the ${B'}^{3} \\Sigma_{u}^{-}$ state. \nThe ${}^2 \\Pi_g$ and ${}^2 \\Pi_u$ symmetry partial cross sections \ncontribute almost equally to the ICSs. \nOur cross sections roughly agree with the results of \nJohnson et al.\\cite{2005JGRA..11011311J}, while the cross sections of \nCartwright et al. \\cite{PhysRevA.16.1013} and \nCampbell et al.\\cite{JPhysB.34.1185} at 15 eV are much larger than our result. \nThe results of Cartwright et al. \\cite{PhysRevA.16.1013} and \nCampbell et al.\\cite{JPhysB.34.1185} \ndecrease as the impact energy increases from 15 to 20 eV, whereas \nour cross sections increase mildly in this energy region. \n\nIn the case of excitation to the $a^{1} \\Pi_{g}$ state, \nseveral other experimental results are available in addition \nto the measurements of Cartwright et al. \\cite{PhysRevA.16.1013}, \nCampbell et al.\\cite{JPhysB.34.1185}, \nand Johnson et al.\\cite{2005JGRA..11011311J}. \nThe cross section profiles of Johnson et al.\\cite{2005JGRA..11011311J}, \nAjello and Shemansky \\cite{JGeophysResSpacePhys.90.9845}, \nCartwright et al. \\cite{PhysRevA.16.1013} and \nMason and Newell \\cite{JPhysB.20.3913} are similar to our ICSs. \nHowever, the magnitude of our cross sections is \nlower than the experimental values in most cases, except for \nthe cross sections of Johnson et al.\\cite{2005JGRA..11011311J}. \nAt 15, 17.5 and 20 eV, agreement of our results with \nthe cross sections of Johnson et al.\\cite{2005JGRA..11011311J} \nis very good, although our cross section at 12.5 eV is \ntwice as large as their value. \nNote that there is no dominant symmetry contribution to the calculated \nICSs; all partial cross sections contribute rather equally. \nRecent ICSs of da Costa and Lima \\cite{2006IJQC..106.2664D} obtained by \nthe Schwinger multichannel method are also shown in panel (b) of \nfigure \\ref{fig3}. Their result has a sharp peak at 12 eV, as in their \ncalculation for the $B {}^{3} \\Pi_{g}$ state excitation. \nThis difference between our and their results may come from \nthe different number of target states considered in the scattering calculation. \nOnly the $X {}^{1} \\Sigma_{g}^{+}$, $a {}^{1} \\Pi_{g}$ and $B {}^{3}\n\\Pi_{g}$ states were included in the calculations of da Costa and Lima. \nThe rest of their cross section profile is similar in shape to \nour cross sections, although the magnitude of their cross\nsections is about twice as large as our results at 15-20 eV. \n\n\nOur excitation cross section for the $w^{1} \\Delta_{u}$ state \ngradually increases as a function of energy from the threshold to \na broad peak around 17.5 eV, then decreases toward 20 eV. \nIn this case, agreement with the results of \nJohnson et al.\\cite{2005JGRA..11011311J} is not as good as for \nthe excitations of the $a^{1} \\Pi_{g}$ and \n${a'}^{1} \\Sigma_{u}^{-}$ states. \nOur cross sections are about 50\\% larger than their values at 17.5 and 20 eV. 
\nAt 15 eV, our results agree well with the cross section of \nJohnson et al.\\cite{2005JGRA..11011311J}, however, they are \nabout 50\\% lower than the results of \nCartwright et al. \\cite{PhysRevA.16.1013} and \nCampbell et al.\\cite{JPhysB.34.1185}. \nIn the calculated ICSs, the ${}^2 \\Pi_u$ symmetry partial cross section is \na major component, with a minor contribution from the ${}^2 \\Pi_g$ symmetry. \n\n\n\nThe calculated excitation cross sections for \nthe $C^{3} \\Pi_{u}$ state has a peak \nsimilar to the experimental results of Zubek \\cite{JPhysB.27.573} and \nPoparic et al.\\cite{ChemPhys.240.283}. \nAlthough the shape of the cross sections is similar, \nposition of the cross section peak is different from \nexperimental results. \nIn our case, it is located at about \n17 eV, whereas corresponding peaks are located at \n14 eV in the experimental cross sections. \nThe height of the peak in our ICSs is lower than the \nexperimental values of Zubek \\cite{JPhysB.27.573} \nand Poparic et al.\\cite{ChemPhys.240.283}. \nIt is unclear whether there is a cross section peak in \nthe experimental cross sections of Cartwright et al. \\cite{PhysRevA.16.1013}, \nCampbell et al.\\cite{JPhysB.34.1185} \nand Johnson et al.\\cite{2005JGRA..11011311J}. \nAt least, it appears that they do not have a peak around 17 eV. \nThe origin of this discrepancy in the cross section peak is uncertain, \nbut may be related to the employment of the fixed-nuclei approximation or \ninsufficiency of higher excited target states in the R-matrix model. \nThe calculated ICSs are composed of the ${}^2 \\Sigma_u^+$ and \n${}^2 \\Sigma_u^-$ symmetry partial cross sections near the peak \nstructure at 17 eV. \nThe contribution of the ${}^2 \\Sigma_u^+$ symmetry is about 50\\% larger \nthan the ${}^2 \\Sigma_u^-$ component. \nOther than these two symmetries, the ${}^2 \\Pi_g$ symmetry partial cross \nsection contributes to the ICSs as a smooth background component. \n\n\n\n\n\n\\subsection{Differential cross sections}\n\nFigure \\ref{fig4} shows calculated DCSs for excitation of \nthe ${A}^{3} \\Sigma_{u}^{+}$ state with the experimental \nresults of Khakoo et al. \\cite{2005PhRvA..71f2703K}, \nBrunger and Teubner \\cite{PhysRevA.41.1413}, \nCartwright et al. \\cite{PhysRevA.16.1013}, \nZetner and Trajmar \\cite{Zetner1987}, \nLeClair and Trajmar \\cite{JPhysB.29.5543} and \nthe previous R-matrix DCSs of Gillan et al.\\cite{JPhysB.29.1531}. \nOur DCSs at 12.5, 15 and 17.5 eV have similar shape in common. \nThey are enhanced in backward direction and have a small dimple \nat 120 degrees with a bump at 75 degrees. \nAt 17.5 eV, our cross sections are located between the \nexperimental values of Khakoo et al.\\cite{2005PhRvA..71f2703K} \nand Cartwright et al. \\cite{PhysRevA.16.1013}. \nThe profile of the experimental DCSs are reproduced well in \nour calculation. \nAt 15 eV, our results agree better with the results of \nKhakoo et al. \\cite{2005PhRvA..71f2703K} compared to the other experiments. \nIn the DCSs of the previous R-matrix calculation of \nGillan et al.\\cite{JPhysB.29.1531}, \na bump is located at 40 degrees and a small dimple \nis located at 100 degrees, which agree better with the \nexperimental results of Brunger and Teubner\\cite{PhysRevA.41.1413}. \nIn our calculation, these dimple and bump are shifted toward \nbackward direction by 20 degrees, and agreement with \nthe results of Brunger and Teubner \\cite{PhysRevA.41.1413} is not so good. 
\nAt 12.5 eV, our calculation overestimates the experimental results by \na factor of two. As seen in panel (a) of Fig.\\ref{fig2}, \nthis discrepancy is related to the existence of a resonance \npeak around 12.5 eV. \n\nFigure \\ref{fig5} compares calculated excitation DCSs for the \n$B^{3} \\Pi_{g}$ state with the experimental and recent theoretical results. \nOur DCSs at 12.5, 15 and 17.5 eV have backward-enhanced feature \nwith a broad peak at 130 degrees. \nAt 15 and 17.5 eV, our DCSs agree well with the results of \nKhakoo et al. \\cite{2005PhRvA..71f2703K} \nat forward direction below 80 degrees. \nHowever, their DCSs are smaller than ours by a factor of \ntwo at 80-130 degrees. Agreement with the results of \nCartwright et al. \\cite{PhysRevA.16.1013} \nat 15 eV is good at 20-130 degrees, although their DCSs are twice as \nlarge as our DCSs at 17.5 eV for low scattering angles. \nBecause of a resonance-like feature at 12.5 eV as seen in panel (b) \nof Fig.\\ref{fig2}, our results are larger than the experimental results \nat 12.5 eV. \nRecent Schwinger multi-channel results of da Costa and Lima \n\\cite{2006IJQC..106.2664D} are much larger than our DCSs at 12.5 \nand 15 eV. The deviation is especially large at 12.5 eV, which is\npossibly related to the difference in the excitation energies of \nthe target state. \n\n\nFigure \\ref{fig6} shows the excitation DCSs for \nthe $W^{3} \\Delta_{u}$ state with the experimental cross sections. \nAt 15 and 17.5 eV, our cross section gradually increases \nas a function of scattering angle, without noticeable bump or dip. \nAt 12.5 eV, the shape of DCSs is nearly symmetric around 90 degrees. \nAgreement with the experimental DCSs of \nKhakoo et al. \\cite{2005PhRvA..71f2703K} is good, \nalthough their results at 15 and 17.5 eV have more complex structure \nsuch as a small peak at 80 degrees. \nOur DCSs are generally smaller than the other experimental results \nof Brunger and Teubner\\cite{PhysRevA.41.1413}, \nCartwright et al. \\cite{PhysRevA.16.1013}, \nZetner and Trajmar \\cite{Zetner1987}. \n\nExcitation cross sections for the ${B'}^{3} \\Sigma_{u}^{-}$ state \nare shown in figure \\ref{fig7}. \nCalculated DCSs decrease to be zero toward 0 and 180 degrees, \nbecause of a selection rule associated with \n$\\Sigma^{+}$-$\\Sigma^{-}$ transition \\cite{Go71,Ca71}. \nOur DCSs have a broad single peak near 90 degrees at 12.5 and 15 eV, \nwhereas there are two broad peaks at 17.5 eV. \nThe position of the right peak at 17.5 eV coincides with that of \nthe experimental DCSs of Khakoo et al. \\cite{2005PhRvA..71f2703K} and \nCartwright et al. \\cite{PhysRevA.16.1013}, \nalthough the peak of Cartwright et al. \\cite{PhysRevA.16.1013} is \nmuch higher than ours. \nOur results agree well with the DCSs of \nKhakoo et al. \\cite{2005PhRvA..71f2703K} at 15 and 17.5 eV. \nHowever, their cross sections at 15 eV have a small dip at 100 degrees \nand a small bump 60 degrees, which do not exist in our results. \nAt 12.5 eV, our cross sections are slightly larger than the results \nof Khakoo et al.\\cite{2005PhRvA..71f2703K}. \nOn the whole, agreement with the other experimental results of \nBrunger and Teubner \\cite{PhysRevA.41.1413} and \nCartwright et al. \\cite{PhysRevA.16.1013} is not good. \n\nFigure \\ref{fig8} shows the excitation DCSs for the \n${a'}^{1} \\Sigma_{u}^{-}$ state. \nBecause of $\\Sigma^{+}$-$\\Sigma^{-}$ selection rule, \nDCSs at 0 and 180 degrees become zero as in the case of \nthe ${B'}^{3} \\Sigma_{u}^{-}$ state DCSs. 
\nCalculated DCSs have a broad single peak near 60 degrees at 12.5 and 15 eV. \nAt 17.5 eV, there are two broad peaks at 50 and 120 degrees. \nAlthough there is slight overestimation of DCSs near 50-60 degrees, \nour DCSs agree marginally with the results of \nKhakoo et al.\\cite{2005PhRvA..71f2703K}. \nAgreement with the other experimental results is \nnot good except low scattering angles at 17.5 eV. \n\nFigure \\ref{fig9} compares our excitation DCSs for \nthe $a^{1} \\Pi_{g}$ state with the experimental cross sections. \nBecause of large variation of the DCSs, the cross sections are \nshown in logarithmic scale. \nCalculated DCSs are strongly forward-enhanced, \nwhich is consistent with all experimental results shown \nin the figure. \nOur DCSs at 12.5 eV have a small dip around 100 degrees, which \nmoves forward to 85 degrees at 15 eV and 75 degrees at 17.5 eV. \nThis behavior roughly agrees with the results of \nCartwright et al.\\cite{PhysRevA.16.1013} and \nKhakoo et al.\\cite{2005PhRvA..71f2703K}. \nAt 15 eV, our DCSs agree better with \nthe results of Khakoo et al.\\cite{2005PhRvA..71f2703K} than \nthe other experimental DCSs. \nAt 17.5 eV, the results of Cartwright et al.\\cite{PhysRevA.16.1013} \nare closer to our DCSs at scattering angles above 40 degrees. \nBelow 40 degrees, our calculation significantly underestimates \nthe experimental DCSs. \nOur results at 12.5 eV are located between the DCSs of \nCartwright et al. \\cite{PhysRevA.16.1013} \nand Khakoo et al.\\cite{2005PhRvA..71f2703K}, \nhowever the shape of the DCSs is similar to their results. \nThe shapes of DCSs calculated by da Costa and Lima \n\\cite{2006IJQC..106.2664D} are similar to our results. \nHowever, their cross sections are larger than our results at\nlow-scattering angles below 80 degrees, where their results agree better \nwith the experimental DCSs of Brunger and Teubner \\cite{PhysRevA.41.1413} and \nZetner and Trajmar \\cite{Zetner1987}.\n\n\nFigure \\ref{fig10} shows calculated excitation DCSs for \nthe $w^{1} \\Delta_{u}$ state with the experimental cross sections. \nOur DCSs are enhanced in forward direction as in the case of \nthe $a^{1} \\Pi_{g}$ state. However, magnitude of the \nenhancement is much smaller than that of the $a^{1} \\Pi_{g}$ state. \nAgreement with the DCSs of Cartwright et al. \\cite{PhysRevA.16.1013} \nis good at 17.5 eV except low scattering angles below 20 degrees. \nAt 12.5 and 15 eV, their results are much larger than our DCSs. \nAt 15 eV, our DCSs agree marginally with the results of \nKhakoo et al. \\cite{2005PhRvA..71f2703K}, \nalthough details of the DCS profile are different. \nTheir results are smaller than ours at 17.5 and 12.5 eV. \nDiscrepancy is especially large for forward scattering at 12.5 eV. \n\nFigure \\ref{fig11} shows excitation DCSs for the $C^{3} \\Pi_{u}$ state \nwith the experimental cross sections of \nKhakoo et al.\\cite{2005PhRvA..71f2703K}, \nBrunger and Teubner\\cite{PhysRevA.41.1413}, \nZubek and King \\cite{JPhysB.27.2613} and \nCartwright et al. \\cite{PhysRevA.16.1013}. \nCalculated DCS profiles are almost flat at 12.5 and 15 eV, \nwhereas they are enhanced in backward direction at 17.5 eV. \nBelow 90 degrees, slope of the calculated DCSs at 17.5 eV \nis similar to the results of Khakoo et al.\\cite{2005PhRvA..71f2703K}, \nZubek and King \\cite{JPhysB.27.2613} and \nCartwright et al. \\cite{PhysRevA.16.1013}, \nthough our results are about 50\\% larger than their DCSs. \nIn general, our results do not agree well with the experimental DCSs. 
\nAlthough the ICS of \nKhakoo et al. \\cite{2005PhRvA..71f2703K} at 15 eV agrees well \nour result as shown in panel (d) of Fig.\\ref{fig3}, \nangular dependence of the cross sections appears to be different. \n\n\n\\subsection{Discussion}\n\n\n\n\nThe excitation ICSs of the $B^3 \\Pi_g$ state, shown in panel (b) of figure\n\\ref{fig2}, have a small bump around 13 eV. However, there is no such\nstructure in the previous R-matrix ICSs of \nGillan et al. \\cite{JPhysB.29.1531}. \nThe origin of this bump in our calculation is the N$_2^-$ ${1}^2\n\\Delta_g$ state, with main configuration of $3\\sigma_g^1 1\\pi_g^2$. \nThe existence of the N$_2^-$ ${1}^2 \\Delta_g$ state can also be verified by \nusual CASSCF calculation on N$_2^-$ with valence active space ignoring \ncontinuum orbitals. In molpro calculation, the energy of the \n${1}^2 \\Delta_g$ state is 15.7 eV. Since diffuse continuum orbitals are\nadded in the R-matrix calculation, the energy of the state is stabilized\nto be 12.8 eV in the present scattering calculation. \nIn the same way, the N$_2^-$ ${}^2 \\Pi_u$ ($1\\pi_u^3 1\\pi_g^2$) \nresonance peak in the $A{}^{3} \\Sigma_u^+$ excitation ICSs can be \nverified by usual CASSCF calculation. In molpro calculation, \nit is located at 14.7 eV, whereas the position of the resonance \nis stabilized to be 12.2 eV in our R-matrix scattering calculation. \nIt is unclear why the bump in the ICSs of the $B^3 \\Pi_g$ state \nis not evident in the previous R-matrix \ncross sections of Gillan et al.\\cite{JPhysB.29.1531}. \nSome details of the R-matrix calculations are different in their\ncalculation and ours, e.g., they used hybrid orbitals with Slater type \nfunctions, whereas we employed SA-CASSCF orbitals with Gaussian type functions. \nThese difference may contribute to the difference in magnitude of \nthe ${}^2 \\Delta_g$ partial cross section. \n\n\nIn this study, we employed the fixed-nuclei (FN) approximation. \nAs we can see in figure \\ref{fig1}, equilibrium bond lengths of \nthe excited N$_2$ states are longer than that of the ground state. \nThus, in principle, it would be desirable to include the effect of \nnuclear motion in the R-matrix calculation. \nUse of the FN approximation may be responsible for \nseveral discrepancies between our calculation and experiments, \nincluding bumps in the ICSs of the $A^{3} \\Sigma_{u}^{+}$ and \n$B^{3} \\Pi_{g}$ states, the position \nof the peak in the ICSs of the $C^{3} \\Pi_{u}$ state. \nAlthough the calculated DCSs agree very well with experimental results \nin general, our DCSs of the $A^{3} \\Sigma_{u}^{+}$, $B^{3} \\Pi_{g}$, \n$w^{1} \\Delta_{u}$ and $C^{3} \\Pi_{u}$ states at 12.5 eV are 2-4 times\nlarger than experimental results. These deviations in the near-threshold\nDCSs can also be related to the FN approximation. \nIn spite of these discrepancies, good agreements are observed between \nour calculation and experiments in most ICS and DCS cases as we can see \nin the figures. Agreements with the recent experimental results of \nKhakoo et al.\\cite{2005PhRvA..71f2703K} and Johnson et al.\n\\cite{2005JGRA..11011311J} are especially impressive. \nIt is possible to include nuclear motion in the R-matrix \nformalism through vibrational averaging of T-matrix elements \nor the non-adiabatic R-matrix method, though application of \nthese methods will be a difficult task in the presence of many \ntarget electronic states. In the future, we plan to perform \nthe R-matrix calculation with these methods including nuclear \nmotion. 
\n\n\n\n\n\n\\clearpage\n\n\\section{summary}\n\nWe have investigated electron impact excitations of \nN$_2$ molecule using the fixed-bond R-matrix method \nwhich includes 13 target electronic states,\n${X}^1 \\Sigma^{+}_{g}$, $A^{3} \\Sigma_{u}^{+}$, $B^{3} \\Pi_{g}$, \n$W^{3} \\Delta_{u}$, ${B'}^{3} \\Sigma_{u}^{-}$, ${a'}^{1} \\Sigma_{u}^{-}$, \n$a^{1} \\Pi_{g}$, $w^{1} \\Delta_{u}$, $C^{3} \\Pi_{u}$, \n${E}^{3} \\Sigma_{g}^{+}$, ${a''}^{1} \\Sigma_{g}^{+}$, \n$c^{1} \\Pi_{u}$ and ${c'}^{1} \\Sigma_{u}^{+}$. \nThese target states are described by CI wave functions in the valence \nCAS space, using SA-CASSCF orbitals. Gaussian type orbitals \nwere used in this work, in contrast to the STOs in \nthe previous R-matrix works. \nWe have obtained integral cross sections as well as \ndifferential cross sections of excitations to the \n$A^{3} \\Sigma_{u}^{+}$, $B^{3} \\Pi_{g}$, $W^{3} \\Delta_{u}$, \n${B'}^{3} \\Sigma_{u}^{-}$, ${a'}^{1} \\Sigma_{u}^{-}$, \n$a^{1} \\Pi_{g}$, $w^{1} \\Delta_{u}$ \nand $C^{3} \\Pi_{u}$ states, which have been studied a lot experimentally \nbut not enough theoretically before. \nIn general, good agreements are observed both in the \nintegrated and differential cross sections, \nwhich is encouraging for further theoretical and experimental \nstudies in this field. \nHowever, some discrepancies are seen in the integrated \ncross sections of the $A^{3} \\Sigma_{u}^{+}$ \nand $C^{3} \\Pi_{u}$ states, especially around a peak structure. \nAlso, our DCSs do not agree well with the experimental results at \nlow impact energy of 12.5 eV, compared to the higher energies \nof 15 and 17.5 eV. \nThese discrepancies may be related to the fixed-nuclei approximation or \ninsufficiency of higher excited target states in the R-matrix model. \n\n\n\n\\begin{acknowledgments}\nThe work of M.T. is supported by the Japan Society for the \nPromotion of Science Postdoctoral Fellowships for Research Abroad. \nThe present research is supported in part by the grant from the Air Force \nOffice of Scientific Research: the Advanced High-Energy Closed-Cycle Chemical \nLasers project (PI: Wayne C. Solomon, University of Illinois, \nF49620-02-1-0357). \n\\end{acknowledgments}\n\n\\clearpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nDeep learning has led to breakthroughs in various fields, such as computer vision \\cite{Krizhevsky2012, Kaiming2016} and language processing \\cite{Oord2016}. Despite its success, it is still limited by its vulnerability to adversarial examples \\cite{Szegedy2014}. In image processing, adversarial examples are small, typically imperceptible perturbations to the input that can lead to misclassifications in real-world scenarios. In domains like autonomous driving or healthcare this can potentially have fatal consequences. Since the weakness of neural networks to adversarial examples has been demonstrated, many methods were proposed to make neural networks more robust and reliable \\cite{Goodfellow2015, Madry2018, Tramer19}. In a constant challenge between new adversarial attacks and defenses, most of the proposed defenses have been shown to be rather ineffective \\cite{Guo2018, Kurakin2018, Samangouei2018}. 
Yet, substantial progress in this field is needed to increase the reliability and trustworthiness of neural networks in our daily lives.\n\n\n\n\\begin{figure*}\n \\centering\n \n \n \n \n \n \\begin{subfigure}{0.205\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/gradient_descent_normal.pdf}\n \\caption{}\n \\label{fig:gradient_descent}\n \\end{subfigure} \n \\begin{subfigure}{0.205\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/gradient_descent_smoothed.pdf}\n \\caption{}\n \\label{fig:gradient_descent_smoothed}\n \\end{subfigure}\n \\begin{subfigure}{0.210\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/surface.pdf}\n \\caption{}\n \\label{fig:surface}\n \\end{subfigure} \n \\begin{subfigure}{0.210\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/surface_smoothed.pdf}\n \\caption{}\n \\label{fig:surface_smoothed}\n \\end{subfigure}\n \\begin{subfigure}{0.05\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/colorbar.pdf}\n \\label{fig:gradient_descent_cb}\n \\caption*{}\n \\end{subfigure}\n \\caption{Comparison between the standard \\acf{GD} method \\textbf{(A)} and the proposed \\acf{SNGD} \\textbf{(B)}. \\textbf{(C)} shows the side view of the noisy loss surface displayed in (A) and (B). The calculated ascent direction is displayed by a white arrow. For SNGD the ascent direction is obtained by averaging multiple gradients (red arrows) in the vicinity of the current point on the loss surface. \\textbf{(D)} illustrates the effect of SNGD when the amount of sampling operations $N$ tends to infinity. In the limit, using SNGD is equivalent to using GD on a loss surface that is convoluted with a convolution kernel defined by the sampling distribution.}\n \\label{fig:gradient_descent_comparison}\n\\end{figure*}\n\n\nIn this paper we aim to improve the effectiveness of adversarial attacks on artificial neural networks. For this we adopt an intuitive idea that is known as gradient sampling \\cite{Burke05}. This method is designed to reliably estimate the significant optima of non-smooth non-convex functions with unreliable gradient information. Gradient sampling can be interpreted as a generalized steepest descent method where the gradient is not only calculated at a given point but additionally for points in the direct vicinity. The final gradient direction is subsequently calculated in the convex hull of all sampled gradients and thus incorporates additional information about the local geometry of the loss surface. The central idea behind this approach is that point estimates of gradients can be misleading, especially for noisy and non-convex loss surface. However, this method is only feasible for low-dimensional functions, due to the complexity of computing the convex hull in higher dimensions. To overcome this problem, we propose to simplify the algorithm so that it works with high dimensional data. \nWe discard the calculation of the convex hull and compute the final gradient direction as a weighted average of all sampled gradients. Figure \\ref{fig:gradient_descent_comparison} illustrates how the convergence of the standard \\acf{GD} method can be improved for noisy and non-convex loss surfaces by the proposed \\acf{SNGD}. Standard GD calculates nearly random ascent directions while SNGD approximates the global ascent direction more accurately as it is less susceptible to noise and local optima. \n\nThe contributions of this paper can be summarized as follows. 
First we relate the proposed gradient sampling method to the empirical mean, which converges towards a nonlocal gradient with respect to an underlying probability distribution for enough sampling operations. Subsequently we show that SNGD-based attacks are more query efficient while achieving higher success rates than prior attacks on popular benchmark datasets. Finally, we demonstrate that SNGD approximates the direction of an adversarial example more accurately during the attack and empirically show the effectiveness of SNGD on non-convex loss surfaces. \n\n\\section{Preliminary}\nIn the following we introduce the necessary mathematical notation to describe adversarial attacks and to review current contributions in this field.\n\n\\subsection{Notation}\n\nLet ($x$, $y$) with $x\\in\\mathbb{R}^d$ and $y\\in\\{1, 2, \\dots, C\\}$ be pairs of samples and labels in a classification task with $C$ different classes, where each sample is represented by a $d$-dimensional feature vector. In the following we assume that the samples are drawn from the $d$-dimensional unit cube, i.e., $x \\in [0,1]^d$, as this is typically the case for image data. Let $\\mathcal{L}$ be the loss function (e.g., categorical cross-entropy) of a neural network $F_{\\theta}$, parameterized by the parameter vector $\\theta \\in \\Theta$. \nConstructing an adversarial perturbation $\\gamma \\in \\mathbb{R}^d$ with maximum effect on the loss value can be stated as the following optimization problem: \n\\begin{align} \\label{eq:adversarial_opt}\n\\underset{\\gamma}\\max ~\\mathcal{L}(F_{\\theta}(x + \\gamma), y)\n\\end{align}\nThe perturbation $\\gamma$ is usually constrained in two ways: 1) The value range of the adversarial example is still valid for the respective domain (e.g, between [$0$, $1$] or [$0$, $255$] for images), 2) The adversarial example $x_{adv} = x + \\gamma$ is within a set of allowed perturbations $S$ that are unlikely to change the class label for human perception. In the following, we focus on \\textit{untargeted gradient-based adversarial attacks} that are constrained by the $L_{\\infty}$ norm such that $||\\gamma||_{\\infty} \\leq \\epsilon$, as done in prior work \\cite{Lin2020}.\n\n\\subsection{Related Work}\n\nA variety of adversarial attacks have been proposed. We give a brief overview for the most successful gradient-based algorithms and also a proven zeroth order attack known as \\ac{SPSA}.\n\n\\subsubsection{Fast Gradient Sign Method (FGSM)}FGSM introduced by \\citeauthor{Goodfellow2015}, is one of the first gradient-based adversarial attacks. FGSM calculates an adversarial example according to the following equation:\n\\begin{align} \\label{eq:fgsm}\n x_{adv} = \\Pi_{S} \\, (x + \\epsilon \\cdot \\operatorname{sign}(\\nabla_x\\mathcal{L}(F_{\\theta}(x), y))\n\\end{align}\nwhere $\\Pi_{S}(x)$ is a projection operator that keeps $\\gamma$ within the set of valid perturbations $S$ and $\\operatorname{sign}$ is the componentwise signum operator. \n\n\\subsubsection{Basic Iterative Method (BIM)}\\citeauthor{Kurakin2017} proposed an iterative variant of the FGSM attack, in which multiple smaller gradient updates are used to find the adversarial perturbation:\n\\begin{align} \\label{eq:ifgsm}\n x_{adv}^{t + 1} = \\Pi_{S} \\, (x_{adv}^{t} + \\alpha \\cdot \\operatorname{sign}(\\nabla_x \\mathcal{L}(F_\\theta(x_{adv}^{t}), y))\n\\end{align}\nwhere $\\epsilon \\geq \\alpha > 0$ and $x_{adv}^{t + 1}$ describes the adversarial example at iteration $t$ and $x_{adv}^{0} = x$. 
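As a concrete illustration of the iterative update above, the following is a minimal PyTorch-style sketch of BIM; the random-start flag anticipates the PGD variant discussed next. The function name, signature and default values are our own choices for illustration and do not correspond to the implementation used for the experiments reported below.
\\begin{verbatim}
import torch
import torch.nn.functional as F

def bim_attack(model, x, y, eps=0.3, alpha=0.01, steps=40,
               random_start=False):
    """Iterative FGSM (BIM); random_start=True gives PGD."""
    x_adv = x.clone().detach()
    if random_start:
        x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # ascent step along the sign of the gradient
            x_adv = x_adv + alpha * grad.sign()
            # projection onto the L_inf ball of radius eps around x
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
            # keep the adversarial example in the valid input range
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
\\end{verbatim}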
\\citeauthor{Madry2018} introduced a slightly modified version of BIM called \\textbf{Projected Gradient Descent (PGD)}, in which the starting point of the attack is randomly chosen from the set of valid perturbations $S$. \nMore variants of iterative gradient-based attacks have been proposed, including several momentum-based methods such as MI-FGSM, ADAM-PGD and NI-FGSM \\cite{Dong18, Useato2018, Lin2020}.\n\n\\subsubsection{Nesterov Iterative Fast Gradient Sign Method (NI-FGSM)}\\citeauthor{Lin2020} incorporated Nesterov momentum into iterative attacks to improve their transferability and success rate. The attack is formally given by:\n\\begin{equation}\n\\begin{split}\n\\label{eq:nifgsm}\n x_{nes}^{t} &= x_{adv}^{t} + \\alpha \\cdot \\mu \\cdot g^{t} \\\\ \n g^{t + 1} &= \\mu \\cdot g^{t} + \\frac{\\nabla_{x}\\mathcal{L}(F_{\\theta}(x_{nes}^{t}), y)}{||\\nabla_{x}\\mathcal{L}(F_{\\theta}(x_{nes}^{t}), y)||_{1}} \\\\\n x_{adv}^{t + 1} &= \\Pi_{S} \\, (x_{adv}^{t} + \\alpha \\cdot \\operatorname{sign}(g^{t + 1}))\n\\end{split}\n\\end{equation}\nwhere $g^{t}$ denotes the accumulated gradients at iteration $t$, with $g^{0} = 0$ and $x_{adv}^0=x$. $\\mu>0$ describes the momentum parameter.\n\n\\subsubsection{Output Diversified Initialization (ODI)}\\citeauthor{tashiro2020} introduced ODI, which aims at finding efficient starting points for adversarial attacks. To this end, they calculate the starting point of an adversarial attack such that it maximizes the difference in the output space compared to a clean sample. They empirically show that this initialization provides more diverse and effective starting points than random initialization. The initialization of an ODI-based attack is given by: \n\\begin{align}\n x_{ODI}^{t + 1} &= \\Pi_{S} \\, (x_{ODI}^{t} + \\alpha \\cdot \\operatorname{sign}(\\frac{\\nabla_{x}\\omega^{\\top}F_{\\theta}(x_{ODI}^t)}{||\\nabla_{x}\\omega^{\\top}F_{\\theta}(x_{ODI}^t)||_{2}})) \n\\end{align}\nwhere $x_{ODI}^0 = x$ and $\\omega \\in \\mathbb{R}^{C}$ is sampled from a uniform distribution in a predefined range. This procedure is repeated for a given number of steps. Afterwards, the calculated starting point can be used for initializing an adversarial attack.\n\n\\subsubsection{Brendel \\& Bethge Attack (B\\&B)}\\citeauthor{Brendel19} proposed an alternative gradient-based attack. They use gradients to estimate the decision boundary between an adversarial and a benign sample. Subsequently, the attack follows the decision boundary towards the clean input to find the smallest perturbation that leads to a misclassification. Simultaneously, the updated perturbation is forced to stay within a box-constraint of valid inputs. The optimization problem can be formulated as follows:\n\\begin{equation}\n\\begin{split}\n \\underset{\\delta}{\\min} ~& ||x-x_{adv}^{t-1} - \\delta^{t}||_{\\infty} \\quad \\text{ s.t. } \\\\\n &x-x_{adv}^{t-1} + \\delta^{t} \\in [0,1]^d, \\ b^{t \\top}\\delta^{t} = c^{t}, \\ ||\\delta^{t}||_{2}^{2} \\leq r\n\\end{split}\n\\end{equation}\nwhere $\\delta^{t} \\in \\mathbb{R}^d$ is the step along the decision boundary in iteration $t\\in\\mathbb{N}$ within a given trust region with radius $r>0$, $b^t$ denotes the current estimate of the normal vector of the local boundary, and $c^t$ describes the constraint that the current perturbation is on the decision boundary. 
In particular the $L_{\\infty}$-variant of this attack is of interest for our investigations.\n\n\\subsubsection{\\acf{SPSA}}proposed by \\cite{Spall1992, Useato2018} is a zeroth order optimization algorithm which has been successfully used in prior work to break models with obfuscated gradients \\cite{Useato2018}. It uses finite-differences to approximate the optimal descent direction. Zeroth order attacks are usually less efficient than their gradient-based counterparts. Nevertheless, they can be used in situations where the gradient information of the model is either removed by a non-differentiable operation (e.g., JPEG compression \\cite{Guo2018}) or is highly obfuscated, as described in \\cite{Kurakin2018}. \n\n\n\\section{Sampled Nonlocal Gradient Descent} \\label{SampledNonlocal}\n\nWe propose an alternative to the standard \\acf{GD} algorithm, which we name \\acf{SNGD}. Our aim is to use this method to further augment existing gradient-based attacks. SNGD is specifically designed for noisy and non-convex loss surfaces as it calculates the gradient direction as the weighted average over multiple sample points in the vicinity of the current data point. The gradient calculation of SNGD is given by:\n\\begin{equation} \\label{eq:gradient_averaging}\n\\begin{split}\n\\nabla_{\\scriptscriptstyle{SNGD}} &\\mathcal{L}(F_\\theta (x,y)) :=\\\\\n&\\nabla_x \\frac{1}{N} \\sum_{i=1}^{N} w_{i} \\cdot \\mathcal{L}(F_\\theta (\\operatorname{clip_{[0,1]}}\\{x + \\xi_i^\\sigma\\}), y)),\n\\end{split}\n\\end{equation}\nwhere $N\\in\\mathbb{N}$ is the number of sampling operations, $w_i$ is the $i$-th weight of a sampled gradient, and $\\operatorname{clip_{[a,b]}}$ is the component-wise clipping operator with value range $[a, b]$. The clipping operator is needed to ensure that the data stays in the normalized range, e.g., in the case of images. It can be discarded for other applications with unbounded data. The random variables $\\xi_i^\\sigma$ are considered to be drawn i.i.d. from a distribution $P^\\sigma$ parametrized by the standard deviation $\\sigma>0$,\nwhich effectively determines the size of the neighborhood.\nBy the law of large numbers the sampled nonlocal gradient converges (with a rate of order $N^{-1\/2}$ in variance) to the respective expectation value, which is the given by the following nonlocal gradient\n\\begin{align} \n\\nabla_x \\mathbb{E}_{\\xi \\sim P^\\sigma} \\left[ \\mathcal{L}(F_\\theta (\\operatorname{clip_{[0,1]}}\\{x +\\xi\\}), y)) \\right]\n\\end{align}\nThe expectation is effectively a local averaging of the likelihood around $x$. Note that by linearity the proposed SNGD method is equivalent to an averaging of the gradients, i.e., the standard form of a nonlocal gradient, cf. \\cite{du2019nonlocal}.\nThis observation is also useful to efficiently compute the attack, as one can use only a single backward pass for all sampled gradients in each iteration. Note that as the forward-passes can be parallelized, the effective runtime of SNGD is equivalent to GD with sufficient memory.\nA SNGD-based BIM attack is formally given by:\n\\begin{align} \\label{eq:sngdifgsm}\n x_{adv}^{t + 1} = \\Pi_{S} \\, (x_{adv}^{t} + \\alpha \\cdot \\operatorname{sign}(\\nabla_{\\scriptscriptstyle{SNGD}} \\mathcal{L}(F_\\theta(x_{adv}^{t}), y))\n\\end{align}\nWe demonstrate the benefit of this approach in terms of efficiency in the results section below.\n\n\\section{Experiments}\n\nWe conduct several experiments to evaluate SNGD. 
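As a concrete reference for these experiments, the following is a minimal PyTorch-style sketch of the sampled nonlocal gradient from Section 3 and of a single step of an SNGD-based attack, with the uniform weights $w_i = 1$ used for the main experiments. Stacking all $N$ perturbed copies into one batch realizes the single-backward-pass evaluation mentioned above. The helper names and default values are our own illustrative assumptions, not the implementation behind the reported results.
\\begin{verbatim}
import torch
import torch.nn.functional as F

def sngd_grad(model, x, y, sigma=0.05, n_samples=4):
    """Sampled nonlocal gradient of the loss with respect to x."""
    b = x.shape[0]
    x = x.detach()
    # draw N Gaussian perturbations per input and clip to the valid range
    xi = sigma * torch.randn((n_samples,) + x.shape, device=x.device)
    x_rep = (x.unsqueeze(0) + xi).clamp(0.0, 1.0)
    x_rep = x_rep.reshape((n_samples * b,) + x.shape[1:])
    x_rep.requires_grad_(True)
    y_rep = y.repeat(n_samples)
    # one forward/backward pass over all N copies at once
    loss = F.cross_entropy(model(x_rep), y_rep,
                           reduction="sum") / n_samples
    grad = torch.autograd.grad(loss, x_rep)[0]
    # accumulate the gradients of the N copies of each input
    return grad.reshape((n_samples, b) + x.shape[1:]).sum(dim=0)

def sn_pgd_step(model, x, x_adv, y, eps, alpha, sigma=0.05, n_samples=4):
    """One SNGD-based PGD step with projection onto the L_inf ball."""
    g = sngd_grad(model, x_adv, y, sigma, n_samples)
    x_adv = x_adv.detach() + alpha * g.sign()
    x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
    return x_adv.clamp(0.0, 1.0)
\\end{verbatim}
The only change compared to a standard PGD step is that the point estimate of the gradient is replaced by the weighted average over the $N$ sampled neighbors, so existing attack loops can reuse this routine directly.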
We first analyze if SNGD can improve the success rate of adversarial attacks compared to GD. Secondly, we explore the possibility of combining SNGD with other methods, e.g., ODI. Also we investigate the option to decay the standard deviation $\\sigma$ during an attack and to individually weight the sampled gradients. Furthermore, we inspected the performance difference between several attacks, as efficient attacks play an important role to facilitate the evaluation of model robustness in real-world applications. Lastly, we analyze the ability of SNGD to better approximate the global descent direction and show that SNGD is effective for non-convex loss surfaces. Additional and ineffective experiments are included in the supplementary material.\n\n\\subsection{Setup} \n\nIn the following we give an overview of general hyperparameters used for the experiments, including thread model, training, evaluation, and datasets. We describe dataset-specific hyperparameters such as the model architecture in the corresponding sections.\n\n\\subsubsection{Threat model}In this work we focus our evaluation on the $L_{\\infty}$ norm and untargeted attacks. We combine our proposed method, SNGD with state-of-the art attacks, including PGD \\cite{Madry2018} and PGD with Nesterov Momentum (N-PGD) \\cite{Lin2020}, which we call SN-PGD and SN-N-PGD respectively. We additionally combine all PGD-based attacks with ODI \\cite{tashiro2020}. ODI-based attacks achieve one of the highest Success rates on the Madry MNIST leaderboard \\cite{Madry2018}. Moreover, we compare our approach to the B\\&B attack \\cite{Brendel19}, one of the most recent and effective gradient-based attacks. We additionally evaluated if models are obfuscating their gradients with the zeroth order \\ac{SPSA} attack \\cite{Spall1992, Useato2018}. If not stated otherwise all SNGD-based attacks are performed with $w_{i} = 1,\\, \\forall i \\in \\{1,2,\\dots, N\\}$. \n\n\nWe limited the amount of model evaluations to $2000$ for each gradient-based attack and distributed them over multiple restarts and samples (SNGD), such that $R\\cdot I \\cdot N = 2000$, where $R$ denotes the total number of restarts, $I$ the amount of attack iterations, and $N$ the amount of sampling operations for SNGD. We tried multiple combinations of model evaluations and restarts, as we observed that for a fixed budget of total evaluations this considerably impacts the performance of the attacks. For the standard PGD attack we obtained the highest success rates between $20-400$ iterations and $5-100$ random restarts. For SN-PGD attacks we used $100$ iterations, $4$ sampling operations and $5$ random restarts. The B\\&B attack was performed with $1000$ iterations and two restarts. The SPSA attack was performed with $100$ steps and a sample size of $8192$, as shown to be effective in \\cite{Useato2018}. Additional hyperparameters are included in the supplementary material. We performed all attacks on the same subset of $1000$ ($10\\%$) randomly selected test images as in \\cite{Useato2018, Brendel19}.\n\n\\subsection{Data and architectures}\n\nThree different image classification datasets were used to evaluate the adversarial robustness of the different models (MNIST \\cite{LeCun98}, Fashion-MNIST \\cite{Xiao2017} and CIFAR10 \\cite{Krizhevsky2009}). 
We split each dataset into the predefined train and test sets and additionally removed $10\\%$ ($5000$ samples) of the training data for validation.\n\n\\subsubsection{Training}We decided to evaluate our attack on two of the strongest empirical defenses to date, adversarial training \\cite{Athalye18, Madry2018, Useato2018} and TRADES \\cite{Zhang19}. For adversarial training we used the fast FGSM-based adversarial training algorithm \\cite{Wong2020}. In preliminary experiments on MNIST we observed that the loss surfaces of these models are not as convex as described in prior work \\cite{Kurakin2018, Chan20} (see Figure \\ref{fig:loss_landscape_fgsm_mnist}) compared to models trained with PGD-based adversarial training. For comparison we additionally trained each model with the typically used PGD-based adversarial training \\cite{Madry2018}. For TRADES we used the pre-trained model provided by the authors of the original paper \\cite{Zhang19} for the MNIST and CIFAR10 datasets.\n\nFor fast FGSM-based training we used the same hyperparameters as proposed in \\cite{Wong2020}. For PGD-based training we used $7$ steps and a step size of $1\/4$ $\\epsilon$ \\cite{Madry2018}. All self-trained networks were trained and evaluated $5$ times using stochastic gradient descent with the Adam optimizer ($\\beta_{1} = 0.9$, $\\beta_{2} = 0.999$) \\cite{Kingma14}. We used a cyclical learning rate schedule \\cite{Smith2017} which has been successfully used for adversarial training in prior work \\cite{Wong2020}. Thereby, the learning rate $\\lambda$ is linearly increased up to its maximum $\\Lambda$ over the first $2\/5$ epochs and then decreased to zero over the remaining epochs. The maximum learning rate $\\Lambda$ was estimated by increasing the learning rate of each individual network for a few epochs until the training loss diverged \\cite{Wong2020}. All models were optimized for $100$ epochs, which was sufficient for convergence, and the checkpoint with the lowest adversarial validation loss was chosen for testing.\n\n\\subsubsection{MNIST}consists of greyscale images of handwritten digits each of size $28\\times28\\times1$ ($60,000$ training and $10,000$ test). We used the same MNIST model that \\citeauthor{Wong2020} used for fast adversarial training. However, we doubled the number of filters for the convolutional layers, as we noticed that the performance of the model sometimes diverged to random guessing during training. The optimal maximum learning rate we found for MNIST was about $0.005$, which is in line with \\cite{Wong2020}. As in prior work, we used a maximum perturbation budget of $\\epsilon = 0.3$.\n\n\n\n\\subsubsection{Fashion-MNIST}consists of greyscale images of $10$ different types of clothing, each of size $28\\times28\\times1$ ($60,000$ training and $10,000$ test). The Fashion-MNIST classification task is slightly more complicated than MNIST, as it contains more intricate patterns. For Fashion-MNIST we used the same architecture as for MNIST. The optimal learning rate we found for Fashion-MNIST was approximately $0.007$. To the best of our knowledge there is no standard perturbation budget $\\epsilon$ commonly used for Fashion-MNIST. Since this dataset contains more complicated patterns than MNIST we used a lower maximum perturbation budget of $\\epsilon = 0.15$.\n\n\n\\subsubsection{CIFAR10}consists of color images, each of size $32\\times32\\times3$, with $10$ different labels ($50,000$ training and $10,000$ test). 
CIFAR10 is the most challenging classification task out of the three. For CIFAR10 we used the same PreActivationResNet18 \\cite{Kaiming2016} architecture as in \\cite{Wong2020}. All images from the CIFAR10 dataset were standardized and random cropping and horizontal flipping were used for data augmentation during training as in \\cite{Kaiming2016, Madry2018, Wong2020}. We found the optimal learning rate to be around $0.21$. Inline with previous work, we set the maximum perturbation budget to $\\epsilon = 8\/255$.\n\n\\subsection{Experiments on noise distributions and sampling}\n\n\\subsubsection{Noise distribution:}To combine \\ac{SNGD} with an adversarial attack we need to define the distribution $P^\\sigma$ from which we sample data points in the local neighborhood. In preliminary experiments we evaluated the performance for the Uniform, Gaussian and Laplacian distributions on the MNIST dataset. We analyzed the success rate for a wide range of distribution parameters but did not observe any considerable differences between the optimally tuned distributions. Since the Gaussian distribution achieved marginally superior results, we decided to use it for the remaining experiments.\n\nWe constrained the search space for the optimal standard deviation $\\sigma$ to $0< \\sigma <\\epsilon$ since the gradient information outside of the attack radius should be non-relevant for the optimization of the attack.\nTo rapidly find a good estimate for the standard deviation $\\sigma$ for each model and attack, we fine-tuned $\\sigma$ on a single batch of the validation set. We observed that for a wide range of $\\sigma$ values the performance of the PGD attack increases. This is exemplified in Figure \\ref{fig:success} for the MNIST dataset ($\\epsilon = 0.3)$.\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}{0.22\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/success.pdf}\n \\caption{Success Rate}\n \\label{fig:success}\n \\end{subfigure} \n \\begin{subfigure}{0.22\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/sampling.pdf}\n \\caption{Sampling}\n \\label{fig:sampling}\n \\end{subfigure}\n\\caption{The two plots show the success rate of SNGD-based PGD attacks (SN-PGD) on a single validation batch (512 samples) of the \\textbf{MNIST} dataset ($\\epsilon$ = 0.3). \\textbf{(A)} SN-PGD success rate for varying standard deviations $\\sigma$. All displayed attacks are performed with $100$ iterations, $5$ restarts and $N = 4$. For noise free attacks ($\\sigma = 0$) we made an exception and used $400$ iterations and $5$ restarts. \\textbf{(B)} SN-PGD success rate for different amounts of sampling operations $N$ and $\\sigma = 0.05$. Every attack was performed with $5$ restarts and the amount of attack iterations $I$ was set to $400\/N$.}\n\\label{fig:results}\n\\end{figure}\n\n\\subsubsection{Sampling:}In an additional experiment we evaluated the number of sampling operations $N$ that are the most effective for \\ac{SNGD}-based attacks. Each sampling operation $N$ increases the computational overhead of SNGD-based attacks but should in turn improve the calculated descent direction. Therefore, we evaluated the success rate for different $\\sigma$ values and number of sampling operations on the MNIST validation set. Figure \\ref{fig:sampling} illustrates that $4$ sampling operations were optimal to increase the success rate for MNIST. We set $N = 4$ for all datasets for the remaining experiments. 
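\n\nTo make the preceding setup concrete, the following minimal sketch (PyTorch-style pseudocode; the model, loss function and exact hyperparameter values are illustrative placeholders rather than our exact implementation) shows how the SNGD gradient of Eq. (\\ref{eq:gradient_averaging}) with uniform weights can be obtained with a single backward pass, and how it enters one SN-PGD step:\n\\begin{verbatim}\nimport torch\n\ndef sngd_gradient(model, loss_fn, x, y, sigma=0.05, n_samples=4):\n    # Average the loss over n_samples points drawn around x\n    # (Gaussian noise, standard deviation sigma). By linearity, one\n    # backward pass on the averaged loss yields the averaged gradient.\n    x = x.detach().requires_grad_(True)\n    total = 0.0\n    for _ in range(n_samples):\n        noisy = torch.clamp(x + sigma * torch.randn_like(x), 0.0, 1.0)\n        total = total + loss_fn(model(noisy), y)\n    (total \/ n_samples).backward()\n    return x.grad\n\ndef sn_pgd_step(model, loss_fn, x_adv, x_clean, y, alpha, eps,\n                sigma=0.05, n_samples=4):\n    # One SN-PGD iteration: ascend along the sign of the SNGD\n    # gradient, then project onto the L-infinity ball of radius eps.\n    grad = sngd_gradient(model, loss_fn, x_adv, y, sigma, n_samples)\n    x_adv = x_adv.detach() + alpha * grad.sign()\n    x_adv = torch.min(torch.max(x_adv, x_clean - eps), x_clean + eps)\n    return torch.clamp(x_adv, 0.0, 1.0)\n\\end{verbatim}\nIn our experiments the noisy forward passes are evaluated sequentially as in this sketch; with sufficient memory they can also be batched, which makes the wall-clock cost per iteration comparable to standard PGD.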
\n\n\\section{Results and Discussion} \\label{Sec:Results}\n\n\\subsection{Success Rate} Table \\ref{tab:first_results} demonstrates that the proposed SN-PGD attack surpasses the mean success rate of prior attacks in our experiments. Nevertheless, for the TRADES model trained on the CIFAR10 dataset the B\\&B attack was more effective than our approach. In contrast to the results reported in \\cite{Brendel19} the B\\&B attack did not always outperform the standard PGD attack for the adversarially trained models. We could also not see a consistent increase in performance when combining SN-PGD or PGD and Nesterov momentum denoted by SN-N-PGD and N-PGD. N-PGD outperformed PGD in $3$ out of $8$ cases and SN-N-PGD outperformed SN-PGD in $1$ out of $8$ cases. We additionally analyzed the performance for individual runs as the standard deviation was high in some cases (e.g., F-MNIST). We observed that in general, the individual models were either more robust or vulnerable against all attacks. For all experiments where SN-PGD showed the highest mean success rate it also showed the highest success rate for all individual models. An exception was a single individual run (CIFAR10, PGD training), in which it was surpassed by the B\\&B attack.\n\n\\begin{table*}\n \\centering\n \\begin{tabular}{lcccccccc}\n \\toprule\n & Clean & FGSM & PGD & N-PGD &B\\&B & SN-PGD & SN-N-PGD & SPSA \\\\\n \\midrule\n \\textbf{MNIST} \\\\\n RFGSM & 99.2 \\textsubscript{\\textpm 0} & 96.4 \\textsubscript{\\textpm 2} & 88.8 \\textsubscript{\\textpm 3} & 88.6 \\textsubscript{\\textpm 4} & 87.0 \\textsubscript{\\textpm 3} & \\textbf{86.4}\\textsubscript{\\textpm 2} & 88.8 \\textsubscript{\\textpm 3} & 92.8\\textsubscript{\\textpm 3}\\\\\n PGD & 99.0 \\textsubscript{\\textpm 0} & 97.0 \\textsubscript{\\textpm 2} & 92.4 \\textsubscript{\\textpm 2} & 93.6 \\textsubscript{\\textpm 2} & 91.0 \\textsubscript{\\textpm 4} & \\textbf{90.8\\textsubscript{\\textpm 1}} & 91.2 \\textsubscript{\\textpm 2} & 95.0 \\textsubscript{\\textpm 3} \\\\\n TRADES & 99.5 & 96.2 & 91.2 & 91.6 & 90.6 & \\textbf{90.4} & 91.3 & 92.2 \\\\\n \\midrule\n \\textbf{F-MNIST} \\\\\n RFGSM & 85.4 \\textsubscript{\\textpm 1} & 74.8 \\textsubscript{\\textpm 4} & 60.4 \\textsubscript{\\textpm 8} & 61.6 \\textsubscript{\\textpm 8} & 60.6 \\textsubscript{\\textpm 9} & \\textbf{58.8}\\textsubscript{\\textpm 8} & 61.0 \\textsubscript{\\textpm 9} & 66.6\\textsubscript{\\textpm 8}\\\\\n PGD & 85.7 \\textsubscript{\\textpm 0} & 83.2 \\textsubscript{\\textpm 5} & 70.0 \\textsubscript{\\textpm 7} & 69.2 \\textsubscript{\\textpm 7} & 70.8 \\textsubscript{\\textpm 7} & 68.8 \\textsubscript{\\textpm 6} & \\textbf{68.6} \\textsubscript{\\textpm 6} & 74.2 \\textsubscript{\\textpm 6} \\\\\n \\midrule\n \\textbf{CIFAR10} \\\\\n RFGSM & 83.6 \\textsubscript{\\textpm 0} & 54.0 \\textsubscript{\\textpm 7} & 43.4 \\textsubscript{\\textpm 6} & 44.0 \\textsubscript{\\textpm 4} & 45.8 \\textsubscript{\\textpm 5} & \\textbf{43.0} \\textsubscript{\\textpm 6} & 44.6\\textsubscript{\\textpm 5} & 49.1\\textsubscript{\\textpm 6}\\\\\n PGD & 79.7 \\textsubscript{\\textpm 0} & 55.0 \\textsubscript{\\textpm 3} & 48.4 \\textsubscript{\\textpm 2} & 49.6 \\textsubscript{\\textpm 2} & 49.0 \\textsubscript{\\textpm 3} &\\textbf{48.2} \\textsubscript{\\textpm 2} & 49.2\\textsubscript{\\textpm 3} & 50.5 \\textsubscript{\\textpm 4} \\\\\n TRADES & 84.9 & 63.2 & 59.3 & 59.2 & \\textbf{58.3} & 58.5 & 59.0 & 60.2 \\\\\n \\end{tabular}\n \\caption{Mean accuracy and standard deviation ($\\%$) on 
\\textbf{MNIST}, \\textbf{Fashion-MNIST} and \\textbf{CIFAR10} for various adversarial attacks. The lower the accuracy the better (higher success rate). The attack with the highest success rate is displayed in bold for each row. RFGSM- and PGD-trained models were trained and evaluated five times.}\n \\label{tab:first_results}\n\\end{table*}\n\n\\subsection{Additional experiments}\n\nThe following section summarizes additional experiments: 1) combination of SNGD with ODI, 2) combination of SNGD with noise decay, 3) different approaches to weight the gradients sampled with SNGD, 4) a runtime comparison between different attacks, 5) a comparison of the ability of GD- and SNGD-based attacks to approximate the global descent direction, and 6) the effectiveness of SNGD on increasingly convex loss surfaces. \n\n\\subsubsection{1) Combination with ODI}We evaluated whether combining the different PGD-based attacks with \\acf{ODI} increases their success rate. From Table \\ref{tab:odi_results} we can see the accuracy and accuracy difference after combining the attacks with ODI. In $9$ out of $24$ cases the success rate improved, while it decreased in $4$ out of $24$. For the PGD-trained CIFAR10 model, initialization with ODI improved the SN-PGD attack the most, by $2.2\\%$. For the remaining experiments the performance was not changed considerably. This is in line with the original paper, where the success rate increased only marginally on the MNIST dataset and more substantially on CIFAR10 \\cite{tashiro2020}. Note that ODI was partly designed to increase the transferability of adversarial attacks to other models, which was not tested in this experiment.\n\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{lccc}\n \\toprule\n ODI & PGD & N-PGD & SN-PGD \\\\\n \\midrule\n \\textbf{MNIST} \\\\\n RFGSM & 88.6 (-0.2) & 88.4(-0.2) & \\textbf{86.8}(+0.4) \\\\\n PGD & 92.4(+0.0) & 93.3(-0.3) & \\textbf{90.8}(+0.0) \\\\\n TRADES & 91.2 (+0.0) & 91.4(-0.2) & \\textbf{90.3}(-0.1) \\\\\n \\midrule\n \\textbf{F-MNIST} \\\\\n RFGSM & 60.4(+0.0) & 61.4(-0.2) & \\textbf{59.2}(+0.4) \\\\\n PGD & 70.0(+0.0) & 69.2(+0.0) & \\textbf{68.8}(+0.0) \\\\\n \\midrule\n \\textbf{CIFAR10} \\\\\n RFGSM & 43.7(+0.3) & 44.4(+0.4) & \\textbf{43.2}(+0.2) \\\\\n PGD & 48.4(+0.0) & 49.6(+0.0) & \\textbf{46.0}(-2.2) \\\\\n TRADES & 57.5(-1.8) & 57.7(-1.5) & \\textbf{56.8}(-1.7) \\\\\n \\end{tabular}\n \\caption{Mean accuracy ($\\%$) on \\textbf{MNIST}, \\textbf{Fashion-MNIST} and \\textbf{CIFAR10} for various adversarial attacks with ODI. The performance difference relative to attacks without ODI is given in parentheses (negative values indicate an increase in attack success rate). The attack with the highest success rate (with or without ODI) is displayed in bold for each row. RFGSM- and PGD-trained models were trained and evaluated five times.}\n \\label{tab:odi_results}\n \n\\end{table}\n\n\\subsubsection{2) Noise decay}\\citeauthor{Madry2018} demonstrate that individual runs of a PGD attack converge to distinct optima with similar loss values. Based on this observation, they concluded that PGD might be an optimal first-order adversary. We noticed that SN-PGD converges to different loss values than PGD and further evaluated if different noise levels $\\sigma$ are optimal for different samples, as the characteristics of the loss surface are likely to change between different samples. Therefore, we evaluated if decaying the noise during a SN-PGD attack and between restarts improves the success rate. 
The idea is that decaying the noise during the attack should make the optimization more stable, as it is less likely that the algorithm alternates around a local optimum. A similar approach has also been proposed for the gradient sampling methodology \\cite{Burke05}. We compared three different decay schedules: 1) decaying the noise in each attack iteration (SN-PGD + ID), 2) decaying the noise at every attack restart (SN-PGD + RD), 3) decaying the noise at every iteration and restart (SN-PGD + IRD). The results are summarized in Table \\ref{tab:noise_decay_results}. Every schedule improved the performance on average. Note that we did not tune the hyperparameters for noise decay and divided the noise by $1.05$ at each iteration and by $2$ at each restart.\n\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{lcccc}\n \\toprule\n & SN-PGD & +ID & +RD & +IRD \\\\\n \\midrule\n \\textbf{MNIST} \\\\\n RFGSM & 86.4\\textsubscript{\\textpm 2} & \\textbf{85.8} \\textsubscript{\\textpm 2} & 86.0 \\textsubscript{\\textpm 3} & 86.2 \\textsubscript{\\textpm 3} \\\\\n PGD & 90.8\\textsubscript{\\textpm 1} & \\textbf{90.2} \\textsubscript{\\textpm 2} & \\textbf{90.2} \\textsubscript{\\textpm 2} & 90.4 \\textsubscript{\\textpm 2} \\\\\n TRADES & 90.4 & \\textbf{90.0} & 90.4 & 90.1 \\\\\n \\midrule\n \\textbf{F-MNIST} \\\\\n RFGSM & 58.8\\textsubscript{\\textpm 8} & 58.0 \\textsubscript{\\textpm 9} & 58.4 \\textsubscript{\\textpm 9} & \\textbf{57.8} \\textsubscript{\\textpm 9} \\\\\n PGD & 68.8 \\textsubscript{\\textpm 6} & \\textbf{67.4} \\textsubscript{\\textpm 6} & 68.0 \\textsubscript{\\textpm 6} & 67.6 \\textsubscript{\\textpm 6} \\\\\n \\midrule\n \\textbf{CIFAR10} \\\\\n RFGSM & 43.0 \\textsubscript{\\textpm 6} & \\textbf{42.8} \\textsubscript{\\textpm 3} & 43.0 \\textsubscript{\\textpm 5} & 43.0 \\textsubscript{\\textpm 5} \\\\\n PGD & 48.2 \\textsubscript{\\textpm 2} & 48.2 \\textsubscript{\\textpm 2} & 48.0 \\textsubscript{\\textpm 1} & \\textbf{47.8} \\textsubscript{\\textpm 2} \\\\\n TRADES & 58.5 & \\textbf{58.0} & 58.4 & \\textbf{58.0} \\\\\n \\end{tabular}\n \\caption{Mean accuracy and standard deviation ($\\%$) on \\textbf{MNIST}, \\textbf{Fashion-MNIST} and \\textbf{CIFAR10} for various \\ac{SNGD}-based adversarial attacks with different noise decay schedules. The lower the accuracy the better (higher success rate). The attack with the highest success rate is displayed in bold for each row. RFGSM- and PGD-trained models were trained and evaluated five times.}\n \\label{tab:noise_decay_results}\n \n\\end{table}\n\n\\subsubsection{3) Gradient weighting}The dimensionality of the optimization problem is high compared to the amount of sampling operations $N$. This makes it less likely that the SNGD algorithm behaves like it would in the limit of $N$. Thus, we explored heuristics to improve the convergence rate of the algorithm. Instead of computing the final gradient direction as the weighted average of the gradients of all samples, we weighted the gradients according to their relation to the gradient at the original data point. We tested three different measurements and their reciprocals as weights: 1) Cosine similarity, 2) Euclidean distance, 3) Scalar-product. In our experiment weighting with the cosine similarity was the most effective approach. This reduced the sensitivity of the method to the standard deviation $\\sigma$ without reducing the performance of the attack. However, we must know the individual gradients of each sample in order to weigh them. 
Thus, we cannot sum up the activation during the forward-passes and calculate only the average gradient, but rather have to do one backward-pass for each forward-pass. This introduces a considerable additional computational overhead. Overall, averaging the gradients was more effective in our experiments with respect to runtime and success rate. \n\n\\subsubsection{4) Runtime comparison}Table \\ref{tab:runtime} shows the runtime average and standard deviation for each attack over all experiments shown in Table \\ref{tab:first_results}. Due to the lack of gradient information, SPSA requires a high amount of model evaluations and takes by far the longest time to find adversarial examples. The B\\&B attack was also considerably slower than PGD for the same amount of model evaluations in our experiments. Note that in the beginning of each B\\&B attack, we need to find the decision boundary which introduces an additional computational overhead. The fastest attack was the proposed SN-PGD where we use the same amount of forward-passes as with the PGD attack but use less backward passes. We did not parallelize the sampling operations of SNGD in our experiments. The runtimes of the attacks were compared on a Nvidia Geforce GTX1080.\n\n\\begin{table}\n \\centering\n \\begin{tabular}{lccccccc}\n \\toprule\n PGD & SN-PGD & B\\&B & SPSA \\\\\n \\midrule\n 100\\% \\textsubscript{\\textpm 4.9\\%} & 75\\% \\textsubscript{\\textpm 1\\%} & 455\\% \\textsubscript{\\textpm 49\\%} & 2134\\% \\textsubscript{\\textpm 367\\%}\n \\end{tabular}\n \\caption{Mean relative runtime of several attacks compared to standard PGD on all datasets (\\textbf{MNIST}, \\textbf{Fashion-MNIST} and \\textbf{CIFAR10}) for various adversarial attacks.}\n \\label{tab:runtime}\n \n\\end{table}\n\n\\subsubsection{5) Approximation of the adversarial direction}To get a better understanding of the effectiveness of SNGD, we inspected if \\ac{SNGD}-based PGD attacks approximate the final direction of a successful adversarial attack more accurately. This was achieved by computing the cosine similarity between subsequent iterations during the attack. Figure \\ref{fig:cosinesim_subsequent} shows that for the correct noise values $\\sigma$ the cosine similarity between subsequent attack iterations increases. Thus, the final adversarial direction is estimated more accurately by the \\ac{SNGD}-based attack. Depending on the characteristics of the loss surface the standard deviation $\\sigma$ of a SNGD-based attack can be adjusted to avoid improper local optima.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth]{Images\/cosine_sub.pdf}\n\\caption{Average cosine similarity between subsequent gradients directions of a SN-PGD attack for varying standard deviations $\\sigma$. For specific $\\sigma$ values the cosine-similarity between subsequent steps increases.}\n\\label{fig:cosinesim_subsequent}\n\\end{figure}\n\n\\subsubsection{6) Loss surface}After observing that SNGD can improve the approximation of the global descent direction we examined if SNGD has a bigger impact on models with non-convex loss surfaces, where the global descent direction is hard to approximate for standard GD. 
Therefore we calculated an approximate visualization of the loss landscape by calculating the loss value along the direction of a successful adversarial perturbation ($g$) and a random orthogonal direction ($g^\\perp$) originating from a clean sample as exemplified in Figure \\ref{fig:loss_landscape}.\nIn contrast to prior work \\cite{Kurakin2018, Chan20} we found that the loss surface of the adversarial trained models (RFGSM and PGD) is often not increasing most rapidly towards the adversarial direction, which shows the non-convexity of the optimization problem. Furthermore, we noticed that in cases where the loss surface is less convex, the performance difference between PGD and SN-PGD increases. The sub-figures \\textbf{(A)}, \\textbf{(B)}, and \\textbf{(C)} show loss surfaces which are increasingly convex, simultaneously the performance difference between SN-PGD and PGD for these models decreases (\\textbf{(A)}:$4.1\\%$, \\textbf{(B)}: $1.2\\%$, \\textbf{(C)}: $0.1\\%$). Table \\ref{tab:first_results} shows that the difference between SN-PGD and PGD is higher for RFGSM-trained networks, where we generally observed that the loss surface is less convex. Note that this kind of loss surface visualization is limited and provides only an approximation of the true characteristics.\n\n\n\n\\begin{figure}[ht] \n \\centering\n \\begin{subfigure}{0.23\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/loss_landscape_fgsm_mnist.pdf}\n \\caption{FGSM MNIST}\n \\label{fig:loss_landscape_fgsm_mnist}\n \\end{subfigure} \n \\begin{subfigure}{0.23\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/loss_landscape_pgd_mnist.pdf}\n \\caption{PGD MNIST}\n \\label{fig:loss_landscape_pgd_mnist}\n \\end{subfigure} \n \\begin{subfigure}{0.23\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/loss_landscape_pgd_cifar.pdf}\n \\caption{PGD CIFAR10}\n \\label{fig:loss_landscape_cifar}\n \\end{subfigure} \n \\caption{Representative loss surfaces around a clean sample $X_i$ (red dot) for a fast FGSM-trained model \\textbf{(A)} and PGD-trained models \\textbf{(B)}, \\textbf{(C)}. We calculate the loss value for sample $x_{i} + \\epsilon_{1} \\cdot \\gamma + \\epsilon_{2} \\cdot \\gamma^\\perp$ where $\\gamma$ is the direction of a successful adversarial attack and $\\gamma^\\perp$ a random orthogonal direction.}\n \\label{fig:loss_landscape}\n\\end{figure}\n\n\\section{Conclusion}\n\nIn this paper, we propose \\acf{SNGD}, an easy to implement modification of gradient descent to improve its convergence for non-convex and noisy loss surfaces. Through our experiments on three different datasets, we demonstrate that this method can be effectively combined with state-of-the-art adversarial attacks to achieve higher success rates. Furthermore, we show that SNGD-based attacks are more query efficient than current state-of-the-art attacks. Although the method proved to be effective in our experiments, larger datasets like ImageNet \\cite{Deng2009} still need to be explored. Additionally, the performance of the attack on alternative defense methods can be tested. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nAfter the observation of GRB~990123 and its afterglow, it was realized\nthat Gamma-Ray Bursts (GRBs) are collimated and not isotropic\nexplosions. GRB~990123 was a most problematic event since it called\nfor more than a solar mass of energy entirely converted into\n$\\gamma$-ray photons (Kulkarni et al. 
1999) to explain its fluence\nunder the assumption of isotropic emission. It also had a peculiar\nafterglow, with a steepening of the decay rate approximately one day\nafter explosion (Castro-Tirado et al. 1999). This behavior had been\npredicted to be the signature of a collimated outflow (Rhoads 1997,\n1999). After correcting for beaming the implied energy release was\nreduced to $\\sim5\\times10^{51}$~erg (Bloom, Frail \\& Kulkarni 2003),\nwell below the energy crisis level of $10^{53}$~erg.\n\nGRB jets were initially postulated to be uniform, i.e., with constant\nproperties (energy flux, Lorentz factor, pressure) within a\nwell-defined opening angle $\\theta_j$ and nothing outside. Such jets\nwould produce a steepening afterglow light curve, with the break time\nscaling as $\\theta_j^{8\/3}\\,(E_{\\rm{iso}}\/n)^{1\/3}$ (Sari, Piran \\&\nHalpern 1999), where $E_{\\rm{iso}}$ is the isotropic equivalent energy\nof the outflow and $n$ the density of the ambient medium. Frail et\nal. (2001) and Panaitescu \\& Kumar (2001) discovered a remarkable\nanti-correlation between the jet break time and the isotropic\nequivalent energy release $E_{\\gamma,\\rm{iso}}$ in the prompt\nphase. Such a correlation, initially entirely empirical, can be\naccounted for if it is assumed that the same total energy is given to\nevery GRB but is channeled into jets with different opening angles. A\nnarrow jet would appear as a very bright GRB followed by an\nearly-breaking afterglow, while a wide jet would produce a weak GRB\nand an afterglow with a late break.\n\nRossi, Lazzati \\& Rees (2002) realized that the relation\n$E_{\\gamma,\\rm{iso}}\\,\\theta_j^2=$constant may reflect a universal\nangular distribution of jet energy rather than a distribution of\nopening angles among different jets (see also Lipunov, Postnov \\&\nProkhorov 2001; Zhang \\& \\mes\\ 2002). In this case one would write\n\\begin{equation}\n\\frac{dE(\\theta)}{d\\Omega} \\propto \\theta^{-2}.\n\\label{eq:stru}\n\\end{equation}\nRossi et al. also showed that such an energy pattern would produce an\nafterglow that reproduces the Frail et al. correlation and does not\nviolate other observations. Different observed properties would\nreflect different observing geometries rather than different intrinsic\njet properties. Several other jet morphologies have since been\nproposed and studied, specifically general power-law profiles and\nGaussian profiles (Granot \\& Kumar 2003; Salmonson 2003, Rossi et\nal. 2004; Zhang et al. 2004a). Despite this theoretical interest,\nresearch on structured jets has focused on their late-time dynamics\n(Kumar \\& Granot 2003) and observable properties (Perna, Sari \\& Frail\n2003; Nakar, Granot \\& Guetta 2004) and not on the origin of the\nenergy distribution. The only exception is the electromagnetic\nforce-free model, in which a structured jet is predicted naturally\n(Lyutikov, Pariev \\& Blandford 2003).\n\nSome degree of structure in GRB jets is implied by their connection to\nsupernova (SN) explosions (Galama et al. 1998; Stanek et al. 2003;\nHjorth et al. 2003; Malesani et al. 2004). The shear forces and mixing\nof the cold stellar material with the outflowing relativistic plasma\nare expected to create an interface or hollow slower jet around the\nhyper-relativistic flow (MacFadyen \\& Woosley 1999; MacFadyen et\nal. 2001; Proga et al. 2003). The resulting energy distributions of\nthe highly relativistic material are, however, far from a power-law\n(Zhang, Woosley \\& Heger 2004b). 
If one includes the\ntrans-relativistic outflow energy a power-law profile is obtained, but\nwith the steep index $-3$.\n\nIn this letter we propose a mechanism to create structured jets with\n$dE\/d\\Omega \\propto \\theta^{-2}$ as a result of the time evolution of\nthe opening angle of an hydrodynamic jet. This mechanism relies on the\nnature of the jet propagation through the star but is more robust and\npredictable than the poorly understood shear effects and mixing\ninstabilities. We show that, under a simple ansatz (constant\nluminosity engine) this mechanism yields a beam pattern following the\nprescription of eq.~\\ref{eq:stru} under wide conditions. This letter\nis organized as follows: in \\S~2 we describe the star, cocoon and jet\nconditions at breakout; in \\S~3 we compute the jet evolution during\nthe release of the cocoon material and the resulting jet structure at\nlarge radii. We discuss our results and their implications in \\S~4 and\nwe summarize our findings in \\S~5.\n\n\\newpage\n\n\\section{The jet and its cocoon at breakout}\n\nConsider a high entropy mildly relativistic jet ($\\Gamma_0 \\gtrsim 1$)\ngenerated at some small radius $r_0\\approx10^7-10^8$~cm in the core of\na massive star with an opening angle $\\theta_0$. The jet tries to\naccelerate under the pressure of internal forces. We assume that the\ninitial jet has enough momentum to define a direction of propagation\nand avoid a spherical explosion of the star. Such initial conditions\nare analogous to those adopted in numerical simulations (Zhang et\nal. 2003, 2004b). In the absence of any external material, it would\naccelerate in a broad conical outflow, with its Lorentz factor scaling\nwith radius as $\\Gamma\\propto{}r\/r_0$.\n\nIf the jet is surrounded by dense cold matter, as in the\ncollapsar\/hypernova scenario (Woosley 1993; MacFadyen \\& Woosley\n1999), such acceleration is inhibited. Once the jet reaches supersonic\nspeeds with respect to the stellar material, the ram pressure of the\nshocked gas ahead of the jet drives a reverse shock into the head of\nthe jet, slowing the jet head down to subrelativistic speeds. The\nmixture of shocked jet and shocked stellar material surrounds the\nadvancing jet to form a high pressure cocoon, analogous to the cocoon\nformed by a low-density jet from a radio galaxy advancing into the\nintergalactic medium (Scheuer 1974). As a result the jet dynamics is\ngoverned by three competing effects. First, the internal pressure\naccelerates the jet material, whose Lorentz factor scales as\n$\\Gamma\\propto(\\Sigma_j\/\\Sigma_0)^{1\/2}$ if the acceleration is\nisentropic, where $\\Sigma_j$ is the jet cross section of a single jet\nand $\\Sigma_0$ its value at $r_0$ (see, e.g., Beloborodov 2003,\n\\S~3.1). If the jet suffers internal dissipation as it propagates,\ne.g., from entrainment of cocoon material or internal shocks, then the\nincrease of $\\Gamma$ with $\\Sigma_j$ is slower. This can be caused,\ne.g., by shear instabilities at the jet-cocoon boundary. Second, the\nhead of the jet is slowed down by its interaction with the massive\nstar. The larger the jet cross-section, the slower its head\npropagates. Both shocked jet and stellar material flow to the sides\nfeeding the cocoon. Third, the cocoon pressure acts as a collimating\nforce on the jet, which therefore becomes narrower and more\npenetrating, albeit less relativistic.\n\nMatzner (2003) developed a simple analytic treatment to follow this\npropagation. 
Using the Kompaneets approximation (Kompaneets 1960) to\ndescribe the cocoon expansion, he was able to compute the jet\npropagation time within the star and the cocoon properties at the\nmoment the jet head reaches the stellar surface --- the breakout\ntime, $t_{\\rm br}$. For our purposes we need merely to compare the\nwidth of the cocoon at breakout with the width of the jet.\n\nConsider a jet with luminosity $L_j$ that reaches the stellar surface\nat $t_{\\rm{br}}$. The energy stored in the cocoon (neglecting\nadiabatic losses) is $E_c = L_j\\,(t_{\\rm{br}}-{r_\\star}\/{c})$, where\n$r_\\star$ is the radius of the star. About half of the cocoon energy\nis transferred to the nonrelativistic shocked stellar plasma via the\ncocoon shock, with the other half remaining in relativistic, shocked\njet material. If we adopt a relativistic equation of state for the\nentire cocoon, we can write the cocoon pressure as\n\\begin{equation}\np_c \\approx \\frac{E_c}{3\\,V_c}\n\\label{eq:pc}\n\\end{equation}\nwhere $V_c\\approx{}r_\\star\\,r_\\perp^2$ is the volume and\n$r_\\perp=v_{\\rm{sh}}\\,t_{\\rm{br}}$ the transverse radius of the cocoon\n(we assume that the jet occupies a negligible fraction of the cocoon\nvolume.) The velocity of the shock $v_{\\rm{sh}}$ driven by the cocoon\npressure into the star can be computed by balancing the cocoon\npressure against the ram pressure exerted by the stellar material\n(which has negligible internal pressure):\n$v_{\\rm{sh}}=\\sqrt{p_c\/\\rho_\\star}$, where $\\rho_\\star$ is the matter\ndensity of the star. This set of relations can be used with\neq.~\\ref{eq:pc} to obtain an equation for the cocoon pressure that\nreads (see also Begelman and Cioffi 1986 for a similar treatment in\nthe context of extragalactic radio sources):\n\\begin{equation}\np_c=\\left[\\frac{L_j\\rho_\\star}\n{3\\,r_\\star\\,t_{\\rm{br}}^2}\\left(t_{\\rm{br}}-\\frac{r_\\star}{c}\\right)\n\\right]^{1\/2}\n\\simeq\\left(\\frac{L_j\\,\\rho_\\star}{3\\,r_\\star\\,t_{\\rm{br}}}\\right)^{1\/2} .\n\\label{eq:pc2}\n\\end{equation}\nWe assume that the head of the jet propagates subrelativistically\nthrough the star and write $t_{\\rm br} \\equiv \\eta r_\\star\/c$, where\n$\\eta >1$. Then, adopting the notation $Q=10^x\\,Q_x$ and using cgs\nunits throughout, we have\n\\begin{equation}\np_c\\simeq 3\\times10^{19} \\eta^{-1\/2} \\left(\n\\frac{L_{j,51}\\,\\rho_{\\star}}{r_{\\star,11}^2} \\right)^{1\/2} .\n\\end{equation}\nThe opening angle of the cocoon is given by \n\\begin{equation}\n\\theta_c\\simeq\\frac{r_\\perp}{r_\\star}\\simeq\n\\frac{\\sqrt{p_c\/\\rho_\\star}\\,t_{\\rm{br}}}{r_\\star}\n\\simeq 0.2 \\eta^{3\/4} \n\\left(\\frac{L_{j,51}}{r_{\\star,11}^2\\,\\rho_\\star}\\right)^{1\/4} ,\n\\label{eq:thc}\n\\end{equation}\ni.e., of the order of tens of degrees.\n\nNow let us consider the properties of the jet as it reaches the\nbreakout radius. The jet pressure is given by $p_j = L_j\/(4 c \\Sigma_j\n\\Gamma_j^2)$, where $\\Gamma_j$ is the Lorentz factor of the jet\nmaterial before it is shocked at the jet head. 
Setting $p_j = p_c$\n(Begelman \\& Cioffi 1989; Kaiser \\& Alexnder 1997) with $\\Sigma_j =\n\\pi r_\\star^2 \\theta_j^2$, we find\n\\begin{equation}\n\\theta_{j,{\\rm br}} \\simeq 0.1 \\eta^{1\/4} \\left(\\frac{L_{j,51}}\n{r_{\\star,11}^2 \\rho_{\\star}}\\right)^{1\/4} \\Gamma_{j,{\\rm br}}^{-1} .\n\\label{eq:thj}\n\\end{equation}\nFor isentropic flow between $r_0$ and $r_\\star$ the confined jet\nreaches a Lorentz factor\n\\begin{equation}\n\\Gamma_{j,{\\rm br}}\\simeq 14 \\eta^{1\/8}\n\\Gamma_0\\left(\\frac{r_{\\star,11}}{r_{0,8}} \\right)^{1\/2}\n\\left(\\frac{\\theta_0}{30^\\circ} \\right)^{-1\/2}\n\\left(\\frac{L_{j,51}}{r_{\\star,11}^2 \\rho_{\\star}}\\right)^{1\/8} ,\n\\end{equation}\nwith somewhat lower values if the jet is dissipative. The\ncorresponding jet opening angle at breakout is $\\la 1^\\circ$; however,\nthe fact that $\\theta_{j,{\\rm br}}\\Gamma_{j,{\\rm br}} < 1$ suggests\nthat the jet should freely expand to a few times $\\theta_{j,{\\rm br}}$\nafter exiting the star. Even so, the jet at breakout is expected to be\nmore than an order of magnitude narrower than the cocoon. A more\ndetailed treatment of the evolution of the cocoon and its interaction\nwith the jet can be obtained by numerical integration of the set of\ndifferential equations that govern the flow. This allows us to take\ninto account the density gradient of the star and the shape of the\ncocoon. Results (Lazzati et al. 2005) do not differ substantially from\nthese analytic estimates.\n\nThese results may be altered if substantial dissipation due to\nrecollimation shocks and\/or shear instabilities takes place. We do not\nattempt to model these effects here. Recollimation shocks are however\nrelatively weak, being mostly oblique to the flow. As a matter of\nfact, these results are in qualitative agreement with results from\nnumerical simulations (Zhang, Woosley \\& MacFadyen 2003). Simulations\nshow, indeed, that isentropic conditions are not respected throughout\nthe propagation, since collimation shocks take place. These shocks\neffectively create a new nozzle at a larger radius $r>r_0$ which\nmodifies the scaling of the Lorentz factor with the jet cross\nsection. What is of relevance here is that no matter how wide the jet\nis initially (and it is likely to be poorly collimated, especially if\ninitial collimation is provided by the accretion disk), the jet that\nemerges from the star is narrow and highly collimated by the cocoon\npressure. This result is observed in all numerical simulations of\njets in collapsars (MacFadyen \\& Woosley 1999; MacFadyen, Woosley \\&\nHeger 2001; Zhang et al. 2003). The cocoon, on the other hand, covers\na wide solid angle, several tens of degrees across. In the next\nsection, we explore what happens after jet breakout.\n\n \n\\section{Jet breakout}\n\n\\begin{figure}\n\\psfig{file=unijet_f1.ps,width=\\columnwidth}\n\\caption{{Energy distribution for the afterglow\nphase for three instantaneous beam patterns (see inset). In all three\ncases a well defined $dE\/d\\Omega\\propto\\theta^{-2}$ section is clearly\nvisible. Only the edge of the jet and its core show marginal\ndifferences. The results shown are for a jet\/star with\n$L=10^{51}$~erg~s$^{-1}$, $T_{\\rm{GRB}}=40$~s and\n$r_\\star=10^{11}$~cm. Inset: Instantaneous beam patterns that reach\nthe surface of the star. 
The solid line shows a uniform jet, dashed\nline shows a Gaussian energy distribution, while the dash-dotted line\nshows an edge brightened (or hollow) jet.}\n\\label{fig:pat}}\n\\end{figure}\n\nWe now consider the jet development after the breakout, with the inner\nengine still active. At this stage, which has not been investigated in\nnumerical simulations so far, it is likely that dissipation will have\na lesser role, since the jet, as we shall see, is de-collimated rather\nthan recollimated.\n\nAs the jet reaches the stellar surface, it clears a channel for the\ncocoon. The cocoon material is therefore now free to expand out of the\nstar and its pressure drops. We assume that from this moment on the\nshock between the cocoon and the cold stellar material stalls, as a\nconsequence of the dropping cocoon pressure. This is equivalent to\nassuming a constant volume of the cocoon cavity inside the star. The\npressure drop for the relativistic cocoon can be derived through\n$dE_c=-\\epsilon_c\\,\\Sigma_c\\,c_s\\,dt$ where $\\Sigma_c$ is the area of\nthe free surface through which the cocoon material expands,\n$\\epsilon_c$ the cocoon energy density and $c_s=c\/\\sqrt{3}$ is the\nsound speed of the relativistic gas. Writing the cocoon volume as\n$V_c\\sim\\Sigma_c\\,r_\\star$, we can obtain the pressure evolution:\n\\begin{equation}\np_c=p_{c,{\\rm br}}\\,\\exp\\left({-\\frac{ct}{\\sqrt{3}\\,r_\\star}}\\right)\n\\end{equation}\nwhere $p_{c,{\\rm br}}$ is the cocoon pressure at the moment of shock\nbreakout.\n\nAs the cocoon pressure decreases, fresh jet material passing through\nthe cocoon is less tightly collimated. Under isentropic conditions the\njet Lorentz factor increases linearly with the opening angle and\npressure balance yields $\\theta_j \\propto p_c^{-1\/4}$, implying an\nexponentially increasing opening angle of the form\\footnote{Note that,\nif the jet is causally connected at breakout as suggested by\neq.~\\ref{eq:thj}, the jet would freely expand to an angle\n$\\theta_j=1\/\\Gamma_{j,{\\rm{br}}}>\\theta_{j,{\\rm{br}}}$ of the order of\na few degrees. This would result in an initially constant opening\nangle. The only effect on the final energy distribution of\neq.~\\ref{eq:wow} is to increase the size of the jet core from\n$\\theta_{j,{\\rm{br}}}$ to $1\/\\Gamma_{j,{\\rm{br}}}$.}\n\\begin{equation}\n\\theta_j=\\theta_{j, {\\rm br}}\\,\\exp\\left[\\frac{c\\,t}{4\\sqrt{3}\\,r_\\star}\n\\right] .\n\\label{eq:thopen}\n\\end{equation}\nDissipative jet propagation gives similar results (with merely a\ndifferent numerical coefficient $\\sim O(1)$ inside the exponential),\nprovided that $\\Gamma$ varies roughly as a power of $\\Sigma_j$. The\nangular distribution of integrated energy, as observed in the\nafterglow phase, is computed by integrating the instantaneous\nluminosity per unit solid angle from the moment the jet becomes\nvisible along a given line of sight ($t_{\\rm{l.o.s.}}$) until the end\nof the burst:\n\\begin{equation}\n\\frac{dE}{d\\Omega}=\\int_{t_{\\rm{l.o.s.}}}^{T_{\\rm{GRB}}}\n\\frac{dL}{d\\Omega}\\,dt \\simeq\\int_{t_{\\rm{l.o.s.}}}^{T_{\\rm{GRB}}}\n\\frac{L(t)}{\\pi\\,\\theta_j^2(t)}\\,dt .\n\\label{eq:struj0}\n\\end{equation}\n$t_{\\rm{l.o.s.}}$ is obtained by inverting eq.~\\ref{eq:thopen}. Such\nintegration is valid provided that the jet opening angle at time\n$T_{\\rm{GRB}}$ is smaller than the natural opening angle of the jet:\n$T_{\\rm{GRB}}<4\\sqrt{3}(r_\\star\/c)\\log(\\theta_0\/\\theta_{\\rm{br}})$. 
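\n\nAs a simple numerical check of this integration (a sketch only, using illustrative parameter values consistent with those quoted in this letter; it is not meant to replace the full calculation shown in Fig.~\\ref{fig:pat}), one can evaluate eq.~\\ref{eq:struj0} directly with the opening-angle law of eq.~\\ref{eq:thopen}:\n\\begin{verbatim}\nimport numpy as np\n\nc = 3.0e10               # speed of light [cm\/s]\nr_star = 1.0e11          # stellar radius [cm]\nL_jet = 1.0e51           # jet luminosity [erg\/s]\ntheta_br = np.radians(1.0)   # jet opening angle at breakout\nT_grb = 78.0             # engine duration [s], within the validity bound\n\ndef theta_j(t):\n    # exponential opening of the jet after breakout\n    return theta_br * np.exp(c * t \/ (4.0 * np.sqrt(3.0) * r_star))\n\nt = np.linspace(0.0, T_grb, 200000)\ndt = t[1] - t[0]\nthetas = np.radians(np.linspace(2.0, 8.0, 30))\ndE_dOmega = []\nfor th in thetas:\n    visible = theta_j(t) >= th      # jet seen only after it opens past th\n    dE_dOmega.append(np.sum(L_jet \/ (np.pi * theta_j(t[visible])**2)) * dt)\n\n# log-log slope is close to -2, steepening slightly as theta\n# approaches the final opening angle theta_j(T_grb)\nslope = np.polyfit(np.log(thetas), np.log(dE_dOmega), 1)[0]\nprint(slope)\n\\end{verbatim}\nThe recovered slope is close to $-2$ for $\\theta_{j,{\\rm{br}}}\\ll\\theta\\ll\\theta_j(T_{\\rm{GRB}})$, in agreement with the analytic result derived below.\n\n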
For\nthe fiducial numbers assumed ($r_\\star=10^{11}$~cm,\n$\\theta_{j,\\rm{br}}=1\\degr$ and $\\theta_0=30\\degr$) this corresponds to\n$\\sim100$~s comoving burst duration. Interesting effects should be\nexpected for longer bursts, as discussed in \\S~4. Assuming a jet with\nconstant luminosity $L$ and for all the lines of sight that satisfy\n$t_{\\rm br} < t_{\\rm{l.o.s.}} \\ll T_{\\rm GRB}$, eq.~\\ref{eq:struj0}\ngives the jet structure\n\\begin{equation}\n\\frac{dE}{d\\Omega} = \\frac{2\\sqrt{3}\\,L\\,r_\\star}\n{\\pi\\,c}\\,\\theta^{-2} \\qquad\\theta_{j,{\\rm{br}}}\\le\\theta\\le\\theta_0\n\\label{eq:wow}\n\\end{equation}\nand $dE\/d\\Omega\\sim$constant inside the core radius\n$\\theta_{j,{\\rm{br}}}$. This angular dependence, which characterizes\na ``structured jet'' or ``universal jet'' (Rossi et al. 2002, 2004;\nZhang \\& Meszaros 2002; Salmonson 2003; Lamb, Donaghy \\& Graziani\n2003), is of high theoretical interest. Jets with this beam pattern\nreproduce afterglow observations. If the jet is powered by fall-back\nof material from the star to the accretion disk, the mass accretion\nrate would be anti-correlated with the radius of the star, for a given\nstellar mass. This would produce a roughly constant value of\n$L_j{}r_\\star$ that would reproduce the so-called Frail relation\n(Frail et al. 2001; Panaitescu \\& Kumar 2001) as a purely\nviewing-angle phenomenon. In this case all jets would be alike, but\ndifferent observers would see them from different angles, deriving\ndifferent energetics.\n\nIn the above equations we have assumed for simplicity that the jet\nreaching the surface of the star is uniform. As shown by simulations\n(e.g. Zhang et al. 2003), it is more likely that a Gaussian jet\nemerges from the star. On the contrary, boundary layers may be\nproduced by the interaction of the jet with the collimating star,\nresulting in edge brightened jets (see the inset of\nFig.~\\ref{fig:pat}). It can be shown easily that the $\\theta^{-2}$\npattern does not depend on the assumption of\nuniformity. Fig.~\\ref{fig:pat} shows the integrated energy\ndistribution for uniform, Gaussian and hollow intrinsic jets. Even\nthough small differences are present at the edges (the jet core and\nthe outskirts), the general behavior is always\n$dE\/d\\Omega\\propto\\theta^{-2}$.\n\n\\section{Discussion}\n\nOur prediction that the jet opening angle evolves with time has two\nmajor, in principle testable, consequences. First, the brightness of a\ntypical GRB should tend to decrease with time, assuming a constant\nefficiency of gamma-ray production, since the photon flux at earth is\nproportional to the energy per unit solid angle. This behavior is\ncertainly seen in FRED (Fast Rise Exponential Decay) single pulsed\nevents, while it is harder to make a quantitative comparison in the\ncase of complex light curves. Some events, like the bright GRB~990123,\nhave multi-peaked lightcurves which show a clear fading at late\ntimes. On the other hand there exist rare events, such as GRB~980923,\nin which an increase of luminosity with time is observed. Lacking\nafterglow observations, it is hard to determine whether such events\nare peculiar.\n\nTo understand the second consequence of the model, consider an\nobserver at an angle $\\theta_{\\rm{obs}}>\\theta_{\\rm{br}}$ where\n$\\theta_{\\rm{br}}$ is the jet opening angle at breakout. Initially,\nthis observer lies outside the beaming cone and does not detect the\nGRB. 
Later, as the jet spreads beyond $\\theta_{\\rm{obs}}$, the\nobserver detects the burst. The beginning of the GRB emission is\ntherefore observer dependent. This causes a correlation between the\nburst duration and energetics: the longer the burst the larger its\ndetected energy output. A correlation between burst duration and\nfluence is detected in the BATSE sample but whether it is intrinsic or\ndue to systematic effects is still a matter of debate. Finally, if any\nisotropic emission should mark the jet breakout (MacFadyen \\& Woosley\n1999; Ramirez-Ruiz, MacFadyen \\& Lazzati 2002), such emission would not\nbe followed immediately by $\\gamma$-rays for observers lying outside\nof $\\theta_{\\rm{br}}$. This delay may explain the unusually long\ndelays between precursors and main emission detected in several BATSE\nGRBs (Lazzati 2005).\n\nSome complications can arise for very compact stars and\/or very\nlong-lived jets. In computing the energy profile (eq.~\\ref{eq:wow}) we\nhave implicitly assumed that the jet expands exponentially until it\ndies. This approximation obviously fails if the engine is long-lived,\nsince at some point the jet reaches its natural opening angle --\ndetermined, e.g., by the accretion disk geometry -- and the expansion\nstops. Alternatively for some combination of the parameters, the\ncocoon may occupy a small opening angle (eq.~\\ref{eq:thc}) . As the\njet expands, it eventually hits the cold star material and a different\nexpansion law, driven by the jet pressure onto the stellar material,\nsets in. Again, the expansion of the jet is slowed down and as a\nconsequence a shallower distribution of energy with off-axis angle is\nexpected. Such modifications to the $\\theta^{-2}$ law would manifest\nthemselves in the late stages of the afterglow evolution. The higher\nthan expected energy at large angles would flatten the light curve\ndecay or produce a bump in the afterglow. Such behavior has been\nclaimed in several radio afterglows (Frail et al. 2004; Panaitescu \\&\nKumar 2004), and could also explain the two breaks observed in the\nlight curve of GRB~030329\\footnote{Note that a simple two-component\njet cannot explain the fast variability observed in the optical\nlightcurve. It can only reproduce the overall behavior of the optical\nand radio lightcurves.}. If the outflow reaches its natural opening\nangle or crashes into the cold stellar material while still active,\nthe time-integrated jet can be described as the sum of a $\\theta^{-2}$\njet plus a roughly uniform jet with a larger opening angle --- in\nother words, a two-component jet (Berger et al. 2003). This behavior\nis tantalizing also if we consider the interaction from the point of\nview of the star. The expansion of the cocoon drives a shock wave into\nthe cold stellar material where it deposits approximately\n$10^{51}$~erg (Zhang et al. 2003; Lazzati \\& Begelman in\npreparation). If the cocoon is small and the jet comes into contact\nwith the star while still active and expanding, additional energy is\ngiven to the star. Complex afterglows should therefore be associated\nwith particularly energetic hypernov\\ae, such as SN2003dh.\n\n\\section{Summary and conclusions}\n\nIn this Letter we have analyzed the evolution of a GRB jet during its\ntransition from stellar confinement to free expansion. We focused\nmainly on the resulting angular structure of the outflow, and reached\ntwo main conclusions. 
First, the angular structure of the jet in the\nafterglow phase does not necessarily reflect the way in which the\nenergy is released by the inner engine. If the jet opening angle is\nnot constant during the $\\sim100$~s of the engine activity, the\nangular structure of the jet results from the integration of this\nevolution and has little to do with the pristine beam pattern.\nSecond, we analyzed the most probable evolution in the collapsar\nscenario. We find that the interaction of the jet with the surrounding\ncocoon causes the opening angle to increase exponentially with\ntime. This evolution, for a constant energy release from the inner\nengine and independently of the intrinsic beam pattern, produces an\nangular structure with $dE\/d\\Omega\\propto\\theta^{-2}$. Such an energy\ndistribution produces a broken power-law afterglow (Rossi et al. 2002,\n2004; Salmonson 2003, Kumar \\& Granot 2003; Granot \\& Kumar 2003). It\ncan also explain the so-called Frail relation (Frail et al. 2001;\nPanaitescu \\& Kumar 2001) if $L_j r_\\star\/c$ is roughly constant for\ndifferent GRBs. The origin of this energy distribution is here\nexplained for the first time in the context of hydrodynamical\nfireballs (see Lyutikov et al. 2003 for a discussion on the context of\nelectromagnetically dominated fireballs). We briefly discuss in \\S~4\nsome testable prediction of this model.\n\n\\bigskip\n\nWe thank Miguel Aloy, Gabriele Ghisellini and Andrew MacFadyen for\nuseful discussions. This work was supported in part by NSF grant\nAST-0307502 and NASA Astrophysical Theory Grant NAG5-12035.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\n\nLondon Interbank Offered Rate (LIBOR) was established in 1986 by\nthe British Banking Association (BBA), who defines LIBOR as ``the\nrate at which an individual Contributor Panel bank could borrow\nfunds, were it to do so by asking for and then accepting\ninter-bank offers in reasonable market size, just prior to 11:00\n[a.m.] London time''. Every London business day each bank within\nthe Contributor Panel (selected banks from BBA) makes a blind\nsubmission (each banker does not know what the quotes of the other\nBanks are) and a compiler (Thomson Reuters) averages the second\nand third quartiles. In other words, LIBOR is the trimmed average\nof the expected borrowing rates of leading banks. LIBOR rates are\npublished for several maturities and currencies.\n\nOver the time LIBOR became a fundamental interest rate with three\nmain characteristics: (i) it was viewed as an (intended) measure\nof the borrowing cost in the interbank market, (ii) before the\nfinancial crisis, it was interpreted as a risk free rate and (iii)\nit is a signal of global credit market conditions. Libor is\nenormously influential due to its use for the valuation of\nfinancial products worth trillions of dollars (\\cite{BISstat})\n\nThe way in which LIBOR is fixed is peculiar, because it does not\narise from actual transactions. It is not the result of the competing forces of supply and demand. There is a panel of banks selected\nby the BBA. Each of them should submit their best estimate\naccording to the following question: ``At what rate could you\nborrow funds, were you to do so by asking for and then accepting\ninter-bank offers in a reasonable market size just prior to 11\nam?'' (\\cite{BBATrent}). At some point, individual bank LIBOR\nsubmissions are often regarded as a proxy for the financial health\nof the submitting entity. 
Usually, an employee or group of\nemployees responsible for cash management in a bank is in charge\nof the daily submission to the BBA. They should base their submission\non the money market conditions for the bank, and should not be\ninfluenced by other bank divisions such as the derivatives trading\ndesks. A fair Libor could signal the state of the interbank money market, and central banks could act to alleviate frictions in it.\n\nUntil May 29, 2008, LIBOR was presumed to be a fairly honest estimate\nof the borrowing costs of prime banks. On that day,\n\\cite{MollenkampWhitehouse} published an article in the Wall\nStreet Journal casting doubt on the transparency of LIBOR's\nsetting, implying that published rates were lower than those\nimplied by credit default swaps (CDS). Investigations conducted by\nseveral market authorities such as the US Department of Justice, the\nEuropean Commission, and the Financial Services Authority\n(FSA)\\footnote{It is noteworthy that the Financial Services Act\n2012 renamed the FSA as the Financial Conduct Authority (FCA), raising the\nimportance of ``fair conduct'' in financial markets.} detected\ndata manipulation and imposed severe fines on banks involved in\nsuch illegal procedures.\n\nSeveral leading banks applied for leniency. Jurists often say\n\\textsl{``confessio est probatio probatissima''}, i.e., confession\nis the best proof. Therefore, we can accept that, at least, there\nwere some unfair individual submissions or, even worse, a\ncollusion attempt by a cartel of banks. This manipulation had two\nmain objectives. On the one hand, low submissions were intended to\ngive the market a signal of the bank's own good financial health.\nIf a bank steadily submits higher rates, this could indicate\nproblems in raising money, generating concerns regarding an\nunderlying solvency problem. On the other hand, some low\nsubmissions could be aimed at earning money from certain portfolio\npositions, whose assets are valued according to LIBOR.\n\nThe effect of an erroneous LIBOR extends beyond the financial\nmarkets. In addition to providing a biased interbank lending cost,\n\\cite{Stenfors} affirms that it corrupts a ``key variable in the\nfirst stage of the monetary transmission mechanism''.\n\nThe importance of a good pricing system is based on its\nusefulness for making decisions. As Hayek \\cite{Hayek45} affirmed, ``we\nmust look at the price system as such a mechanism for\ncommunicating information if we want to understand its real\nfunction''. If the price system is contaminated, but perceived as\npure, the effect could also reach the real economy, making it\ndifficult to find a way out of the financial crisis.\n\nThis rate-rigging scandal led economists to examine the\nevolution of LIBOR rates and compare it with other market rates.\n\\cite{TaylorWilliams2009} documented the decoupling of the LIBOR\nrate from other market rates such as the Overnight Indexed Swap\n(OIS), Effective Federal Funds (EFF), Certificates of Deposit\n(CDs), Credit Default Swaps (CDS), and Repo rates. They\nhypothesize that the divergent behavior was due\nto expectations of future interest rates and to the accompanying\ncounterparty risk. \\cite{SniderYoule} study individual quotes in\nthe LIBOR bank panel and corroborate the claim by\n\\cite{MollenkampWhitehouse} that LIBOR quotes in the US are not\nstrongly related to other bank borrowing cost proxies. In their\nmodel, the incentive for misreporting borrowing costs is profiting\nfrom a portfolio position. 
Consequently, the misreporting could\npoint upwards in one currency and downwards in another one,\ndepending on the portfolio exposure. The evidence of such\nbehavior is detected through the formation of a compact cluster of\nthe different panel banks' quotes around a given point.\n\\cite{AbrantesMetz2011} track daily LIBOR rates over the period\n1987 to 2008.\n\n\\vskip 3mm\n\nIn particular, this paper analyzes the empirical distribution of\nthe Second Digits (SDs) of the Libor interest rate, and compares\nit with the uniform and Benford's distributions. Taking into\naccount the whole period, the null hypothesis that the empirical\ndistribution follows either the uniform or Benford's\ndistribution cannot be rejected. However, if only the period after\nthe sub-prime crisis is taken into account, the null hypothesis is\nrejected. This result puts into question the ``aseptic'' setting\nof LIBOR. In a recent paper, Bariviera \\textit{et al.} \\cite{Bariviera2015}\nuncover strange changes in the information endowment of LIBOR time\nseries, as measured by two information theory quantifiers, namely\npermutation entropy and permutation statistical complexity. Their\nresults allow one to infer some degree of manipulation or, at least,\nchanges in the underlying stochastic process that governs the interest\nrate time series.\n\nAntitrust law enforcement is complex, because manipulation and fraud can be elegantly camouflaged. A\nstatistical procedure could hardly be accepted as legal proof in\na court of law. However, its use by surveillance authorities makes\nthe attempted manipulation more costly and more difficult to\nmaintain. Consequently, we view our proposal as a market watch\nmechanism that could make manipulation and\/or collusion attempts\nmore difficult in the future. Additionally, an efficient\noverseeing mechanism could increase the incentives to apply for\nleniency at earlier stages of the manipulation\n(\\cite{AbrantesSokol2012}).\n\n\nThe aim of this paper is to show that a forecasting method based\non the Maximum Entropy Principle (MaxEnt) is very useful not only to\nproduce accurate forecasts, but also to detect some anomalous\nsituations in time series. In particular, we claim that, in the\nabsence of data manipulation, forecast accuracy should be\napproximately the same at all times under examination. On the\ncontrary, manipulation would produce more predictable\nbehavior, increasing the predictive power of our model, which\nwe apply here to LIBOR and other UK interest rates.\n\n\nThis paper is organized as follows. Section \\ref{sec:ME}\ndescribes our methodology based on the Maximum Entropy method.\nSection \\ref{sec:data} describes the data used in the paper and\ndeals with the results obtained with the proposed methodology.\nFinally, section \\ref{sec:conclusions} draws the main conclusions.\n\n\n\n\\section{MaxEnt approach for predictions in time-series}\\label{sec:ME}\n\nIn a recent paper, Mart\\'in \\textit{et al. }\\cite{Martin2014} developed an information-theory-based\nmethod for time series prediction. Given its\noutstanding results in approaching the true dynamics underlying a\ngiven time series, we believe that it is a suitable method to\napply here. In order to make the paper self-contained, we review\nbelow the description of the method, taken from \\cite{Martin2014}.\n\n\\noindent The behavior of a dynamical system can be recorded as\na time series, i.e., 
a sequence of measurements $\\{ v(t_n), n=1, \\ldots , N\\}$ of an observable of the system at\ndiscrete times $t_n$, where $N$ is the length of the time series.\n\n\\noindent The Takens theorem of 1981 asserts that, for $T \\in \\mathbb{R}$, $T>0$, there exists a\nfunctional form of the type\n\\begin{equation} \\label{mapping}\nv(t+T)=F({\\bf v}(t)),\n\\end{equation}\nwhere\n\\begin{equation} \\label{vector v}\n{\\bf v}(t)=[v_1(t),v_2(t),\\ldots,v_d(t)],\n\\end{equation}\nand $v_i(t)=v(t-(i-1) \\Delta)$, for $i=1,\\ldots,d$. $\\Delta$ is the time lag and $d$ is the\nembedding dimension of the reconstruction. $T$ represents the \\emph{anticipation time} and it is of\nfundamental importance for a prediction model.\n\n\\noindent We will consider (as in \\cite{Martin2014} and references therein) a particular\nrepresentation for the mapping function of Eq. (\\ref{mapping}), expressing it, using Einstein's\nsummation notation, as an expansion of the form\n\n\\begin{eqnarray}\nF^*({\\bf v}(t))=&a_{0}+a_{i_1} v_{i_1}(t)~ +a_{i_1 i_2}v_{i_1}(t)~\nv_{i_2}(t)~ + ~a_{i_1 i_2 i_3}v_{i_1}(t)~ v_{i_2}(t)~v_{i_3}(t)~ +\n\\ldots \\label{Fasterisco}\n \\\\ \\nonumber & +a_{i_1 i_2... i_{n_p}}v_{i_1}(t)~\nv_{i_2}(t)~\\ldots v_{i_{n_p}}(t)~,\n\\end{eqnarray}\nwhere $1 \\le i_k \\le d $ with $1 \\le k \\le n_p$, and $n_p$ is an adequately chosen polynomial\ndegree used to series-expand the mapping $F^*$. The number of parameters in Eq.(\\ref{Fasterisco})\ncorresponding to the terms of degree $k$ depends on the embedding dimension and can be calculated\nusing combinations with repetitions,\n\\begin{equation} \\label{variaciok}\n\\left(\n\\begin{array}{c}\n d \\\\\n k\\\\\n\\end{array}\n\\right)^* = \\frac{(d+k-1)!}{k! (d-1)!}.\n\\end{equation}\n\n\\noindent Accordingly, the length of the vector of parameters $\\textbf{a}$, $N_c$, is\n\\begin{equation} \\label{Nc}\nN_c=\n \\sum_{k=0}^{n_p}\\left(\n\\begin{array}{c}\n d \\\\\n k\\\\\n\\end{array}\n\\right)^*,\n\\end{equation}\nwhere the $k=0$ term accounts for the constant coefficient $a_{0}$. For instance, for $d=4$ and\n$n_p=2$ this gives $N_c=1+4+10=15$.\n\n\\noindent The computations are made on the basis of a specific information supply, given by $M$\npoints of the series\n\\begin{equation}\\label{Ecuacion5}\n\\{{\\bf v}(t_n),v(t_n+T)\\},~~~n=1,\\ldots,M.\n\\end{equation}\n\n\\noindent Given the data set in Eq. (\\ref{Ecuacion5}), the parametric mapping in Eq.\n(\\ref{Fasterisco}) will be determined by the following condition,\n\n\\begin{equation}\\label{Ecuacion7}\nF^*({\\bf v}(t_n))=v(t_n+T)~~~~~~n=1,\\ldots,M ,\n\\end{equation}\nwhich can be expressed in matrix form as\n\\begin{equation}\\label{Ecuacion12}\n W \\textbf{a} =\\textbf{v}_T,\n\\end{equation}\nwhere $W$ is a matrix of size $ M \\times N_c$, whose $n$-th row is\n$[1,v_{i_1}(t_n), v_{i_1}(t_n) v_{i_2}(t_n),\\ldots , v_{i_1}(t_n) v_{i_2}(t_n) \\ldots v_{i_{n_p}}(t_n)] $\n(cf. Eq.(\\ref{Fasterisco})), and $(\\textbf{v}_T)_n=v(t_n+T)$, for $n=1,\\ldots,M$.\n\n\\noindent Shannon's entropy, defined for a discrete random variable, can be extended to the case in\nwhich the random variable under consideration is continuous. In order to infer coefficients which\nare consistent with the data, we shall assume that each set $\\textbf{a}$ is realized\nwith probability $P(\\textbf{a})$. 
Thus, a normalized probability distribution over the possible sets $\\textbf{a}$ is introduced,\n\\begin{equation}\\label{EcuacionP}\n \\int_{I} P(\\textbf{a}) \\ d\\textbf{a} =1 ,\n\\end{equation}\nwhere $d\\textbf{a}=da_1da_2 \\cdots da_{N_c}$ and $N_c$ is the number of parameters of the model.\n\n\\noindent The problem then becomes that of finding $P(\\textbf{a})$ subject to the requirement that\nthe associated entropy $H$ be maximized, since this is the best way of avoiding any bias. The\nexpectation value of $\\textbf{a}$ is defined by\n\\begin{equation}\\label{EcuacionEa}\n \\left\\langle \\textbf{a} \\right\\rangle = \\int_{I}{P(\\textbf{a}) \\textbf{a}\\ d\\textbf{a} }.\n\\end{equation}\n\\noindent Consider the continuous random variable $\\textbf{a}$ with probability density function\n$P(\\textbf{a})$ on $I=(-\\infty,\\infty)$. The entropy is given by\n\n\\begin{equation}\\label{EcuacionH}\n\\displaystyle H(\\textbf{a})=-\\int_{I}{P(\\textbf{a})\\ \\ln\\ P(\\textbf{a})\\ d\\textbf{a}},\n\\end{equation}\nwhenever it exists, and the relative entropy reads\n\n\\begin{equation}\\label{EcuacionHr}\n\\displaystyle H=-\\int_{I} P(\\textbf{a})\\ \\ln\\ \\frac {P(\\textbf{a})} {P_0(\\textbf{a})} \\ d\\textbf{a},\n\\end{equation}\nwhere $P_0(\\textbf{a})$ is an appropriately chosen a priori distribution.\n\n\\noindent This measure exhibits many of the properties of a discrete entropy but, unlike the\nentropy of a discrete random variable, the entropy of a continuous random variable may be\ninfinitely large, negative, or positive (Ash, 1965). We characterize, via the maximum entropy\nprinciple, the probability distribution $P(\\textbf{a})$, subject to the normalization constraint\nEq.(\\ref{EcuacionP}) and to the constraint Eq.(\\ref{Ecuacion12}) on the expectation\n$\\left\\langle \\textbf{a} \\right\\rangle$ of $\\textbf{a}$. 
\\noindent The method for solving this constrained optimization problem is to use Lagrange\nmultipliers for each of the operating constraints and to maximize the following functional with\nrespect to $P(\\textbf{a})$,\n\n\\begin{equation}\\label{EcuacionHrmultip}\n J=- \\left[\\int_{I} P(\\textbf{a}) \\ln \\frac {P(\\textbf{a})} {P_0(\\textbf{a})} d\\textbf{a} + \\lambda_0 \\left[\\int_{I} P(\\textbf{a}) d\\textbf{a}-1\\right] + {\\lambda}^t \\left[\\int_{I} W \\textbf{a}\\, P(\\textbf{a})\\, d\\textbf{a} - \\textbf{v}_T \\right] \\right],\n\\end{equation}\nwhere $\\lambda_0$ and $\\lambda$ are Lagrange multipliers associated, respectively, with the\nnormalization condition and with the constraints, Eq.(\\ref{EcuacionP}) and Eq.(\\ref{Ecuacion12}).\n\n\\noindent Taking the functional derivative with respect to ${P(\\textbf{a})}$ we get\n\\begin{equation}\\label{Ecuacion20}\n\\frac{ \\partial J}{\\partial P(\\textbf{a})} = \\ln \\left(\\frac{P(\\textbf{a})}{P_0(\\textbf{a})}\\right) +1 + \\lambda_0 + {\\lambda}^t W \\textbf{a} =0,\n\\end{equation}\nwhich implies that the maximum entropy distribution must have the form\n\\begin{equation}\\label{Ecuacion25}\n P(\\textbf{a})= \\exp\\left[-(1 + \\lambda_0 )\\right] \\exp\\left(-\\lambda^t W \\textbf{a}\\right) P_0(\\textbf{a}).\n\\end{equation}\n\n\\noindent If the a priori probability distribution $P_0(\\textbf{a})$ is chosen to be proportional\nto $\\exp( -\\frac{1}{2} \\textbf{a}^t [\\sigma^2]^{-1} \\textbf{a})$, where $\\sigma^2$ is the\ncovariance matrix, a Gaussian form for the probability distribution $P(\\textbf{a})$ is obtained,\nwith\n\\begin{equation}\\label{Ecuacion30}\n\\left\\langle \\textbf{a} \\right\\rangle = -\\sigma W^t \\lambda .\n\\end{equation}\n\n\\noindent Considering Eq.(\\ref{Ecuacion12}), the Lagrange multipliers $\\lambda$ can be eliminated:\n\\begin{equation}\\label{Ecuacion40}\n\\lambda= -\\sigma^{-1} (W W^t)^{-1} \\textbf{v}_T,\n\\end{equation}\nand one can thus write\n\\begin{equation}\\label{Ecuacion45}\n\\left\\langle \\textbf{a} \\right\\rangle = W^t (W W^t)^{-1} \\textbf{v}_T .\n\\end{equation}\n\\noindent The matrix $ W^t (W W^t)^{-1}$ is known as the Moore-Penrose pseudo-inverse of the matrix\n$W$ (see \\cite{Martin2014} and references therein). Consequently, this result shows that the\nmaximum entropy principle coincides with a least-squares criterion.\n\n\\noindent Once the pertinent parameter vector $\\textbf{a}$ is determined, it is used to predict\n{\\bf new} series values, $(\\widehat v(t_n+T))_{n=1,\\ldots, M_P}$, according to\n\\begin{equation} \\label{Prediccion}\n(\\widehat v(t_n+T))_{n=1,\\ldots, M_P}= \\widehat{ W} \\textbf{a},\n\\end{equation}\nwhere $\\widehat W$ is the matrix of size $ M_P \\times N_c $ (see Eq.(\\ref{Ecuacion12})), obtained\nusing $\\widehat v(t_n)$ values.\n\n\\section{Data and results}\\label{sec:data}\n\nWe analyze the Libor rate in pound sterling. The data span the period from 01\/01\/1999 until\n21\/10\/2008, with a total of 2560 datapoints. All data were retrieved from DataStream.\n\nIn this section we present the results obtained using the methodology proposed in Section\n\\ref{sec:ME}. We consider the embedding dimension $d=4$ and the polynomial degree $n_p=2$. The\nlength of the vector of parameters, according to Eq.~(\\ref{Nc}), is $N_c=15$.\n\nWe fit our model with $M=700$ datapoints, corresponding to approximately two and a half years\nbeginning on 01\/01\/1999. A schematic implementation of this fitting and forecasting procedure is\nsketched below. 
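\n\\noindent For illustration, the estimation of $\\textbf{a}$ via the Moore-Penrose pseudo-inverse of Eq.~(\\ref{Ecuacion45}) and the prediction step of Eq.~(\\ref{Prediccion}) can be sketched in a few lines of Python\/NumPy. This sketch is ours and not part of the original computation: the series \\texttt{v} (assumed to be a one-dimensional array with the daily rate), the anticipation time \\texttt{T} and all variable names are illustrative placeholders.\n\\begin{verbatim}\nimport numpy as np\nfrom itertools import combinations_with_replacement\n\ndef design_matrix(v, d, n_p, T, delta=1):\n    # rows: [1, v_i, v_i*v_j, ...] built from delay vectors; targets: v(t+T)\n    rows, targets = [], []\n    for n in range((d - 1) * delta, len(v) - T):\n        vec = [v[n - i * delta] for i in range(d)]   # v_i(t) = v(t-(i-1)*Delta)\n        feats = [1.0]\n        for k in range(1, n_p + 1):\n            feats += [np.prod(c) for c in\n                      combinations_with_replacement(vec, k)]\n        rows.append(feats)\n        targets.append(v[n + T])\n    return np.array(rows), np.array(targets)\n\n# d=4, n_p=2 -> N_c=15 parameters; fit on the first M points only\nd, n_p, T, M = 4, 2, 7, 700\nW, vT = design_matrix(v[:M], d, n_p, T)\na = np.linalg.pinv(W) @ vT            # <a> = W^t (W W^t)^{-1} v_T\nW_all, v_all = design_matrix(v, d, n_p, T)\nv_pred = W_all @ a                    # in-sample and out-of-sample predictions\n\\end{verbatim}\n\\noindent Here \\texttt{np.linalg.pinv} returns the Moore-Penrose pseudo-inverse, so the fit reduces\nto the least-squares solution discussed above; forecasts for other anticipation times are obtained\nby changing \\texttt{T}.\n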
Once the model's parameters were determined, we forecasted the rest of the time series, up to\n21\/10\/2008.\n\n\\begin{figure}[!ht]\n\\center\n\\includegraphics[width=16cm,height=10cm]{Figura1_29_mayo.eps}\n\\caption{Original and forecasted time series for different anticipation times}\n\\label{fig:T1}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\center\n\\includegraphics[width=16cm,height=10cm]{Figura2_29_mayo.eps}\n\\caption{Original and forecasted time series for different anticipation times}\n\\label{fig:T2}\n\\end{figure}\n\nIn Figures \\ref{fig:T1} and \\ref{fig:T2} the original and predicted time series are overlaid (blue\nand red refer to original and predicted values, respectively) for different anticipation times. The\ntime interval between the beginning of the time series and the vertical dashed lines corresponds to\nthe model interval, used to estimate the parameters. The remaining part corresponds to the\nout-of-sample forecasts.\n\nIn order to assess the robustness of our proposal, we produced forecasts for different anticipation\ntimes ($T=\\{7, 10, 13, 16\\}$ days).\n\nWe can observe in Figures \\ref{fig:T1} and \\ref{fig:T2} that, as expected, during the model\ninterval the original and the predicted time series are very close. This is a consequence of the\nfitting power of the model: as with any forecasting method, one tries to mimic the behavior of the\ntime series to be estimated. When we move into the (out-of-sample) prediction interval, we note\nthat during the first months our method behaves very well. We expect that, as economic theory\naffirms, competitive prices should behave randomly (\\cite{Samuelson65}). Consequently, if we assume\nthat the time series under study is generated by a memoryless stochastic process, accurate\nforecasts are not possible. Indeed, although the original time series changes, the predicted time\nseries is rather constant between 2002 and 2007. This is a consequence of the stochastic nature of\nthe original time series, and the prediction performance is accordingly very poor. In addition, the\ndistance between the original and the predicted series in this period increases monotonically with\nthe anticipation time, as expected. Surprisingly enough, beginning in 2007 our model starts to fit\nthe real data very well. The predicted time series moves \\textit{pari passu} with the original one,\neven during the large increases of 2008. A similar analysis can be done with reference to Figure\n\\ref{errorpor}, where we display the relative mean square error between the original and forecasted\ntime series, year by year (a sketch of this computation is given below).\n\nWhat could make the same model change its forecast accuracy in such a dramatic fashion? According\nto Wold's theorem (\\cite{Wold1954}), a time series can be separated into a deterministic part and a\nstochastic part. If the stochastic part dominates the behavior of the time series, forecasting is\nunsuccessful. This is what we can observe between 2002 and the end of 2006. On the contrary,\nbeginning in 2007, and until the end of 2008, prediction becomes very accurate. Given that the\nprediction model is the same for both periods, we conjecture that the time series is dominated by a\ndeterministic process in the latter of the two periods. Recalling the literature review of Section\n\\ref{sec:intro}, we can state that this result is an indirect proof of LIBOR manipulation. 
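\n\\noindent For completeness, the year-by-year relative mean square errors shown in Figure\n\\ref{errorpor} can be obtained from the forecasts above with a short routine of the following form.\nThe exact normalization used for the figure is not spelled out in the text, so the ratio below\n(mean squared error divided by the mean squared value of the original series, per calendar year)\nshould be read as one reasonable choice rather than as the definitive definition:\n\\begin{verbatim}\nimport numpy as np\n\ndef relative_mse_by_year(years, v_true, v_pred):\n    # years: integer calendar year of each observation, aligned with the\n    # (numpy) arrays of original and forecasted values\n    out = {}\n    for y in sorted(set(years)):\n        m = np.asarray(years) == y\n        out[y] = np.mean((v_true[m] - v_pred[m]) ** 2) \/ np.mean(v_true[m] ** 2)\n    return out\n\\end{verbatim}\n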
We emphasize that such\n``manipulation'' necessarily comprises the contamination of the\ntime series with a deterministic device, which was detected by the\nMaxEnt model.\n\n\n\\begin{figure}[!ht]\n\\center\n\\includegraphics[scale=.4]{relerror_junio.eps}\n\\caption{Relative mean square errors }\n\\label{errorpor}\n\\end{figure}\n\n\n\\section{Conclusions \\label{sec:conclusions}}\n\nIn this paper we present a novel prediction method based on the\nMaxEnt principle. Taking into account its previous performance\n(\\cite{Martin2014}), we believe it is suitable for the study of the\n``Libor Case''. We study Libor time series between 1999 until\n2009. Based on the prediction accuracy of our method, we are able\nto detect two distinctive regimes. The first one, extends between\n2002 and the end of 2006. In this period the time series behaves\nas predicted by standard economic theory, reflecting the random\ncharacter of prices in competitive environments. The prediction\npower is, consequently, poor. The second time-period spans 2007 and\n2008. In this period the time series changes its\nregime, moving to a more predictable one. We can safely think that\na deterministic device was introduce into the Libor setting. This\nsituation takes place at the time that what was called by the\nnewspapers as the ``Libor manipulation'' one. As a consequence,\nour paper is able to detect such manipulation, using exclusively\ndata from Libor time series. We would like to emphasize the\nrelevance of advanced statistical models in market's watch\nmechanisms. Our results could be of interest to surveillance\nauthorities, given the importance of fair market conditions in\nfree market economies.\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe Large Hadron Collider (LHC) at CERN confirmed the predictions of the Standard Model (SM) of particle physics by discovering the Brout-Englert-Higgs boson~\\cite{Aad:2012tfa,Chatrchyan:2012xdj} in 2012. However, until now, high energy searches did not discover any particles beyond the ones present in the SM. Therefore, great hopes of finding new physics (NP) rest on low energy precision physics where flavor experiments have accumulated intriguing hints for physics beyond the SM within the recent years, most prominently in $b\\to s\\ell^+\\ell^-$ data~\\cite{Aaij:2017vbb,Aaij:2019wad,Aaij:2020nrf}, $b\\to c\\tau\\nu$ transitions~\\cite{Lees:2012xj,Aaij:2017uff,Abdesselam:2019dgh} and the anomalous magnetic moment (AMM) of the muon ($a_\\mu=(g-2)_{\\mu}\/2$)~\\cite{Bennett:2006fi,Mohr:2015ccw,Abi:2021gix}. Interestingly, these hints for NP fall into a common pattern: they can be considered as signs of lepton flavor universality violation (LFUV)~\\footnote{Recently, it has been pointed out that also the Cabibbo Angle Anomaly can be interpreted as a sign of LFUV~\\cite{Coutinho:2019aiy,Crivellin:2020lzu}.}, which is respected by the SM gauge interactions and is only broken by the Higgs Yukawa couplings.\n\nAmong these anomalies, $a_\\mu$, which displays a $4.2\\,\\sigma$ deviation from the SM prediction~\\cite{Aoyama:2020ynm}, is most closely related to Higgs interactions as it is a chirality changing observable. I.e. it involves a chirality flip and therefore a violation of $SU(2)_L$ is required to obtain a non-zero contribution. 
Furthermore, the required NP effect to explain $a_\\mu$ is of the order of the electroweak (EW) SM contribution and TeV scale solutions need an enhancement mechanism, called chiral enhancement, to be able to account for the deviation (see e.g. Ref.~\\cite{Crivellin:2018qmi} for a recent discussion). Obviously, also $h\\to\\mu^+\\mu^-$ is a chirality changing process and any enhanced effect in $a_\\mu$ should also result in an enhanced effect here~\\footnote{Correlations between $a_\\mu$ and $h\\to\\mu^+\\mu^-$ were considered in the EFT in Ref.~\\cite{Feruglio:2018fxo} and in the context of vector-like leptons (see Ref.~\\cite{Crivellin:2020ebi} for a recent global analysis) in Ref.~\\cite{Kannike:2011ng,Dermisek:2013gta,Dermisek:2014cia,Crivellin:2018qmi}.}. Recently, both ATLAS and CMS measured $h\\to\\mu^+\\mu^-$, finding a signal strength w.r.t. the SM expectation of $1.2\\pm0.6$ \\cite{Aad:2020xfq} and $1.19^{+0.41+0.17}_{-0.39 -0.16}$ \\cite{CMS:2020eni}, respectively. \n\nThe mechanism of chiral enhancement, necessary to explain $a_\\mu$, has been well studied (see Ref.~\\cite{Crivellin:2018qmi} for a recent account). \nHere leptoquarks (LQs) are particularly interesting since they can give rise to an enhancement factor of {$m_t\/m_\\mu \\approx 1700$}~\\cite{Djouadi:1989md, Chakraverty:2001yg,Cheung:2001ip,Bauer:2015knc,Popov:2016fzr,Chen:2016dip,Biggio:2016wyy,Davidson:1993qk,Couture:1995he,Mahanta:2001yc,Queiroz:2014pra,ColuccioLeskow:2016dox,Chen:2017hir,Das:2016vkr,Crivellin:2017zlb,Cai:2017wry,Crivellin:2018qmi,Kowalska:2018ulj,Mandal:2019gff,Dorsner:2019itg,Crivellin:2019dwb,DelleRose:2020qak,Saad:2020ihm,Bigaran:2020jil,Dorsner:2020aaz}, allowing for a TeV scale explanation with perturbative couplings that are not in conflict with direct LHC searches. In fact, there are only two LQs, out of the 10 possible representations~\\cite{Buchmuller:1986zs}, that can yield this enhancement: the scalar LQ $SU(2)_L$ singlet ($S_1$) and the scalar LQ $SU(2)_L$ doublet ($S_2$) with hypercharge $-2\/3$ and $-7\/3$, respectively. In addition, there is the possibility that $S_1$ mixes with the $SU(2)_L$ triplet LQ $S_3$, where $S_1$ only couples to right-handed fermions~\\cite{Dorsner:2019itg}.\n\nFurthermore, LQs are also well motivated by the hints for LFUV in semi-leptonic $B$ decays, both in $b\\to s\\mu^+\\mu^-$~\\cite{Aaij:2017vbb,Aaij:2019wad,Aaij:2020nrf} and $b\\to c\\tau\\nu$ data~\\cite{Lees:2012xj,Aaij:2017uff,Abdesselam:2019dgh}, which deviate from the SM with up to $\\approx 6\\,\\sigma$~\\cite{Alguero:2019ptt,Aebischer:2019mlg,Ciuchini:2019usw,Arbey:2019duh} and $\\approx 3\\,\\sigma$~\\cite{Amhis:2019ckw,Murgui:2019czp,Shi:2019gxi,Blanke:2019qrx,Kumbhakar:2019avh}, respectively. 
Here possible solutions include again $S_1$~\\cite{Fajfer:2012jt, Deshpande:2012rr, Sakaki:2013bfa, Freytsis:2015qca, Bauer:2015knc, Li:2016vvp, Zhu:2016xdg, Popov:2016fzr, Deshpand:2016cpw, Becirevic:2016oho, Cai:2017wry, Buttazzo:2017ixm, Altmannshofer:2017poe, Kamali:2018fhr, Azatov:2018knx, Wei:2018vmk, Angelescu:2018tyl, Kim:2018oih, Crivellin:2019qnh, Yan:2019hpm}, $S_2$~\\cite{Tanaka:2012nw, Dorsner:2013tla, Sakaki:2013bfa, Sahoo:2015wya, Chen:2016dip, Dey:2017ede, Becirevic:2017jtw, Chauhan:2017ndd, Becirevic:2018afm, Popov:2019tyc} and $S_3$~\\cite{Fajfer:2015ycq, Varzielas:2015iva, Bhattacharya:2016mcc, Buttazzo:2017ixm, Barbieri:2015yvd, Kumar:2018kmr, deMedeirosVarzielas:2019lgb, Bernigaud:2019bfy}, where $S_1$ and $S_3$ together can provide a common explanation of the $B$ anomalies and the AMM of the muon~\\cite{Crivellin:2017zlb,Buttazzo:2017ixm,Marzocca:2018wcf, Bigaran:2019bqv,Crivellin:2019dwb}. We take this as a motivation to study these correlations for the LQs which can generate $m_t\/m_\\mu$ enhanced effects by considering three scenarios: 1) $S_1$ only, 2) $S_2$ only, 3) $S_1+S_3$ where $S_1$ only couples to right-handed fermions. Note that these are the only scenarios which can give rise to the desired $m_t\/m_\\mu$ enhanced effect.\n\n\n\\begin{figure}[t!]\n\t\\centering\n\t\\begin{overpic}[scale=.55,,tics=10]\n\t\t{higgs-decays_higgs.pdf}\n\t\t\\put(5,27){$h$}\n\t\t\\put(90,39){$\\mu$}\n\t\t\\put(90,6){$\\mu$}\n\t\t\\put(42,36){$t^{(c)}$}\n\t\t\\put(42,5){$t^{(c)}$}\n\t\t\\put(70,22){$S_{i}$}\n\t\\end{overpic}\n\t\\\\\n\t\\vspace{5mm}\n\t\\begin{overpic}[scale=.55,,tics=10]\n\t\t{LQ_on_shell_photon.pdf}\n\t\t\\put(5,45){$\\mu$}\n\t\t\\put(90,45){$\\mu$}\n\t\t\\put(57,5){$\\gamma$}\n\t\t\\put(48,48){$S_{i}$}\n\t\t\\put(27,24){$t^{(c)}$}\n\t\t\\put(66,24){$t^{(c)}$}\n\t\\end{overpic}\n\t\\caption{Sample Feynman diagrams which contribute to $h\\to\\mu^{+}\\mu^-$ (top) and the AMM of the muon (bottom). In addition, we have to include the diagrams where the Higgs and photon couple to the LQ, as well as self-energy diagrams.}\n\t\\label{FeynmanDiagrams}\n\\end{figure}\n\n\n\n\n\n\\section{Setup and Observables}\n\nThe most precise measurements of the anomalous magnetic moment (AMM) of the muon ($a_\\mu=(g-2)_{\\mu}\/2$) has been achieved by the E821 experiment at Brookhaven~\\cite{Bennett:2006fi,Mohr:2015ccw} and recently be the g-2 experiment at Fermilab~\\cite{Abi:2021gix}, which differs from the SM prediction by\n\\begin{equation}\n\\label{Delta_amu}\n\\delta a_\\mu=a_\\mu^{\\rm{exp}} - a_\\mu^{\\rm{SM}} = (251 \\pm 59) \\times 10^{-11} \\,,\n\\end{equation}\ncorresponding to a $4.2\\,\\sigma$ deviation~\\cite{Aoyama:2020ynm}\\footnote{This result is based on Refs.~\\cite{Aoyama:2012wk,Aoyama:2019ryr,Czarnecki:2002nt,Gnendiger:2013pva,Davier:2017zfy,Keshavarzi:2018mgv,Colangelo:2018mtw,Hoferichter:2019gzf,Davier:2019can,Keshavarzi:2019abf,Kurz:2014wya,Melnikov:2003xd,Masjuan:2017tvw,Colangelo:2017fiz,Hoferichter:2018kwz,Gerardin:2019vio,Bijnens:2019ghy,Colangelo:2019uex,Blum:2019ugy,Colangelo:2014qya}. The recent lattice result of the Budapest-Marseilles-Wuppertal collaboration (BMWc) for the hadronic vacuum polarization (HVP)~\\cite{Borsanyi:2020mff} on the other hand is not included. This result would render the SM prediction of $a_\\mu$ compatible with experiment. 
However, the BMWc results are in tension with the HVP determined from $e^+e^-\\to$ hadrons data~\\cite{Davier:2017zfy,Keshavarzi:2018mgv,Colangelo:2018mtw,Hoferichter:2019gzf,Davier:2019can,Keshavarzi:2019abf}. Furthermore, the HVP also enters the global EW fit~\\cite{Passera:2008jk}, whose (indirect) determination is below the BMWc result~\\cite{Haller:2018nnx}. Therefore, the BMWc determination of the HVP would increase the tension in the EW fit~\\cite{Crivellin:2020zul,Keshavarzi:2020bfy} and we opted for using the community consensus of Ref.~\\cite{Aoyama:2020ynm}.}. Therefore, it is very interesting to investigate if and how this discrepancy can be explained by physics beyond the SM.\n\n\n\\begin{table}\n\\begin{equation}\n\\renewcommand{\\arraystretch}{2}\n\\begin{tabular}{c|c|c}\n& $\\mathcal{G}_{\\text{SM}}$ & $\\mathcal{L}_{q\\ell}$\\\\\n\\hline\n$S_1$ & $\\bigg(3,1,-\\dfrac{2}{3}\\bigg)$ & $\\left(\\lambda_{fj}^{R}\\,\\bar{u}^c_f\\ell_{j}+\\lambda_{fj}^{L}\\,\\bar{Q}_{f}^{\\,c}i\\tau_{2}L_{j}\\right) S_{1}^{\\dagger}+\\text{h.c.}$\\\\\n$S_{2}$ & $\\bigg(3,2,\\dfrac{7}{3}\\bigg)$ & $\\gamma_{fj}^{RL}\\,\\bar{u}_{f}S_{2}^{T}i\\tau_{2}L_{j}+\\gamma_{fj}^{LR}\\,\\bar{Q}_f\\ell_j S_{2}+\\text{h.c.}$\\\\\n$S_{3}$ & $\\bigg(3,3,-\\dfrac{2}{3}\\bigg)$ & $\\kappa_{fj}^{}\\,\\bar{Q}^{\\,c}_{f}i\\tau_{2}\\left(\\tau\\cdot S_{3}\\right)^{\\dagger}L_{j}+\\text{h.c.}$\n\\end{tabular}\\nonumber\n\\end{equation}\n\\caption{Scalar LQ representations together with their couplings to quarks and leptons, generating the desired $m_t\/m_\\mu$ enhanced effect in the AMM of the muon. Here $\\mathcal{G}_{\\text{SM}}$ refers to the SM gauge group $SU(3)_c\\times SU(2)_L\\times U(1)_Y$, $L$ ($Q$) is the lepton (quark) $SU(2)_{L}$ doublet, $u$ ($\\ell$) the up-type quark (lepton) singlet and $c$ refers to charge conjugation. Furthermore, $j$ and $f$ are flavor indices and $\\tau_{k}$ the Pauli matrices.}\n\\label{LQrep}\n\\end{table}\n\nAs we motivated in the introduction, we will focus on the three scalar LQs $S_1$, $S_2$ and $S_3$ for explaining $a_\\mu$. These representations couple to fermions as given in Table~\\ref{LQrep}\\footnote{Note that ``pure'' LQs with couplings only to one quark and one lepton do not give rise to proton decays at any perturbative order. The reason for this is that di-quark couplings are necessary in order to break baryon and\/or lepton number which is otherwise an unbroken symmetry forbidding proton decay (see Ref.~\\cite{Dorsner:2012nq} for a recent detailed discussion).}. Since in the following we are only interested in muon couplings to third-generation quarks, we define\n$\\lambda _R^{} \\equiv \\lambda _{32}^R$, $\\lambda _L^{} \\equiv \\lambda _{32}^L$, $\\gamma _{LR}^{} \\equiv \\gamma _{32}^{LR}$, $\\gamma _{RL}^{} \\equiv \\gamma _{32}^{RL}$, $\\kappa = {\\kappa _{32}}$. \n\nIn addition to the gauge interactions, which are determined by the representation under the SM gauge group, LQs can couple to the SM Higgs~\\cite{Hirsch:1996qy}\n\\begin{align}\n{{\\cal L}_H} &= {Y_{13}}S_1^\\dag \\left( {{H^\\dag }\\left( {\\tau \\cdot{S_3}} \\right)H} \\right) + {\\rm{h}}{\\rm{.c}}{\\rm{.}} \\label{eq:LQ_mixing}\\\\\n&- {Y_{22}}{\\left( {Hi{\\tau _2}{S_2}} \\right)^\\dag }\\left( {Hi{\\tau _2}{S_2}} \\right) - \\sum\\limits_{k = 1}^3 ( m_k^2 + {Y_k}{H^\\dag }H)S_k^\\dag {S_k}\\,.\n\\nonumber\n\\end{align}\nHere $m_k^2$ are the $SU(2)_L$ invariant bilinear masses of the LQs. 
After $SU(2)_L$ breaking, the term $Y_{13}$ generates off-diagonal elements in the LQ mass matrices and one has to diagonalize them through unitary transformations in order to arrive at the physical basis. Therefore, non-zero values of $Y_{13}$ are necessary to generate $m_t\/m_\\mu$ enhanced effects in scenario 3). $Y_1$ and $Y_{2,22}$ are phenomenologically relevant for $h\\to\\mu^+\\mu^-$ in scenario 1) and 2), respectively, but not necessary for an $m_t\/m_\\mu$ enhancement.\n\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.54\\textwidth]{plot6.pdf}\n\t\\includegraphics[width=0.352\\textwidth]{plot7.pdf}\n\t\\caption{Correlations between the ${\\rm Br}[h\\to\\mu^+\\mu^-]$, normalized to its SM value, and the NP contribution in the AMM of the muon $\\delta a_\\mu$ for scenario 1) (left) and scenario 2) (right) with $m_{1,2}=1.5\\,$TeV. The predictions for different values of the LQ couplings to the Higgs are shown, where for scenario 1) $Y=Y_1$ while in scenario 2) $Y=Y_2+Y_{22}$. Even though the current ATLAS and CMS results are not yet constraining for these models, sizeable effects are predicted, which can be tested at future colliders. Furthermore, scenario 1) yields a constructive effect in $h\\to\\mu^+\\mu^-$ while the one in scenario 2) is destructive such that they can be clearly distinguished with increasing experimental precision. }\\label{S1S2}\n\\end{figure*}\n\nNow we can calculate the effects in $a_\\mu$ and $h\\to\\mu^+\\mu^-$~\\footnote{Correlations between the related modes $\\tau\\to\\mu\\gamma$ and $h\\to\\tau\\mu$ were studied in Refs.~\\cite{Dorsner:2015mja,Cheung:2015yga,Baek:2015mea} in the context of LQs} for which sample diagrams are shown in Fig.~\\ref{FeynmanDiagrams}. In both cases we have on-shell kinematics. For $a_\\mu$ the self-energies can simply be taken into account via the Lehmann-Symanzik-Zimmermann formalism and no renormalization is necessary. This is however required for $h\\to\\mu^+\\mu^-$ in order to express the result in terms of the physical muon mass. Here, the effective Yukawa coupling, which enters $h\\to\\mu^+\\mu^-$, is given by\n\\begin{equation}\nY_\\mu ^{{\\rm{eff}}} = \\frac{{m_\\mu ^{} - \\Sigma _{\\mu \\mu }^{LR}}}{v} + \\Lambda _{\\mu \\mu }^{LR}\\,,\n\\end{equation}\nwhere $\\Lambda _{\\mu \\mu }^{LR}$ is the genuine vertex correction shown in Fig.~\\ref{FeynmanDiagrams} and $\\Sigma _{\\mu \\mu }^{LR}$ is the chirality changing part of the muon self-energy. In these conventions $-i\\Sigma _{\\mu \\mu }^{LR}P_R$ equals the expression of the Feynman diagram for the self-energy. Note that $Y_\\mu ^{{\\rm{eff}}}$ is finite without introducing a counter-term. For $a_\\mu$ we expand in the muon mass and external momenta up to the first non-vanishing order, while in $h\\to\\mu^+\\mu^-$ external momenta can be set to zero from the outset but we expand in $m_h^2\/m_{1,2,3}^2$. 
The resulting amplitudes can be further simplified by expanding the LQ mixing matrices and mass eigenvalues in $v^2\/m_{1,2,3}^2$ and the loop functions in $m_h^2\/m_t^2$, which gives a very precise numerical approximation, resulting in\n\\begin{widetext}\n\t\\begin{align}\n\t\\dfrac{{{\\rm{Br}}\\left[ {h \\to \\mu^{+} \\mu^{-} } \\right]}}{{{\\rm{Br}}{{\\left[ {h \\to \\mu^{+} \\mu^{-} } \\right]}_{{\\rm{SM}}}}}} \\approx \\Bigg| \n\t1 + \\dfrac{m_t}{m_\\mu }\\dfrac{N_c}{8\\pi^2}\\bigg[ \\dfrac{\\lambda _R^*\\lambda_L}{m_1^2}\\left( \\dfrac{m_t^2}{8}\\mathcal{J}\\!\\left( {\\dfrac{{{m_h^2}}}{{m_t^2}},\\dfrac{{m_t^2}}{{m_1^2}}} \\right) + v^{2}Y_1 \\right)+v^{2}\\lambda_R^{*}\\kappa Y_{13} {\\dfrac{{\\log \\left( {m_3^2\/m_1^2} \\right)}}{{m_3^2-m_1^2}}}\\nonumber\\\\\n\t\\qquad\\qquad\\qquad+ \\dfrac{\\gamma_{LR}^*\\gamma _{RL}}{m_{2}^2}\\left( \\dfrac{m_t^2}{8}\\mathcal {J}\\!\\left( {\\dfrac{{{m_h^2}}}{{m_t^2}},\\dfrac{{m_t^2}}{{m_2^2}}} \\right) + v^{2}{(Y_2+ Y_{22})} \\right)\\bigg]\n\t\\Bigg|^2\\,,\\\\\n\t{\\delta a_\\mu }\\approx \\frac{{{m_\\mu }}}{{4{\\pi ^2}}}\\frac{{{N_c}{m_t}}}{{12}}{\\rm{Re}}\\left[ \n\t\\dfrac{\\gamma_{LR}^{}\\gamma _{RL}^*}{m_{2}^2} { {\\cal E}_{1}\\!\\left( {\\frac{{m_t^2}}{{m_2^2}}} \\right)} - \\frac{\\lambda_R}{m_{1}^2} \\left( {\\lambda _L^* {{\\cal E}_{2}\\!\\left( {\\frac{{m_t^2}}{{m_1^2}}} \\right)} + \\kappa {Y_{13}}\\frac{{{v^2}}}{{m_3^2}}{\\cal E}_{3}\\!\\left( {\\frac{{m_1^2}}{{m_3^2}}},\\frac{{m_t^2}}{{m_3^2}} \\right)} \\right)\\right]\\,,\n\\label{hmumuamuFormula}\n\t\\end{align}\n\\end{widetext}\nwith the loop functions given by\n\\begin{align}\n{\\cal J}\\left( x,y \\right) &= 2\\left( {x - 4} \\right)\\log (y) - 8 + \\frac{{13}}{3}x{\\mkern 1mu} \\,,\n\\end{align}\n\\begin{align}\n\\begin{aligned}\n{\\cal E}_{1}(x)&=1+4\\,\\log(x)\\,,\\;\\;\n{\\cal E}_{2}(x)=7+4\\,\\log(x)\\,,\\\\\n{\\cal E}_{3}( x,y ) &= {\\cal E}_{2}(y) + \\frac{4\\,{\\log (x)}}{{x - 1}} \\,.\n\\end{aligned}\n\\end{align}\nWe only considered the $m_t$ enhanced effects and neglected small CKM rotations, which in principle appear after EW symmetry breaking. As anticipated, in \\eq{hmumuamuFormula} one can see that scenario 3) only contributes if $Y_{13}$ is non-zero. Furthermore, since in this scenario $a_\\mu$ has a relative suppression of $v^2\/m_{1,3}^2$ with respect to $h\\to\\mu^+\\mu^-$, one expects here the largest effects in Higgs decays. In principle also $Y_1$, $Y_2$ and $Y_{22}$ enter in \\eq{hmumuamuFormula}. 
However, their effect is sub-leading as it is suppressed by $v^2\/m_{1,2}^2$.\n\n\\subsection{Effective Field Theory}\n\nIn the SM effective field theory (SMEFT), which is realized above the EW breaking scale and therefore explicitly $SU(2)_L$ invariant, there are only two chirality flipping 4-fermion operators~\\cite{Grzadkowski:2010es} which can give rise to $m_t$ enhanced effects in $a_\\mu$ and $h\\to\\mu^+\\mu^-$ via renormalization group evolution (RGE) effects:\n\\begin{align}\n\\begin{aligned}\nQ_{{\\rm{\\ell equ }}}^{(1)} &= \\left( {\\bar \\ell _2^a{e_2}} \\right){\\varepsilon _{ab}}\\left( {\\bar q_3^b{u_3}} \\right)\\,,\\\\\nQ_{{\\rm{\\ell equ }}}^{(3)} &= \\left( {\\bar \\ell _2^a{\\sigma _{\\mu \\nu }}{e_2}} \\right){\\varepsilon _{ab}}\\left( {\\bar q_3^b{\\sigma ^{\\mu \\nu }}{u_3}} \\right)\\,.\n\\end{aligned}\n\\end{align}\nImportantly, while both operators mix at order $\\alpha_{(s)}$ with each other, only the second operator mixes (directly) into the magnetic operators~\\cite{Jenkins:2013zja,Jenkins:2013wua,Alonso:2013hga}\n\\begin{equation}\n\\begin{aligned}\n{Q_{eB}} &= {{\\bar \\ell }_2}{\\sigma^{\\mu \\nu }}{e_2}H{B_{\\mu \\nu }}\\,,\\\\\n{Q_{eW}} &= {{\\bar \\ell }_2}{\\sigma^{\\mu \\nu }}{e_2}{\\tau ^I}HW_{\\mu \\nu }^I\\,,\n\\end{aligned}\n\\end{equation}\ngiving rise to the AMM of the muon after EW symmetry breaking\\footnote{Note that LQs are the only renormalizable extensions of the SM that can generate these operators at tree level~\\cite{deBlas:2017xtg}.}. Furthermore, as $Q_{{\\rm{\\ell equ }}}^{(1)}$ mixes into ${Q_{e\\varphi }} = {{H^\\dag }H} {{{\\bar \\ell }_2}{e_2}H}$ (generating modified Higgs couplings to muons), it is clear that a UV complete (or at least simplified) model is necessary to correlate $a_\\mu$ to $h\\to\\mu^+\\mu^-$.\n\nThe EFT approach is beneficial in our LQ setup since it allows for the inclusion of RGE effects, as recently done in Ref.~\\cite{Aebischer:2021uvt}. In a first step, the LQ model is matched onto the SMEFT (at the LQ scale), giving tree-level effects in $C_{{\\rm{\\ell equ }}}^{(1,3)}$~\\cite{Alonso:2015sja} and a loop effect in ${Q_{eB}}$ and ${Q_{eW}}$~\\cite{Gherardi:2020det}. Then the SMEFT is used to evolve the Wilson coefficients of these operators to the weak scale where the EW gauge bosons, the Higgs and the top quark are integrated out~\\cite{Crivellin:2013hpa,Dekens:2019ept,Hurth:2019ula}. Next, the magnetic operator of the muon is evolved to the muon scale~\\cite{Crivellin:2017rmk,Aebischer:2017gaw} where the AMM is measured. Ref.~\\cite{Aebischer:2021uvt} finds a reduction of $a_\\mu$ by $\\approx20\\%-30\\%$ compared to the leading-order estimate for LQ masses between $1$--$10\\,$TeV. Furthermore, as $C_{{\\rm{lequ }}}^{(1)}$ is enhanced by $\\approx5\\%-10\\%$ by the running from the LQ scale to the EW scale~\\cite{Aebischer:2018bkb}, this leads to an important enhancement of $50\\%-70\\%$ of the prediction for ${\\rm Br}[h\\to\\mu^+\\mu^-]$ w.r.t. the leading-order calculation. To be conservative, we will use $50\\%$ in our following phenomenological analysis.\n\n\\section{Phenomenology}\n\\label{pheno}\n\nLet us now study the correlations between $a_\\mu$ and $h\\to\\mu^+\\mu^-$ in our three scenarios with $m_t$-enhanced contributions. First, we consider scenarios 1) and 2), where $S_1$ and $S_2$ separately give rise to $m_t$-enhanced effects in $a_\\mu$ and $h\\to\\mu^+\\mu^-$; a schematic numerical evaluation of \\eq{hmumuamuFormula} for these two scenarios is sketched below. 
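\n\\noindent This sketch (Python\/NumPy) simply evaluates the leading-order expressions of \\eq{hmumuamuFormula}; it does not include the RGE improvement discussed above, and the input values and variable names are illustrative placeholders rather than part of the original analysis:\n\\begin{verbatim}\nimport numpy as np\n\nv, mt, mh, mmu, Nc = 246.0, 173.0, 125.0, 0.1057, 3   # GeV units\n\ndef J(x, y):  return 2*(x - 4)*np.log(y) - 8 + 13*x\/3\ndef E1(x):    return 1 + 4*np.log(x)\ndef E2(x):    return 7 + 4*np.log(x)\ndef E3(x, y): return E2(y) + 4*np.log(x)\/(x - 1)\n\ndef observables(lamL=0, lamR=0, gLR=0, gRL=0, kap=0, Y13=0,\n                Y1=0, Y2=0, Y22=0, m1=1500.0, m2=1500.0, m3=3000.0):\n    # leading-order delta a_mu and Br[h->mu mu]\/SM from the formulas above\n    damu = mmu\/(4*np.pi**2)*Nc*mt\/12*np.real(\n        gLR*np.conj(gRL)\/m2**2*E1(mt**2\/m2**2)\n        - lamR\/m1**2*(np.conj(lamL)*E2(mt**2\/m1**2)\n                      + kap*Y13*v**2\/m3**2*E3(m1**2\/m3**2, mt**2\/m3**2)))\n    amp = 1 + mt\/mmu*Nc\/(8*np.pi**2)*(\n        np.conj(lamR)*lamL\/m1**2*(mt**2\/8*J(mh**2\/mt**2, mt**2\/m1**2) + v**2*Y1)\n        + v**2*np.conj(lamR)*kap*Y13*np.log(m3**2\/m1**2)\/(m3**2 - m1**2)\n        + np.conj(gLR)*gRL\/m2**2*(mt**2\/8*J(mh**2\/mt**2, mt**2\/m2**2) + v**2*(Y2 + Y22)))\n    return damu, abs(amp)**2\n\n# scenario 1): lambda_L = lambda_R = 0.07 at m_1 = 1.5 TeV gives delta a_mu of\n# roughly 2.5e-9 together with a percent-level increase of Br[h -> mu mu]\nprint(observables(lamL=0.07, lamR=0.07))\n\\end{verbatim}\n\\noindent Such an evaluation reproduces the qualitative picture discussed in the following: percent-level, constructive (destructive) shifts in ${\\rm Br}[h\\to\\mu^+\\mu^-]$ for scenario 1) (scenario 2)) in the region of couplings preferred by $a_\\mu$.\n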
Since both processes involve the same product of couplings to SM fermions, the correlation depends only weakly via a logarithm on $m_t^2\/m_{1,2}^2$. However, there is a dependence on $Y_1$ and $Y_{22}+Y_2$ which breaks the direct correlation but cannot change the sign of the effect for order one couplings. This can be seen in Fig.~\\ref{S1S2}, where the correlations are depicted for $m_{1,2}=1.5$ TeV, respecting LHC bounds~\\cite{Sirunyan:2018ryt,Diaz:2017lit,Aaboud:2016qeg}. The predicted effect is not large enough such that the current ATLAS and CMS measurements are sensitive to it. However, note that it is still sizeable due to the $m_t$ enhancement and therefore detectable at future colliders where the ILC~\\cite{Behnke:2013lya}, the HL-LHC~\\cite{ApollinariG.:2017ojx}, the FCC-ee~\\cite{Abada:2019zxq}, CEPC~\\cite{An:2018dwb} or the FCC-hh~\\cite{Benedikt:2018csr} aim at a precision of approximately 10\\%, 8\\%, 6\\% and below 1\\%, respectively. Furthermore, the effect in ${\\rm Br}[h\\to\\mu^+\\mu^-]$ in scenario 1) is necessarily constructive while in scenario 2) it is destructive, such that in the future a LQ explanation of $a_\\mu$ by $S_1$ could be clearly distinguished from the one involving $S_2$. \n\nIn scenario 3), where $S_1$ only couples to right-handed fermions, the effect in ${\\rm Br}[h\\to\\mu^+\\mu^-]$ is even more pronounced due to the relative suppression of the contribution to $a_\\mu$ by $v^2\/m_{1,3}^2$, see \\eq{hmumuamuFormula}. Furthermore, in this case the correlation between $a_\\mu$ and $h\\to\\mu^+\\mu^-$ depends to a good approximation only on the ratio $m_1\/m_3$. As the effect is symmetric in $m_1$ and $m_3$ we fix one mass to $1.5$ TeV and obtain the band shown in Fig.~\\ref{S1S3Y13} by varying the other mass between $1.5$ and $3$ TeV. The effect in $h\\to\\mu^+\\mu^-$ within the preferred region for $a_\\mu$ is necessarily constructive and large enough that an explanation of the central value of $a_\\mu$ is already disfavored by the ATLAS and CMS measurements of $h\\to\\mu^+\\mu^-$. Clearly, with more data the LHC will be able to support (disprove) this scenario if it finds a (no) significant enhancement of the $h\\to\\mu^+\\mu^-$ decay, assuming $\\delta a_\\mu$ is confirmed. This scenario also leads to sizeable effects in $Z\\mu\\mu$~\\cite{Dorsner:2019itg} which are compatible with LEP data~\\cite{ALEPH:2005ab}, but could be observed at the ILC~\\cite{Behnke:2013lya}, CLIC~\\cite{Aicheler:2012bya} or the FCC-ee~\\cite{Abada:2019zxq}. \n\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.52\\textwidth]{plot5.pdf}\n\t\\caption{Correlations between the NP contribution to the AMM of the muon ($\\delta a_\\mu$) and ${\\rm Br}[h\\to\\mu^+\\mu^-]$, normalized to its SM value in scenario 3). This correlation depends to a good approximation only on the ratio $m_1\/m_3$. As the effect is symmetric in $m_1$ and $m_3$, we fix one mass to $1.5\\,$TeV and obtain the dark-blue band by varying the other mass between $1.5\\,$TeV and $3\\,$TeV. The effect in $h\\to\\mu^+\\mu^-$ within the preferred region for $a_\\mu$ is necessarily constructive and so large that an explanation is already constrained by the ATLAS and CMS measurements of $h\\to\\mu^+\\mu^-$.}\n\t\\label{S1S3Y13}\n\\end{figure*}\n\n\n\\section{Conclusions}\n\\label{conclusions}\n\nLQs are prime candidates for an explanation of the intriguing hints of LFUV. 
As LFUV within the SM only originates from the Higgs, chirality changing observables such as the AMM of the muon and, of course, $h\\to\\mu^+\\mu^-$ are especially interesting. In particular, there are three possible LQ scenarios which can address the discrepancy in the AMM of the muon by an $m_t\/m_\\mu$ enhancement. This also leads to enhanced corrections in $h\\to\\mu^+\\mu^-$, which involve the same coupling structure as the $a_\\mu$ contribution. The result is interesting correlations between $a_\\mu$ and $h\\to\\mu^+\\mu^-$, which we study in light of the recent ATLAS and CMS measurements. \n\nWe find that scenario 3), in which $S_1$ only couples to right-handed fermions and mixes after EW symmetry breaking with $S_3$, predicts large constructive effects in $h\\to\\mu^+\\mu^-$ such that the current ATLAS and CMS measurements are already excluding part of the parameter space. In case $\\delta a_\\mu$ is solely explained by $S_1$ or $S_2$, the effect in ${\\rm Br}[h\\to\\mu^+\\mu^-]$ is of the order of several percent and therefore detectable at future colliders, in particular at the FCC-hh. Furthermore, while the $S_1$ scenario predicts constructive interference in $h\\to\\mu^+\\mu^-$ for the currently preferred range of $a_\\mu$, the $S_2$ scenario predicts destructive interference such that they can be clearly distinguished in the future. \n\\medskip\n\n\\begin{acknowledgments}\nAcknowledgements -- A.C. thanks Martin Hoferichter for useful discussions. The work of A.C. and D.M. is supported by a Professorship Grant (PP00P2\\_176884) of the Swiss National Science Foundation and that of F.S. by the Swiss National Science Foundation grant 200020\\_175449\/1.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}