diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzheef" "b/data_all_eng_slimpj/shuffled/split2/finalzzheef" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzheef" @@ -0,0 +1,5 @@ +{"text":"\\section{\\bf Introduction}\n\nDynamical coherent structure (pattern) formation is a new phenomenon in\ndynamics of anharmonic lattices. So far patterns have been studied basically\nin continuous systems (see Ref.1 and references therein) and recently in\ngranular materials \\cite{patt2,patt3,patt4,patt5}. Some soliton-like\nlocalized structures have been studied in driven anharmonic lattices \\cite\n{pend_exp,pend_theor1,pend_theor2,rossl1}. Recently it has been found that\noptically driven Klein-Gordon (KG) lattice with quartic anharmonicity\npossesses pattern formation, i.e. formation of a combination of few standing\nwaves, or, in the other words, lattice spatial modes (LSMs), vibrating with\none and the same frequency $\\omega $ equal to that of the driving field \\cite\n{BurPrl}. The important factors for the pattern formation are: i) modulation\ninstability (MI) of the $k=0$ LSM directly excited by external field and\nresulting in generation of other LSMs composing the pattern; ii) destructive\ninterference of modulation instabilities of different LSMs resulting in the\npattern stability. Due to interference of constituent LSMs with each other\nthe pattern looks like a train of {\\it equal} intrinsic localized modes\n(ILMs) extensively studied lately \\cite\n{Dolgov,Sievers,Page,Burl,Page1,Bishop}. Optical creation of a random\nensemble of ILMs have been recently proposed by R\\\"{o}ssler and Page for\nmodel system \\cite{rossl2} and by Lai and Sievers for realistic spin wave\nsystems \\cite{LaiSievers}. Due to equality of the constituent ILMs the\npattern formation opens a new way of experimental identification of ILMs\nbased on dynamical breaking of the lattice translational symmetry.\n\nTo make further step toward experimental identification of patterns (or\nILMs) it seems expedient to study the pattern generation and stability in\noptically driven Klein-Gordon (KG) lattice with both cubic and quartic\nanharmonicity and in the presence of thermal fluctuations (random noise). In\nthe present paper I show that the pattern formation conditions in the KG\nlattice with cubic and quartic anharmonicity can be analyzed on a bases of\nthe lattice with pure quartic anharmonicity. The pattern generation and\nstability regions in the space of external excitation parameters are\npresented. It is shown that at certain conditions random noise results in\nfrustration of the stationary spatial patterns discovered in \\cite{BurPrl}\ngiving rise to a new type of patterns. The particles vibration amplitude in\nthe new patterns is modulated in both space and time so one may consider\nthem as spatio-temporal patterns. The latter can be generated close to the\nborders of the spatial patterns stability regions and probably correspond to\na transition regime from spatial patterns to chaos.\n\n\\section{MI of optically excited standing wave in the Klein-Gordon lattice}\n\nAt the initial stage of the pattern formation the perturbation LSM with $k_p$\nis amplified due to MI of the excited $k=0$ nonlinear LSM (referred below as\nto {\\it excited carrier LSM}, or {\\it excited LSM}). Simultaneously the\nhigher spatial harmonics $2k_p,$ $3k_p,$ ets. 
will be generated due to\n3-wave and\/or 4-wave mixing of the excited LSM and the perturbation LSM.\nGrowth of the higher harmonics, their mutual interaction and coupling to\nthe excited LSM finally lead to the pattern formation. Thus, the MI of the\nexcited LSM works as an ignition for the pattern formation.\n\nA specific feature of the KG lattice is that it possesses a single optical\nband of vibrations. The MI of freely vibrating waves in various anharmonic\nlattices, including the KG one, was analyzed in \cite\n{MI01,MI02,MI03,MI04,MI05,MI06,MI07}. The essential points for the analysis of MI in\nthe driven lattice are those related to i) optical excitation at a frequency $%\n\omega $ which is different from the eigenfrequency of the $k=0$ LSM; ii)\nphenomenological damping which mimics all the dissipative interactions\nbetween the phonon band under consideration and any other excitations in a\nrealistic system. Note that the dissipative interactions which involve only\nthe vibrations from the optical band of the KG lattice are rigorously\ntreated in the present analysis.\n\nThe equation of motion for the $n$-th particle of unit mass and charge in the KG\nlattice is\n\n\begin{equation}\nd^2U_n\/dt^2+\gamma \cdot dU_n\/dt+\omega _0^2U_n+K_2\cdot \left(\n2U_n-U_{n-1}-U_{n+1}\right) +K_3U_n^2+K_4U_n^3=E_0e^{i\omega t}+c.c.,\n\label{eq1}\n\end{equation}\nwhere $\gamma $ is the phenomenological damping constant, $\omega _0^2$ and $K_4$\nare the on-site and $K_2$ the inter-site force constants, and $E_0$ is the external\nfield amplitude. The trial solution of Eq.(\ref{eq1}) for small\nanharmonicity can be chosen in the form\n\n\begin{equation}\nU_n(t)=\frac 12\left[ V_{C0}+V_{C1}\exp \left( i\omega t+i\varphi _1\right)\n+V_{C2}\exp \left( i2\omega t+i\varphi _2\right) +c.c.\right] , \label{eq2}\n\end{equation}\nwhere $V_{Cj}$ and $\varphi _j$ are the real amplitude and phase angle,\nrespectively. Two types of the KG lattice are considered: the lattice with\npure quartic anharmonicity (L1): $\omega _0=354,$ $K_2=6\cdot 10^4,$ $K_3=0,$\n$K_4=-3.3\cdot 10^6;$ and the lattice with both cubic and quartic anharmonicity\n(L2): $\omega _0=354,$ $K_2=6\cdot 10^4,$ $K_3=6\cdot 10^5,$ $K_4=5\cdot\n10^5 $. The on-site potential curves for a particle in the L1 and L2 lattices\nare shown in Fig.1a. A particle vibrating in the right potential well is\nconsidered. One can see from Fig.1a that the particle remains in the\nminimum as long as its vibration amplitude does not exceed the value $%\nU_{th}^0\simeq 0.2$. The latter value can be considered as a natural\nthreshold for point defect formation, related to the jump of the particle out\nof the minimum. The parameters of the L1 and L2 lattices were chosen such\nthat the $V_{C1}\propto E_0$ dependences are nearly the same (see\nFig.1b). This choice allows one to isolate the pure influence of cubic\nanharmonicity, not related to the anharmonic shift of the optical mode frequency.\nFor $\omega <\omega _0$ the $V_{Cj}\propto E_0$ dependences show bistability\nand for $\omega >\omega _0$ these dependences are monotonic. Below only the\nlatter case is considered in detail.\n\nObviously $V_{C0}=V_{C2}=0$ in the L1 lattice, and one may use the rotating\nwave approximation (RWA) for the stability analysis of the excited LSM.\nAccording to Ref. 
\\cite{BurPrl} the perturbation growing rate (increment) $%\n\\mathop{\\rm Im}\n(\\Omega (q))$ can be determined from the equation \n\\begin{equation}\n\\left[ \\omega _q^2-\\left( \\omega +\\Omega \\right) ^2-i\\gamma \\cdot \\left(\n\\omega +\\Omega \\right) \\right] \\cdot \\left[ \\omega _q^2-\\left( \\omega\n-\\Omega \\right) ^2+i\\gamma \\cdot \\left( \\omega -\\Omega \\right) \\right]\n=\\beta ^2, \\label{eq4}\n\\end{equation}\nobtained after substitution of (\\ref{eq2}) with perturbation \n\\begin{equation}\n\\delta U_n=\\frac 12\\cos (qn)\\cdot \\left[ V_{P1}\\exp (i\\cdot (\\omega -\\Omega\n)t)+V_{P2}\\exp (i\\cdot (-\\omega -\\Omega )t)+c.c.\\right] , \\label{eq4a}\n\\end{equation}\ninto Eq(\\ref{eq1}) (here and below the wavevector of perturbation LSM is\ndenoted by $q$ while that of carrier LSM is denoted by $k$). Here $\\omega\n_q^2=\\omega _0^2+4K_2\\sin (q\/2)^2+\\frac 32K_4V_{C1}^2,$ $\\beta =\\frac 34%\nK_4V_{C1}^2$, $V_{Pj}$ is the complex amplitude, and $q$ and $\\Omega $ are\nthe wave vector and the complex frequency shift of the perturbation wave\nrespectively.\n\nTo study MI of the solution (\\ref{eq2}) in general case the perturbation has\nto be chosen in the form \n\\begin{equation}\n\\begin{array}{c}\n\\delta U_n=\\frac 12\\cos (qn)\\cdot \\left[ V_{P0}\\exp (-i\\Omega t)+V_{P1}\\exp\n(i\\cdot (\\omega -\\Omega )t)+V_{P2}\\exp (i\\cdot (-\\omega -\\Omega )t)+\\right.\n\\\\ \n\\left. V_{P3}\\exp (i\\cdot (2\\omega -\\Omega )t)+V_{P4}\\exp (i\\cdot (-2\\omega\n-\\Omega )t)+c.c.\\right] .\n\\end{array}\n\\label{eq3}\n\\end{equation}\nThe $%\n\\mathop{\\rm Im}\n(\\Omega (q))$ function then was calculated numerically equating to zero the\ndeterminant of the corresponding system of five liner equations. Relative\nincrement $%\n\\mathop{\\rm Im}\n(\\Omega (q)\/\\omega )$ for the L1 and L2 lattices and that for free carrier\nLSM with $k=0$ in the L2 lattice with $\\gamma =0$ are plotted in Figs.2a-c\nrespectively for $\\omega =1.05\\omega _0$ and three values of $V_{C1}$. One\ncan see that the instability regions ($%\n\\mathop{\\rm Im}\n(\\Omega (q))>0$) in the $q$ space for the excited carrier LSM in the L1\nlattice are much broader and the instability is stronger than those in the\nL2 lattice. Note that the instability of free carrier LSM is much stronger\nthan that of excited one (compare Fig.2b and Fig.2c). By symbols in Fig.2b\nare shown the $%\n\\mathop{\\rm Im}\n(\\Omega (q)\/\\omega )$ curves for the system with pure quartic anharmonicity\nwhich will be used for approximation of the L2 lattice. The effective\nquartic anharmonicity constant was chosen to be $K_4^{eff}=K_4\\cdot \\left(\n1+\\left( \\frac{K_3}{\\omega _0^2}\\right) ^2\\cdot V_{C1}^2\\right) -\\frac{11}{12%\n}\\left( \\frac{K_3}{\\omega _0}\\right) ^2$ and also the factor $\\beta $ in (%\n\\ref{eq4}) was changed in this case to $\\beta ^{eff}=\\frac 7{12}K_4V_{C1}^2$%\n. The numerical factors $\\frac{11}{12}$ and $\\frac 7{12}$ were chosen for\nthe best fit of the increment curves shown by lines in Fig.2b. One can see\nthat the system with the effective quartic anharmonicity approximates rather\nwell the original L2 lattice if the excitation field $E_0$ is not very high.\n\nThis approximation allows to suitably represent the excited LSM instability\nregions in the $\\left( E,\\gamma \\right) $ space. The MI regions of the\nexcited LSM for the perturbation wave vectors $q=\\pi \/3$ and $q=\\pi \/2$ are\nshown in Fig.3 by solid lines. 
The curves were obtained from Eq.(\\ref{eq4})\nunder conditions $\\Omega (q=\\pi \/3)=0$ and $\\Omega (q=\\pi \/2)=0$ for the\nleft and right closed regions respectively. Additionally, the excited LSM\namplitude must be restricted by a value much lower than the $U_{th}^0$. This\nis because the vibration amplitude of some particles in the pattern\nincreases compared to its value in the excited LSM and may result in the\nout-of minimum jump of these particles. In the case shown in Fig.3 the\nexcited LSM amplitude was restricted by the value $0.13\\cdot a$, i.e. about $%\n0.7\\cdot U_{th}^0$, what cuts the MI regions for high $E$ values. In fact,\nthe difference between particles vibration amplitudes in the excited LSM on\none hand and in the pattern on the other hand depends on damping so the MI\nregions can be more extended to the higher $E$ values at high $\\gamma $\nvalues.\n\n\\section{Pattern solutions for $K_3=0$ and their stability}\n\nThe perfect pattern can be generated starting from the seeding LSM $\\sim\n\\cos (q\\cdot n)$ at $t=0$. Then under action of external field $%\nE=E_0e^{i\\omega t}$ the system will pass through two stages: a) excitation\nof carrier LSM; b) growing up of the seeding LSM due to MI of the excited\ncarrier LSM and simultaneous generation of other LSMs due to four-wave\nmixing. Obviously, if the number of generated LSMs at the initial stage of\nthe pattern formation is fairly restricted ($3\\div 5)$ the LSMs may grow up\nto large enough amplitude to be as additional nonlinear waves (carrier LSMs)\nin the lattice and to stabilize the pattern. We restrict our consideration\nby the total number of carrier LSMs $N_{LSM}=3$ and $N_{LSM}=4$ (3-LSM- and\n4-LSM patterns, respectively). The former is build up of the carrier LSMs\nwith wave vectors $k_1=0,$ $k_2=\\pi \/2$ and $k_3=\\pi $ (lattice constant $%\na=1 $) while the latter with those $k_1=0,$ $k_2=\\pi \/3,$ $k_3=2\\pi \/3$ and $%\nk_4=\\pi $. The MI of the excited carrier LSM must have a maximum around $%\nq_{\\max }=\\pi \/2$ in case of $N_{LSM}=3$ and around $q_{\\max }=\\pi \/3$ for $%\nN_{LSM}=4$. This is the condition for $E_0$ or, in the other words, for the\nexcited carrier LSM amplitude $V_C$. For $N_{LSM}=2$ ($q_{\\max }=\\pi $) no\nsolutions have been found.\n\nBecause of the symmetry arguments the pattern may consist of standing LSMs\nonly. The trial pattern solutions are \n\\begin{equation}\n\\begin{array}{c}\nU_n^{3-LSM}(t)=\\frac 12\\left[ V_{C1}\\exp \\left( i\\omega t+i\\varphi _1\\right)\n+V_{C2}\\cos (\\frac \\pi 2n)\\exp (i\\omega t+i\\varphi _2)+\\right. \\\\ \n\\left. V_{C3}\\cos (\\pi n)\\exp (i\\omega t+i\\varphi _3)+c.c.\\right] ,\n\\end{array}\n\\label{eq6a}\n\\end{equation}\n\\begin{equation}\n\\begin{array}{c}\nU_n^{4-LSM}(t)=\\frac 12\\left[ V_{C1}\\exp \\left( i\\omega t+i\\varphi _1\\right)\n+V_{C2}\\cos (\\frac \\pi 3n)\\exp (i\\omega t+i\\varphi _2)+\\right. \\\\ \n\\left. V_{C3}\\cos (\\frac 23\\pi n)\\exp (i\\omega t+i\\varphi _3)+V_{C4}\\cos\n(\\pi n)\\exp (i\\omega t+i\\varphi _4)+c.c.\\right] ,\n\\end{array}\n\\label{eq6b}\n\\end{equation}\nfor 3-LSM and 4-LSM patterns respectively. No new LSMs important within RWA\nwill appear due to four-wave mixing of those chosen. Here again $V_{Cj}$ are\nreal amplitudes and $\\varphi _j$ are phase angles. According to $\\omega $\nand $\\gamma $ values a single stable nontrivial (all $V_{Cj}\\neq 0$)\nsolution $\\Phi _1$ of the form (\\ref{eq6a})-(\\ref{eq6b}) was found in the\nregion between the dotted curves in Fig.3. 
Outside this region the solution $%\n\\Phi _1$ is unstable in the sense discussed below. An additional and\nstrongly unstable solution $\\Phi _2$ \\cite{BurPrl} exists for the 3-LSM\npattern at $\\gamma <0.1\\cdot \\omega _0$ while for 4-LSM patterns no\nadditional solutions have been found. The examples of the 3- and 4-LSM\npatterns of the $\\Phi _1$-type calculated after substitution of (\\ref{eq6a})\nand (\\ref{eq6b}) respectively into Eq.(\\ref{eq1}) with the parameters\ncorresponding to the points {\\bf a} and {\\bf c} in Fig.3 are presented in\nFig.4. Note, that indeed the patterns can be regarded as the lattices of\nintrinsic localized vibrations of the odd parity \\cite{Dolgov,Sievers}.\n\nLinear stability analysis of the solution (\\ref{eq6a}) within RWA was given\nin \\cite{BurPrl}. Similar approach to the 4-LSM pattern stability requires\nthe total perturbation to contain all perturbation waves coupled to each\nother via four-wave mixing, i.e. all spatial harmonics resulting from a\nproduct of any three carrier LSMs from Eq.(\\ref{eq6b}) on a perturbation\nwave. One can see that a set of waves\n\n\\begin{equation}\n\\begin{array}{c}\n\\delta U_n=\\frac 12\\left\\{ \\exp (i\\cdot (\\omega -\\Omega )t)\\cdot \\left[\nV_{P1}\\cos (qn)+V_{P3}\\cos ((\\frac \\pi 3-q)n)+V_{P5}\\cos ((\\frac \\pi 3%\n+q)n)+\\right. \\right. \\\\ \n\\left. V_{P7}\\cos ((\\frac{2\\pi }3-q)n)+V_{P9}\\cos ((\\frac{2\\pi }3%\n+q)n)+V_{P11}\\cos ((\\pi -q)n)\\right] + \\\\ \n\\exp (i\\cdot (-\\omega -\\Omega )t)\\cdot \\left[ V_{P2}\\cos (qn)+V_{P4}\\cos ((%\n\\frac \\pi 3-q)n)+V_{P6}\\cos ((\\frac \\pi 3+q)n)\\right. \\\\ \n\\left. \\left. +V_{P8}\\cos ((\\frac{2\\pi }3-q)n)+V_{P10}\\cos ((\\frac{2\\pi }3%\n+q)n)+V_{P12}\\cos ((\\pi -q)n)+c.c.\\right] \\right\\}\n\\end{array}\n\\label{eq7}\n\\end{equation}\nfulfills this condition. Indeed the set of waves (\\ref{eq7}) contains\nspatial harmonics $k_p=\\pm q+\\frac \\pi 3m$ ($m=0,$ $1,$ $2,3$). After\ncoupling to any three carrier LSMs from (\\ref{eq6b}) it results in a set of\nspatial harmonics with wave vectors $k_p=\\pm q+\\frac \\pi 3m\\pm \\frac \\pi 3l$\n($l=0,1,2,3$) which obviously can be reduced to (\\ref{eq7}). System of\ntwelve linear equations derived after substitution of (\\ref{eq6b}) and (\\ref\n{eq7}) into Eq.(\\ref{eq1}) was solved numerically to determine $\\Omega (q).$\n\nThe perturbation growing rate $%\n\\mathop{\\rm Im}\n\\left( \\Omega \\right) $ versus wavevector is shown in Fig.5 for 4-LSM\npatterns generated at the points {\\bf a} and {\\bf b} in Fig.3. The pattern\n(a) in Fig.5 is obviously stable, i.e. $%\n\\mathop{\\rm Im}\n\\left( \\Omega (q)\\right) <0$ for $0\\leq q\\leq \\pi $, while the pattern (b)\nis unstable ($%\n\\mathop{\\rm Im}\n\\left( \\Omega (q)\\right) >0$ for some $q$ values). As some individual\ncarrier LSMs composing the pattern are unstable the pattern stability can\ncome from destructive interference between MIs of the carrier LSMs. To\nqualitatively understand this phenomenon one may imagine the simplified\npicture: i) perturbation LSMs of different symmetry compose the perturbation\nstate Eq. (\\ref{eq7}) via four-wave mixing; ii) this state is excited by\ndifferent (by symmetry) coherent excitations (different carrier LSMs).\nHence, whether the perturbation state will be continuously populated depends\non the amplitudes of the coherent excitations and on the phase angle(s)\nbetween them (the influence of phase angle between the carrier LSMs on the\npattern stability is demonstrated in Fig.5). 
It may happen that at a certain\ncombination of the excitation parameters the perturbation state won't be\npopulated at all. That means the MI-mediated influence of a given carrier LSM\non the perturbation (\ref{eq7}) can be cancelled by that of the other(s),\nresulting in the stability of all the carrier LSMs, i.e. the pattern.\n\n\section{Pattern formation in the presence of noise}\n\nThe stability regions for both 3-LSM and 4-LSM patterns corresponding to the\ncondition $%\n\mathop{\rm Im}\n\left( \Omega \right) <0$ (damping of the perturbation) are shown by dotted\nlines in Fig.3. These stability regions were determined numerically via\nexcitation of the pattern starting from a small-amplitude seeding wave\n(S-patterns). Since the stability regions are inside the MI regions of the\ncorresponding excited carrier LSM, one might expect that stable patterns\ncan be spontaneously generated there. This is not always true, however, if\nthe pattern is generated without any seed but in the presence of random\nnoise (N-pattern). When generated from small-amplitude noise the pattern\ncan be unstable because of the simultaneous generation of waves with wrong ($\pi\n\/q$ is not an integer) wavevectors. Examples of the N-pattern formation are\ngiven in Fig.6. The numerical study was performed for a 60-particle chain using\nthe standard conservative scheme of numerical integration of the motion\nequations (\ref{eq1}) with cyclic boundary conditions. Random noise with values\nbetween $\pm 0.00005a$ was generated at each time step and added to\nthe current particle positions. The panel notation in Fig.6 corresponds to\nthe point notation in Fig.3. The 4- and 3-LSM patterns generated\nrespectively at the points {\bf a} and {\bf c} in Fig.3 are indeed stable\nand can be attributed to the true spatial patterns since they are similar to\nthose calculated theoretically (compare dotted and solid lines in Fig.4).\nThe pattern at the point {\bf d} is inside the 3-LSM stability region and\nshows periodic modulation in time similar to that of the pattern at the\npoint {\bf b} outside the corresponding stability region. Therefore the\nN-patterns at the points {\bf b} and {\bf d} can be regarded as\nspatio-temporal patterns rather than true spatial patterns.\n\nSpatio-temporal patterns probably also possess certain stability properties,\nthough these are not as well defined as for the spatial patterns. A typical\ntemporal evolution of the spatio-temporal pattern is shown in Fig.7.\nAccording to Fig.7 the pattern corresponding to the point {\bf b} in Fig.3\nis formed over $\sim 30$ periods of vibration and remains rather stable,\nthough its contrast slightly decreases in time. The relatively long-time\nstability of the pattern outside the stability region of the true spatial\npattern can be qualitatively understood. Growth of a perturbation $%\n\delta U_n(t)$ in the presence of the unstable pattern at the point {\bf b}\nresults in the shift of the quasiharmonic resonance frequency of the optic\nLSM \n\[\n\Delta \omega (k=0)^2=3K_4\cdot \left\langle \delta U_n^2(t)\right\rangle\n_{n,t} \n\]\nand consequently in the increase of the frequency mismatch $\omega -\omega\n(k=0)$. Due to this increase the pattern approaches the stability region or even\nenters it, and the growing conditions for the perturbation are violated.\nAfter damping of the perturbation the pattern returns to the initial point\nand the process starts again, thus giving rise to slow modulation of the\npattern amplitude, i.e. 
results in formation of spatio-temporal pattern. The\nlatter can be stable if the pattern modulation is perfectly periodic, i.e.\nthe modulation amplitude doesn't grow up, otherwise it is obviously\nunstable. In the long-time scale the evolution of the unstable\nspatio-temporal pattern due to continues energy collapse can lead to a\nsituation when the vibration amplitude of a particle exceeds the threshold\nvalue $U_{th}^0$ what means the point defect formation.\n\n\\section{Optical intensity required for the pattern generation}\n\nOur numerical experiments show that to reach the threshold for the pattern\nformation with $N_{LSM}=4$ in the L1 lattice the particles in the excited\nLSM must vibrate with the amplitude $A_p\\simeq 0.05a$. The electric field\nstrength $E_0$ of the optical excitation at $\\omega =1.05\\omega _0\\simeq\n10^2 $ $cm^{-1}$ can be then estimated using motion equation in the harmonic\napproximation. Suggesting $a=5\\AA $ and $\\gamma =0.05\\omega _0$ one obtains $%\nE_0=A_pm_pm_e[(\\omega _0^2-\\omega ^2)^2+(\\gamma \\omega\n)^2]^{1\/2}\/(e_pe_e)\\simeq m_p\/e_p$ $[{\\rm {V\/cm}],}$ where $m_p$ and $e_p$\nare the particle mass and charge measured in the free electron units $m_e$\nand $e_e$ respectively. Accordingly, the field strength is of the order of $%\n1\\,{\\rm {V\/cm}}$ for light particles like electrons and of the order of $%\n10^5\\,{\\rm {V\/cm}}$ for ionic solids. The latter value of $E_0$ can be\nreached in the laser pulses. Obviously the pulse duration $T$ must be enough\nfor the pattern to be generated and detected. The generation stage lasts\nover $30-40$ periods of vibration and $10-20$ periods are probably needed\nfor the pattern detection what in total means $T\\simeq 20$ $ps$. Hence, for\nthe pattern formation in an electronic system (e.g. charge-density wave\nconductor) the IR laser pulses of energy $W\\simeq 10^{-11}$ $mJ$ focused\ninto $\\sim 0.01$ $cm^2$ are required. The pattern formation in an ionic\nsystem needs much higher pulse energy $W\\simeq 0.1$ $mJ$ unless the smaller\nfocusing area is used.\n\nThus, in real physical experiment a system will be subjected to external\nfield and examined during a finite time. Therefore it seems reasonable to\nuse some criterion of the pattern {\\it relative} stability (during finite\ntime period) rather than that given by linear stability analysis. The\nlatter, moreover, seems to be too complicated for spatio-temporal patterns\nespecially for a system with realistic potential. According to the above\nsaid the pattern stability regions in Fig.3 must be transformed taking into\naccount the pattern generation and probably detection conditions (see next\nSection).\n\n\\section{Pattern generation and stability for $K_3\\neq 0$}\n\nThe 3-LSM pattern solution in this case have the form \n\\begin{equation}\n\\begin{array}{c}\nU_n^{3-LSM}(t)=\\frac 12\\left[ V_{C10}+V_{C11}\\exp \\left( i\\omega t+i\\varphi\n_{11}\\right) +V_{C12}\\exp \\left( i2\\omega t+i\\varphi _{12}\\right) +\\right.\n\\\\ \n\\cos (\\frac \\pi 2n)\\cdot \\left( V_{C20}+V_{C21}\\exp \\left( i\\omega\nt+i\\varphi _{21}\\right) +V_{C22}\\exp \\left( i2\\omega t+i\\varphi _{22}\\right)\n\\right) + \\\\ \n\\left. 
\\cos (\\pi n)\\cdot \\left( V_{C30}+V_{C31}\\exp \\left( i\\omega\nt+i\\varphi _{31}\\right) +V_{C32}\\exp \\left( i2\\omega t+i\\varphi _{32}\\right)\n\\right) +c.c.\\right] .\n\\end{array}\n\\label{eq9}\n\\end{equation}\nIn contrast to Eqs (\\ref{eq6a}) and (\\ref{eq6b}) the pattern (\\ref{eq9})\ncontains static distortion and also the higher (second) temporal harmonic\ncan not be neglected. Hence the linear stability analysis of the pattern in\ncase $K_3\\neq 0$ is too complicated. Note, that the approximation of the\nsystem with $K_3\\neq 0$ by a system with effective quartic anharmonicity\ngives reasonably good results when applied to the carrier LSM stability but\nnot to the pattern stability. According to the estimation given in the\nprevious Section, one can revise the pattern stability conditions and\nconsider the pattern as being relatively stable if it is stable during few\ntens of periods $T=2\\pi \/\\omega $. As a new stability criterion one may\nconsider a requirement for a pattern to be stable if it shows well\npronounced spatial modulation (%\n\\mbox{$>$}\n10 per cent) of particle vibration amplitude for a time interval of about\n30-50 periods of vibration. The latter time for real system is determined by\nthe pattern detection time and can be even shorter.\n\nThe new regions of the pattern relative stability were determined from\nnumerical experiment on the bases of the aforementioned criterion (see\nshaded regions in Fig.8). The bubbles in Fig.8 show the pattern generation\n(excited LSM instability) regions for the L1\\ and L2 lattices (solid and\ndotted lines, respectively). Dashed-dotted thin lines denote the envelope\nfor the corresponding bubbles and determine the MI cut-off for the excited\ncarrier LSMs. These cut-off lines are determined from the system of\nequations \n\\begin{equation}\n\\left\\{ \n\\begin{array}{c}\n\\Omega (q)=0, \\\\ \n\\frac{\\partial \\Omega (q)}{\\partial q}=0, \\\\ \nE=\\sqrt{\\left( \\omega _0^2+\\frac 34K_4\\cdot V_{c1}^2-\\omega ^2\\right)\n^2+\\left( \\gamma \\omega \\right) ^2},\n\\end{array}\n\\right. \\label{eq10}\n\\end{equation}\nwhere $\\Omega (q)$ is determined by Eq. (\\ref{eq4}) and again $K_4$ and $%\n\\beta $ must be substituted with $K_4^{eff}$ and $\\beta ^{eff}$ if $K_3\\neq\n0 $. Note, that the stability regions are separated from the envelope lines\nbecause in the close vicinity to the latter the perturbation growing rate is\ntoo low to form the pattern with well pronounced modulation (at least 10 per\ncent) during the time of numerical experiment.\n\nIt is important to determine the pattern generation and relative stability\nregions in the $\\left( E,\\omega \\right) $ space, i.e. the space of external\nexcitation parameters. They are shown by shaded regions in Fig.9 for 3- and\n4-LSM patterns in the L2 lattice with $\\gamma =0.025\\cdot \\omega _0$. The\nlatter value seems quite reasonable for realistic systems. The stability\nregions were obtained numerically under condition that the pattern shows\nmore than 10 percent modulation in $\\left\\langle U_n^2(t)\\right\\rangle _t$\nfunction but doesn't result in the point defect formation (particle jump\nfrom its initial minimum in Fig.1a) during 120 periods of vibration. 
Even in\nthe case of nearly fastest evolution of the 3-LSM pattern it can be\nconsidered as stable for $\\sim 10$ periods of vibration (see Fig.10) though\nin the long time scale the system eventually arrives to chaotic state.\nTaking into account these short-living patterns one may expect the pattern\nformation phenomenon in rather broad region of the external excitation and\nsystem parameters. Note, that the particles vibration in the time interval $%\n3-4$ (90-120 periods) in Fig.10 is similar to that in the spatio-temporal\npattern in Fig.6c suggesting that the spatio-temporal patterns represent the\nintermediate state in transition from dynamical order to chaos.\n\n\\section{Conclusions}\n\nThe dynamical coherent structure (pattern) formation was studied in the\noptically driven Klein-Gordon lattice with cubic and quartic anharmonicity\nin the presence of random noise. It is shown that in general the patterns\nare characterized by both spatial and temporal modulation of the particles\nvibration amplitude (so-called spatio-temporal patterns) and can be\nrelatively stable in rather broad region of external excitation and the\nsystem parameters.\n\n\\section{Acknowledgments}\n\nThis work was supported by Russian Ministry of Science within the program\n''Fundamental Spectroscopy''.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe best way to compare the total dust input from evolved stars in a galaxy to \nthe total dust budget is to detect the entire population of dusty stars at infrared \n(IR) wavelengths and estimate the dust-injection rate of each. \nThese global measurements are not possible in our Galaxy, due to source\nconfusion\nin the Galactic plane, but have been attempted on nearby external galaxies, such as \nthe Small and Large Magellanic Clouds (hereafter SMC and LMC, respectively). \nExtragalactic studies have also the advantage that distances to sources within a galaxy \ncan be assumed to be the same.\n\nThe importance of the Magellanic Clouds as laboratories of dust enrichment by stellar sources\nhas been thoroughly discussed in the literature (Matsuura et al. 2009, 2013; Srinivasan et al. 2009; \nBoyer et al. 2012; Riebel et al. 2012). A growing body of \nobservational data has been made available to the community by means of dedicated large photometric surveys.\nAmong the others, the Magellanic Clouds Photometric Survey (MCPS, Zaritsky et al. 2004), the Two Micron\nAll Sky Survey (2MASS, Skrutskie et al. 2006), Surveying the Agents of a Galaxy's Evolution\nSurvey (SAGE) with the {\\it Spitzer} Space Telescope (SAGE-LMC, Meixner et al. 2006; SAGE-SMC, Gordon et al. 2011), and\n{\\it HERschel} Inventory of The Agents of Galaxy Evolution (HERITAGE, Meixner et al. 2010, 2013)\nhave provided catalogues of point sources as well as high-resolution maps of the emission by\nthe warm and cold dust components in the interstellar medium (ISM). \n\nIn addition, a wealth of complementary data allows to recontruct the recent and past star formation\nhistories of the galaxies (see, among others, Harris \\& Zaritsky 2004, 2009; Bolatto et al. 2011; Skibba et al. 2012;\nWeisz et al. 2013; Cignoni et al. 2013), their metal enrichment histories (see e.g. Carrera et al. 2008a, 2008b; Piatti 2012;\nPiatti \\& Geisler 2013), and their present-day global gas, stellar and dust content (see Meixner et al.\n2013 for a recent collection of observational data). 
\n\nHence, these two galaxies represent an excellent astrophysical laboratory to investigate the life-cycle\nof dust in the ISM, providing a fundamental benchmark to theoretical models.\n\nKnowledge of the amount and composition of dust formed by stars of intermediate mass \n($\\rm 1 M_{\\odot} \\le \\rm m_{star} \\le \\rm 8 M_{\\odot}$) and in the ejecta of core-collapse \nsupernovae ($\\rm m_{star} > 8 M_{\\odot}$, where $\\rm m_{star}$ is the stellar mass at zero-age main sequence) \nrepresents the first step towards the understanding\nof dust enrichment from the most distant galaxies to the Local Universe. The relative importance\nof these stellar sources of dust depends on the mass-dependent dust yields, on the stellar initial\nmass function (IMF), as well as on the star formation history (SFH) of each galaxy (Valiante et al. 2009, 2011). \nContrary to previous claims, dust production at high redshifts, $\\rm z > 6$, is not necessarily \ndriven by massive stars, as stars of intermediate mass on their Asymptotic Giant Branch (AGB)\ncan dominate dust production on a timescale which ranges between 150 and 500 Myr (Valiante et al. 2009).\nLocal observations of dust emission in the circumstellar shells of evolved stars or in \nrecent supernovae and supernova remnants have the potential to significantly reduce uncertainties\nassociated to theoretical dust yields. \n\nIn this paper, our main aim is to compare dust production rates by AGB stars \ncalculated by means of theoretical models to observations of evolved stars in the Magellanic\nClouds. In particular, we consider a new grid of dust yields for different stellar masses and \nmetallicities (Ventura et al. 2012a, 2012b, Di Criscienzo et al. 2013; Ventura et al. 2014)\nwhich is based on models calculated with a full integration of the stellar structure, following the\nevolution from the pre-main sequence phase using the code ATON (Ventura et al. 1998; Ventura \\& D'Antona \n2009). This represents an important difference with respect to previous studies by Ferrarotti \\& Gail, whose dust yields are based \non stellar properties computed from synthetic\\footnote{In this\ncontext, by synthetic models we refer to models in which the evolution \nis described with analytical relations derived by fitting the results\nof full evolutionary models.} models (Ferrarotti \\&\nGail 2001, 2002, 2006; Zhukovska, Gail \\& Trieloff 2008) or with respect to more recent hybrid models where the integration\nis limited to the envelope structure of the stars (Marigo et al. 2013; Nanni et al. 2013). \n As a consequence, the mass and metallicity dependence of carbon and \nsilicate dust yields based on ATON stellar models can not be reproduced by dust yields based\non synthetic models, due to the different treatment of physical processes, such as the third dredge-up \nand the hot bottom burning, which alter the surface chemistry of AGB stars (Ventura et al. 2014). \n\nBy comparing the predictions of different sets of AGB stars dust yields to observations of carbon-rich and oxygen-rich AGB stars in\nthe Magellanic Clouds, we can hope to constrain some of the model uncertainties. \nIn addition, the difference in the gas metallicity of the two galaxies, with $\\rm Z_{SMC} = 0.004$ and \n$\\rm Z_{LMC} = 0.008$, allows to explore the complex and poorly understood dependence of AGB stars dust\nproduction rates on their progenitors initial metallicity (Ventura et al. 2012b). 
Finally, we can assess the\nrelative importance of AGB stars and supernovae as dust producers in the two galaxies, comparing\nthe overall contribution of stellar sources to the estimated total dust mass in the ISM.\n\n\n\nIn a recent paper, Zhukovska \\& Henning (2013) have done a similar analysis, discussing the dust\ninput from AGB stars in the LMC. For the first time, theoretically calculated dust production rates \nof AGB stars have been compared to those derived from IR observations of AGB stars for the entire galaxy. \nIn their comparison, they consider synthetic yields by Zhukovska et al. (2008) but discuss also \nthe implications of their models when ATON yields are adopted. They find that while synthetic models\nlead to carbon and silicate dust production rates in good agreement with observations, \nATON (hereafter {\\bf old} ATON) models under-predict carbon-dust production rates, favouring silicate dust production, in contrast\nto the observations. Motivated by these findings, we have recently investigated the dependence of\nthe predicted dust yields on the macrophysics adopted to describe the AGB evolution (Ventura et al. 2014)\nand a new grid of ATON (hereafter {\\bf new} ATON) dust yields has been \ncomputed for different initial metallicities, which range between $3 \\times 10^{-4}$ to $8 \\times 10^{-3}$, \nincluding the metallicity of the SMC, $\\rm Z = 0.004$, \nthat has not been considered in previous calculations.\n\n\nHere we extend the comparison to this new ATON grid. In addition, we do not limit the analysis\nto AGB stars in the LMC but we test the models against observations in the SMC.\n\nThe paper is organized as follows: in Section 2, we briefly summarize the AGB stellar dust yields predicted by different theoretical models;\nin Section 3 we review the observationally constrained \nstar formation and chemical enrichment histories of the Magellanic Clouds; \nthe associated dust production rates for different sets of dust yields are presented in Section 4 and compared to observational\ndata. The best fit models are then used, in Section 5, to assess the role of AGB stars and\nsupernovae in the global dust budget of the LMC and SMC. Finally, in Section 6 we summarize\nand discuss our conclusions. \n\n\\begin{figure*}\n\\includegraphics[width=58mm]{AGBdustcomp01.eps}\n\\includegraphics[width=58mm]{AGBdustcomp04.eps}\n\\includegraphics[width=58mm]{AGBdustcomp08.eps}\n\\caption{Dust yields from AGB stars as a function of their initial mass for three initial metallicities: Z = 0.001, 0.004, and 0.008 (left, central, and right panels,\nrespectively), with separate contributions from carbon dust, silicates, SiC and Iron grains. In each panel, empty squares show the\npredictions from Zhukovska et al. (2008), filled circles and squares are the old and new ATON yields, respectively (see text).}\n\\label{fig:agbyields}\n\\end{figure*}\n\n\n\\section{Dust yields from AGB stars}\n\nTheoretical calculations of the total dust mass formed during the AGB phase has recently attracted many\nindependent studies.\nIn their pioneering work, Ferrarotti \\& Gail (2001, 2002, 2006) used synthetic\nstellar evolution models to compute the non-equilibrium grain condensation and estimated the dust\nyields for M, S, and C-type AGB stars of different ages and metallicities \n(Zhukovska, Gail \\& Trieloff 2008). 
These models have proved to be very useful tools, thanks to the\nsimplicity with which they can be incorporated into stellar population synthesis studies and chemical\nevolution models (Zhukovska \\& Henning 2013). However, their major drawback is to \nrely on simple evolutionary codes where some physical processes, such as the variation of the core mass,\nthe temperature at the bottom of the convective envelope, the core mass at which the Third Dredge Up (TDU) begins to\noperate, the extent of the inwards penetration of the surface convective zone, are described by means\nof analytic approximations (Marigo \\& Girardi 2007). Since these processes affect the chemical composition\nand physical conditions in the circumstellar atmosphere during the AGB phase, they also affect the\nresulting dust mass and composition.\n \nTo overcome this limitation, in a series of papers we have recently \npublished a grid of AGB stars dust yields for different stellar masses and metallicities using models\nthat follow stellar evolution from the pre-main sequence phase until the almost complete ejection\nof the stellar mantle by means of the code ATON (Ventura et al. 1998; Ventura \\& D'Antona 2009). \nIn the first paper of the series, Ventura et al. (2012a) computed the mass and composition of dust produced by stars with \nmasses in the range $\\rm 1 M_{\\odot} \\le m \\le 8 M_{\\odot}$ and with a metallicity of Z = 0.001 during \ntheir AGB and super-AGB phases. They found that the dust composition depends on the stellar mass:\nlow-mass stars, with $\\rm m < 3 M_{\\odot}$, produce carbon dust whereas more massive stars\nexperience Hot Bottom Burning, do not reach the C-star stage and produce silicates and iron grains.\n\nStarting from a higher initial metallicity, Z = 0.008, Ventura et al. (2012b) found a similar\ntrend, with a transition between carbon and silicate dust production occurring at a threshold mass of \n$\\rm 3.5 \\, M_\\odot$. While the yields of carbon dust do not depend on metallicity, the mass of silicate\ndust grows with metallicity due to the combined effect of the softer HBB experienced \nand of the larger silicon abundance. \n\n\nConversely, for stars with initial metallicity Z $< 0.001$, no silicate dust formation occurs due to the scarcity\nof silicon available in the envelope; carbon dust continue to form in stars with masses $\\rm \\le 2.5 M_{\\odot}$ \ndown to initial metallicities of Z $ = 3\\times 10^{-4}$, below which even these low-mass stars experience \nHBB, do not become carbon stars and AGB stars do not produce any dust (Di Criscienzo et al. 2013).\n\nIt is important to stress that some of these results could not be captured by previous models as they heavily\ndepend on fundamental physical processes occurring during the AGB evolution. In particular, the TDU \nand the Hot Bottom Burning (HBB) alter the surface chemistry of AGB stars. Silicate dust production depends \non the modelling of convection which, in turns, determines the strength of HBB. \nOn the other hand, the mass of carbon dust formed depends on the extent of the third dredge-up: a small\namount of extra-mixing from the borders of the convective shell developed during the thermal pulses\nfavour a much larger penetration of the convective envelope, leading to a much stronger third dredge\nup, hence a larger carbon dust production. These issues have been thoroughly discussed by Ventura et al. 
(2014),\nwho explored the impact of different physical assumptions concerning the extra-mixing and\nmass-loss during the C-star phase on the resulting dust yields. \n\\begin{figure*}\n\\includegraphics[width=80mm]{SMCstarformnew.eps}\n\\includegraphics[width=80mm]{LMCstarformnew.eps}\n\\caption{The star formation history of the Small (left panel) and Large (right panel) Magellanic\nClouds as a function of time from Harris \\& Zaritsky (2004, 2009). The solid lines show\nthe best-fit star formation history, with shaded regions representing the uncertainty on the fit. \nThe separate contribution of different metallicity bins is also shown with Z = $0.001, 0.0025, 0.004,$\nand $0.008$ plotted as dashed, long-dashed, dot-dashed, and dotted lines, respectively. The star formation\nhistory for the SMC starts at 4 Gyr as the oldest stellar age bin in Harris \\& Zaritsky (2004) is $\\rm 9.988 Gyr$.}\n\\label{fig:SFH}\n\\end{figure*}\nIn Fig.~\\ref{fig:agbyields}, we show the AGB stars dust yields predicted by the old and new ATON models. For\ncomparison, we also show the dust yields predicted by Zhukovska et al. (2008) using synthetic AGB stellar \nmodels (hereafter\nZ08 AGB stars dust yields).\nA detailed analysis of the differences between the different grids of dust yields has been given in\nVentura et al. (2014) and we refer the interested reader to the original paper for more details. Here we\nlimit the discussion to the features which are of interest for the purpose of our study, namely the \nmass and metallicity dependence of carbon dust and silicates. In particular, it is evident from Fig.~\\ref{fig:agbyields}\nthat the new ATON models predict larger carbon (and SiC) dust yields with respect to the old ATON models, \nindependent of the initial stellar metallicity. \nThis is due to the effect of the extra-mixing (larger efficiency of TDU) and - to a larger extent - to the increased mass loss rates in the\nC-star stage. In the new ATON models mass loss during the C-star stage is described following the formulation\nby Wachter et al. (2008) which accounts for carbon-dust production and the consequent acceleration of the wind. For\nall the other evolutionary phases, the mass loss prescription by Bl{\\\"o}cker (1995) is used, similarly to the old\nATON models. The mass of silicates (and iron) dust produced by AGB stars with larger mass is larger in the new ATON\nmodels, particularly for the lowest metallicity, due to the stronger HBB experienced.\n\nThe largest difference between ATON and Z08 dust yields concerns the behaviour of AGB stars with initial masses \n$ > 3 \\rm M_{\\odot}$: these contribute to carbon-dust production according to Z08, but never reach the carbon-star stage \naccording to ATON models. \nThis difference is due to the different treatment of convection, that reflects into a much stronger HBB experienced by our models, \nin comparison to Z08, causing the difference in the predicted silicate dust production. It is important to note that\nthe mass segregation between carbon dust production by low-mass AGB stars and silicate dust production by \nhigh-mass AGB stars is a feature unique to our dust models, and it is not apparent in the models by Zhukovska et al. (2008)\nnor in the most recent models by Nanni et al. (2013). We do not show in the figure the results of the \nlatter study as these are qualitatively in good agreement with those of Zhukovska et al. (2008). In fact,\nalthough Nanni et al. 
(2013) have based their calculation on a complete envelope model that represents \na significant step forward with respect to purely synthetic AGB models, their approach does not\nfollow the complete stellar evolution. It is important to stress that the HBB phenomenon, which is responsible\nfor the large differences in the mass and composition of dust produced by high-mass AGB stars, is\na consequence of the delicate coupling between the outer region of the degenerate core, \nthe CNO burning layer, and the innermost regions of the surface convective zone; {\\it by definition, this\ncannot be described within the framework of a synthetic modelling of the AGB phase.}\n\n\n\n\\section{The star formation and metal enrichment histories of the Magellanic Clouds}\n\\label{sec:sfr}\n\nSpatially resolved star formation histories of the SMC and LMC have been\nobtained by Harris \\& Zaritsky (2004, 2009) based on the Magellanic\nClouds Photometric Survey (MCPS), which includes over 6 million SMC stars and\n20 million LMC stars. The star formation history of each observed \nstellar population is obtained minimizing the difference between observed \nand synthetic color-magnitude diagrams (CMD) selected from a library generated\nby means of the StarFISH code (Harris \\& Zaritsky 2001) spanning appropriate\nranges in metallicity and ages. Adopting a Salpeter Initial Mass Function (IMF),\nwith a binary fraction of 0.5, the star formation rate as a function of the stellar\nages for 3 (4) metallicity bins has been inferred for 350 (1376) regions of the SMC (LMC).\nIn Fig.~\\ref{fig:SFH} we show the total best-fit star formation history (solid lines, with the\nshaded region representing the uncertainty on the fit) and the separate contribution of different\nmetallicity bins. \n\n\\begin{figure*}\n\\includegraphics[width=58mm]{MCcumSF.eps}\n\\includegraphics[width=58mm]{LMCagemet.eps}\n\\includegraphics[width=58mm]{SMCagemet.eps}\n\\caption{The left panel shows the cumulative star formation histories of the Magellanic Clouds \ncorresponding to the results shown in Fig.\\ref{fig:SFH}. The dashed lines illustrate the recent\nresults of Weisz et al. (2013) using deep HST data (see text). For illustrative purposes, \nthe horizontal dashed line indicates 50\\% of the total stellar mass. The central and right panels\nshow the age-metallicity relation obtained from the best-fit solution of Harris \\& Zaritsky (2004,\n2009) for the Magellanic Clouds. Filled squares (triangles) are the average results obtained by\nCarrera et al (2000a) for the LMC disk population (SMC); the average\nresults obtained by Piatti \\& Geisler (2013) for the LMC and by Piatti (2012) for the SMC are shown\nwith empty squares and triangles, respectively.}\n\\label{fig:MCevo}\n\\end{figure*}\n\nThere are several temporally coincident features in the global star formation histories of the two\ngalaxies, suggesting a common evolution for the Magellanic Clouds. \nBoth galaxies experienced a long ($\\sim 5-7$~Gyr) quiescent epoch starting roughly 10 Gyr ago,\nfollowed by a peak in the star formation rate roughly 2-3 Gyr ago and an enhanced star\nformation activity around 400 Myr ago (Harris \\& Zaritsky 2004, 2009). The best-fit global\npresent-day star formation rates are $\\rm 0.39 \\, M_{\\odot}\/yr$ and $\\rm 0.37 \\, M_{\\odot}\/yr$, for the\nLMC and SMC, respectively. 
Note that the smallest age bin used in the photometric analysis is \n$\rm log(t_{age}\/yr) = 6.8$ ($6.6$) for the LMC (SMC) and that the uncertainties on the fit at\nsmaller stellar ages are quite large, with values in the range $\rm [0.25 - 0.82] \, M_{\odot}\/yr$ \n($\rm [0.13 - 5.7] \, M_{\odot}\/yr$) for the LMC (SMC).\n\nWide-field ground-based surveys, such as MCPS, provide spatially comprehensive coverage, \nbut the resulting CMDs only extend below the oldest main-sequence turnoff. As a \nresult, the ancient ($> 4 - 5$~Gyr) SFH cannot be well constrained. To overcome this limitation,\nSFH studies have been done using the {\it Hubble Space Telescope} (HST), targeting, however,\nonly a limited number of fields (see e.g. Cignoni et al. 2012, 2013; Weisz et al. 2013). \nComparison between different studies is made difficult by the use of different\nassumptions regarding the stellar IMF, mass range, and binary fraction, and by the different\nspatial coverage of the galaxies. In Cignoni et al. (2012), the authors have \nderived the SFH for the two most crowded regions of the SMC; in their Fig.~22, they show that \nthe SFHs by Harris \& Zaritsky (2004) for the same regions \nshow a significantly higher stellar production prior to 8.4 Gyr ago. \nA comparison between the normalized cumulative star formation histories is shown in the left panel of \nFig.\ref{fig:MCevo}; the solid lines with shaded region represent the predictions of Harris \& Zaritsky and\nthe dashed lines are the recent results of Weisz et al. (2013) who used HST archival data. \nThe LMC (SMC) formed 50 per cent of its mass $\sim 4 - 6$~Gyr ($\sim 3 - 4$~Gyr) ago and both\ngalaxies show a dramatic rise in SFR $\sim 3.5 - 4$~Gyr ago. Similar features are found in \nHST-based studies, although deeper data seem to suggest a larger cumulative star formation\nhistory at earlier epochs. \n\nUsing $\rm H_{\alpha}$ and $24 \mu$m emission to trace recent unobscured and obscured star formation, \nBolatto et al. (2011) have predicted a present-day global star formation rate of $\rm 0.037 M_{\odot}\/yr$ for\nthe SMC. Following a similar approach, Skibba et al. (2012) have estimated present-day star formation rates\nin the range $\rm [0.016 - 0.039] M_{\odot}\/yr$ for the SMC and $\rm [0.25 - 0.63] M_{\odot}\/yr$ for the LMC.\nWhile the latter values are in good agreement with the prediction by Harris \& Zaritsky (2009), \nthe former values seem to suggest a present-day star formation rate which is a factor of ten smaller than \nthe results found by Harris \& Zaritsky (2004); even accounting for the large uncertainties associated with \nthe best-fit value, different SMC observations do not appear to be consistent. \n\nIt is not surprising, therefore, that while the integrated star formation history of the LMC leads to a \npresent-day stellar mass of $\rm M_{star} = 1.1 \times 10^9 M_{\odot}$, in good agreement with the \nvalue inferred from IR observations by Skibba et al. (2012), the integrated stellar mass predicted\nby Harris \& Zaritsky (2004) for the SMC is $\rm 1.21 \times 10^9 M_{\odot}$, almost a factor of 4 larger\nthan the value inferred from IR data (see the left panels of Figs.~\ref{fig:LMCmassevo} and \ref{fig:SMCmassevo}\nin section 5). 
\n\nHence, there are independent lines of evidence which seem to suggest that the recent star formation rate\npredicted by Harris \& Zaritsky (2004) for the SMC may be overestimated, or that $\rm H_{\alpha}$ and \nIR observations may have missed a significant fraction of obscured star formation (although this seems\nunlikely, given that the dust-to-gas ratio in the SMC is 1\/1000, approximately a factor of 5 smaller\nthan that of the Milky Way; Leroy et al. 2007). \n\nIn what follows, we will use the star formation histories for the LMC and SMC predicted by\nHarris \& Zaritsky (2004, 2009), with the caveat that while the former appears to be\nrobust, the latter should be taken with caution, as different studies do not appear to converge\non the same result. \n\nIn the center and right panels of Fig.~\ref{fig:MCevo} we compare the age-metallicity relation \nimplied by the metallicity-dependent star formation histories of Harris \& Zaritsky (2004, 2009) \nwith the average data inferred by Carrera et al. (2008a, 2008b) using metallicities measured from \nspectroscopic data on individual stars (filled symbols) and with the results of Piatti (2012) and \nPiatti \& Geisler (2013) who used deep photometric data on a large database of field stars (empty symbols). \nWhile the chemical evolution obtained by Harris \& Zaritsky (2004, 2009) is in good\nagreement with metallicity and age measurements for star clusters, the best-fit age-metallicity relation\nof the LMC appears to be lower than the observational data for field stars. The agreement improves\nif the uncertainty due to the limited number of metallicity bins is considered. For this reason, in\nwhat follows we will show the model predictions adopting the best-fit star formation (and chemical \nevolution) histories and the corresponding lower and upper limits. \n \n\begin{figure*}\n\includegraphics[width=80mm]{SMCdprAGB.eps}\n\includegraphics[width=80mm]{LMCdprAGB.eps}\n\caption{The dust production rate (DPR) of AGB stars in the Small (left panel) and Large (right panel) Magellanic\nClouds as a function of time using the Harris \& Zaritsky (2004, 2009) metallicity-dependent star formation\nhistories. The solid lines show the predicted DPRs using the new ATON AGB yields, \nwith shaded regions representing the uncertainty on the best-fit star formation histories. For comparison,\nthe dashed and dotted lines show the predicted DPRs for the best-fit star formation histories adopting\nthe old ATON yields and the Z08 yields, respectively. Data points show the DPRs\nfrom C-AGB and O-AGB stars estimated by different studies and reported in Table~\ref{table:dpr}: Matsuura et al.\n(2009, 2013, triangles), Srinivasan et al. (2009, squares), Boyer et al. (2012, stars), Riebel et al. (2012, circles).}\n\label{fig:dpr}\n\end{figure*}\n\n\n\n\n\section{Dust production rate}\n\label{sec:dprate}\nEstimating the total dust production rate contributed by evolved stars requires the identification of \npoint sources in a survey of a galaxy at near- and mid-infrared wavelengths, which are especially\nsuitable to study thermal dust emission. The Magellanic Clouds have been mapped by large photometric\nsurveys, such as the Magellanic Clouds Photometric Survey (MCPS, Zaritsky et al. 2004), the Two Micron\nAll Sky Survey (2MASS, Skrutskie et al. 2006), Surveying the Agents of a Galaxy's Evolution\nSurvey (SAGE) with the {\it Spitzer} Space Telescope (SAGE-LMC, Meixner et al. 2006; SAGE-SMC, Gordon et al. 
2011), and\n{\\it HERschel} Inventory of The Agents of Galaxy Evolution (HERITAGE, Meixner et al. 2010, 2013). \nSpectroscopic follow-up to the SAGE surveys has provided fundamental information for the spectral classifications of the\npoint sources observed in SAGE (SAGE-Spec, Kemper et al. 2010). \n\nIn principle, detailed radiative transfer modeling of each star should be carried out in order to \naccurately determine the rate of dust production around the star. In practice, this shows to be\nvery computationally expensive when samples of thousands of stars have to be analysed, \nalthough first applications on population studies have been attempted (Riebel et al. 2012). \nIn general, however, measurements of the global dust input from large stellar samples \nrely on photometric techniques based on IR colours (Matsuura et al. 2009) or on IR excesses\n(Srinivasan et al. 2009; Boyer et al. 2012). \n\n\\begin{table*}\n\\begin{center}\n\\caption{Dust production rates by carbon and oxygen-rich AGB stars obtained in different analyses of the LMC and\nSMC data. Where appropriate, sources classified as aO-AGBs have been included in O-AGBs and sources classified as \nx-AGBs have been included in C-AGBs. Note that in the recent analysis of Matsuura et al. (2013), the dust production\nrates by O-AGBs is reported together with the contribution of Red SuperGiants (RSG), hence we have left these two\ncontributions together in the corresponding lines.}\n\\label{table:dpr}\n\\begin{tabular}{c|c|c|c}\\hline\n & & Large Magellanic Cloud & \\\\ \\hline\nSources & $\\rm \\dot{M}_d [10^{-6} M_\\odot\/yr]$ & Number of Sources & Reference \\\\ \\hline\nC-AGB & $43$ (up to 100) \t\t & 1779 & Matsuura et al. (2009) \\\\\nO-AGB & $>> 0.4$ (expected $12$) \t\t & \t & \\\\ \\hline\nC-AGB & $26$ \t\t & 7200 \t & Srinivasan et al. (2009) \\\\\nO-AGB & $1.4$\t\t \t\t & 8200 \t & \\\\ \\hline\nC-AGB & $9.49$\t \t\t & 6076 & Boyer et al. (2012) \\\\\nO-AGB & $0.95$\t \t \t\t & 15243 & \\\\ \\hline\nC-AGB & $13.64 \\pm 0.62$ & 8049 & Riebel et al. (2012) \\\\\nO-AGB & $5.5 \\pm 0.2$\t\t & 19566 & \\\\ \\hline\nC-AGB & $40$\t \t\t & 906 & Matsuura et al. (2013) \\\\\nO-AGB+RSG & $40$\t \t \t\t & ... & \\\\ \\hline\n & & Small Magellanic Cloud & \\\\ \\hline\nC-AGB & $0.747$ \t& 1872 & Boyer et al. (2012) \\\\\nO-AGB & $0.078$\t\t & 3094 & \\\\ \\hline\nC-AGB & $4$ \t& 399 & Matsuura et al. (2013) \\\\\nO-AGB+RSG & $3$\t\t & 86 & \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}\nA comparison between the dust production rates (DPR) inferred by different studies is made\ndifficult by the use of different classification criteria, methods to infer the dust mass loss\nrate, and dust opacitites. In most analyses, the following AGB stars sub-classes have been identified: \ncarbon-rich (C-AGB) and oxygen-rich (O-AGB) AGB stars, \n{\\it extreme} AGB stars (x-AGB), and anomalous O-rich (aO-AGB) sources \n(Cioni et al. 2006; Blum et al. 2006; Riebel et al. 2010; Boyer et al. 2011).\nThe latter are a sub-class of O-AGB stars while the x-AGB class is dominated by \ncarbon stars but likely includes a small number of extreme O-rich sources. \nFor the purpose of the present analysis, we have added together the aO-AGBs\/O-AGBs and the C-AGBs\/x-AGBs contributions \nand we show the corresponding dust production rate obtained by different analyses in Table \\ref{table:dpr}. 
\n\nThe theoretical DPR by AGB stars in the LMC and SMC can be computed using the star formation histories\nfor different metallicity bins described in section \\ref{sec:sfr}. To be consistent with the analysis of \nHarris \\& Zaritsky (2004, 2009), we adopt a Salpeter IMF in the stellar mass range $\\rm 0.1 M_{\\odot} \\le m \\le 100 M_{\\odot}$. \n\nFor a given grid of dust yields, $\\rm m_{dust}(m,Z)$, we compute the time dependent total DPR as,\n\\begin{eqnarray}\n\\rm \\dot{M}_{dust}(t) & = & \\rm \\Sigma_i \\int_{m(t,Z_i)}^{m_{up}} dm' \\, \\Phi(m') \\, m_{dust}(m', Z_i) \\\\ \\nonumber\n& & \\rm SFR[t-\\tau(m',Z_i),Z_i]\n\\label{eq:dpr}\n\\end{eqnarray} \n\\noindent\nwhere the index $\\rm i$ runs over the metallicity bins, $\\rm \\tau(m,Z)$ is \nthe mass and metallicity dependent stellar lifetime (Raiteri, Villata \\& Navarro 1996), \n$\\rm m_{up} = 100 M_\\odot$, and the lower mass limit is such that $\\rm \\tau(m,Z_i) = t$. The contribution of AGB stars to the \nDPR, $\\rm \\dot{M}_{dust}^{AGB}$, is obtained setting the upper and lower integration limits \nto $\\rm 0.1 M_{\\odot} \\le \\rm m(t,Z_i) \\le \\rm 8 M_{\\odot}$. We do not explicitly follow the metal enrichment of the galaxy; instead,\nfor each galaxy, we adopt the age metallicity relations implied by the metallicity dependent star formation histories. \n\nThe predicted time evolution of the AGB DPR in the Magellanic Clouds is shown in Fig.~\\ref{fig:dpr}, where we have separated the\ncontribution to carbon dust, silicates, SiC and iron grains production. The carbon dust and silicates \nproduction rates are compared with observational data of C-AGB and O-AGB stars (see Table~\\ref{table:dpr}). \nFor a given set of AGB stars dust yields, the time evolution of the DPRs is very similar in the two\ngalaxies, with present-day values for the best-fit star formation histories which differ by less than 30\\%.\n\nThe three sets of AGB stars dust yields shown in the figures predict a similar evolution of the carbon DPR, although with\ndifferent amplitudes. Since AGB stars carbon dust yields do not depend significantly on the initial metallicity of the\nprogenitor stars, the features observed in the DPRs reflect analogous features in the global star formation history,\nwith a time delay whose amplitude depends on the lifetime of the most massive star producing carbon dust (about $7\\,M_\\odot$\nfor Z08 AGB stars yields and $3 \\, M_\\odot$ for ATON models). Both galaxies experienced a phase of\nenrichment in the first 4 - 5 Gyr of evolution, then a quiescent epoch, followed by a second major episode\nof enrichment roughly 2-3 Gyr ago and an enhanced DPR in the last few hundreds Myr of the evolution. \n\nDue to the increased mass loss rates in the C-star stage, the production rates of carbon grains predicted by\nthe new ATON yields is one order of magnitude larger than those predicted by the old ATON yields, \nand the resulting evolution is much closer to the predictions by Zhukovska \\& Tielens (2013) based\non the Z08 AGB stars yields (see the dotted lines). However, while observations of C-AGB stars in the LMC seem to \nfavour such larger mass loss rates, the opposite is true for the SMC where carbon dust production rates\npredicted by the old ATON yields provides a better match to the observational data. 
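For readers who wish to reproduce the DPR curves discussed above, the discretized bookkeeping behind the DPR integral defined earlier in this section can be sketched in a few lines. This is a minimal illustration under the Salpeter IMF assumption stated in the text; the callables sfr, tau and m_dust are placeholders standing in for the Harris and Zaritsky star formation histories, the Raiteri et al. (1996) lifetime fits and the ATON or Z08 yield grids, and the function and variable names are ours, not part of any released code.

\begin{verbatim}
import numpy as np

def salpeter_imf(m, m_low=0.1, m_up=100.0, alpha=2.35):
    # Salpeter IMF phi(m) ~ m**(-alpha), normalized so that the integral of
    # m * phi(m) dm over [m_low, m_up] equals 1 (per unit mass of stars formed).
    norm = (m_up**(2.0 - alpha) - m_low**(2.0 - alpha)) / (2.0 - alpha)
    return m**(-alpha) / norm

def agb_dpr(t, z_bins, sfr, tau, m_dust, m_max=8.0, n_grid=400):
    # Discretized form of the DPR integral, restricted to AGB progenitors.
    #   z_bins      : metallicities Z_i of the SFH components
    #   sfr(t, Z)   : star formation rate of the Z_i component at time t [Msun/yr]
    #   tau(m, Z)   : stellar lifetime in yr (e.g. from Raiteri et al. 1996)
    #   m_dust(m, Z): AGB dust yield in Msun (ATON or Z08 grids)
    masses = np.linspace(0.9, m_max, n_grid)
    rate = 0.0
    for z in z_bins:
        lifetimes = np.array([tau(m, z) for m in masses])
        dying_now = lifetimes <= t   # only masses whose lifetime fits within the age t
        if not dying_now.any():
            continue
        m, lt = masses[dying_now], lifetimes[dying_now]
        yields = np.array([m_dust(mi, z) for mi in m])
        births = np.array([sfr(t - ti, z) for ti in lt])
        rate += np.trapz(salpeter_imf(m) * yields * births, m)
    return rate   # Msun of dust per yr
\end{verbatim}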
\n\nThe production rates of silicates predicted by the ATON yields (note that the old and new yields are\nconsistent within the uncertainties due to the star formation histories) are in good agreement with the observations\nof O-AGB stars both in the LMC and in the SMC. Similarly good agreement is found when Z08 AGB stars yields are adopted.\nHowever, in this case silicate dust production is effective only in the last 2 Gyrs of the evolution. \nConversely, efficient HBB in ATON models allows silicate dust production since the earliest evolutionary\ntimes, as stars with $\\rm m_{star} > 3 \\, M_\\odot$ form silicate grains already at a metallicity $\\rm Z = 0.001$.\n\nThe production of iron grains follows a time evolution similar to that of silicate grains, as these\ntwo grain species form under HBB conditions in ATON models, although with different rates. Conversely, \nthe enrichment in SiC closely follows the evolution of carbon DPR, but with a smaller amplitude, reflecting\nthe relative abundance of these two species in the dust yields produced by low-mass stars at any metallicity.\n\n\\begin{figure*}\n\\includegraphics[width=80mm]{SMCdprAGB-piatti.eps}\n\\includegraphics[width=80mm]{LMCdprAGB-piatti.eps}\n\\caption{Same as in Fig.\\ref{fig:dpr} but adopting \nthe Harris \\& Zaritsky (2004, 2009) global star formation\nhistories and the Piatti (2012) and Piatti \\& Geisler (2013) age metallicity\nrelations for the SMC and LMC, respectively, to estimate the metal enrichment\nhistory of the ISM. In the left panel, silicates, SiC and Iron DPRs for the \nZ08 AGB stars yields (dotted lines in Fig.~\\ref{fig:dpr}) are much below\nthe plotted range (see text).}\n\\label{fig:dprpiatti}\n\\end{figure*}\n\nNote that with the adopted star formation histories, the carbon DPR in the last 4 Gyr of the evolution is\ndominated by $Z = 0.004$ AGB stars and that in both galaxies $Z=0.008$ stars dominate only in the last \nfew hundreds Myr of the DPR evolution. Given the small sensitivity of theoretical carbon dust\nyields to the metallicity of the progenitors, we do not expect a strong dependence on the \nadopted metal enrichment history for the galaxies. \n\nWe test this hypothesis by adopting a different metallicity evolution: we use \nthe age metallicity relations inferred by Piatti \\& Geisler (2013) for the LMC and \nby Piatti (2012) for the SMC (see Fig.~\\ref{fig:MCevo}) as a proxy for the metal enrichment of the ISM\nof the galaxies; hence, we can recompute the DPRs implied by the total star formation histories of \nHarris \\& Zaritsky, without differentiating among the different metallicity-dependent components.\nAs expected, Fig~\\ref{fig:dprpiatti} shows that the carbon DPRs predicted by a given set of \nAGB stars dust yields do not differ significantly from the ones shown in Fig.~\\ref{fig:dpr}. \n\nConversely, the lower metallicities implied by the age metallicity relations \n(particularly for the SMC, where there is no single stellar population with metallicity $\\rm Z > 0.004$) \nresult in a significant depression of the silicate DPRs predicted by Z08 AGB stars yields, \nwhich are below the observational data for the SMC by several orders of magnitudes.\n\nHence, we conclude that current samples of AGB stars in the LMC favour strong mass loss\nrates from C-stars, as predicted by the new ATON yields with the Wachter et al. (2008)\nmass loss rate prescription, and by the Z08 AGB stars yields. 
\nThe same yields, however, exceed the DPRs observed for C-AGB stars in the SMC by approximately one order of magnitude. The latter data are better reproduced by the old ATON yields, which were based on the less efficient mass loss rate prescription by Bl\\\"ocker et al. (1995). \n\nThis may be an indication of a metallicity dependence of the mass loss rates during the carbon-star phase of AGB evolution. We note that Bl\\\"ocker et al. (1995) based their prescription on a description of the circumstellar envelope of Mira variables that neglects the effects of radiation pressure on dust, probably underestimating mass loss rates in the carbon-star phase. Conversely, the formulation by Wachter et al. (2008) is based on hydrodynamical wind models which include carbon-dust formation; however, there is no significant metallicity dependence in their formulation, and the resulting mass-loss rates are of the same order of magnitude for solar metallicity models, models with the metallicity of the LMC, and models with the SMC metallicity.\n\nA unique feature of the ATON yields is that the efficient HBB experienced by stars with $\\rm m > 3 M_{\\odot}$ leads to silicate dust production by AGB stars with initial metallicity $\\rm Z \\ge 0.001$. Conversely, both Zhukovska et al. (2008) and Nanni et al. (2013) find that silicate grains can only form in AGB stars when their initial metallicities are higher; for the metallicity range relevant to the present analysis, silicate yields from Z08 AGB stars are not negligible only for stellar metallicity $\\rm Z = 0.008$. Hence, if the dominant stellar populations in the SMC have metallicities $\\rm Z \\le 0.004$, as observed by Piatti \\& Geisler (2013), the observed population of O-AGB stars suggests that silicate dust formation should already occur at lower metallicities, as predicted by the efficient HBB occurring in the ATON models. \n\nFinally, since the total DPRs are dominated by carbon grains, their current values are not significantly affected by the metal enrichment history and depend more on the adopted dust yields. The total DPR by AGB stars in the LMC is predicted to be $\\rm [9.86, 51.3, 67.1] \\times 10^{-6} \\, M_\\odot\/yr$ for the ATON, new ATON, and the Z08 AGB stars dust yields. Although these values span the range indicated by observations of C-AGB and O-AGB stars, the new ATON and the Z08 yields lead to DPRs which are larger than most of the observational results (except for the most recent Matsuura et al. 2013 determinations, which however refer to the global contributions of C-AGB, O-AGB stars and RSGs). Similarly, the total DPR by AGB stars in the SMC is predicted to be $\\rm [7.62, 41.7, 48] \\times 10^{-6} \\, M_{\\odot}\/yr$ for the ATON, new ATON, and the Z08 AGB stars dust yields, with the latter two dust yields exceeding the observed DPRs by one order of magnitude. 
\n\nHow do these results depend on the uncertainties in the SMC star formation rate discussed in section~\\ref{sec:sfr}?\nEven if we were to scale-down\\footnote{ \nThe scale factor was chosen so as to reproduce the present-day global stellar mass in the SMC \nand - at the same time - have a recent star formation rate in agreement with $\\rm H_{\\alpha}$ and $24 \\mu$m observations \n(at least within the uncertainties).} the star formation rate predicted by Harris \\& Zaritsky (2009) by a factor 4, the present-day carbon DPR predicted by the new ATON AGB stars yields (and by Z08 AGB stars yields) \nwould be $\\rm \\approx 10^{-5}\\,M_\\odot\/yr$, larger than the observational data on C-AGB stars by \nBoyer et al. (2012) and Matsuura et al. (2013). The old ATON yields would decrease the DPR \nto $\\rm 5.43 \\times 10^{-7} \\, M_{\\odot}\/yr$, still in agreement with observations. \n\nIn the following, we will estimate the contribution of AGB stars to the total stellar\ndust budget if the MCs using the new ATON and the old ATON models as the reference dust yields for the LMC\nand SMC, respectively.\n\n\\begin{figure}\n\\includegraphics[width=80mm]{SNdustmasscomp.eps}\n\\caption{Mass of dust produced by supernovae as a function of the progenitor mass.\nThe data points in light blue are taken from the compilation published in Gall, Hjorth \\&\nAndersen (2011) and Otsuka et al. (2012): circles refer to observations of SNe (12 \nobjects) and squares to observations of SN remnants (4 objects). \nThe triangles indicate Herschel detected sources in the Milky Way (dark green) and in the LMC (red, see text).\nThe two sets of lines represent dust\nyields predicted by theoretical models for stars with initial metallicities $\\rm Z = Z_{\\odot}$\n(solid) and $\\rm Z = 0.1 Z_{\\odot}$ (dotted); the upper (lower) set of lines shows the mass of dust \nbefore (after) the passage of the reverse shock.}\n\\label{fig:snmass}\n\\end{figure}\n\n\n\n\\section{Total dust budget}\n\\begin{figure*}\n\\includegraphics[width=58mm]{LMCmstar.eps}\n\\includegraphics[width=58mm]{LMCmdust-new.eps}\n\\includegraphics[width=58mm]{LMCmdustnorev-new.eps}\n\\caption{Time evolution of the stellar (left panel) and dust masses (central panel) predicted by the LMC model.\nWe used the Harris \\& Zaritsky (2009) global star formation histories (solid lines indicate the predictions \nobtained adopting the best-fit SFH and the shaded regions illustrate the uncertainty on the fit) \nand the Piatti \\& Geisler (2013) age metallicity relation to estimate the metal enrichment\nhistory of the ISM. The observational data are taken from Skibba et al. (2012, star) and from Gordon et al. (2014, square). \nThe right panel shows the dust mass evolution adopting the no reverse shock models for the dust yields of \nmassive stars (see Fig.\\ref{fig:snmass}). 
The dashed lines show the mass of dust produced by \nAGB stars only, adopting the best-fit SFH.}\n\\label{fig:LMCmassevo}\n\\end{figure*}\n\n\\begin{figure*}\n\\includegraphics[width=58mm]{SMCmstar.eps}\n\\includegraphics[width=58mm]{SMCmdust-new.eps}\n\\includegraphics[width=58mm]{SMCmdustnorev-new.eps}\n\\caption{Same as Fig.\\ref{fig:LMCmassevo} but for the SMC model.\nWe used the Harris \\& Zaritsky (2004) global star formation histories (solid lines indicate the predictions \nobtained adopting the best-fit SFH and the shaded regions illustrate the uncertainty on the fit) \nand the Piatti (2012) age metallicity relation to estimate the metal enrichment\nhistory of the ISM.} \n\\label{fig:SMCmassevo}\n\\end{figure*}\n\nThe total mass of dust present in the interstellar medium of the LMC and SMC has\nbeen estimated using different techniques (extinction, emission, elemental depletions)\nand observational facilities (see Table~1 in Meixner et al. 2013 for a recent compilation). \nTotal dust masses of $\\rm M_{dust} = 1.2 \\times 10^{6} M_{\\odot}$ and $3\\times 10^5 M_{\\odot}$ \nhave been derived through analyses of {\\it Spitzer} SAGE maps of the LMC and SMC, respectively \n(Bernard et al. 2008; Leroy et al. 2007). Recently, Skibba et al. (2012) have provided new\nestimates of the total dust mass using {\\it Herschel} HERITAGE data; after convolving the \nphotometric data for the two galaxies to a common resolution, the authors inferred a total\ndust mass of $\\rm M_{dust} = 1.1 \\times 10^{6} M_{\\odot}$ for the LMC \nand $1.1\\times 10^5 M_{\\odot}$ for the SMC, in good agreement with previous estimates.\n At the time of paper submission, a new analysis of the dust surface density maps\nof the MCs by the HERITAGE team was completed (Gordon et al. 2014) and the\nresulting integrated dust masses were $\\rm (7.4 \\pm 1.7)\\times 10^5 M_{\\odot}$ and \n$\\rm (8.3 \\pm 2.1)\\times 10^4 M_{\\odot}$ for the LMC and SMC, respectively.\nThese values of the dust masses are significantly smaller than previous estimates,\nprobably due to different assumptions in the models used to fit the maps.\n\n\nIn the following analysis, we will compare {\\it Herschel} data \nto the mass of dust produced by AGB stars and supernovae (SN) adopting the star formation \nand metal enrichment histories discussed in the previous sections. \nWe take the new ATON (old ATON) yields to compute the mass of dust \nproduced by AGB stars in the LMC (SMC) as these provide a good match\nto the observed AGB dust production rates. \n\nDust yields for massive stars have been theoretically calculated by different groups \n(Todini \\& Ferrara 2001; Nozawa et al. 2003; Bianchi \\& Schneider 2007; Cherchneff \\& Dwek 2010;\nSarangi \\& Cherchneff 2012). The effective SN dust yields, i.e. the mass\nof dust that is able to survive the passage of the reverse shock enriching\nthe ISM, can be significantly smaller than the dust mass newly formed in the\nexpanding ejecta (Bianchi \\& Schneider 2007; Nozawa et al. 2007; Silvia et al. 2010). \nIn Fig.\\ref{fig:snmass} we compare theoretical models with observational data. \n Dust masses of the same source obtained by different groups have been \naveraged and represented by a single data point.\nHerschel-detected sources are marked as triangles: Crab and CasA,\n(dark green, Gomez et al. 2012; Dunne et al. 2009; Barlow et al. 2010) and SN1987A \nand N49 (red, Matsuura et al. 2011; Otsuka et al. 2010). 
The latter data have to be viewed as upper limits on the newly formed dust, due to possible contamination from ISM dust. We consider Herschel-detected sources to provide a more complete census of the dust associated with the SN, being sensitive to the dynamically dominant cool dust component. Yet, none of the remnants is old enough (ages $< 10^3$ yr) for the reverse shock to have significantly affected the newly formed dust. Given this uncertainty, we will compute the mass of dust produced by massive stars adopting the Bianchi \\& Schneider (2007) dust yields for the no-reverse shock case, shown with the upper lines, and with a reverse shock model where the explosions take place in a circumstellar medium with an average density of $\\rm n_{ISM} = 1 cm^{-3}$ (lower lines).\n\nIn what follows, our aim is to compare the maximum contribution to the existing dust mass by stellar sources. Hence, we simply integrate Eq.~1, without considering the effective dust lifetime in the ISM due to the combined effects of destruction by interstellar shocks in the hot diffuse ISM and grain growth by accretion of gas-phase elements in the dense ISM (see e.g. Valiante et al. 2011, 2014 for a complete chemical evolution model with dust). The integrated dust masses that we compute here are to be viewed as the maximum dust budget that stellar sources may potentially contribute.\n\nThe results for the LMC are shown in Fig.\\ref{fig:LMCmassevo}. In the left panel, we plot the time evolution of the stellar mass\\footnote{The stellar mass shown in the figure is not simply the time integral of the star formation rate, but takes into account the fraction of stellar remnants predicted by the adopted Salpeter IMF. Hence, it can be viewed as the mass of active stars to be compared to the stellar mass inferred from the spectral energy distribution.} predicted by the Harris \\& Zaritsky (2009) star formation rate. The observational data points are taken from Skibba et al. (2012), who used the calibration of Eskew et al. (2012) to convert from the 3.6 and 4.5 $\\mu$m flux densities to stellar mass, finding ${\\rm M_{star}} = 2 \\times 10^9 \\rm M_{\\odot}$ with approximately $30\\%$ uncertainty. It is clear that the predicted stellar mass is consistent with the observational data point only when adopting the upper limit on the SFH from the Harris \\& Zaritsky (2009) analysis. \n\nThe total mass of dust produced by stellar sources is too small to account for the present-day dust mass in the LMC derived by Skibba et al. (2012), but it is in good agreement with the more recent estimates by the HERITAGE team (Gordon et al. 2014), if no significant destruction in the ISM occurs. This is shown in the central panel of the same figure, where we compare the mass of dust predicted by the model adopting the new ATON yields for AGB stars and the reverse shock yields for SNe. The total mass of dust produced by stellar sources ranges between $\\rm 3.9 \\times 10^5 M_{\\odot}$ and $\\rm 8.7 \\times 10^5 M_{\\odot}$, with the best-fit value of $\\rm M_{dust} = 5.1 \\times 10^5 M_{\\odot}$. While the predicted dust masses are consistent with previous findings by Matsuura et al. 
(2009) and Zhukovska \\& Henning (2013),\nthe previously reported large discrepancies between dust input from stars and the existing interstellar dust mass are no longer\nsupported by the most recent data.\nIn the conservative limit of reverse shock yields, massive stars do not represent the dominant dust sources, with \nAGB stars contributing to $67\\%$ of the final dust mass.\n\n\nSimilar conclusions apply for the SMC, although with larger uncertainties (see Fig.~\\ref{fig:SMCmassevo}). In fact, \nthe star formation history by Harris \\& Zaritsky (2004) predicts a stellar mass \nlarger than the value inferred by Skibba et al. (2012), $\\rm M_{star} = 3.1 \\times 10^8 M_{\\odot}$.\nAlthough the latter value has been derived using a calibration based on a detailed analysis of\nthe LMC (Eskew et al. 2012), the difference between the final stellar masses is larger than\nthe observational uncertainty. With this caveat, stellar sources in the SMC model produce \na total mass of dust in the range $1.1 \\times 10^5 \\rm M_{\\odot} \\leq \\rm M_{dust} \\le \\rm 4.3 \\times 10^5 M_{\\odot}$,\nwith a best-fit value of $\\rm M_{dust} = 2.3 \\times 10^5 M_{\\odot}$. This is in good agreement with the observed\nmass of dust in the SMC, with massive stars dominating the total dust budget at all times and AGB stars \nproducing only $15\\%$ of the total ISM dust mass. Note that\nif we were to scale down the best-fit star formation rate of Harris \\& Zaritsky (2009) in order\nto reproduce the observed stellar mass (see section~\\ref{sec:dprate}), the total dust mass produced by stars would be\nin $\\rm 2.8 \\times 10^4 M_{\\odot} \\le \\rm M_{dust} \\le \\rm 10^5 M_{\\odot}$, with a best-fit value of $5.6 \\times 10^4 M_{\\odot}$.\nThese values appear to be more consistent with the results by \nBoyer et al. (2012) and Matsuura et al. (2013), who estimate the lifetime of dust in the SMC \nusing star formation rates in the range $[\\rm 0.037 - 0.08 M_{\\odot}\/yr]$, \nlower than the results by Harris \\& Zaritsky (2009) and closer to the values reported \nby Bolatto et al. (2011) and Skibba et al. (2012). Yet, while previous studies concluded that \nthe dust input from stars in the SMC was too low to account for the existing interstellar dust mass, we find that\nthe most recent observations by the HERITAGE team (Gordon et al. 2014) support a stellar origin for dust in the SMC.\n\nFinally, we have recomputed the dust mass evolution assuming that dust production in SN does not suffer\nany destruction by the reverse shock (no reverse shock models). The right panels of Figs.~\\ref{fig:SMCmassevo} \nand \\ref{fig:LMCmassevo} show that the mass of dust produced by stars {\\it exceeds} the observational data in both\ngalaxies. It is important to stress that all the above findings have been obtained assuming that \ndust grains injected by stars in the ISM do not suffer further reprocessing, like destruction by\ninterstellar shocks or grain growth in dense clouds. Hence, our analysis suggests that moderate\ndestruction by the reverse shock of the SNe, or by interstellar shocks might have occurred during\nthe evolution of the galaxies. A detailed investigation of dust reprocessing in the ISM requires\nthe development of a full chemical evolution model with dust (Valiante et al. 2009, 2011, 2014),\nincluding a two-phase description of the dense and diffuse phases of the ISM (de Bennassuti et al. 2014),\nwhich goes beyond the scope of the present analysis. 
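Since all of the dust budgets quoted in this section are obtained by integrating Eq.~1 over time with no destruction or growth terms, the corresponding bookkeeping is a single quadrature. The sketch below is illustrative only; dpr_agb and dpr_sn are placeholder rate functions (in Msun/yr) that would be built from the AGB and SN channels described above, and the function name is ours.

\begin{verbatim}
import numpy as np

def stellar_dust_budget(t_now, dpr_agb, dpr_sn, n_steps=1000):
    # Maximum stellar dust budget: time integral of the total injection rate,
    # with no reverse-shock or interstellar destruction and no grain growth.
    times = np.linspace(0.0, t_now, n_steps)               # yr
    rates = np.array([dpr_agb(t) + dpr_sn(t) for t in times])
    return np.trapz(rates, times)                          # Msun of dust
\end{verbatim}

For orientation only: a constant total injection rate of 5e-5 Msun/yr sustained over 10 Gyr integrates to 5e5 Msun, which is the order of magnitude of the stellar dust budgets quoted above for the LMC.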
\n\n\\section{Conclusions}\n\nIn this paper, we have compared theoretical dust yields for AGB stars to observations of dust production\nrates by carbon-rich and oxygen-rich AGB stars in the Small and Large Magellanic Clouds. Our aim is to test\nwhether current observations have the potential to discriminate among different models and to shed light\non the complex dependence of the dust yields on the mass and metallicity of progenitor stars.\n\nUsing metallicity dependent star formation histories inferred by Harris \\& Zaritsky (2004, 2009) based\non the MCPS survey, we find that:\n\\begin{itemize}\n\\item Observed dust production rates by carbon-rich AGB stars in the LMC favour theoretical models\nwith strong mass loss, as predicted by the new ATON models with Wachter et al. (2008)\nmass loss prescription.\n\\item The same yields, however, exceed the dust production rate observed for carbon-rich AGB stars\nin the SMC by approximately one order of magnitudes. Hence, current data of the SMC seem to favour\nthe old ATON yields, which were based on the less efficient mass loss rate prescription by \nBl{\\\"o}cker et al. (1995). This conclusion is independent of the uncertainties associated to the\nstar formation or the metal enrichment histories of the galaxy and may be an indication of a \nstronger metallicity dependence of the mass-loss rates during the carbon-star stage.\n\\item Efficient Hot Bottom Burning in ATON models allows stars with $\\rm m_{stars} > 3 \\, M_{\\odot}$ to \nenrich the interstellar medium with silicate and iron grains already at very low metallicities, $\\rm Z \\ge 0.001$,\nat odds with dust yields based on synthetic AGB models. If the dominant stellar populations in the SMC\nhave metallicities $\\rm Z \\le 0.004$, observations of dust production rates by oxygen-rich stars have the\npotential to confirm or refute the theoretical predictions.\n\\item The latest analysis by the HERITAGE team (Gordon et al. 2014) leads to integrated dust masses in the LMC and\nSMC that are a factor 2-4 smaller than previous estimates. When compared to our model predictions,\nwe find that the existing dust mass in the ISM of the MCs can have a stellar origin, even without\nresorting to extreme yields, unless significant\ndestruction of the newly formed dust in SN reverse shock or in the ISM takes place.\n\\end{itemize}\n\nOur study confirms the potential of the Magellanic Clouds as fundamental astrophysical laboratories\nto test our current understanding of the dust cycle in the interstellar medium. Yet, conclusions\nbased on detailed comparison between models and observations are hampered by uncertainties on the\nstar formation and chemical enrichment history of the galaxies, particularly of the Small Magellanic\nCloud. Future observational studies complemented by a more realistic modelling of the two brightest\nsatellites of the Milky Way in a cosmological context (Boylan-Kolchin et al. 2011) are needed to \nmake substantial progress.\n\n \n\n\n\\section*{Acknowledgments}\nWe thank Alberto Bolatto, Michele Cignoni and Mikako Matsuura for their kind collaboration and for useful discussions.\nThe research leading to these results has received funding from the European Research Council under the European Union's \nSeventh Framework Programme (FP\/2007-2013) \/ ERC Grant Agreement n. 306476.\nHH thanks support from the NSC grant NSC102-2119-M-001-006-MY3. 
FK acknowledges financial support from the NSC \nunder grant number NSC100-2112-M-001-023-MY3.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nData is the backbone of NLP research. One of the most fruitful approaches for making progress on NLP tasks has historically been \\textit{benchmarking}. Benchmarking is where the community adopts a high quality dataset for a particular task and tests various models against it to determine which is best. The process of benchmarking requires the effort of a large number of researchers, who collect and clean data, train and evaluate models, and work to understand model weaknesses. This process is iterative: once models perform very highly on the currently accepted community benchmark, another is created to push progress further. Taken as a whole, the benchmarking process is both notoriously difficult and expensive. This is due to a variety of facts: the community is a loose conglomeration of researchers with different areas of expertise, there is ever increasing need for larger datasets \\citep{halevy-etal-2009-unreasonable}, and the AI community has historically under-valued~\\citep{wagstaff-2012-machine} and under-invested in data collection and best practices \\citep{dynabenchpaper, sambasivan-etal-2021-everyone, mattson2022dataperf}.\n\nTo make matters worse, in recent years, benchmarks have been saturating with increasing speed. Taking the trends from the greater AI community into account, it took MNIST~\\citep{lecun1998mnist}, Switchboard~\\citep{godfrey1992switchboard}, and ImageNet~\\citep{deng2009imagenet} several years to saturate, and newer benchmarks such as SQuAD~\\citep{rajpurkar-etal-2016-squad}, GLUE~\\citep{wang2018glue}, and SuperGLUE~\\citep{wang2019superglue} about a year. Because of this, data-centric approaches are gaining more attention \\cite{data-centric-ai, mattson2022dataperf, lhoest-etal-2021-datasets,paullada2021data, luccioni2020data}. This trend is clear evidence of the urgency of finding a sustainable and data-centric way to support the full benchmarking ecosystem, from end-to-end, in a way that causes the least amount of friction for anyone who wants to use it.\n\nIn this paper, we introduce our answer to these issues: an easy-to-use, open source system that integrates the creation of benchmark datasets for any task, the selection of appropriate metrics, and the evaluation of models while natively supporting revisions to the benchmark as models saturate the original version. 
We share a unified library that enables these functionalities for the Dynabench platform \\citep{dynabenchpaper}.\n\n\\begin{figure*}\n\\centering\n\\begin{subfigure}{.24\\textwidth}\n \\centering\n \\begin{minted}[\n breaklines,\n gobble=4,\n frame=single,\n fontsize=\\scriptsize\n ]{yaml}\n context:\n - name: context\n type: string\n placeholder: Enter context...\n input:\n - name: hypothesis\n type: string\n placeholder: Enter hypothesis...\n - name: label\n type: multiclass\n labels:\n - entailed\n - neutral\n - contradictory\n as_goal_message: true\n output:\n - name: label\n - name: probs\n type: probs\n reference_name: label\n \\end{minted}\n\\end{subfigure}\\hfill\n\\begin{subfigure}{.43\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{nli-create.png}\n\\end{subfigure}\n\\begin{subfigure}{.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{nli-validate.png}\n\\end{subfigure}\n\\par\\bigskip\n\\rule[1ex]{\\textwidth}{0.5pt}\n\\par\\bigskip\n\\centering\n\\begin{subfigure}{.24\\textwidth}\n \\centering\n \\begin{minted}[\n breaklines,\n gobble=4,\n frame=single,\n fontsize=\\scriptsize\n ]{yaml}\n input:\n - name: image\n type: image\n display_name: image\n - name: labels\n type: multilabel\n labels:\n - Bird\n - Canoe\n - Croissant\n - Muffin\n - Pizza\n output:\n - name: labels\n \\end{minted}\n\\end{subfigure}\\hfill\n\\begin{subfigure}{.38\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{vision-dataperf-create.png}\n\\end{subfigure}\n\\begin{subfigure}{.37\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{vision-dataperf-validate.png}\n\\end{subfigure}\n\\caption{Two example config files and the data collection and validation interfaces they generate. Only config fields that impact the data collection interfaces are shown (e.g. metrics for model ranking are not shown). The Context and Input fields define the type of data that humans can enter. The Output field defines what models will output, given the Context and the Input. Crowdworkers are typically expected to provide the gold truth annotations for a task. In this case, Output will contain some of the object names from Input and Context. These gold truth annotations are removed from the Context and the Input before they are sent to models to get a model-in-the-loop output.\\\\\\\\ (\\textbf{Top}) The config implements a natural language inference task. The first image is the collection interface, after a crowdworker submits their example and gets a model-in-the-loop response. The second image is the validation interface. For brevity, the metadata field in the config is omitted. This field is used to define the UI components for additional information, such as the ``Explain why your example is correct...'' input field.\\\\\\\\ (\\textbf{Bottom}) The config implements an image labelling task. The first image is the collection interface, before a crowdworker submits their example. The second image is the validation interface with the same example.}\n\\label{fig:task_config_and_collection_interfaces}\n\\end{figure*}\n\n\\section{Background}\n\nDynabench was proposed as an open-source and community-driven platform to host dynamic benchmarks. The existing Dynabench tasks avoid saturation by leveraging crowdworkers who continually interact with state-of-the-art models. Crowdworkers either write examples that fool existing models~\\cite{nie-etal-2020-adversarial}, or collaborate with generative models to increase example diversity \\cite{bartolo2021models}. 
Each task is administered by one or more \\textit{task owners} from the research community who collect data, make the competition's design decisions, select metrics, and configure the task's leaderboard.\n\n\\citet{dynabenchpaper} introduced Dynabench with four English language NLP tasks: Natural Language Inference~\\cite{nie-etal-2020-adversarial}, Extractive QA~\\cite{Bartolo2020BeatTA}, Sentiment Analysis~\\citep{potts-etal-2020-dynasent} and Hate Speech Detection~\\citep{Vidgen2020LearningFT}. In follow-up work,~\\citet{ma2021dynaboard} updated Dynabench with additional leaderboard functionalities that allow task owners to upload task-specific models which are evaluated on each of the task's datasets, and can subsequently be included in model-ensembles that crowdworkers interact with.\nAs the platform kept expanding, it became clear that Dynabench needed a scalable and configurable system for adding new tasks.\n\nA \\textit{task} is an essential concept in understanding our work. On Dynabench, a distinct task is a particular relationship between inputs and outputs.\\footnote{Although, any user can set up a new task that is a duplicate of an existing one, with a duplicate config file.} Inputs and outputs are framed within some pre-specified format. For example, Natural Language Inference is a task on Dynabench. The input format is two strings and the output format is a classification label. The relationship between the inputs and outputs is defined by what humans would do when loosely instructed to treat the input strings as a context (sometimes called the ``premise'') and a hypothesis, and return a label for whether they think the hypothesis is entailed by the context. MNLI~\\cite{williams2017broad}, SNLI~\\citep{snliemnlp2015}, and ANLI~\\citep{nie-etal-2020-adversarial} can be viewed as different datasets that instantiate the same task. \\citet{schlangen-2021-targeting} takes a similar view.\n\n\\begin{figure*}\n\\centering\n\\begin{subfigure}{.29\\textwidth}\n \\centering\n \\begin{minted}[\n breaklines,\n gobble=4,\n frame=single,\n ]{yaml}\n aggregation_metric:\n type: dynascore\n perf_metric:\n type: squad_f1\n reference_name: answer\n delta_metrics:\n - type: fairness\n - type: robustness\n \\end{minted}\n\\end{subfigure}%\n\\begin{subfigure}{.71\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{qa-leaderboard.png}\n\\end{subfigure}\n\\caption{An example of a task config next to the generated model leaderboard. Only config fields that impact the leaderboard are shown. Throughput and memory do not need to be in the config; they are computed by default.}\n\\label{fig:task_config_and_leaderboard}\n\\end{figure*}\n\n\\section{Dynatask}\n\nBefore the introduction of Dynatask, adding a new task required close collaboration between task owners and the Dynabench team, and extensive software contributions to the Dynabench codebase. This paper presents a system that enables Dynabench to scale up to more tasks, including into multimodal and multilingual domains, without such requirements. Now, a task owner can create their own task page on Dynabench with a short task config file. The config file is used to automatically generate crowdworker data collection interfaces, as well as the model and dataset hosting\/evaluating infrastructure. 
The data collection interfaces and hosting overlay existing services such as Amazon Mechanical Turk,\\footnote{\\url{https:\/\/www.mturk.com}} which provide a workforce and payment mechanisms, but do not provide crowdworker interfaces for dynamic model-in-the-loop data collection or their corresponding backends. In fact, local installations of Dynabench can be run on Mechanical Turk. Overall, a Dynabench task owner can set up and host:\n\n\\paragraph{Crowdworker data collection:} Task owners can configure interfaces for data collection. Models-in-the-loop can be optionally added, so crowdworkers can receive real-time model responses from their data (Figure~\\ref{fig:task_config_and_collection_interfaces}).\n\\paragraph{Crowdworker data validation:} Task owners can configure interfaces for crowdworkers to label collected examples as correct or incorrect. (Figure~\\ref{fig:task_config_and_collection_interfaces}).\n\\paragraph{Dynamic dataset metrics:} Metrics on the crowdworker data are computed, such as verified model error rate (vMER) \\citep{nie-etal-2020-adversarial}. Crowdworker example leaderboards are displayed.\n\\paragraph{A train file leaderboard:} Task owners can enable users to upload training data files for the automatic creation, training, and evaluation of models in our evaluation cloud.\n\\paragraph{A dynamic and interactive model leaderboard~\\citep{ma2021dynaboard}:} Task owners can configure a leaderboard, selecting from a variety of metrics to determine model performance. Owners can also upload new datasets, which triggers automatic evaluation for all of the user-uploaded models. Every leaderboard model can be interacted with in real-time. See Figure \\ref{fig:task_config_and_leaderboard} for an example.\n\\paragraph{A model upload pipeline:} Once a new task goes live on Dynabench, our command line tool\\footnote{\\url{https:\/\/github.com\/facebookresearch\/dynalab}} allows anyone to create a handler script and upload models by following a few command line instructions. After models are uploaded, they are dockerized and deployed automatically. Models can be viewed on the leaderboard and put in-the-loop with crowdworkers for data collection.\n\n\\subsection{Task Configuration}\n\nTo become task owners, Dynabench users submit a short written proposal for their task which requires approval by an administrator. We are still developing procedures for how Dynabench accepts tasks; so far, we have reached out to have a discussion with the proposer before accepting their proposal and all non-spam proposals have been slated for acceptance. After approval, the task owner submits a task config file, which can be written in minutes. Once complete, the task is actively hosted on Dynabench; data collection, data validation, model hosting, and model evaluation starts immediately. A complete config file is the combination of a snippet in Figure \\ref{fig:task_config_and_collection_interfaces} with that in Figure \\ref{fig:task_config_and_leaderboard}.\n\nThe task config is a YAML file which allows someone to encode the specifications for their task---it can be viewed as a lightweight declarative programming language. Task owners can specify:\n\n\\textit{The datatypes of the task's inputs and outputs.} There are a variety to choose from, including String, String Selection, Multiclass, Multilabel, Probabilities, and Image. 
The datatype definition enables Dynatask to automatically construct the UIs for data collection, the dataset uploading and downloading infrastructure, and the model uploading and hosting infrastructure.\n\n\\textit{A variety of metrics} to understand the task's datasets and models. Several metrics can currently be computed for the leaderboard: Macro F1, F1 for Visual Question Answering, F1 for Question Answering, Accuracy, BLEU, robustness and fairness \\citep{ma2021dynaboard}, memory usage, and example throughput. Task owners select or propose an aggregation metric, which combines results across multiple datasets and metrics to arrive at a ranking for the leaderboard. Currently, the only supported aggregation metric is the Dynascore \\citep{ma2021dynaboard}, which combines metrics across datasets based on microeconomic utility \\cite{ethayarajh-jurafsky-2020-utility} of user provided weights. Metrics can also be specified for model-in-the-loop data collection to judge whether a model's output matches that of a crowdworker (i.e., whether the model is ``correct''). Dynatask supports a variety of such metrics, including a string F1 threshold (for outputs that are strings), exact match, and simply asking the crowdworker whether the model was correct.\n\n\\textit{Other optional items}, such as messages and instructions that appear in crowdworker interfaces, and options for train-file leaderboards.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{task-owner-interface.png}\n\\caption{The task owner interface for ANLI.}\n\\label{fig:task_owner_interface}\n\\end{figure}\n\n\\subsection{Options After Task Configuration}\n\n\\paragraph{Crowdworker Interfaces, Data Generation, and Data Evaluation:} Data collection interfaces are automatically hosted at \\url{dynabench.org}. In order for Dynabench to scale, task owners source and pay crowdworkers themselves. If crowdworker management, compensation, and sourcing features are needed, an owner can clone Dynabench and run it on Mechanical Turk by hosting the data collection frontend using Mephisto.\\footnote{\\url{https:\/\/github.com\/facebookresearch\/Mephisto}} Task owners can upload context data for crowdworkers to use and download data collected from crowdworkers directly from the Dynabench web interface. Task owners can also initiate new rounds of data collection where they are free to upload entirely new contexts and models. As part of the data collection process, vMER, number of total collected examples, and number of validated examples are computed. Finally, task owners can alter instructions to crowdworkers at any time. They can also specify whether crowdworkers should validate non-model fooling examples, and provide a validation consensus threshold above which examples are considered fully validated. Figure \\ref{fig:task_owner_interface} shows an example of the interface that task owners use to adjust settings.\n\n\\paragraph{Model Submission, Interaction, and Evaluation:} Task owners can decide whether their task accepts model submissions, they can upload datasets for model evaluation, and they can download model evaluation logs for any dataset and model. Task owners can optionally allow users to download these logs to debug their models; see the example in Figure \\ref{fig:model_card}.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{model-card.png}\n\\caption{A cross-section from a model card. 
The task owner has enabled downloading of model evaluation logs for each dataset via the buttons on the right.}\n\\label{fig:model_card}\n\\end{figure}\n\n\\section{Decentralized Evaluation-As-A-Service}\\label{sec:deaas}\n\nMost task owners currently use the centralized Dynabench evaluation and model deployment server. With Dynatask, however, we offer a decentralized evaluation feature that will increase the platform's flexibility even further. With this feature, task owners can set up a Dynabench model deployment and evaluation server or select an existing one. To set up a new server, an owner only needs to follow our documentation, creating an AWS account and installing some Dynabench code along the way. Distributed hosting of model building and evaluation enables Dynatask to scale: no one organization needs to fund hosting for all of the models on Dynabench, and every owner of a model deployment and evaluation server can flexibly upload or take down models to suit their budget. It is also designed with re-usability in mind: several tasks can share the same evaluation servers. Task owners do not need to do any setup if they have permission to use an existing evaluation server.\n\n\\section{Case Studies of Tasks Enabled so Far}\n\nTables \\ref{tab:dynabench_stats} and \\ref{tab:task_types} provide an overview of Dynabench so far. In this section, we report on some use cases. Most of the following projects (besides Image Labelling and Open Domain QA) were added to Dynabench before the introduction of Dynatask, which took months of coding in every case. With Dynatask, they can all be implemented in minutes.\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{lr}\n\\toprule\n Statistic & Count \\\\\\midrule\n Datasets Hosted & 191\\\\\n Unique Crowdworkers & 5,595\\\\\n Model Uploads & 589\\\\\n Data Collection Rounds & 38\\\\\n Tasks (incl. private) & 24 \\\\\n Examples Collected & 559,229\\\\\n Example Validations & 436,922\\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Current Dynabench statistics.}\n\\label{tab:dynabench_stats}\n\\end{table}\n\n\\begin{table*}[t]\n\\small\n\\centering\n\\begin{tabularx}{\\textwidth}{p{0.5\\textwidth} p{0.22\\textwidth} p{0.28\\textwidth}}\n\\toprule\n Selected Dynabench Tasks & Context and Input Types & Output Types \\\\\\midrule\n Hate Speech Detection \\tiny{\\url{https:\/\/dynabench.org\/tasks\/hs}} & String, String, Multiclass & Multiclass, Probs\\\\%\\midrule\n Visual QA \\tiny{\\url{https:\/\/dynabench.org\/tasks\/vqa}} & Image, String & String\\\\%\\midrule\n Extractive QA \\tiny{\\url{https:\/\/dynabench.org\/tasks\/qa}} & String, String, String Select & String Select, Probs\\\\%\\midrule\n Open Domain QA \\tiny{\\url{https:\/\/dynabench.org\/tasks\/qb}} & String, String, String & String, Probs\\\\%\\midrule\n Natural Language Inference \\tiny{\\url{https:\/\/dynabench.org\/tasks\/nli}} & String, String, Multiclass & Multiclass, Probs\\\\%\\midrule\n Sentiment Analysis \\tiny{\\url{https:\/\/dynabench.org\/tasks\/sentiment}} & String, String, Multiclass & Multiclass, Probs\\\\%\\midrule\n Machine Translation \\tiny{\\url{https:\/\/dynabench.org\/tasks\/flores}} & String, String, String, String & String\\\\%\\midrule\n Image Labelling \\tiny{\\url{https:\/\/dynabench.org\/tasks\/vision-dataperf}} & Image, Multilabel & Multilabel\\\\\n\\bottomrule\n\\end{tabularx}\n\\caption{IO types from the task config, for some tasks on Dynabench. 
Tasks share the same building blocks.}\n\\label{tab:task_types}\n\\end{table*}\n\n\\textit{Hate Speech Detection:} There are a number of hate speech detection projects on Dynabench, where a model must label strings as hateful or not. Groups from Oxford, The Alan Turing Institute, The University of Sheffield, and Facebook AI own task pages that focus on collecting adversarial data~\\citep{Vidgen2020LearningFT}, collecting emoji-based hate~\\citep{kirk2021hatemoji}, and evaluating models on a large number of hate speech data perturbations.\n\n\\textit{Visual Question Answering:} To combat saturating datasets for the VQA task, which is about answering a question based on an image, Facebook AI and Tecnol\u00f3gico de Monterrey introduced AdVQA~\\citep{sheng2021human} using Dynabench. The task's model leaderboard has an additional adversarial VQA dataset from Microsoft and Tsinghua~\\citep{li2021adversarial}.\n\n\\textit{Extractive Question Answering:} Groups from UCL and Facebook AI run SQuAD-style~\\citep{rajpurkar-etal-2016-squad} extractive QA projects on Dynabench. The Adversarial QA~\\citep{Bartolo2020BeatTA} project resulted in a popular dataset on the Hugging Face hub~\\citep{lhoest-etal-2021-datasets}. Follow-up projects explored the generation of synthetic adversarial QA data~\\citep{bartolo2021improving}, generative assistants in the loop to help annotators create examples~\\citep{bartolo2021models}, and a study of how adversarial model-in-the-loop training data affects generalization out of domain~\\citep{kaushik2021on}.\n\n\\textit{Open-Domain Question Answering:} A team at Facebook AI and The University of Maryland has started a model-in-the-loop data collection effort for the Quizbowl task \\cite{rodriguez2019quizbowl,wallace-etal-2019-trick}, as well as a model leaderboard. The task is open domain question answering, where both the question and answer are strings.\n\n\\textit{Natural Language Inference:} The NLI dataset ANLI \\cite{nie-etal-2020-adversarial} is currently a popular dataset on Hugging Face datasets~\\cite{lhoest-etal-2021-datasets} and an ongoing Dynabench project. Groups from Facebook AI, UC Berkeley, and UNC have set up additional NLI projects on distinct Dynabench task pages. These projects have ranged from an analysis of the contents of adversarially collected development sets \\citep{anlizing}, to an explication of the benefits of dynamic adversarial data collection over multiple rounds \\citep{wallace2021analyzing}, to model and leaderboard hosting for a large number of robustness-perturbed NLI datasets.\n\n\\textit{Sentiment Analysis:} In later rounds of their work, a team at Stanford used Dynabench to create a new adversarial sentiment analysis dataset, called Dynasent \\citep{potts-etal-2020-dynasent}. They added prompts to their data collection interfaces to encourage crowdworkers to generate naturalistic and diverse data.\n\n\\textit{Large-Scale Machine Translation:} The Workshop on Machine Translation~\\citep{wenzek2021findings} organizers created a Dynabench task page and hosted the FLORES benchmark competition~\\citep{goyal2021the} of over 10,000 language pairs. It featured competitors from Microsoft, Huawei, Tencent, and Facebook, and individual competitors. The result of the competition was a BLEU increase of over 10 points on the full task. 
The owners used Dynabench for its leaderboard, model upload, and evaluation-as-a-service feature, without collecting data on the platform yet.\n\n\\textit{Image Labelling:} DataPerf~\\cite{mattson2022dataperf} is a working group of the non-profit ML Commons, which focuses on dataset benchmarking for general AI. For their image labelling task hosted on Dynabench, they configured their task via the task config to accept training data file uploads. Users upload train files and models are automatically trained against them and evaluated in the evaluation cloud.\n\n\\section{Conclusion}\n\nWe introduced Dynatask, a collection of open source features in the Dynabench platform that empowers anyone to create and own a task on Dynabench with only a short config file. Dynabench started as an NLP project with only four English-only tasks. Since then, Dynatask has helped researchers produce several datasets and host competitions, expanding scalably into multimodal and multilingual domains with owners from various corners of the AI community.\nDynatask offers the functionalities of Dynabench to the broader research community by allowing them to easily create and host new AI tasks on the platform: it provides a one-stop shop for constructing datasets with or without models in the loop, hosting challenges and competitions, investigating the effects of models in the loop, characterizing distributional shift and continual learning, exploring annotator efficiency and expertise, and improving model robustness through collaboration with humans.\n\nFinally, Dynabench is an open source, community-driven effort. Anyone who wants to add a new input\/output type, a new metric, or any other new feature, need only submit a pull request. We hope that our our work can help enable new exciting scientific progress in data-centric AI research in general and dynamic (adversarial) data collection in particular.\n\n\\pagebreak\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe nontrivial properties of the vacuum state are among the most important\npredictions in quantum field theory. These properties are manifested in the\nresponse of the vacuum to external influences such as external fields. A\nsimple model of the influence is realized by imposing prescribed boundary\nconditions on the field operator. The distortion of the spectrum for the\nzero-point fluctuations of a quantum field by these conditions results in\nthe shifts in the vacuum expectation values of physical observables, such as\nthe vacuum energy density and stresses, and induces vacuum forces acting on\nconstraining boundaries. This is the well known Casimir effect (see \\cite%\n{Most97,Plun86,Bord01,Milt02} and references therein). The Casimir effect is\ncommon to all systems characterized by fluctuating quantities and has\nimportant implications on all scales, from cosmological to subnuclear. In\naddition to its fundamental interest this effect also plays an important\nrole in the fabrication and operation of nano- and micro-scale mechanical\nsystems and has become an increasingly popular topic in quantum field theory.\n\nAn interesting topic in the investigations of the Casimir effect has always\nbeen the dependence of the physical characteristics of the vacuum on the\ngeometry of constraining boundaries. Analytic results can usually be found\nonly for highly symmetric geometries including planar, spherically and\ncylindrically symmetric boundaries. 
Recently, exact results for the Casimir force in the geometries of a sphere and a cylinder above a plate have been obtained in \\cite{Bulg06,Emig06} (see also \\cite{Most07}). Aside from their own theoretical and experimental interest, problems with this type of boundary are useful for testing the validity of various approximations used to deal with more complicated geometries. In the present paper we consider a less symmetric, exactly solvable geometry of boundaries, which is a combination of a wedge with coaxial cylindrical shells. The Casimir effect for wedge-shaped regions is well investigated in the literature \\cite{Most97,jphy,Deutsch,brevikI,brevikII,Nest02}. For conformally coupled scalar and electromagnetic fields the vacuum expectation value of the energy-momentum tensor inside the wedge is azimuthally symmetric. In particular, the vacuum energy-momentum tensor is finite everywhere apart from points on the edge. This property is a direct consequence of the conformal invariance in the corresponding problems and does not hold for a non-conformally coupled scalar field. For a scalar field with an arbitrary curvature coupling parameter satisfying a Dirichlet boundary condition on the wedge sides, the vacuum energy-momentum tensor is evaluated in \\cite{Reza02,Saha05cyl}. In addition to the azimuthal dependence, this tensor, unlike the case of conformally coupled fields, is also non-diagonal, with a nonzero azimuthal-radial off-diagonal component.\n\nThe investigation of quantum effects for cylindrical boundaries has received a great deal of attention. In addition to traditional problems of quantum electrodynamics in the presence of material boundaries, the Casimir effect for cylindrical geometries is also important in the flux tube models of confinement \\cite{Fish87,Barb90} and for determining the structure of the vacuum state in interacting field theories \\cite{Ambj83}. The calculation of the vacuum energy for the electromagnetic field with boundary conditions defined on a cylinder turned out to be a technically more involved problem than the analogous one for a sphere. The Casimir energy of an infinite perfectly conducting cylindrical shell was first calculated in Ref. \\cite{Dera81} by introducing an ultraviolet cutoff, and later the corresponding result was derived by using other methods \\cite{Milt99,Gosd98,Lamb99}. The local characteristics of the corresponding electromagnetic vacuum, such as the energy density and vacuum stresses, are considered in \\cite{Sah1cyl} for the interior and exterior regions of a conducting cylindrical shell, and in \\cite{Sah2cyl} for the region between two coaxial shells (see also \\cite{Saha00rev}). The electromagnetic vacuum forces acting on the boundaries in the geometry of two cylinders are also considered in Refs. \\cite{Mazz02}. In Ref. \\cite{Rome01} scalar vacuum densities and the zero-point energy for a general Robin boundary condition on a cylindrical surface in an arbitrary number of spacetime dimensions are studied for a massive scalar field with a general curvature coupling parameter. The corresponding problem for the geometry of two coaxial cylindrical shells is considered in \\cite{Saha06cyl}. 
A large number of papers is devoted to\nthe investigation of the various aspects of the Casimir effect for a\ndielectric cylinder (see, for instance, \\cite{Milt02,Nest04} and references\ntherein).\n\nIn the geometry of a wedge with coaxial cylindrical boundary the modes are\nstill factorizable for both scalar and electromagnetic fields and the\ncorresponding problems are exactly solvable. The total Casimir energy of a\nsemi-circular infinite cylindrical shell with perfectly conducting walls is\nconsidered in \\cite{Nest01} by using the zeta function technique. For a\nscalar field with an arbitrary curvature coupling parameter obeying\nDirichlet boundary condition the Wightman function, the vacuum expectation\nvalues of the field square and the energy-momentum tensor in the geometry of\na wedge with an arbitrary opening angle and with a cylindrical boundary are\ninvestigated in \\cite{Reza02,Saha05cyl}. The corresponding Casimir densities\nfor the electromagnetic field with perfect conductor boundary conditions on\nbounding surfaces are considered in \\cite{Saha07El}. The closely related\nproblem with a cylindrical shell in the geometry of a cosmic string is\ndiscussed in \\cite{Beze06Sc,Beze07El} for scalar and electromagnetic fields.\nIn both scalar and electromagnetic cases the application of a variant of the\ngeneralized Abel-Plana formula \\cite{Saha00rev} enables to extract from the\nvacuum expectation values the parts corresponding to the geometry of a wedge\nwithout the cylindrical shell and to present the shell induced parts in\nterms of rapidly converging integrals. This geometry is also interesting\nfrom the point of view of general analysis for surface divergences in the\nexpectation values of local physical observables for boundaries with\ndiscontinuities. The nonsmoothness of the boundary generates additional\ncontributions to the heat kernel coefficients (see, for instance, the\ndiscussion in \\cite{Nest04,Apps98,Dowk00,Nest03} and references therein).\n\nIn this paper we investigate one-loop vacuum quantum effects for a scalar\nfield in the geometry of a wedge with two coaxial cylindrical shells\nassuming Dirichlet boundary condition on bounding surfaces. This geometry\ngeneralizes various special cases previously considered in literature for\nwedge-shaped and cylindrical boundaries. In addition, we also study the role\nof nonzero mass of the field quanta. The presence of boundaries eliminates\nthe translational invariance and as a result the properties of the vacuum\nare nonuniform. The most important quantities characterizing the local\nproperties of the vacuum are the expectation values of the field square and\nthe energy-momentum tensor. In addition to describing the physical structure\nof the quantum field at a given point, the energy-momentum tensor acts as\nthe source of gravity in the Einstein equations. It therefore plays an\nimportant role in modelling a self-consistent dynamics involving the\ngravitational field. As the first step for the investigation of vacuum\ndensities we evaluate the positive frequency Wightman function. This\nfunction gives comprehensive insight into vacuum fluctuations and determines\nthe response of a particle detector of the Unruh-DeWitt type. Having the\nvacuum energy-momentum tensor we can derive the vacuum forces acting on\nconstraining boundaries evaluating the vacuum stresses at points on the\nbounding surfaces. 
As we will see below, in the geometry under consideration\nthese forces are position dependent on the boundary and cannot be obtained\nby the global method using the total Casimir energy (on the advantages of\nthe local method see also \\cite{Acto96}). In the limiting case from the\nresults of the present paper the local vacuum densities are obtained for the\ngeometry of a rectangular waveguide (for the local analysis of quantum\nfields confined in rectangular cavities see \\cite{Acto96,Acto94,Acto95}).\n\nThe paper is organized as follows. The next section is devoted to the\nevaluation of the Wightman function for a massive scalar field in the region\nbounded by two cylindrical shells and by the wedge walls. This function is\ndecomposed into three parts: the first one corresponds to the geometry of a\nwedge without cylindrical shells, the second one is induced by a single\ncylindrical shell when the second shell is absent, and the third one is\ninduced by the presence of the second shell. By using the formula for the\nWightman function, in section \\ref{sec:phi2EMT} the vacuum expectation\nvalues of the field square and the energy-momentum tensor are evaluated and\ntheir behavior is investigated in various asymptotic regions of the\nparameters. In section \\ref{sec:forces} we consider the vacuum forces acting\non bounding surfaces. For separate boundary elements these forces are\ndecomposed into self-action and interaction parts. The interaction forces\nare investigated in detail and numerical examples are presented. On the\nexample of interaction forces we also demonstrate the limiting transition to\nthe geometry of a rectangular waveguide. Finally, the results are summarized\nand discussed in section \\ref{sec:Conclusion}.\n\n\\section{Wightman function}\n\n\\label{sec:WF}\n\nWe consider a real scalar field $\\varphi $ inside a wedge with opening angle\n$\\phi _{0}$ and with two coaxial cylindrical shells of radii $a$ and $b$, $%\na2a$, the integrals over the arcs of the circle with large\nradius vanish, whereas the integrals over $(0,iak_{m})$ and $(0,-iak_{m})$\ncancel out. Introducing the Bessel modified functions one obtains%\n\\begin{eqnarray}\n\\int_{0}^{\\infty }dz\\frac{h(x)\/a}{J_{qn}^{2}(x)+Y_{qn}^{2}(x)}\n&=&\\int_{0}^{\\infty }dx\\,x\\frac{J_{qn}(xr)J_{qn}(xr^{\\prime })}{\\sqrt{%\nx^{2}+k_{m}^{2}}}\\exp (-i\\Delta t\\sqrt{x^{2}+k_{m}^{2}}) \\notag \\\\\n&&-\\frac{2}{\\pi }\\int_{k_{m}}^{\\infty }dx\\frac{xI_{qn}(ax)}{K_{qn}(ax)}\\frac{%\nK_{qn}(xr)K_{qn}(xr^{\\prime })}{\\sqrt{x^{2}-k_{m}^{2}}}\\cosh (\\Delta t\\sqrt{%\nx^{2}-k_{m}^{2}}). \\label{int1}\n\\end{eqnarray}%\nBy taking into account this relation, the Wightman function is presented in\nthe form%\n\\begin{eqnarray}\n\\left\\langle 0|\\varphi (x)\\varphi (x^{\\prime })|0\\right\\rangle\n&=&\\left\\langle \\varphi (x)\\varphi (x^{\\prime })\\right\\rangle\n_{0}+\\left\\langle \\varphi (x)\\varphi (x^{\\prime })\\right\\rangle _{a}-\\frac{q%\n}{2^{D-3}\\pi ^{D}}\\sum_{n=1}^{\\infty }\\sin (qn\\phi )\\sin (qn\\phi ^{\\prime\n})\\, \\notag \\\\\n&&\\times \\int d^{N}\\mathbf{k}e^{i\\mathbf{k}\\Delta \\mathbf{r}_{\\parallel\n}}\\int_{k_{m}}^{\\infty }dx\\,x\\frac{\\Omega _{a,qn}(ax,bx)}{\\sqrt{%\nx^{2}-k_{m}^{2}}}G_{qn}(ax,rx)G_{qn}(ax,r^{\\prime }x) \\notag \\\\\n&&\\times \\cosh (\\Delta t\\sqrt{x^{2}-k_{m}^{2}}). 
\\label{W4}\n\\end{eqnarray}%\nIn this formula,%\n\\begin{eqnarray}\n\\left\\langle \\varphi (x)\\varphi (x^{\\prime })\\right\\rangle _{0} &=&\\frac{q}{%\n2^{D-2}\\pi ^{D-1}}\\sum_{n=1}^{\\infty }\\sin (qn\\phi )\\sin (qn\\phi ^{\\prime\n})\\int d^{N}\\mathbf{k}\\,e^{i\\mathbf{k}\\Delta \\mathbf{r}_{\\parallel }} \\notag\n\\\\\n&&\\times \\int_{0}^{\\infty }dx\\,x\\frac{J_{qn}(xr)J_{qn}(xr^{\\prime })}{\\sqrt{%\nx^{2}+k_{m}^{2}}}\\exp (-i\\Delta t\\sqrt{x^{2}+k_{m}^{2}}), \\label{W0}\n\\end{eqnarray}%\nis the Wightman function for the wedge without cylindrical boundaries, and\n\\begin{eqnarray}\n\\left\\langle \\varphi (x)\\varphi (x^{\\prime })\\right\\rangle _{a} &=&-\\frac{q}{%\n2^{D-3}\\pi ^{D}}\\sum_{n=1}^{\\infty }\\sin (qn\\phi )\\sin (qn\\phi ^{\\prime\n})\\int d^{N}\\mathbf{k}\\,e^{i\\mathbf{k}\\Delta \\mathbf{r}_{\\parallel }} \\notag\n\\\\\n&&\\times \\int_{k_{m}}^{\\infty }dx\\,x\\frac{I_{qn}(ax)}{K_{qn}(ax)}\\frac{%\nK_{qn}(xr)K_{qn}(xr^{\\prime })}{\\sqrt{x^{2}-k_{m}^{2}}}\\cosh (\\Delta t\\sqrt{%\nx^{2}-k_{m}^{2}}), \\label{Wa}\n\\end{eqnarray}%\nis the part of the Wightman function induced by a single cylindrical shell\nwith radius $a$ in the region $r>a$. Hence, the last term on the right of (%\n\\ref{W4}) is induced by the presence of the second shell with radius $b$.\n\nAn equivalent form for the Wightman function is obtained from (\\ref{W4}) by\nusing the identity%\n\\begin{eqnarray}\n&&\\sum_{j=a,b}n_{j}\\Omega _{j,qn}(ax,bx)G_{qn}(jx,xr)G_{qn}(jx,xr^{\\prime })\n\\notag \\\\\n&=&\\frac{K_{qn}(bx)}{I_{qn}(bx)}I_{qn}(xr)I_{qn}(xr^{\\prime })-\\frac{%\nI_{qn}(ax)}{K_{qn}(ax)}K_{qn}(xr)K_{qn}(xr^{\\prime }), \\label{iden2}\n\\end{eqnarray}%\nwith the notations $n_{a}=1$, $n_{b}=-1$, and%\n\\begin{equation}\n\\Omega _{b,qn}(x,y)=\\frac{I_{qn}(x)\/I_{qn}(y)}{%\nK_{qn}(x)I_{qn}(y)-K_{qn}(y)I_{qn}(x)}. \\label{Omb}\n\\end{equation}%\nThis leads to the following representation for the Wightman function%\n\\begin{eqnarray}\n\\left\\langle 0|\\varphi (x)\\varphi (x^{\\prime })|0\\right\\rangle\n&=&\\left\\langle \\varphi (x)\\varphi (x^{\\prime })\\right\\rangle\n_{0}+\\left\\langle \\varphi (x)\\varphi (x^{\\prime })\\right\\rangle _{b}-\\frac{q%\n}{2^{D-3}\\pi ^{D}}\\sum_{n=1}^{\\infty }\\sin (qn\\phi )\\sin (qn\\phi ^{\\prime\n})\\, \\notag \\\\\n&&\\times \\int d^{N}\\mathbf{k}e^{i\\mathbf{k}\\Delta \\mathbf{r}_{\\parallel\n}}\\int_{k_{m}}^{\\infty }dx\\,x\\frac{\\Omega _{b,qn}(ax,bx)}{\\sqrt{%\nx^{2}-k_{m}^{2}}}G_{qn}(bx,xr)G_{qn}(bx,xr^{\\prime }) \\notag \\\\\n&&\\times \\cosh (\\Delta t\\sqrt{x^{2}-k_{m}^{2}}). \\label{W5}\n\\end{eqnarray}%\nIn this formula,%\n\\begin{eqnarray}\n\\left\\langle \\varphi (x)\\varphi (x^{\\prime })\\right\\rangle _{b} &=&-\\frac{q}{%\n2^{D-3}\\pi ^{D}}\\sum_{n=1}^{\\infty }\\sin (qn\\phi )\\sin (qn\\phi ^{\\prime\n})\\int d^{N}\\mathbf{k}\\,e^{i\\mathbf{k}\\Delta \\mathbf{r}_{\\parallel }} \\notag\n\\\\\n&&\\times \\int_{k_{m}}^{\\infty }dx\\,x\\frac{K_{qn}(bx)}{I_{qn}(bx)}\\frac{%\nI_{qn}(xr)I_{qn}(xr^{\\prime })}{\\sqrt{x^{2}-k_{m}^{2}}}\\cosh (\\Delta t\\sqrt{%\nx^{2}-k_{m}^{2}}) \\label{Wb}\n\\end{eqnarray}%\nis the part induced by a single cylindrical shell of radius $b$ in the\nregion $r<b$. 
The second term on the right of (\\ref{p2wcyl}) is\ndecomposed as\n\n\\begin{equation}\np_{2,\\mathrm{cyl}}=p_{2,\\mathrm{cyl}}^{(j)}+p_{2,\\mathrm{cyl}}^{(jj^{\\prime\n})}, \\label{p2}\n\\end{equation}%\nwhere $p_{2,\\mathrm{cyl}}^{(j)}=-\\langle T_{2}^{2}\\rangle _{j}|_{\\phi =0}$\nis the effective azimuthal pressure on the wedges induced by a single\ncylindrical boundary with radius $j$, $j=a,b$, and $p_{2,\\mathrm{cyl}%\n}^{(jj^{\\prime })}=-\\langle T_{2}^{2}\\rangle _{jj^{\\prime }}|_{\\phi =0}$ is\ninduced by the presence of the second cylindrical boundary. Substituting $%\ni=2 $ and $\\phi =0,\\phi _{0}$ in the formulae for the VEVs of the\nenergy-momentum tensor from the previous section, for the forces induced by\nthe shells we find\n\\begin{eqnarray}\np_{2,\\mathrm{cyl}}^{(a)} &=&-\\frac{q^{3}A_{D}}{r^{2}}\\sum_{n=1}^{\\infty\n}n^{2}\\int_{m}^{\\infty }dx\\,x\\left( x^{2}-m^{2}\\right) ^{\\frac{D-3}{2}}\\frac{%\nI_{qn}(ax)}{K_{qn}(ax)}K_{qn}^{2}(rx), \\label{p2cyla} \\\\\np_{2,\\mathrm{cyl}}^{(jj^{\\prime })} &=&-\\frac{q^{3}A_{D}}{r^{2}}%\n\\sum_{n=1}^{\\infty }n^{2}\\int_{m}^{\\infty }dx\\,x\\left( x^{2}-m^{2}\\right) ^{%\n\\frac{D-3}{2}}\\Omega _{j,qn}(ax,bx)G_{qn}^{2}(jx,rx). \\label{p2cyljjp}\n\\end{eqnarray}%\nThe expression for $p_{2,\\mathrm{cyl}}^{(b)}$ is obtained from (\\ref{p2cyla}%\n) by the replacements $a\\rightarrow b$, $I\\rightleftarrows K$. Single shell\nparts in the forces acting on the wedge sides, $p_{2,\\mathrm{cyl}}^{(j)}$,\nare finite for all values $r$ except the points on the edge $r=j$. The\nsecond shell-induced part, $p_{2,\\mathrm{cyl}}^{(jj^{\\prime })}$, is finite\nfor all $r$ except the points on the edge $r=j^{\\prime }$, $j^{\\prime }=a,b$%\n, $j^{\\prime }\\neq j$. Note that $p_{2,\\mathrm{cyl}}^{(jj^{\\prime })}=0$ for\n$r=j$. The integrands in (\\ref{p2cyla}) and (\\ref{p2cyljjp}) are positive\nand, hence, the corresponding vacuum forces are attractive. As before we can\nwrite%\n\\begin{equation}\np_{2,\\mathrm{cyl}}=\\sum_{j=a,b}p_{2,\\mathrm{cyl}}^{(j)}+\\Delta p_{2,\\mathrm{%\ncyl}}, \\label{p2interf}\n\\end{equation}%\nwhere the interference part $\\Delta p_{2,\\mathrm{cyl}}$ is finite for all\nvalues $a\\leqslant r\\leqslant b$. As it follows from (\\ref{p2cyla}), (\\ref%\n{p2cyljjp}), the corresponding forces do not depend on the curvature\ncoupling parameter.\n\nIn the limit $a\\rightarrow 0$ the main contribution into $p_{2,\\mathrm{cyl}%\n}^{(a)}$ and $\\Delta p_{2,\\mathrm{cyl}}$ comes from the term with $n=1$ and\nthese quantities behave like $a^{2q}$. In the limit $b\\rightarrow \\infty $\nand for a massive scalar field the parts $p_{2,\\mathrm{cyl}}^{(b)}$ and $%\n\\Delta p_{2,\\mathrm{cyl}}$ are exponentially suppressed by the factor $%\ne^{-2mb}$. In the same limit and for a massless field the main contribution\ncomes from the summand with $n=1$ and these parts behave as $1\/b^{D+2q-1}$.\nNow we consider the forces acting on the wedge sides in the limit of small\nvalues of the opening angle when the parameter $q$ is large, $q\\gg 1$. In\nthis limit the order of the modified Bessel functions is large and we can\nuse the uniform asymptotic expansions for these functions. By using these\nexpansions, it can be seen that the main contribution comes from the $n=1$\nterm and from the lower limit of the integral. To the leading order we find%\n\\begin{equation}\np_{2,\\mathrm{cyl}}^{(j)}\\approx -\\frac{q^{(D+3)\/2}\\exp [-2q|\\ln (j\/r)|]}{%\n(2\\pi )^{(D+1)\/2}r^{2}|r^{2}-j^{2}|^{(D-1)\/2}}. 
\\label{p2jcylq}\n\\end{equation}%\nIn the similar way, for the interference part of the force one has:%\n\\begin{equation}\n\\Delta p_{2,\\mathrm{cyl}}\\approx \\frac{2q^{(D+3)\/2}(a\/b)^{2q}}{(2\\pi\n)^{(D+1)\/2}r^{2}(b^{2}-a^{2})^{(D-1)\/2}}. \\label{Deltp2q}\n\\end{equation}%\nIn figure \\ref{fig2} we have plotted the quantities $a^{4}p_{2,\\mathrm{cyl}%\n}^{(j)}$, $j=a,b$, and $a^{4}p_{2,\\mathrm{cyl}}$ as functions of $r\/a$ for $%\nD=3$ massless scalar field. The graphs are given for the wedges with $\\phi\n_{0}=\\pi \/2$ (full curves) and $\\phi _{0}=3\\pi \/2$ (dashed curves) and for $%\nb\/a=1.5$.\n\n\\begin{figure}[tbph]\n\\begin{center}\n\\epsfig{figure=sahafig2.eps,width=8.cm,height=7cm}\n\\end{center}\n\\caption{Vacuum forces acting on the wedge sides due to the\npresence of cylindrical shells for $D=3$ massless scalar field:\n$a^{4}p_{2,\\mathrm{cyl}}^{(j)}$ and $a^{4}p_{2,\\mathrm{cyl}}$. The\nfull (dashed) curves correspond to the wedge with $\\protect\\phi\n_{0}=\\protect\\pi \/2$ ($\\protect\\phi _{0}=3\\protect\\pi \/2$) and we\nhave taken\n$b\/a=1.5$. The curves a (b) correspond to the effective pressure $a^{4}p_{2,%\n\\mathrm{cyl}}^{(a)}$ ($a^{4}p_{2,\\mathrm{cyl}}^{(b)}$)\\ when the shell with\nradius $a$ ($b$) is present only, and the curves ab are for the effective\npressure $a^{4}p_{2,\\mathrm{cyl}}$ when the both shells are present.}\n\\label{fig2}\n\\end{figure}\n\nNow we turn to the interaction forces acting on the cylindrical boundaries.\nThese forces are determined by the $_{1}^{1}$-component of the\nenergy-momentum tensor evaluated on the corresponding surfaces. Similar to\nthe previous case, the effective pressure on the cylindrical shell $r=j$ is\npresented as the sum\n\\begin{equation}\np^{(j)}=p_{1}^{(j)}+p^{(jj^{\\prime })}, \\label{p1j}\n\\end{equation}%\nwhere $p_{1}^{(j)}=-(\\langle T_{1}^{1}\\rangle _{0}+\\langle T_{1}^{1}\\rangle\n_{j})|_{r=j}$ is the radial vacuum stress on the cylinder with the radius $j$\nwhen the second cylinder is absent and $p^{(jj^{\\prime })}=-\\langle\nT_{1}^{1}\\rangle _{jj^{\\prime }}|_{r=j}$ is the additional stress on this\ncylindrical surface when the second cylinder is present. Note that the\noff-diagonal component $\\langle T_{1}^{2}\\rangle _{jj^{\\prime }}$ vanishes\non the shell $r=j$ and does not contribute to the force. The part $%\np_{1}^{(j)}$ includes the self-action force on the cylindrical shell and\nbelongs to the second group of quantities in the classification given in the\nprevious section. Its evaluation requires more realistic model for the\ninteraction of the quantum field. Unlike to the self-action force, the\ninteraction force given by the second term on the right of (\\ref{p1j}) is\nfinite for all nonzero distances between the shells and can be evaluated by\nboundary condition calculations. From the last term on the right of (\\ref%\n{vevemt_ii}) taking $i=1$ and $r=j$ one finds:%\n\\begin{equation}\np^{(jj^{\\prime })}=-\\frac{qA_{D}}{j^{2}}\\sum_{n=1}^{\\infty }\\sin ^{2}(qn\\phi\n)\\int_{m}^{\\infty }dx\\,x\\left( x^{2}-m^{2}\\right) ^{\\frac{D-3}{2}}\\Omega\n_{j,qn}(ax,bx). \\label{Deltap1j}\n\\end{equation}%\nFrom this formula we see that $p^{(jj^{\\prime })}<0$ and the corresponding\nforces are always attractive. 
The expression for the interaction forces\nbetween the cylindrical shells can also be written in the form%\n\\begin{equation}\np^{(jj^{\\prime })}=\\frac{qn_{j}A_{D}}{j}\\frac{\\partial }{\\partial j}%\n\\sum_{n=1}^{\\infty }\\sin ^{2}(qn\\phi )\\int_{m}^{\\infty }dx\\,x\\left(\nx^{2}-m^{2}\\right) ^{\\frac{D-3}{2}}\\ln \\left[ 1-\\frac{I_{qn}(ax)K_{qn}(bx)}{%\nI_{qn}(bx)K_{qn}(ax)}\\right] , \\label{pjint1}\n\\end{equation}%\nwhere, as before, $n_{a}=1$, $n_{b}=-1$. As for the forces acting on the\nwedge sides, the interaction forces do not depend on the curvature coupling\nparameter.\n\nNow we consider the behavior of the interaction forces in asymptotic regions\nof the parameters. In the limit $a\\rightarrow 0$ the main contribution in\nthe sum of formula (\\ref{Deltap1j}) comes from the $n=1$ term and $%\nj^{2}p^{(jj^{\\prime })}\\sim a^{2q}$. For large values of the exterior shell\nradius, $b\\rightarrow \\infty $, and for a massive field the interaction\nforces $p^{(jj^{\\prime })}$ are suppressed by the factor $e^{-2mb}$. In the\nsame limit and for a massless field one has $j^{2}p^{(jj^{\\prime })}\\sim\n1\/b^{D+2q-1}$. For small values of the wedge opening angle, assuming that $%\nq\\gg 1$, in the way similar to that used for the estimation of the forces\nacting on the wedge sides, one finds%\n\\begin{equation}\nj^{2}p^{(jj^{\\prime })}\\approx -\\frac{4q^{(D+3)\/2}(a\/b)^{2q}\\sin ^{2}(q\\phi )%\n}{(2\\pi )^{(D+1)\/2}r^{2}(b^{2}-a^{2})^{(D-1)\/2}}. \\label{pjjq}\n\\end{equation}%\nIn figure \\ref{fig3} we have plotted the interaction forces acting on\ncylindrical shells, $a^{4}p^{(jj^{\\prime })}$, as functions of $\\phi \/\\phi\n_{0}$ for wedges with $\\phi _{0}=\\pi \/2$ (full curves) and $\\phi _{0}=3\\pi\n\/2 $ (dashed curves) and for $b\/a=1.5$ in the case of $D=3$ massless scalar\nfield. The curves a are for $p^{(ab)}$ and the curves b are for $p^{(ba)}$.\n\n\\begin{figure}[tbph]\n\\begin{center}\n\\epsfig{figure=sahafig3.eps,width=8.cm,height=7cm}\n\\end{center}\n\\caption{Vacuum forces acting on the cylindrical shell due to presence of\nthe second shell, $a^{4}p^{(jj^{\\prime })}$ as functions on $\\protect\\phi \/%\n\\protect\\phi _{0}$, for $D=3$ massless scalar field. The full\n(dashed) curves correspond to the wedge with $\\protect\\phi\n_{0}=\\protect\\pi \/2$ ($\\protect\\phi _{0}=3\\protect\\pi \/2$) and in\nboth cases $b\/a=1.5$. The curves a (b) correspond to the forces\nacting on the shell with radius $a$ ($b$).} \\label{fig3}\n\\end{figure}\n\nNote that in the geometry of two coaxial cylindrical shells without a wedge\nthe corresponding interaction forces are given by the formula \\cite%\n{Saha06cyl}\n\\begin{equation}\np^{(jj^{\\prime })}=-\\frac{A_{D}}{2j^{2}}\\sideset{}{'}{\\sum}_{n=0}^{\\infty\n}\\int_{m}^{\\infty }du\\,u\\left( u^{2}-m^{2}\\right) ^{\\frac{D-3}{2}}\\Omega\n_{j,n}(au,bu), \\label{pjj2cyl}\n\\end{equation}%\nwhere the prime on the sum sign means that the term $n=0$ should be halved.\nFor $D=3$ massless scalar field and for $b\/a=1.5$ from this formula we have $%\np^{(ab)}\\approx -0.437\/a^{4}$ and $p^{(ba)}\\approx -0.254\/a^{4}$. As it has\nbeen shown in \\cite{Saha06cyl}, the interaction forces (\\ref{pjj2cyl}) can\nalso be obtained from the corresponding part in the total Casimir energy\ndifferentiating over the radii of cylindrical shells. 
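\nFor orientation, the structure of the two-cylinder result (\\ref{pjj2cyl}) can also be explored numerically. The following small script is only an illustrative sketch and not part of the derivation: the function $\\Omega _{a,n}$ is written by analogy with (\\ref{Omb}), the normalization constant $A_{D}$ is left as a free input (so the printed values are defined only up to this constant), and the truncation parameters are chosen ad hoc.\n\\begin{verbatim}\n# Illustrative sketch only: numerical evaluation of the two-cylinder\n# interaction pressure (pjj2cyl) for a D=3 massless field and b/a = 1.5.\n# Omega_a below is assumed by analogy with Eq. (Omb); the overall constant\n# A_D is a placeholder and must be supplied.\nfrom scipy.special import iv, kv\nfrom scipy.integrate import quad\n\ndef omega_b(n, x, y):\n    # Eq. (Omb): [I_n(x)/I_n(y)] / [K_n(x)I_n(y) - K_n(y)I_n(x)]\n    return (iv(n, x) / iv(n, y)) / (kv(n, x) * iv(n, y) - kv(n, y) * iv(n, x))\n\ndef omega_a(n, x, y):\n    # Assumed by analogy with Eq. (Omb); not quoted from the text above.\n    return (kv(n, y) / kv(n, x)) / (kv(n, x) * iv(n, y) - kv(n, y) * iv(n, x))\n\ndef primed_sum(omega, a=1.0, b=1.5, n_max=30):\n    # Primed sum over n (n = 0 term halved) of the u-integral in (pjj2cyl),\n    # for a massless field in D = 3, in units of the inner radius a.\n    total = 0.0\n    for n in range(n_max + 1):\n        w = 0.5 if n == 0 else 1.0\n        val, _ = quad(lambda u: u * omega(n, a * u, b * u), 1e-6, 200.0, limit=400)\n        total += w * val\n    return total\n\nA_D = 1.0   # placeholder normalization; the paper's A_D fixes the absolute scale\np_ab = -A_D / 2.0 * primed_sum(omega_a)              # pressure on the inner shell\np_ba = -A_D / (2.0 * 1.5 ** 2) * primed_sum(omega_b) # pressure on the outer shell\nprint('a^4 p^(ab) (up to A_D):', p_ab)\nprint('a^4 p^(ba) (up to A_D):', p_ba)\n\\end{verbatim}\nBoth the $n$-sum and the $u$-integral converge rapidly due to the exponential decay of $\\Omega _{j,n}$, in agreement with the suppression factors discussed above.\n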
In the geometry under\nconsideration in the present paper the Casimir forces are position dependent\non the boundary and cannot be obtained by global methods using the total\nCasimir energy.\n\nIn the limit $\\phi _{0}\\rightarrow 0$, $a,b\\rightarrow \\infty $, assuming\nthat $b-a\\equiv L_{1}$ and $a\\phi _{0}\\equiv L_{2}$ are fixed, from the\nformulae given above we obtain the corresponding results for the geometry of\na rectangular waveguide with sides $L_{1}$ and $L_{2}$. Here we discuss this\nlimiting transition for the case of the interaction forces $p^{(jj^{\\prime\n})}$. The consideration of the other quantities is done in the similar way.\nIn the limit under consideration the parameter $q$ is large and we can\nreplace the modified Bessel functions by the corresponding uniform\nasymptotic expansions. By using these expansions it can be seen that to the\nleading order we have%\n\\begin{equation}\n\\Omega _{j,\\nu }(a\\nu z,b\\nu z)\\approx \\frac{2\\nu \\sqrt{1+a^{2}z^{2}}}{%\ne^{2\\nu \\sqrt{1+a^{2}z^{2}}L_{1}\/a}-1},\\;\\nu =qn. \\label{Omlim}\n\\end{equation}%\nIntroducing in (\\ref{pjjq}) a new integration variable $z=x\/qn$ and by\nmaking use of (\\ref{Omlim}), after some transformations, to the leading\norder we find%\n\\begin{equation}\np^{(jj^{\\prime })}\\approx -\\frac{2\\pi A_{D}}{L_{1}^{D}L_{2}}%\n\\sum_{n=1}^{\\infty }\\sin ^{2}(\\pi ny\/L_{2})\\int_{0}^{\\infty }dx\\,\\frac{%\nx^{D-2}\\sqrt{x^{2}+c_{n}^{2}}}{e^{2\\sqrt{x^{2}+c_{n}^{2}}}-1}%\n,\\;c_{n}^{2}=m^{2}L_{1}^{2}+(\\pi nL_{1}\/L_{2})^{2}, \\label{pjjlim}\n\\end{equation}%\nwhere $y=a\\phi $. The expression on the right of this formula is the vacuum\ninteraction force per unit surface between the facets of the rectangular\nparallelepiped separated by the distance $L_{1}$ and $y$ is the Cartesian\ncoordinate parallel to these facets. Other facets of the parallelepiped are\nlocated at $y=0$ and $y=L_{2}$. Introducing in (\\ref{pjjlim}) $y=y^{\\prime\n}+L_{2}\/2$ and taking the limit $L_{2}\\rightarrow \\infty $ with fixed value $%\ny^{\\prime }$, from (\\ref{pjjlim}) the vacuum forces for two infinite\nparallel Dirichlet plates are obtained. Note that the local vacuum densities\nfor a quantum field confined within rectangular cavities are investigated in\n\\cite{Acto96,Acto94,Acto95} (for corresponding global quantities such as the\ntotal Casimir energy see \\cite{Most97,Milt02} and references therein).\n\n\\section{Conclusion}\n\n\\label{sec:Conclusion}\n\nIn this paper we have considered one-loop quantum vacuum effects for a\nmassive scalar field in the geometry of a wedge with two coaxial cylindrical\nshells. We have assumed that the field satisfies Dirichlet boundary\ncondition on the bounding surfaces. This geometry generalizes various\nspecial cases previously discussed in literature, including wedge-shaped\nregions, cylindrical boundaries, and rectangular waveguides. The most\nimportant local characteristics of the quantum vacuum are the VEVs for the\nfield square and the energy-momentum tensor. To evaluate these VEVs, as the\nfirst step we construct the positive frequency Wightman function. The\ncorresponding eigensum contains a summation over the zeros of the\ncombination of Bessel and Neumann functions. The application of the\ngeneralized Abel-Plana formula to the corresponding sum allows to present\nthe Wightman function in decomposed form given by formulae (\\ref{W4}) and (%\n\\ref{W5}). 
In this representations the first term on the right is the\nWightman function for the wedge without cylindrical boundary, the term $%\n\\left\\langle \\varphi (x)\\varphi (x^{\\prime })\\right\\rangle _{j}$ is induced\nby a single shell with radius $j$ when the second shell is absent, and the\nlast terms on the right are induced by the presence of the second shell. For\npoints away from the shells the last two terms are finite in the coincidence\nlimit and the renormalization is needed for the first term only. By taking\nthe coincidence limit, we have obtained similar representations for the VEVs\nof the field square and the energy-momentum tensor, formulae (\\ref{VEVphi2})\nand (\\ref{VEVemt}). More symmetric decompositions are given by formulae (\\ref%\n{interphi2}) and (\\ref{Tikren}), where the last interference term is finite\neverywhere including points on the shells. In the limit $a\\rightarrow 0$ the\ninterference parts tends to zero like $a^{2q}$. For large values of the\nexterior shell radius, $b\\rightarrow \\infty $, the interference terms in the\nVEVs behave as $e^{-2mb}\/b^{(D-1)\/2}$ for a massive field and as $%\n1\/b^{D+2q-1}$ for a massless one. For a wedge with small opening angle, $%\nq\\gg 1$, the main contribution into the interference parts of the VEVs comes\nfrom the summands with $n=1$ and these parts are suppressed by the factor $%\n(a\/b)^{2q}$.\n\nIn section \\ref{sec:forces} we have considered the vacuum forces acting on\nconstraining boundaries. In the geometry under consideration these forces\nare position dependent on the boundary and cannot be obtained by global\nmethods using the total Casimir energy. The forces acting on the wedge sides\nare determined by the $_{2}^{2}$-component of the vacuum energy-momentum\ntensor and are presented in the decomposed form (\\ref{p2wcyl}). In this\nrepresentation the first term on the right determines the force when the\nshells are absent and the second term is induced by the shells. In its turn\nthe latter is decomposed into a single shell and second shell induced parts\n(see formula (\\ref{p2})) given by formulae (\\ref{p2cyla}), (\\ref{p2cyljjp}).\nBoth these forces are always attractive and do not depend on the curvature\ncoupling parameter. Further we consider the forces acting on the cylindrical\nshells. These force are presented in the form (\\ref{p1j}) where the first\nterm on the right is the force acting on the cylindrical shell with radius $j\n$ when the second shell is absent and the second term is induced by the\npresence of the second shell. The latter, given by formula (\\ref{Deltap1j}),\nis always attractive and does not depend on the curvature coupling\nparameter. For large values of the parameter $q$, this part is suppressed by\nthe factor $(a\/b)^{2q}$. In the limit $\\phi _{0}\\rightarrow 0$, $%\na,b\\rightarrow \\infty $, assuming that $b-a$ and $a\\phi _{0}$ are fixed,\nfrom the results of the present paper we obtain the corresponding formulae\nfor the VEVs in the geometry of a rectangular waveguide. We have\ndemonstrated this on the example of the interaction force between the\ncylindrical shells.\n\nNote that we have considered quantities which are well defined\nwithin the framework of standard renormalization procedure of\nquantum field theory without boundaries. 
We expect that similar\nresults would be obtained from the model discussed in\n\\cite{Grah02} where instead of externally imposed boundary\ncondition the fluctuating field is coupled to a smooth background\npotential that reproduces the boundary condition in a limiting\ncase. The generalization of the results in the present paper for a\nscalar field with Neumann boundary conditions is straightforward.\nFor this case in the expressions (\\ref{eigfunc}) of the\neigenfunctions the function $\\cos (qn\\phi )$ stands instead of\n$\\sin (qn\\phi )$ and the quantum number $n$ takes the values\n$0,1,2,\\ldots $. The corresponding eigenvalues for $\\gamma $ are\nzeros of the function $J_{qn}^{\\prime }(\\gamma a)Y_{qn}^{\\prime\n}(\\gamma b)-Y_{qn}^{\\prime }(\\gamma a)J_{qn}^{\\prime }(\\gamma b)$.\nThe formula for the summation over these zeros is given in\n\\cite{Saha00rev}. The formulae for the Wightman function and the\nVEV of the field square in Neumann case are obtained from the\ncorresponding formulae for Dirichlet scalar by the\nreplacements $\\sin (qn\\phi )\\rightarrow \\cos (qn\\phi )$, $%\nI_{qn}(jx)\\rightarrow I_{qn}^{\\prime }(jx)$, $K_{qn}(jx)\\rightarrow\nK_{qn}^{\\prime }(jx)$, $j=a,b$, and with the term $n=0$ included in the\nsummation. In the expressions for the VEVs of the energy-momentum tensor\nthis leads to the change of the sign for the second term in the figure\nbraces on the right of (\\ref{vevemt_ii}) and to the change of the sign for\nthe off-diagonal component (\\ref{vevemt_12}).\n\n\\section*{Acknowledgements}\n\nAAS would like to acknowledge the hospitality of the Abdus Salam\nInternational Centre for Theoretical Physics, Trieste, Italy. The work was\nsupported by the Armenian Ministry of Education and Science Grant No. 0124.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:1-intro}\n\n\nOne major consequence of learning multiple tasks in a continual learning (CL) setting --- where tasks are learned sequentially, and the model can only have access to one task at a time --- is catastrophic forgetting~\\citep{McCloskey1989CatastrophicII}. This is in contrast to multitask learning (MTL), where the learner has simultaneous access to all tasks, which generally learns to perform well on all tasks without suffering from catastrophic forgetting.\nThis limitation hinders the ability of the model to learn continually and efficiently. In recent years, several approaches have been proposed to tackle this problem. \nThey have mostly tried to mitigate catastrophic forgetting by using different approximations of the \nmultitask loss. For example, some regularization methods take a quadratic approximation of the loss of previous tasks~\\citep[e.g.][]{EWC,yin2020sola}. As another example, rehearsal methods attempt to directly use compressed past data either by selecting a representative subset~\\citep[e.g.][]{Chaudhry2019HAL,titsias2019functional} or relying on generative models~\\citep[e.g.][]{shin2017continual,robins1995catastrophic}.\n\n\n\n\n\n\n\nIn this work, we depart from the literature and start from the non-conventional question of understanding \\emph{``What is the relationship, potentially in terms of local geometric properties, between the multitask and the continual learning minima?''}. 
\n\nOur work is inspired by recent work on mode connectivity~\\citep{draxler2018essentially,modeconnectivity-main, frankle2020linear} finding that different optima obtained by gradient-based optimization methods are connected by simple paths of non-increasing loss. \nWe try to understand whether the multitask and continual solutions are also connected by a manifold of low error, and what is the simplest form this manifold can take. Surprisingly, we find that a linear manifold, as illustrated in Fig.~\\ref{fig:intro-setup} right, reliably connects the multitask solution to the continual ones, granted that the multitask shares same initialization with the continual learning as described below. This is a significant finding in terms of understanding the phenomenon of catastrophic forgetting through the lens of loss landscapes and optimization trajectory and also for designing better continual learning algorithms.\n\n\\begin{wrapfigure}{R}{0.5\\textwidth}\n\\vspace{-2mm}\n \\includegraphics[width=0.5\\linewidth]{figs\/setup.pdf}\n \n \\hspace{-2.3mm}\n \\includegraphics[width=0.5\\linewidth]{figs\/mc-illustration.pdf}\n \\caption{\\textbf{Left:} Depiction of the training regime considered. First \\what{1} is learned on task 1. Afterwards we either reach \\what{2} by learning second task or \\wstar{2} by training on both tasks simultaneously. \\textbf{Right:} Depiction of linear connectivity between \\wstar{2} and \\what{1} and between \\wstar{2} and \\what{2}.}\n \\label{fig:intro-setup}\n \\vspace{-2mm}\n\\end{wrapfigure}\n\nTo reach this conclusion, we consider a particular learning regime described in Fig.~\\ref{fig:intro-setup} left, where after learning the first task using the data $\\mathcal{D}_1$, we either sequentially learn a second task obtaining \\what{2} or continue by training on both tasks simultaneously (i.e., train on $\\mathcal{D}_1 + \\mathcal{D}_2$), obtaining the multitask solution \\wstar{2}. We investigate the relationship between the two solutions \\what{2} and \\wstar{2}. Note that \\wstar{2} is not the typical multitask solution, which would normally start from $w_0$ and train on both datasets. We chose this slightly non-conventional setup to minimize the potential number of confounding factors that lead to discrepancies between the two solutions~\\citep{fort2019deep}. We also rely on the observation from~\\citep{frankle2020linear} that initialization can have a big impact on the connectivity between the solutions found on the same task, and sharing the same starting point, as we do between \\what{2} and \\wstar{2}, might warrant a linear path of low error between the two solutions.\nMoreover,~\\citet{Neyshabur2020WhatIB} noted that -- in the context of transfer learning, there is no performance barrier between two minima that start from pre-trained weights, which suggests that the pre-trained weights guide the optimization to a flat basin of the loss landscape. 
In contrast, barriers clearly exist if these two minima start from randomly initialized weights.\n\n\n\n\nOur contributions can be summarized as follows:\n\\begin{enumerate}[leftmargin=*]\n \\item To the best of our knowledge, our work is the first to study the connectivity between continual learning and multitask learning solutions.\n \\item We show that compared to conventional similarity measures such as Euclidean distance or Central Kernel Alignment~\\citep{kornblith2019similarity}, which are incapable of meaningfully relating these minima, the connectivity through a manifold of low error can reliably be established. Moreover, this connecting path is linear, even when considering more than 20 tasks in a row.\n \\item Motivated by this, we propose an effective CL algorithm (Mode Connectivity SGD or MC-SGD) that is able to outperform several established methods on standard CL benchmarks. \n\\end{enumerate}\n{\\bf Related work.}\nWith the growing popularity of deep learning, continual learning has gained critical importance because the catastrophic forgetting problem poses key challenges for deploying deep learning models in various applications~\\citep[e.g.][]{Lange2019ContinualLA,kemker2018measuring}. A growing body of research has attempted to tackle this problem in recent years~\\citep[e.g.][]{Parisi2018ContinualLL,toneva2018empirical,nguyen2019toward,OGD, hsu2018re,rusu2016progressive,li2019learn,EWC,zenke2017continual,shin2017continual,rolnick2018experience,lopez2017gradient,AGEM,riemer2018learning,mirzadeh2020dropout,Wallingford2020InTW}. Among these works, our proposed MC-SGD bears the most similarity to \\emph{rehearsal}-based methods such as~\\citep[e.g.][]{shin2017continual,AGEM} and \\emph{regularization}-based methods~\\citep[e.g.][]{EWC, zenke2017continual}, similar to~\\citep{titsias2019functional}.\nFor more details, please refer to Appendix~\\ref{app:sec:related-works}.\n\n\\section{The relation between multitask and continual minima}\n\\label{sec:3-minima-properties}\nOne question driving this work is understanding the relationship between the two different solutions: multitask learning vs. continual learning. In particular, we are interested in scenarios where a multitask solution for the considered tasks exists (for a discussion of when this does not hold, see~\\citep{he2019task}) and where both learning regimes have the same objective, namely finding a solution that performs well on all tasks. \nWe posit that the difference between these solutions cannot be reliably explained by simple and commonly used distance metrics that measure similarity between trained models. In particular, we consider the Euclidean distance and Central Kernel Alignment (CKA)~\\citep{kornblith2019similarity}. However, these solutions are connected by paths of low error, and, provided that learning starts from the same initial conditions, these paths can have a linear form. \n\nIn the left column of Fig.~\\ref{fig:setup-accs} we can see the performance of Naive SGD for multitask and continual learning. Details on the experimental setup can be found in Appendix~\\ref{sec:appendix-experimental-setup}. The dashed line represents the multitask solution at convergence, which achieves strong performance on all tasks. The figure further shows the performance of all tasks during the sequential learning experience (each point represents the performance after learning another task), highlighting how performance on past tasks degrades considerably. 
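\nTo make the training regime of Fig.~\\ref{fig:intro-setup} (left) concrete, a minimal sketch of the procedure is given below. It is purely illustrative (the model, data, and hyper-parameters are toy placeholders, not the configuration used in our experiments): starting from a common point \\what{1}, the continual solution \\what{2} is obtained by training on task 2 alone, while \\wstar{2} is obtained by training on both tasks jointly.\n\\begin{verbatim}\n# Minimal illustrative sketch of the training regime in Fig. 1 (left).\n# The model, data, and hyper-parameters are toy placeholders.\nimport copy\nimport torch\nfrom torch import nn\n\ndef make_task(seed, n=512, dim=20):\n    g = torch.Generator().manual_seed(seed)\n    x = torch.randn(n, dim, generator=g)\n    y = (x[:, :2].sum(dim=1) > 0).long()\n    return x, y\n\ndef train(start, datasets, epochs=5, lr=0.1):\n    model = copy.deepcopy(start)              # keep the starting point intact\n    opt = torch.optim.SGD(model.parameters(), lr=lr)\n    for _ in range(epochs):\n        for x, y in datasets:                 # full-batch pass over each task\n            opt.zero_grad()\n            nn.functional.cross_entropy(model(x), y).backward()\n            opt.step()\n    return model\n\ntorch.manual_seed(0)\nw0 = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))\ntask1, task2 = make_task(1), make_task(2)\n\nw_hat1 = train(w0, [task1])                   # solution of task 1\nw_hat2 = train(w_hat1, [task2])               # continual: task 2 only\nw_star2 = train(w_hat1, [task1, task2])       # multitask: tasks 1 and 2 jointly\n\\end{verbatim}\nAll three solutions share the same starting point, which is the property exploited in the connectivity analysis below.\n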
Note that, as described in Fig.~\\ref{fig:intro-setup} and further detailed in Fig.~\\ref{fig:setup-3} of Appendix~\\ref{app:sec:all-connectivity}, in the multitask learning scenario, tasks are added sequentially to the loss, to construct a parallel with the continual learning setting. This will be the case throughout this work. \n\n\\begin{figure*}[t]\n \\centering\n \\begin{subfigure}{.29\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/understanding\/mnist\/mnist_accs.pdf}\n \\caption{Rot-MNIST forgetting}\n \\label{fig:setup-accs-mnist}\n \\end{subfigure}\n \\begin{subfigure}{.29\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/understanding\/summary\/l2_rot.pdf}\n \\caption{Rot-MNIST Euclid. distance}\n \\label{fig:understanding-l2-mnist-summary}\n \\end{subfigure}\n \\begin{subfigure}{.29\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/understanding\/summary\/cka_rot_sum.pdf}\n \\caption{Rot-MNIST CKA distance}\n \\label{fig:understanding-cka-rot-summary}\n \\end{subfigure}\n \n \\begin{subfigure}{.29\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/understanding\/cifar\/cifar_accs.pdf}\n \\caption{Split CIFAR forgetting}\n \\label{fig:setup-accs-cifar}\n \\end{subfigure}\n \\begin{subfigure}{.29\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/understanding\/summary\/l2_cifar.pdf}\n \\caption{Split CIFAR Euclid. distance}\n \\label{fig:understanding-l2-cifar}\n \\end{subfigure}\n \\begin{subfigure}{.29\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/understanding\/summary\/cka_cifar_sum.pdf}\n \\caption{Split CIFAR CKA distance}\n \\label{fig:understanding-cka-cifar-summary}\n \\end{subfigure}\n \\caption{Continual and Multitask learning performance and relation between minima. Top row: Rotation MNIST. Bottom row: Split CIFAR-100. Left column: Accuracy of all tasks during continual training. Middle: Euclidean distance. Right: CKA distance. Note that $\\hat{w}_5$ is not a good solution for task 1 although it is closer (more similar) to $\\hat{w}_1$ than $w^*_5$ in distance. Similarly, $\\hat{w}_5$ is closer (more similar) to $\\hat{w}_1$ than $w^*_5$ in terms of CKA distance. Therefore, neither Euclidean nor CKA distance is able to reveal that MTL is better at avoiding catastrophic forgetting.}\n \\label{fig:setup-accs}\n\\end{figure*} \n\n\n\\textbf{Euclidean distance}. It might be reasonable to expect that the less the parameters change, the less the forgetting will be. \nOne can motivate this heuristic with a Taylor expansion of the loss, as done in~\\citep{mirzadeh2020understanding}, where the forgetting is defined as:\n\\begin{align}\n L_1(\\hat{w}_2) - L_{1}(\\hat{w}_1) \\approx \\frac{1}{2} (\\hat{w}_2-\\hat{w}_1)^\\top \\nabla^2 L_1(\\hat{w}_1) (\\hat{w}_2-\\hat{w}_1) \\leq \\frac{1}{2} \\lambda_1^{\\text{max}} \\| \\hat{w}_2-\\hat{w}_1\\|^2.\n \\label{eq:forgetting-bound}\n\\end{align}\nHere, $L_1(w)$ is the empirical loss for task 1, and $\\lambda_1^{\\text{max}}$ is the largest eigenvalue of its Hessian at \\what{1}. \nNote that all terms of the Taylor expansion are multiplied by $\\hat{w}_2 - \\hat{w}_1$ and hence the norm of this delta will affect the amount of forgetting. \nIn fact, this is frequently done when pruning neural networks~\\citep{gupta2018, HanPTD15}, where weights are zeroed out based on magnitude, producing a minimal Euclidean distance to the unpruned model. 
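\nThe quantities entering Eq.~(\\ref{eq:forgetting-bound}) are straightforward to compute from two checkpoints. The sketch below is illustrative only (toy model and data) and uses a Hessian-vector product to evaluate the quadratic term without forming the full Hessian.\n\\begin{verbatim}\n# Illustrative sketch: parameter distance and the second-order forgetting\n# estimate of Eq. (1), via a Hessian-vector product (toy model and data).\nimport torch\nfrom torch import nn\nfrom torch.nn.utils import parameters_to_vector\n\ntorch.manual_seed(0)\nmodel1 = nn.Linear(10, 2)        # stands in for the task-1 minimum w_hat1\nmodel2 = nn.Linear(10, 2)        # stands in for a later minimum, e.g. w_hat2\nx, y = torch.randn(256, 10), torch.randint(0, 2, (256,))\n\ndelta = (parameters_to_vector(model2.parameters())\n         - parameters_to_vector(model1.parameters())).detach()\nprint('Euclidean distance:', delta.norm().item())\n\nparams = list(model1.parameters())\nloss1 = nn.functional.cross_entropy(model1(x), y)   # task-1 loss at w_hat1\ngrads = torch.autograd.grad(loss1, params, create_graph=True)\ng = torch.cat([gr.reshape(-1) for gr in grads])\nHd = torch.autograd.grad(g @ delta, params)          # Hessian-vector product\nHd = torch.cat([h.reshape(-1) for h in Hd])\nprint('0.5 * delta^T H delta:', 0.5 * (delta @ Hd).item())\n\\end{verbatim}\nThe right-hand side of the bound additionally requires $\\lambda_1^{\\text{max}}$, which can be estimated with a few steps of power iteration on the same Hessian-vector product.\n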
\nHowever, as observed in Fig.~\\ref{fig:setup-accs}, Euclidean distance does not correlate with the absence of catastrophic forgetting (the CL solution of task 1 is closer to the CL solution of task 5 than to the multitask solution after the first 5 tasks). One explanation could be that the bound defined by Eq.~(\\ref{eq:forgetting-bound}) will not be tight if the vector $\\hat{w}_5-\\hat{w}_1$ does not lie in the direction of the largest eigenvector. Appendix~\\ref{app:sec:distance-measures} contains further details on this topic.\n\n\\textbf{Centered Kernel Alignment}. Centered Kernel Alignment (CKA)~\\citep{kornblith2019similarity} measures the similarity of two representations on the same set of examples. \nGiven $N$ examples and two activation outputs on these examples, $R_1 \\in \\mathbb{R}^{N\\times d_1}$ and $R_2 \\in \\mathbb{R}^{N\\times d_2}$, CKA is defined by:\n\\begin{align}\n \\text{CKA}(R_1, R_2) = \\frac{\\| R_1^\\top R_2\\|_F^2}{\\| R_1^\\top R_1\\|_F \\| R_2^\\top R_2\\|_F},\n \\label{eq:CKA-score}\n\\end{align}\nwhere $\\|\\cdot\\|_F$ is the Frobenius norm. \nRecent work by~\\citet{ramasesh2020anatomy} studies catastrophic forgetting on the CIFAR dataset by measuring the CKA similarity score of the different layers of \\what{1} and \\what{2}. The authors argue that the later layers suffer more from catastrophic forgetting by showing that the CKA similarity of the initial layers decreases less after training on sequential tasks. \n\nHowever, the CKA score suffers from a few shortcomings. \nIf the number of training epochs per task is small (e.g., in the streaming case), the CKA does not change much, even though the accuracy on previous tasks drops drastically. For instance, in the right column of Fig.~\\ref{fig:setup-accs}, we show that the pairwise CKA between the different layers of the first task minimum (\\what{1}) and the CL and multitask minima of task 2 and task 5 is roughly the same; the phenomenon observed in~\\citep{ramasesh2020anatomy} is still visible, although only by a very small margin. \nMoreover, we can see that the multitask minima of task 2 (\\wstar{2}) and task 5 (\\wstar{5}) are more similar to \\what{1} than the CL minima (\\what{2} and \\what{5}). \n\n\n\\subsection{Mode Connectivity}\n\\label{sec:understanding-mode-connectivity}\nMode connectivity has been studied empirically~\\citep{draxler2018essentially,modeconnectivity-main} and theoretically under some assumptions (e.g., over-parameterization or noise stability)~\\citep{kuditipudi2019explaining,venturi2019spurious}. \nWe note that these previous works focused on the ``single-task'' setup, where the data stays $i.i.d.$, and, more importantly, the networks start from different initializations. Because of this assumption, it is shown that the minima are not linearly connected (\\textit{i.e.}, the loss is not necessarily decreasing along the interpolation line between the two minima). Instead, the path consists of several line segments or curves~\\citep{kuditipudi2019explaining}.\n\nHowever, between our continual and multitask settings, a more favorable property can exist, namely that learning starts from the same initialization (i.e., \\what{1} = \\wstar{1}). For a ``single task'', a common initialization leads to a linear path between minima~\\citep{frankle2020linear}. Therefore, our first step is to investigate whether the same linear connectivity holds between multitask and CL solutions. As we show in Fig.~\\ref{fig:understanding-mode-con}, this is true for both MNIST and CIFAR benchmarks. 
In this figure, we can see that the multitask minima for task 2 (\\wstar{2}) is connected to both task 1 CL minima (\\what{1}) and task 2 CL minima (\\what{2}). We note that the surface plots in Fig.~\\ref{fig:understanding-mode-con} are not of low-dimensional projection types and are rather ``interpolation plots`` in parameter space computed on the hyper-plane connecting these three minima. More details are provided in Appendix~\\ref{sec:appendix-implementation-details}.\n\n\\begin{figure*}[th!]\n\\centering\n\\begin{subfigure}{.245\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/understanding\/surfaces-large\/mnist-t1.pdf}\n \\caption{MNIST Task 1 loss}\n \\label{fig:understanding-lmc-t1-mnist}\n \\end{subfigure}\n\\begin{subfigure}{.245\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/understanding\/surfaces-large\/mnist-t2.pdf}\n \\caption{MNIST Task 2 loss}\n \\label{fig:understanding-lmc-t2-mnist}\n\\end{subfigure}\\hfill\\hfill\n\\begin{subfigure}{.245\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/understanding\/surfaces-large\/cifar-t1.pdf}\n \\caption{CIFAR Task 1 loss}\n \\label{fig:understanding-lmc-t1-cifar}\n \\end{subfigure}\n\\begin{subfigure}{.245\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/understanding\/surfaces-large\/cifar-t2.pdf}\n \\caption{CIFAR Task 2 loss}\n \\label{fig:understanding-lmc-t2-cifar}\n\\end{subfigure}\\hfill\n\\caption{Cross-entropy validation loss surface of on rotation MNIST (a and b), and split CIFAR-100 (c and d), as a function of weights in a two-dimensional subspace passing through \\what{1}, \\what{2}, and \\wstar{2}.}\n\\label{fig:understanding-mode-con}\n\\end{figure*}\n\\begin{figure*}[th!]\n\\centering\n\\begin{subfigure}{.245\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/understanding\/mnist\/mnist_MC_task_1_all.pdf}\n \\caption{MNIST $\\hat{w}_1$ to next ones}\n \\label{fig:understaning-mc-mnist-surface-t1-all}\n\\end{subfigure}\n\\begin{subfigure}{.245\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/understanding\/mnist\/mnist_MC_task_2_all.pdf}\n \\caption{MNIST $\\hat{w}_2$ to next ones}\n \\label{fig:understaning-mc-mnist-surface-t2-all}\n\\end{subfigure}\n\\begin{subfigure}{.245\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/understanding\/cifar\/cifar_MC_task_1_all.pdf}\n \\caption{CIFAR $\\hat{w}_1$ to next ones}\n \\label{fig:understaning-mc-cifar-surface-t1-all}\n\\end{subfigure}\n\\begin{subfigure}{.245\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/understanding\/cifar\/cifar_MC_task_2_all.pdf}\n \\caption{CIFAR $\\hat{w}_2$ to next ones}\n \\label{fig:understaning-mc-cifar-surface-t2-all}\n\\end{subfigure}\n\\caption{Exploring the loss along the linear paths connecting the different solutions: The loss increases on the interpolation line between the first task solution(\\what{1}) and subsequent continual solutions, while the loss remains low on the interpolation line between \\what{1} and subsequent multitask minima (a and c). 
The same observation also holds for the second task solution (\\what{2}) (b and d)}.\n\\label{fig:understanding-mc-interpolate2}\n\\end{figure*}\n\n\nTo demonstrate linear connectivity holds for subsequent MTL and CL minima, we plot the validation loss on the line-segment interpolating between all subsequent minima for both task 1 and 2 on Rotated MNIST and CIFAR in Fig.~\\ref{fig:understanding-mc-interpolate2}. The low loss interpolation to \\wstar{}'s indicates that indeed along this linear path, the loss stays low. \n\n\\section{When does linear mode connectivity hold?}\nIt is worth trying to understand what is implied by the fact that these linear paths exist. Let us focus on the linear path between \\what{1} and \\wstar{2}. One potential justification is that while minimizing the multitask loss, the updates move us in the direction of low curvature for the Hessian of the first loss around \\what{1}. To see why, let us make the mild assumption that the function is smooth, which typically tends to hold in the over-parameterized regime (e.g., similar assumptions are needed for the NTK regime to hold~\\citep{bietti2019inductive}). Then we can not have a rapid change in the curvature. Hence, if moving along the line connecting \\what{1} and \\wstar{2}, the loss of task 1 stays very low, it means the curvature has also to be low as it can not show rapid fluctuations that will not be reflected in the loss. Implicitly the curvature is low at \\what{1} as well; hence this direction necessarily has to be within the subspace spanned by the eigenvectors with low curvature. \n\nThis seems not to hold for \\what{1} and \\what{2}, so we might expect that the line connecting them to move in directions of higher curvature. A symmetric argument can be made for \\wstar{2} and \\what{2}, looking now at the Hessian and loss of task 2. The smoothness argument makes it unlikely that curvature shows rapid fluctuations; hence loss staying low implies the alignment with eigenvectors of low curvature. \n\nTo validate this justification, we estimate the top 50 eigenvalues\/eigenvectors of the Hessian matrix around task 1 minima of rotation MNIST benchmark, similar to the setting we discussed in Section~\\ref{sec:3-minima-properties}. We use the deflated power iteration method to obtain the eigenspectrum of the Hessian. Fig.~\\ref{fig:eigenspectrum-eigenvals} shows the largest eigenvalues. We note that similar to the observation by~\\citet{Fort2019EmergentPO} and \\citet{mirzadeh2020understanding}, the eigenspectrum of the Hessian includes a bulk plus $C$ larger outlier eigenvalues where $C$ is the number of classes. \n\nTo measure the direction confinement in the subspace spanned by the eigenvectors, we measure the cosine angle of these directions and the Hessian eigenvectors. Fig.~\\ref{fig:eigenspectrum-necessary} shows that the direction from \\what{1} to \\what{2} lies within the subspace spanned by the top eigenvectors. Note that this agrees with our hypothesis. However, the direction between \\what{1} to \\wstar{2} does not lie within this subspace as the cosine angle between this direction and the eigenvectors are all near zero, suggesting that this direction is orthogonal to all these eigenvalues and hence is spanned by eigenvectors with low eigenvalues. 
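\nThe overlap computation described above can be reproduced with standard tools. The sketch below is illustrative only (toy model, data, and direction): it estimates the leading Hessian eigenvectors by deflated power iteration on Hessian-vector products and then measures the cosine overlaps of a given parameter displacement with them.\n\\begin{verbatim}\n# Illustrative sketch: deflated power iteration for the top Hessian\n# eigenvectors of a (toy) task-1 loss, and cosine overlaps of a parameter\n# displacement with these eigenvectors.\nimport torch\nfrom torch import nn\n\ntorch.manual_seed(0)\nmodel = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))\nx, y = torch.randn(512, 10), torch.randint(0, 2, (512,))\nparams = list(model.parameters())\ndim = sum(p.numel() for p in params)\n\ndef hvp(vec):\n    loss = nn.functional.cross_entropy(model(x), y)\n    grads = torch.autograd.grad(loss, params, create_graph=True)\n    g = torch.cat([gr.reshape(-1) for gr in grads])\n    hv = torch.autograd.grad(g @ vec, params)\n    return torch.cat([h.reshape(-1) for h in hv]).detach()\n\ndef top_eigenpairs(k=5, iters=50):\n    vecs, vals = [], []\n    for _ in range(k):\n        v = torch.randn(dim)\n        v = v / v.norm()\n        for _ in range(iters):\n            hv = hvp(v)\n            for u in vecs:                  # deflate previously found directions\n                hv = hv - (hv @ u) * u\n            v = hv / (hv.norm() + 1e-12)\n        vals.append((v @ hvp(v)).item())\n        vecs.append(v)\n    return vecs, vals\n\ndirection = torch.randn(dim)                 # stands in for w_hat2 - w_hat1\ndirection = direction / direction.norm()\nvecs, vals = top_eigenpairs()\nprint('top eigenvalues:', [round(v, 4) for v in vals])\nprint('|cosine| with eigenvectors:', [round(abs((direction @ u).item()), 4) for u in vecs])\n\\end{verbatim}\nIn the actual analysis the displacement is taken between saved checkpoints, e.g. $\\hat{w}_2-\\hat{w}_1$ or $w^*_2-\\hat{w}_1$, rather than a random vector.\n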
The orthogonality has also been tried as a hard constraint in a number of continual learning papers explicitly~\\citep{OGD,he2017overcoming}.\nThis explains the less suffering from catastrophic forgetting as we move in directions of low curvature.\n\\begin{figure*}[h!]\n\\centering\n\\begin{subfigure}{.325\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/eigenspectrum\/eigens_vals.pdf}\n \\caption{}\n \\label{fig:eigenspectrum-eigenvals}\n\\end{subfigure}\n\\begin{subfigure}{.325\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/eigenspectrum\/eigens_close.pdf}\n \\caption{}\n \\label{fig:eigenspectrum-necessary}\n\\end{subfigure}\n\\begin{subfigure}{.325\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/eigenspectrum\/eigens_far.pdf}\n \\caption{}\n \\label{fig:eigenspectrum-sufficient}\n\\end{subfigure}\n\\caption{Comparison of the eigenspectrum of the Hessian matrix for $\\hat{w}_1$. (a): top Eigenvalues. (b and c): The overlap between Hessian eigenvectors and directions from \\what{1} to \\what{2} and \\wstar{2}.}\n\\label{fig:eigenspectrum}\n\\end{figure*}\n\n\n\nAn additional question is whether moving in the direction of low curvature is just necessary or is also sufficient. Fig.~\\ref{fig:eigenspectrum-sufficient} shows that this is not a sufficient condition. By increasing the number of epochs from 5 to 20 we show that both directions to \\what{2} and \\wstar{2} do not lie in this subspace of the top eigenvectors. That is, for both, if we move on the segment away from \\what{1}, we start by moving in directions of low curvature. But \\what{2} suffers from catastrophic forgetting and hence a considerable increase in the loss of task 1, while \\wstar{2} does not. \nThis suggests that while initially moving towards \\what{2} the curvature was low, higher order terms made the curvature to change magnitude before reaching \\what{2}. Overall this implies the multiple \\wstar{} that we observe, which gradually solve more and more tasks, lie within a region where a second order Taylor of the first task still holds and higher order derivatives do not play an important role. And within this region, learning is restricted to the subspace spanned only by eigenvectors corresponding to small eigenvalues for all previous tasks. \n\nWe empirically show that we can learn \\emph{up to 20 complex tasks and up to 50 permuted MNIST tasks in this regime} (see Fig.~\\ref{fig:mc_effeciveness} here and ~\\ref{fig:perm50-accs} in appendix~\\ref{sec:appendix-permuted-50}), hence it seems this region where the second order Taylor holds is quite expressive and sufficient to learn many tasks without interference between them.\nAdditionally, similar linearizations seem to hold reliably in overparametrized models throughout learning as shown by the NTK-based literature~\\citep[e.g][]{jacot2018neural,bietti2019inductive} and these assumptions are implicitly made by several successful continual learning algorithms~\\citep[e.g.][]{EWC,mirzadeh2020understanding}. \nIn particular, this observation is interesting when viewed from the perspective of regularization based CL methods. Many of these methods rely on a second order Taylor expansion (if not of previous losses then of the KL term with respect to how much the model changed) and forbid change in weights with high curvature. 
The fact that multitask learning adheres to the same principles, moving in directions of low curvature and restricting itself to the region where this approximation holds, suggests that this remains a promising direction. It also highlights the importance of better approximations of curvature and the importance of remaining within the area where this approximation holds.\n\nIn Appendix~\\ref{sec:AppendixBreaking}, we further investigate different aspects of our setting that might make it possible to find reliable solutions for all subsequent tasks in an area around the solution of the first task where the second order Taylor expansion holds. We show that, as assumed earlier, sharing the same initial condition is vital for the linear connectivity to occur. Furthermore, the shared structure between tasks, such as low-level features in images or semantically consistent labeling, plays an important role as well. If we destroy any structure between tasks, we cannot find a multitask solution for tasks 1 and 2 that is linearly connected to the first task. \n\n\n\n\n\\section{Continual Learning with Mode Connectivity}\nBased on the observation in Section~\\ref{sec:understanding-mode-connectivity} that there exist minima with linear mode connectivity to previous CL minima, in this section, we propose a continual learning algorithm that exploits this property. The proposed algorithm can be viewed as a combination of both replay and regularization methods in continual learning.\n\\subsection{MC-SGD}\nMode Connectivity SGD (MC-SGD) is a light-weight method that constrains the continual minima to be connected to all previous minima through low-loss paths. \nFor simplicity, we start by providing the loss function for two tasks, and then extend it to an arbitrary number of tasks. We define the loss objective that enforces linear connectivity to both \\what{1} and \\what{2} as follows:\n\\begin{align}\n \\bar{w} = \\argmin_{w} \\quad \\int_{0 \\leq \\alpha \\leq 1} [\\loss{1}(\\hat{w}_1 + \\alpha (w-\\hat{w}_1)) + \\loss{2}(\\hat{w}_2 + \\alpha (w-\\hat{w}_2))] \\, d \\alpha, \\label{eq:mc-loss-2-tasks-main}\n\\end{align}\nwhere $\\loss{1}$ and $\\loss{2}$ are the loss functions for task 1 and task 2, respectively, and $\\alpha$ parameterizes the line connecting $w$ to \\what{1} and \\what{2}, respectively, in the first and second term.\nEssentially, Eq.~(\\ref{eq:mc-loss-2-tasks-main}) constrains the MC-SGD minima to have a low-loss path to both \\what{1} and \\what{2}. The integral in this equation can be approximated by averaging the loss over a few random $\\alpha$ in $[0, 1]$. However, our experiments showed that picking as few as $n=5$ equally spaced points between 0 and 1 is more than sufficient to get good results. \nWe can further decompose (\\ref{eq:mc-loss-2-tasks-main}) into:\n\\begin{align}\n \\bar{w} = \\argmin_{w} \\quad & \\loss{1}(w) + \\loss{2}(w) + \\underbrace{\\loss{1}(\\hat{w}_1) + \\loss{2}(\\hat{w}_2)}_{\\text{constant}} \\nonumber\\\\\n & + \\underbrace{\n \\frac{1}{n-1} \\sum_{\\alpha \\in \\{\\frac{1}{n}, \\ldots, \\frac{n-1}{n}\\}} [\\loss{1}(\\hat{w}_1 + \\alpha (w-\\hat{w}_1)) + \\loss{2}(\\hat{w}_2 + \\alpha (w-\\hat{w}_2))]}_{\\text{regularization}} \\label{eq:mc-loss-2-tasks-decomposed}\n\\end{align}\nIt is evident from Eq.~(\\ref{eq:mc-loss-2-tasks-decomposed}) that $\\bar{w}$ should minimize both the task 1 and task 2 losses, in addition to having low-loss paths to both of the CL minima. 
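\nA direct discretization of Eq.~(\\ref{eq:mc-loss-2-tasks-main}) with $n+1$ equally spaced values of $\\alpha$ can be written compactly. The sketch below is illustrative only; the two losses and the parameter vectors are generic placeholders (here toy quadratics).\n\\begin{verbatim}\n# Illustrative sketch: discretized two-task mode-connectivity objective of\n# Eq. (3), averaging both losses along the segments from w_hat1 and w_hat2\n# towards the candidate solution w.  Losses and parameters are placeholders.\nimport numpy as np\n\ndef mc_loss(w, w_hat1, w_hat2, loss1, loss2, n=5):\n    alphas = np.linspace(0.0, 1.0, n + 1)\n    total = 0.0\n    for a in alphas:\n        total += loss1(w_hat1 + a * (w - w_hat1))   # path from w_hat1 to w\n        total += loss2(w_hat2 + a * (w - w_hat2))   # path from w_hat2 to w\n    return total / (n + 1)\n\n# Toy quadratic tasks with different minima, only to exercise the function.\nw_hat1, w_hat2 = np.zeros(2), np.ones(2)\nloss1 = lambda w: float(np.sum((w - w_hat1) ** 2))\nloss2 = lambda w: float(np.sum((w - w_hat2) ** 2))\nprint(mc_loss(np.array([0.5, 0.5]), w_hat1, w_hat2, loss1, loss2))\n\\end{verbatim}\nIn practice, $\\loss{1}$ is evaluated on the replay buffer, as discussed next, and the same construction extends directly to the online variant introduced below.\n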
\n\nIn a continual learning setting, however, we would not have access to $\\loss{1}$ once we start learning on $\\loss{2}$. Therefore we rely on experience replay like ER~\\citep{Chaudhry2019OnTE} and AGEM~\\citep{AGEM}. We use a small replay buffer with randomly sampled examples to approximate $\\loss{1}$, with the hope that given the restricted form of Eq.~(\\ref{eq:mc-loss-2-tasks-decomposed}) through the additional regularizer, we might need lesser data then if we would have used it to approximate directly the multitask loss.\n\nWe can further extend the mode connectivity loss to an online version that is cheap to compute:\n\\begin{align}\n \\bar{w}_t = \\argmin_{w} \\quad \\sum_{ \\alpha} [\\loss{t-1}(\\bar{w}_{t-1} + \\alpha (w-\\bar{w}_{t-1})) + \\loss{t}(\\hat{w}_{t} + \\alpha (w-\\hat{w}_{t}))]. \\label{eq:mc-loss-2-tasks-online}\n\\end{align}\nEq.~(\\ref{eq:mc-loss-2-tasks-online}) can be viewed as follows: when learning task $t$, assuming that we have a high-performance solution for the previous $t-1$ tasks (i.e., $\\bar{w}_{t-1}$), we try to minimize Eq.~(\\ref{eq:mc-loss-2-tasks-main}) as if in this equation, \\what{1} is $\\bar{w}_{t-1}$ and \\what{2} is \\what{t}.\nThe replay memory still consists of samples from all the previous $t-1$ tasks.\n\n\n\\section{Experiments and results}\n\\label{sec:experiments-results}\n\n\\label{sec:experimental-setup-summary}\nThe experimental setup, such as benchmarks, network architectures, continual learning setting (e.g., number of tasks, episodic memory size, and training epochs per task), hyper-parameters, and evaluation metrics are chosen to be similar to several other studies~\\citep{AGEM,mirzadeh2020understanding,Chaudhry2019OnTE,OGD,Chaudhry2019HAL}. For all experiments, we report the average and standard deviation over five runs with different random seeds. Appendix~\\ref{sec:appendix-experimental-setup} provides a detailed discussion on our experimental setup. \n\n\n{\\bf Benchmarks.} We report on three standard continual learning benchmarks: Permuted MNIST~\\citep{Goodfellow2013AnEI}, Rotated MNIST, and Split CIFAR-100. Although we are aware of the shortcomings of Permuted MNIST~\\citep{Farquhar2018TowardsEvalCL}, for the sake of consistency with literature, we report the result on this benchmark. However, we also report our results on Rotated MNIST and CIFAR-100, that are more challenging and realistic datasets for continual learning, especially when the number of tasks is large. Each task of permuted MNIST is generated by random shuffling of the pixels of images such that the permutation remains the same for the images of within the same task, but changes across different tasks. Rotated MNIST is generated by the continual rotation of the MNIST images where each task applies a fixed random image rotation (between 0 and $180^\\circ$). Split CIFAR is a variant of CIFAR-100 where each task contains the data from 5 random classes.\n\n{\\bf Architectures.} For the rotation and permutation MNIST benchmarks, we use a two-layer MLP with 256 ReLU units in each layer. 
For the split CIFAR-100 benchmark, we use a ResNet18, with three times fewer feature maps across all layers.\n\n{\\bf Evaluation.} Following several studies in the literature~\\citep{AGEM,mirzadeh2020dropout}, we report two metrics to compare CL algorithms when the number of tasks is large.\\\\ \n(1) Average Accuracy: The average validation accuracy after the model has been continually learned task $t$, is defined by\n$\n A_t= \\frac{1}{t} \\sum_{i=1}^t a_{t,i}\n$,\nwhere, $a_{t,i}$ is the validation accuracy on dataset $i$ after the model finished learning task $t$.\\\\\n(2) Average Forgetting: The average forgetting measures the backward transfer in continual learning. This metric is calculated by the difference of the peak accuracy and ending accuracy of each task, after the continual learning experience has finished. For a continual learning benchmark with $T$ tasks, it is defined by\n$\n F = \\frac{1}{T-1} \\sum_{i=1}^{T-1}{\\text{max}_{t \\in \\{1,\\dots, T-1\\}}~(a_{t,i}-a_{T,i})}\n$.\nFurther details are provided in the Appendix~\\ref{sec:appendix-experimental-setup}.\n\n\\subsection{Comparison with other methods}\n\\label{sec:experiment-comparison-20}\n\nIn this experiment, we compare MC-SGD with various established continual learning algorithms. Our setup consists of 20 tasks for three benchmarks: permuted MNIST, Rotated MNIST, and split CIFAR-100. The episodic memory size for A-GEM, ER-Reservoir, and MC-SGD is limited to be one example per class per task\\footnote{Anonymous links to the MC-SGD code, and reproducible runs are provided in Appendix~\\ref{sec-appendix:reproduce}}.\n\nTable~\\ref{tab:compare} compares the average accuracy and average forgetting (i.e., $A_t$ and $F$ defined above) for each method once the continual learning experience is finished (i.e., after learning task 20). Moreover, Fig.~\\ref{fig:comparison-20} shows the evolution of average accuracy for each method on each benchmark. We can see in the figure that MC-SGD outperforms other methods with its performance gap increasing as the number of tasks increases. In Appendix~\\ref{sec:appendix-permuted-50}, we show this trend also holds when the number of tasks increases from 20 to 50 for Permuted MNIST benchmark. Overall, the results validate a few aspects of our method. MC-SGD seems to make more efficient use of the episodic memory compared to replay based techniques A-GEM and ER-Reservoir, perhaps due to incorporating the additional knowledge of linear connectivity as a regularizer. 
\n\n\\begin{table}[ht!]\n\\centering\n\\caption{Comparison between the proposed method (MC-SGD) and other baselines.}\n\\label{tab:compare}\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{lcccccc}\n\\hline\n\\multirow{2}{*}{\\textbf{Method}} &\n \\multicolumn{2}{c}{\\textbf{Permuted MNIST}} &\n \\multicolumn{2}{c}{\\textbf{Rotated MNIST}} &\n \\multicolumn{2}{c}{\\textbf{Split CIFAR-100}} \\\\ \\cline{2-7} \n & Accuracy $\\uparrow$ & Forgetting $\\downarrow$ &\n Accuracy $\\uparrow$ & Forgetting $\\downarrow$ & Accuracy $\\uparrow$ & Forgetting $\\downarrow$ \\\\ \\hline\nNaive SGD & 44.4 ($\\pm$2.46) & 0.53 ($\\pm$0.03) & 46.3 ($\\pm$1.37) & 0.52 ($\\pm$0.01) & 40.4 ($\\pm$2.83) & 0.31 ($\\pm$0.02) \\\\\nEWC \\text{\\citep{EWC}} & 70.7 ($\\pm$1.74) & 0.23 ($\\pm$0.01) & 48.5 ($\\pm$1.24) & 0.48 ($\\pm$0.01) & 42.7 ($\\pm$1.89) & 0.28 ($\\pm$0.03) \\\\\nA-GEM \\text{\\citep{AGEM}} & 65.7 ($\\pm$0.51) & 0.29 ($\\pm$0.01) & 55.3 ($\\pm$1.47) & 0.42 ($\\pm$0.01) & 50.7 ($\\pm$2.32) & 0.19 ($\\pm$0.04) \\\\\nER-Reservoir \\text{\\citep{Chaudhry2019OnTE}} & 72.4 ($\\pm$0.42) & 0.16 ($\\pm$0.01) & 69.2 ($\\pm$1.10) & 0.21 ($\\pm$0.01) & 46.9 ($\\pm$0.76) & 0.21 ($\\pm$0.03) \\\\\nStable SGD \\text{\\citep{mirzadeh2020understanding}} & 80.1 ($\\pm$0.51) & 0.09 ($\\pm$0.01) & 70.8 ($\\pm$0.78) & 0.10 ($\\pm$0.02) & 59.9 ($\\pm$1.81) & 0.08 ($\\pm$0.03) \\\\\nMC-SGD (ours) &\n \\multicolumn{1}{l}{\\textbf{85.3 ($\\pm$0.61)}} &\n \\textbf{0.06 ($\\pm$0.01)} &\n \\multicolumn{1}{l}{\\textbf{82.3 ($\\pm$0.68)}} &\n \\textbf{0.08 ($\\pm$0.01)} &\n \\textbf{63.3 ($\\pm$2.21)} &\n \\textbf{0.06 ($\\pm$0.03)} \\\\ \\hline\nMultitask Learning & 89.5 ($\\pm$0.21) & 0.0 & 89.8 ($\\pm$0.37) & 0.0 & 68.8 ($\\pm$0.72) & 0.0 \\\\ \\hline\n\\end{tabular}%\n}\n\\end{table}\n\\begin{figure*}[h!]\n\\centering\n\\begin{subfigure}{.3\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/comparison-20\/perm-mnist-accs.pdf}\n \\caption{Permuted MNIST}\n \\label{fig:compare-20-perm-mnist}\n\\end{subfigure}\\hfill\n\\begin{subfigure}{.3\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/comparison-20\/rot-mnist-accs.pdf}\n \\caption{Rotated MNIST}\n \\label{fig:compare-20-rot-mnist}\n\\end{subfigure}\\hfill\n\\begin{subfigure}{.3\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/comparison-20\/cifar-accs.pdf}\n \\caption{Split CIFAR-100}\n \\label{fig:compare-20-cifar}\n\\end{subfigure}\n\\caption{Evolution of average accuracy during the continual learning experience with 20 tasks.}\n\\label{fig:comparison-20}\n\\end{figure*}\n\n\n\\subsection{Mode Connectivity of the MC-SGD minima}\nNext, we show that by minimizing Eq.~(\\ref{eq:mc-loss-2-tasks-online}), the minima found by MC-SGD are almost linearly connected to both the starting point of learning a new task (which represents the multitask solution of all previous tasks) and the solution of the task being learned. Fig.~\\ref{fig:mc_effeciveness} shows the interpolation plots for Rotated MNIST and Split CIFAR-100. 
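\n\nEach interpolation curve is obtained by evaluating the validation loss of one task along the straight line between two minima. A minimal sketch of this evaluation is given below; the helper \\texttt{eval\\_loss}, assumed to compute the validation loss of the chosen task from a flat parameter vector, and the other names are our own.\n\\begin{verbatim}\nimport torch\n\ndef interpolation_curve(w_a, w_b, eval_loss, n_points=25):\n    # w_a, w_b  : flat parameter vectors of the two minima being connected\n    # eval_loss : callable(flat_params) -> validation loss of the task of interest\n    alphas = [i / (n_points - 1) for i in range(n_points)]\n    losses = []\n    with torch.no_grad():\n        for alpha in alphas:\n            losses.append(float(eval_loss(w_a + alpha * (w_b - w_a))))\n    return alphas, losses\n\\end{verbatim}\n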
In Fig.~\\ref{fig:effectiveness-mnist-t1}, we show the validation loss for task 1 along the interpolation lines between \\what{1} and four pairs of subsequent minima: the MC-SGD minima and the continual (SGD) minima obtained while learning the later tasks. In our illustrations, we choose \\what{5} and \\wbar{5} as minima from the early stages of learning, \\what{10} and \\wbar{10} from the middle stages, and \\what{15}, \\wbar{15}, \\what{20}, and \\wbar{20} from the late stages. We can see that the losses along the interpolation lines between CL and MC minima are nearly flat compared to the losses along the lines between CL minima. Moreover, Fig.~\\ref{fig:effectiveness-mnist-t5} shows the interpolation plot for task 5, verifying that the conclusion also holds for later tasks. Similarly, Figs.~\\ref{fig:effectiveness-cifar-t1} and~\\ref{fig:effectiveness-cifar-t5} show the interpolation plots of Split CIFAR-100 for tasks 1 and 5, respectively.\n\n\\begin{figure*}[th!]\n\\centering\n \\begin{subfigure}{.245\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/mc_effectiveness\/rot_mnist_t1.pdf}\n \\caption{MNIST - Task 1}\n \\label{fig:effectiveness-mnist-t1}\n \\end{subfigure}\\hfill\n \\begin{subfigure}{.245\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/mc_effectiveness\/rot_mnist_t5.pdf}\n \\caption{MNIST - Task 5}\n \\label{fig:effectiveness-mnist-t5}\n \\end{subfigure}\\hfill\n \\begin{subfigure}{.245\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/mc_effectiveness\/cifar_t1.pdf}\n \\caption{CIFAR - Task 1}\n \\label{fig:effectiveness-cifar-t1}\n \\end{subfigure}\\hfill\n \\begin{subfigure}{.245\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/mc_effectiveness\/cifar_t5.pdf}\n \\caption{CIFAR - Task 5}\n \\label{fig:effectiveness-cifar-t5}\n \\end{subfigure}\\hfill\n\\caption{Mode connectivity between the CL minima found by Naive SGD and the minima found by the proposed MC-SGD: (a) and (b) Rotated MNIST; (c) and (d) Split CIFAR-100.}\n\\label{fig:mc_effeciveness}\n\\end{figure*}\n\n\\section{Conclusion}\nWhile both continual and multitask learning aim to find solutions that perform well across all tasks, the difference in their training regimes leads to two very different outcomes: multitask learning typically ends up achieving this goal, while continual learning suffers from catastrophic forgetting, performing well only on recently seen tasks. In this work, we investigated the relationship between these two solutions when all other potential confounding factors are minimized.\n\nWe considered a simple training regime in which multitask learning and continual learning start from similar conditions when incorporating a new task. We showed that, in this setting, multitask minima are connected to the continual learning ones by a linear path of low error, while continual learning solutions are not similarly connected to each other. This can be understood through the lens of Taylor expansions, as the multitask objective restricts learning to directions of low curvature within an area where a second-order approximation holds. Such solutions seem to exist even when the process is applied repeatedly, over more than 20 tasks in a row.\n\nFinally, we built on this observation and proposed a new algorithm for continual learning, called Mode Connectivity SGD (MC-SGD). It relies on the assumption that there always exists a solution able to solve all tasks seen so far that is connected to the current minima by a linear path of low loss. 
MC-SGD utilizes a replay buffer to approximate the loss of previously seen tasks whose data is no longer available. Compared to two other popular rehearsal-based methods, it performs better with less replay data, as it exploits the linear path to further constrain learning. The proposed algorithm performs surprisingly well on a few classical benchmarks for continual learning.\n\nWe believe that the efficiency of this algorithm, as well as the analysis of the local structure of the loss surface around the continual learning and multitask learning solutions, sheds more light on how multitask learning manages to achieve better performance than continual learning.\n\n\n\\subsection*{Acknowledgements}\nSIM and HG acknowledge support from the United States National Science Foundation through grant CNS-1750679. The authors thank Jonathan Schwarz and Behnam Neyshabur for their valuable comments and feedback.\n\n{\\footnotesize\n\\bibliographystyle{iclr2021_conference}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}