\\section{Introduction\\label{sec:introduction}}\n\nSolar active regions (ARs)\nincluding sunspots\nare generally thought to be\nthe consequence of flux emergence,\nthat is, the buoyant rise\nof magnetic flux\nfrom the deep convection zone\n\\citep{par55}.\nObservationally,\nthe newly emerging flux appears\nas a small, bright bipolar plage\nin the chromospheric \\ion{Ca}{2} H and K line cores\n\\citep{fox08,she69}.\nSoon afterwards,\nthe arch filament system (AFS),\ncomposed of parallel dark fibrils,\nappears in the line core of H${\\rm \\alpha}$\n\\citep{bru67}.\nThe fibrils are magnetic field lines\nconnecting the faculae\nof positive and negative polarities.\nIn the photosphere,\nsmall pores are formed\nat the roots of the chromospheric filaments,\nwith downflows of up to\n$\\sim 1\\ {\\rm km\\ s}^{-1}$.\nThe faculae of opposite polarities\nseparate,\ninitially at a rate of $>2\\ {\\rm km\\ s}^{-1}$,\nwhich drops\nto $1.3$--$0.7\\ {\\rm km\\ s}^{-1}$\nduring the next 6 hours \\citep{har73}.\nNew magnetic flux emerges continuously\nbetween the opposite polarities.\nIf the flux is sufficient,\nthe pores gather,\nand sunspots gradually form\nnear the leading and the following plages\n\\citep{zir72}.\n\nOver the last several decades,\nnumerical computations have been\ndeveloped extensively\nto reveal the dynamics\nof flux emergence\nand the birth of active regions\n\\citep[e.g.][]{shi89}.\nIn our recent simulations\nof large-scale flux emergence\nfrom a depth of $20,000\\ {\\rm km}$,\nthe rising twisted flux tube\nin the convection zone\ndecelerates and forms\na flat structure\njust beneath the photosphere\n\\citep[e.g.][]{tor12}.\nIn this calculation,\nthe plasma,\nwhich is pushed up\nby the rising flux,\nescapes 
laterally\naround the surface.\nThe appearance\nof the divergent outflow\nat the photosphere\nwas found to occur earlier than\nthat of the magnetic flux,\nand, at this stage,\nthe outflow is mainly horizontal.\nHereafter\nwe call this preceding outflow\na horizontal divergent flow (HDF).\nA similar flow\nis also reported\nby \\citet{che10}.\nHowever,\nto our knowledge,\nthe HDF\nprior to the flux emergence\nhas not been clearly confirmed\nin previous observations\n\\citep{kos09}.\nHere, we use\nthe term ``horizontal''\nto indicate the direction\nparallel to the solar surface.\n\nThe aim of this study\nis to investigate the HDF\nand the evolving magnetic field\nat an early phase\nof the flux emergence.\nFor this purpose,\nwe used\nthe Dopplergrams and magnetograms\nof the Helioseismic and Magnetic Imager (HMI)\non board the Solar Dynamics Observatory (SDO),\nsince their continuous observations\nof the whole solar disk\nmake it possible\nto obtain information\nat the very moment of,\nor even before,\nthe flux emergence\nat the surface.\n\nOur numerical result\nindicates that,\nif the newly emerging region\nis located away from the disk center,\nif a pair of\npositive and negative Doppler patterns\nis detected\njust before the flux emergence,\nand if the positive (negative) pattern\nis limbward (disk-centerward),\nthe observed Doppler velocity\nis mainly horizontal\nrather than vertical.\nTherefore,\nwe can evaluate the horizontal velocity\nof the escaping plasma\nfrom the Doppler velocity,\nby considering the heliocentric angle\nof the active region\nmeasured from the disk center.\nOne advantage of this method\nover the ordinary local correlation-tracking method\n\\citep{nov88}\nis that\nthe horizontal velocity\nof the plasma\ncan be evaluated independently\nof the apparent motion\nof magnetic elements\nat the photosphere.\nAfter the flux has emerged,\nwe cannot obtain\nthe horizontal speed\nfrom the Doppler velocity,\nsince it may contain a vertical motion\nsuch as the rising of 
magnetic fields\nor a downflow\nin the convective collapse process.\n\nIn this Paper,\nwe report the first determination\nof the HDF\nprior to the flux appearance,\nusing SDO\/HMI Dopplergrams and magnetograms.\nWe also study\nthe chromospheric reaction\nto the flux emergence\nin the photosphere\nby using H$\\alpha$ images\ntaken by\nthe Solar Magnetic Activity Research Telescope\n(SMART)\nat Hida Observatory.\nIn Section \\ref{sec:observation},\nwe introduce the observations\nand the method of data reduction.\nThe analysis and the results appear\nin Section \\ref{sec:results}.\nThen, in Section \\ref{sec:discussion},\nwe discuss the observational results.\nFinally, we summarize the Paper\nin Section \\ref{sec:summary}.\n\n\\section{Observation and Data Reduction\n \\label{sec:observation}}\n\nIn this Paper,\nwe studied NOAA AR 11081,\nwhich formed in June 2010\nin the northwest of the solar disk.\nTo measure the Doppler shift\nand line-of-sight (LoS) magnetic field\nin the photosphere,\nwe used Dopplergrams and magnetograms\ntaken by\nSDO\/HMI.\nAlso,\nto study the chromospheric response\nto the flux emergence,\nwe used H$\\alpha$ images\ntaken by SMART at Hida Observatory.\n\n\n\\subsection{SDO\/HMI Dopplergram and Magnetogram}\n\nSDO\/HMI continuously observes\nthe whole solar disk\nat the 6173 \\AA \\ion{Fe}{1} line,\nwhich is resolved with $4096^{2}$ pixels\n\\citep{sch12}.\nTo obtain the tracked data cubes\nof the birth of AR 11081,\nwe used the {\\tt mtrack} module\n\\footnote{http:\/\/hmi.stanford.edu\/teams\/rings\/mod\\_mtrack.html}.\nThe data cubes\nof the Doppler velocity\nand the LoS magnetogram\nhave\na spatial resolution of $0.5\\ {\\rm arcsec}$\n(1 pixel corresponds to $\\sim 360\\ {\\rm km}$)\nwith a $512^{2}$ pixel field-of-view (FoV),\nand a temporal resolution of $45\\ {\\rm s}$\nwith a duration of $36\\ {\\rm hr}$,\nstarting at 12:00 UT\non 2010 June 10.\nIn the initial state,\nthe center of the $512^{2}$ FoV\nis located at\nN$22^{\\circ}$ 
W$25.6^{\\circ}$,\nor ($+392, +383$) arcsecs\nin solar disk coordinates.\nHere, we applied Postel's projection,\nthat is, both Doppler and magnetic maps\nare projected\nas if seen from\ndirectly above.\nThen, to eliminate the effects of\nthe rotation of the Sun\nand the orbital motion of the satellite,\nand to determine the zero point\nof the LoS velocity,\nwe reduced the mean velocity\nfrom each Dopplergram.\nAlso, a 30-min (40-frame) moving average\nwas applied\nto the Dopplergrams and magnetograms.\n\nFigure \\ref{fig:fov} is\nthe HMI magnetogram\nof NOAA AR 11081\ntaken at 06:00 UT,\n2010 June 11,\nthat is,\nafter the emergence started.\nHere, white and black indicate\nthe positive and negative polarities,\nrespectively.\nThe diagonal line in this figure\nis the slit\nfor the time-sliced diagram\nin Section \\ref{sec:slice}.\nThe slit angle\nis chosen to fit\nthe first separating motion\nof both polarities.\nThe square indicates\nthe region analyzed\nin Section \\ref{sec:histogram}\nto measure the distributions\nof the Doppler velocity\nand the LoS\nfield strength.\n\n\n\\subsection{SMART H\\boldmath{$\\alpha$} Images}\n\nSMART\nat Hida Observatory,\nKyoto University,\nconsists of four different telescopes,\nwhich are T1, T2, T3 and T4, respectively\n\\citep{uen04}.\nThey are placed on a tower\nwith a height of $16\\ {\\rm m}$.\nT1 obtains H$\\alpha$\nfull solar disk images\nat high temporal and spatial resolution.\nFor studying the chromospheric reaction\nto the photospheric flux emergence,\nwe analyzed the H$\\alpha$ data\nof 01:00--05:00 UT,\n2010 June 11,\nwhich resolves the full solar disk\nwith $4096^{2}$ pixels\n(1 pixel corresponds\nto $\\sim 0.56\\ {\\rm arcsec}$)\nand has a maximum temporal resolution\nof 2 minutes.\n\nIn this study,\nwe only used H$\\alpha$ line core images\n(wavelength at $6562.8\\ {\\rm \\AA}$).\nFirst,\ndark-current subtraction\nand flat fielding\nwere performed\non the obtained SMART data.\nThen, by taking\na 
cross-correlation\nof two consecutive images\nto fix the position\nof the target emerging active region,\nwe made a data cube\nof H$\\alpha$ images.\nNote that the H$\\alpha$ images\nare simple close-ups\nof the full disk image,\nwhile\nPostel's projection is applied\nto the HMI images.\n\n\n\\section{Data Analysis and Results\n \\label{sec:results}}\n\nFigure \\ref{fig:evolution}\nshows the temporal evolution\nof the Dopplergram and the magnetogram\nfor 12 hours\nfrom 18:00 UT,\n2010 June 10.\nIn the Dopplergram,\nthe motions toward and away\nfrom the observer are\nshown in blue and red,\nrespectively.\nAt first,\nduring 18:00--00:00 UT,\nthe surface is relatively quiet\nwith some preexisting magnetic elements\nof both positive\nand negative polarities.\nAn area\nwith strong blue shift\n($< -1\\ {\\rm km\\ s}^{-1}$) appears\nin the middle of the FoV\nat 01:00 UT on June 11,\nand gradually grows\nin size.\nAfter 03:00 UT,\nstrong red shift\n($> 1\\ {\\rm km\\ s}^{-1}$) appears\nand magnetic flux emergence\ntakes place.\nThe positive and negative polarities\nmove apart from each other.\nHere, the separation\nof the magnetic elements\nis almost along the slit,\nwhich is indicated as a diagonal line.\nFinally, at 06:00 UT,\nthe red and blue areas\nbecome faint.\nThe separated magnetic elements stop\nand gather to form pores\nat the boundary\nof the emerging region.\n\nIn this section,\nwe first introduce the results\nof time-slices\nof the Dopplergrams and magnetograms\nin Section \\ref{sec:slice}.\nThen, in Section \\ref{sec:histogram},\nwe clarify\nthe occurrence times\nof the HDF\nand the flux emergence,\nand evaluate\nthe horizontal speed\nof the HDF.\nSection \\ref{sec:chromosphere}\nis dedicated\nto the chromospheric studies.\n\n\n\\subsection{Time-sliced Diagram\\label{sec:slice}}\n\nTo examine the motion\nof the magnetic elements\nof positive and negative polarities\nand the corresponding LoS velocity,\nwe made time-sliced diagrams\nof 
HMI Dopplergrams and magnetograms.\nThe spatial slit is indicated\nas a diagonal line\nin Figures \\ref{fig:fov}\nand \\ref{fig:evolution},\nand is placed\nparallel to the separation\nof the two polarities.\n\nFigure \\ref{fig:slice}\nis the time-sliced diagram\nof the Dopplergram and the magnetogram\nalong the slit.\nFrom the time-slice\nof the magnetogram,\nFigure \\ref{fig:slice}(b),\nwe can see that\nthe positive and negative polarities\nmove apart from each other\nfrom around 03:00 UT on June 11.\nThe speed of each element\nis estimated to be\n$\\sim 1.2\\ {\\rm km\\ s}^{-1}$,\nwhich then drops\nto $\\sim 0.4\\ {\\rm km\\ s}^{-1}$.\nThus, the separation\nspeed is $0.8$--$2.4\\ {\\rm km\\ s}^{-1}$.\nThis deceleration\nof the separating polarities\nmay reflect that\nthe polarities are reaching\nthe boundary\nof the active region.\nThese elements then gathered\nto create stronger pores,\nwhose absolute LoS\nfield strength is\ngreater than $200\\ {\\rm G}$.\nOne can also see that\nsmall, weak elements\nof both polarities appear\nbetween the main separating pores\nduring 03:00--09:00 UT\non June 11.\nAlso, the main positive pore\ncollides with\nthe preexisting negative polarity,\nand they cancel\neach other out.\n\nIn the Doppler slice,\nFigure \\ref{fig:slice}(a),\na pair of red and blue patterns\nemerged at around 02:00 UT on June 11,\nslightly earlier than\nthe appearance of the magnetic elements\nin Figure \\ref{fig:slice}(b).\nThe red and blue shift patterns\nimmediately started to separate,\nand the propagation speed\nof the patterns\n(the slope of the patterns)\nis about $0.4\\ {\\rm km\\ s}^{-1}$.\nHere,\nwe note that\nthe blue (red) pattern is located\ndisk-centerward (limbward),\nwhich indicates that\nthe flow is divergent.\nMoreover,\nfrom the fact\nthat the divergent outflow\ncame before\nthe flux emergence,\nwe can infer that\nthe outflow\nduring this period\nis caused by the plasma\nescaping from\nthe rising magnetic flux.\nIt should be 
noted that\nthe trend\nof the Doppler pattern\npreceding the flux emergence\ndoes not change\nwhen we vary the thickness\nof the slit.\n\nHowever,\nthe determination\nof the appearance time\nof the Doppler pattern\nassociated with\nthe flux emergence\nis difficult,\nbecause the Doppler pattern,\nespecially the blue shift,\nappeared at a location\nwhere the supergranulation\nshowed blue shift\n(21:00--01:00 UT).\nThe definition\nof the flux emergence\nand of the appearance of the related Doppler pattern\nis dealt with\nin the next subsection\n(\\S \\ref{sec:histogram}).\n\n\n\\subsection{Appearance times\n of the HDF and the flux emergence,\n and the velocity of the HDF\n \\label{sec:histogram}}\n\nIt is not easy to determine\nthe timings\nof the appearance of\nthe HDF\nand the associated\nflux emergence\nfrom Figures \\ref{fig:evolution}\nand \\ref{fig:slice}.\nIn particular,\nwe have to distinguish\nthe outflow\nrelated to the flux emergence\nfrom the preexisting\nconvective motions\nof the quiet Sun\n(e.g., granulation and supergranulation).\nTo clarify with statistical significance\nwhen the HDF\noccurred\nand when the magnetic flux emerged,\nwe studied the temporal changes\nof the Doppler and magnetic patterns\nrelative to those before the emergence,\nnamely, the patterns of the quiet Sun.\nAlso, in this subsection,\nwe describe how we evaluate\nthe horizontal speed\nof the HDF.\n\nFirst, we plotted histograms\nof the Doppler velocity\nand the absolute LoS\nfield strength\ninside the square\nof Figure \\ref{fig:fov}\nfor each frame.\nThe size of the square\nis $70\\times 70$ pixels\n$(\\sim 25\\times 25\\ {\\rm Mm}^{2})$,\nwhich is selected\nto include the emergence region.\nAs for the Dopplergram,\nthe apex of the histogram\nwas shifted\nto the zero point.\nThen, considering the photospheric condition\nin the 3 hours\nfrom 21:00 UT of June 10\nto be sufficiently quiet,\nwe averaged the 240 histograms\nof the Dopplergrams and the magnetograms\nin this period,\nand 
regarded these averages\nas reference quiet-Sun profiles.\n\nIn the left column\nof Figure \\ref{fig:histogram},\nwe show histograms\nof the Doppler velocity\nat five different times on June 11,\nplotted over the reference\nquiet-Sun profile.\nHere we note that\nthe quiet-Sun profile\nobtained is similar\nto a Gaussian distribution.\nThe shade indicates\nthe standard deviation\nabove and below the reference.\nAs time goes by,\nthe profile deviates\nfrom the reference,\nbecause the number of pixels\nwhose absolute Doppler velocity\nis greater\nthan $0.5\\ {\\rm km\\ s}^{-1}$\nincreases.\nThe right column\nof Figure \\ref{fig:histogram}\nshows the residual\nof the Doppler histogram\nfrom the reference.\nOne standard deviation\nis also shown as a shaded area.\nAt first,\nthe residual is below\none standard deviation level\nfor most of the velocity range.\nFrom 02:00 UT, however,\nthe residual exceeds the deviation.\n\nFigure \\ref{fig:histogram_mag}\nis the same as Figure \\ref{fig:histogram},\nbut for the absolute\nfield strength\nof the LoS magnetograms.\nHere, the quiet-Sun profile\nconsists of a distribution with\na width of $\\sim 10\\ {\\rm G}$\n(about the precision\nof the HMI magnetogram)\nand some preexisting pores\nwithin the FoV.\nThus, the profile is\ndifferent from\na Gaussian distribution.\nThe residual in the range\nof $> 200\\ {\\rm G}$\nexceeds\none standard deviation level\nfrom 04:00 UT.\nAfter this time,\nthe residual of $> 200\\ {\\rm G}$\nrises well above the standard deviation,\nbecause more and more flux emerges\nand stronger pores are created.\n\nTo ensure the significance\nof the measurement,\nwe define\nthe start times of the HDF\nand the flux emergence\nas the times\nwhen the respective residuals\nof the Dopplergrams and the magnetograms\nexceeded one standard deviation level.\nTo determine these times,\nwe show in Figure \\ref{fig:timing}\nthe time-evolution\nof the residuals\n(taken from and averaged over\nthe range\n$[-0.8\\ {\\rm km\\ 
s}^{-1}, -0.4\\ {\\rm km\\ s}^{-1}]$\nand $[0.4\\ {\\rm km\\ s}^{-1}, 0.8\\ {\\rm km\\ s}^{-1}]$\nfor Dopplergram,\nand the range $[200\\ {\\rm G}, 300\\ {\\rm G}]$\nfor magnetogram),\nplotted over\none standard deviation.\nIn this figure,\nthe residual of the Dopplergram\nbecomes over the standard deviation\nat 01:23 UT on 11 June,\nwhile that of the magnetogram\nexceeds the level\nat 03:06 UT.\nThat is,\nthe appearance of the HDF\ncame before the flux emergence\nby about 100 minutes.\n\nDuring this period,\nit is expected that\nthe flow is mainly horizontal\nand a vertical component\nis less dominant.\nThus, we can calculate\nthe horizontal velocity\nfrom the residual distribution\nof the Doppler velocity\n(Figure \\ref{fig:histogram}),\nby considering\nthe geometric effect.\nThe relation between\nthe horizontal velocity $V_{\\rm h}$\nand the Doppler velocity $V_{\\rm D}$ is\n$V_{\\rm h}=V_{\\rm D}\/\\sin{\\theta}$,\nwhere $\\theta$ is the heliocentric angle\nof the emerging region\nmeasured from the disk center.\nFrom 01:23 to 03:06 UT,\nthe Doppler velocity range\nwhere the residual exceeds\nthe one standard deviation\nis typically\n0.4--$1.0\\ {\\rm km\\ s}^{-1}$,\nwhich is up to\n$1.5\\ {\\rm km\\ s}^{-1}$,\nand the heliocentric angle is\n$\\sim 40^{\\circ}$.\nTherefore,\nthe horizontal velocity\nis calculated to be\n$0.6$--$1.5\\ {\\rm km\\ s}^{-1}$,\nand the maximum is\n$2.3\\ {\\rm km\\ s}^{-1}$.\n\nHere, we comment\non the selection\nof the field-strength range\n($[200\\ {\\rm G}, 300\\ {\\rm G}]$)\nand its dependence\non the start time\nof the flux emergence.\nIf we use the lower strength range,\nfor example $[50\\ {\\rm G}, 100\\ {\\rm G}]$\nor $[100\\ {\\rm G}, 200\\ {\\rm G}]$,\nat which the residual exceeds\none standard deviation level faster\n(Figure \\ref{fig:histogram_mag}, right column),\nthe start time of the flux emergence\nis calculated to be much earlier.\nIn the present analysis,\nhowever,\nthe strength range\n$[200\\ {\\rm G}, 300\\ {\\rm 
G}]$\nis used,\nsince the number of pixels with $>200\\ {\\rm G}$\nis so small in the quiet Sun\nthat the flux emergence is easily detected\nwhen it occurs.\nWe confirmed this\nby applying the same analysis\nto the quiet-Sun data.\nAs for the dependence of the results\non the strength range,\nwe tested the analysis\nwith various ranges,\nwhich is summarized in\nTable \\ref{tab:range}.\nFrom this table\none can see that\nthe start time does not change significantly\nfor the [$200\\ {\\rm G}, 300\\ {\\rm G}$],\n[$300\\ {\\rm G}, 400\\ {\\rm G}$],\nand [$400\\ {\\rm G}, 500\\ {\\rm G}$] cases.\n\nWe also checked the dependence\non the size of the square\nin which the histograms are made\n(Fig. \\ref{fig:fov}),\nwhich is summarized in\nTable \\ref{tab:size}.\nHere, the time difference\nis almost constant\nfor various square sizes\nand is about 100 min.\nWith increasing square size,\nthe fraction of high-speed or strong-field pixels\nin the square decreases.\nAt the same time,\nthe quiet-Sun reference profile\nbecomes more accurate\nand one standard deviation level decreases.\nTherefore, in total,\nthe time difference remains constant.\n\n\n\\subsection{Chromospheric Response\n \\label{sec:chromosphere}}\n\nIn this subsection,\nwe investigate\nthe time-evolution\nof the H$\\alpha$ intensity\nto examine the relation\nbetween the chromosphere\nand the photosphere\nin this event.\nFigure \\ref{fig:ha}(a)\nis a sample image\nof the SMART H$\\alpha$ data.\nThe color and contours\nindicate\nthe relative H$\\alpha$ intensity.\nIn this figure,\nthere are two bright regions\n(plages)\nin the middle of the FoV.\nThen,\nalong the slit\nof Figure \\ref{fig:ha}(a),\nwe made a time-sliced diagram\nfor 4 hours\nstarting at 01:00 UT on June 11,\nwhich is shown\nas Figure \\ref{fig:ha}(b).\nNote that the slit\nin Figure \\ref{fig:ha}(a)\nis not exactly the same as\nthat in Figure \\ref{fig:fov},\nsince the H$\\alpha$ data\nare a simple close-up view\nof the full disk image,\nwhile Postel's 
projection\nis applied to the HMI data.\nThus,\nfrom this study,\nwe can only determine\nthe appearance time\nof the chromospheric brightening.\n\nIn Figure \\ref{fig:ha}(b),\nthe first bright source,\nat the slit location\nof $5\\times 10^{4}\\ {\\rm km}$,\nstarts at 02:40 UT.\nHowever, it was found that\nthis brightening\nis due to the activity\namong the preexisting quiet-Sun pores\nof both polarities,\nwhich later collide with\npositive patches\nof the newly emerging flux\n(see Section \\ref{sec:slice}).\nIt is difficult to\nseparate this bright source\ninto the activity\nof the preexisting pores\nand that of the newly emerged\npositive pores.\nThe second source,\nlocated at $7\\times 10^{4}\\ {\\rm km}$,\nstarts at 03:20 UT,\nand there was\nno preceding pore\nin this region.\nTherefore,\nwe consider that\nthe second source\nis entirely due to\nthe newly\nemerged negative pores,\nand determine that\nthe chromospheric reaction\nstarts at this time\n(03:20 UT;\nindicated by a dashed line\nin Figure \\ref{fig:ha}(b)).\nThe two chromospheric sources\nare located\njust over the positive\nand negative polarities\nin the photosphere.\n\n\n\\section{Discussion\\label{sec:discussion}}\n\n\\subsection{Mechanism of the Time Difference\n \\label{sec:mechanism}}\n\nIn this Paper\nwe analyze\nthe newly emerging active region\nand find that\nthere is a time difference\nbetween the appearance of\nthe horizontal divergent flow (HDF)\nand the corresponding flux emergence;\nthe HDF\nappears prior to the flux emergence\nby about 100 minutes.\n\nAccording to the thin-flux-tube\nsimulation \\citep{fan09},\nthe rise speed of the flux tube\nincreases\nin the top few tens of Mm\nof the convection zone.\nHowever, at the same time,\nthe flux tube expands\nas the external density (pressure)\ndecreases with height.\nThe radius of the tube eventually exceeds\nthe local pressure scale height\nat a depth of $\\sim 20\\ {\\rm Mm}$\nand the thin-flux-tube approximation\nbreaks down.\nRecently, 
our numerical simulations\nusing fully compressible MHD,\nincluding the convection zone,\nthe photosphere,\nand the corona\nin a single computational box,\nhave revealed that\nthe rising flux tube\ndecelerates\nin the uppermost convection zone\n\\citep{tor11,tor12}.\nThis is because\nthe plasma on the flux tube piles up\nbetween the apex of the tube\nand the subadiabatically stratified photosphere ahead,\nand inhibits the rising motion of the flux tube.\nThen, the accumulated plasma\nin turn extends the tube laterally.\nThis accumulation becomes effective\nfrom the depth\nwhere the apex of the tube\nbecomes ``flat''.\nThis critical depth\nis also considered to be\nwhere the tube's radius exceeds\nthe local pressure scale height\n(a depth of $\\sim 20\\ {\\rm Mm}$).\nThe lateral expansion\nof the flux tube\nappears\nsimilar to those\nfound by \\citet{mag01} and \\citet{arc04}.\nHowever, their expansions occur\nbecause the tubes themselves\nmove into the subadiabatic photosphere.\n\nAs the rising tube approaches\nthe photosphere,\nthe accumulated plasma\non the rising tube\nescapes horizontally\naround the surface\nand is observed\nas an HDF,\nwhile the tube itself\nstops beneath the surface.\nSince the flux is\ncontinuously transported\nfrom below,\nthe magnetic pressure gradient\nat the photosphere\nincreases,\nand the further emergence\ninto the upper atmosphere\nstarts\ndue to the magnetic buoyancy instability.\nWhen the flux resumes rising,\nit is observed as a ``flux emergence''\nat the photospheric level.\nTherefore,\nthe time difference\ndetected in this Paper implies\na period of latency\nduring which\nthe flux tube\nreaching the photosphere\ndevelops the magnetic buoyancy instability.\nThe growth time\nof the instability is,\nhowever,\ncomplicated\nand may be related\nto many parameters\nof the rising flux tube,\nsuch as the field strength,\ntotal flux, twist, etc.\nThus, we shall leave\nthe estimation of the time gap\nfor our future numerical 
research.\n\n\n\\subsection{Depth of the Magnetic Flux\n \\label{sec:model}}\n\nTo describe the relation\nbetween the HDF\nand the contributing upflow\nbelow the surface,\nwe make a simple model,\nwhich is schematically illustrated\nin Figure \\ref{fig:model}.\nWhen the magnetic flux tube has emerged\nfrom the deeper convection zone,\nan upflow region is formed\nin front of the flux tube.\nIf the typical size\nof this region is $L$\nand the velocity is $V_{\\rm up}$,\nthe mass flux passing through\nthe area of $\\pi L^{2}$\ncan be written as\n\\begin{eqnarray}\n F_{1}=\\rho_{1} V_{\\rm up} \\pi L^{2},\n \\label{eq:f1}\n\\end{eqnarray}\nwhere $\\rho_{1}$ is the plasma density.\nNext, the photospheric plasma\nthat escapes from the upflow\nspreads along the surface\nas an HDF.\nIf we write the horizontal velocity\nat the radial distance $r$\nas $V_{\\rm h}(r)$,\nthe thickness as $T$,\nand the density as $\\rho_{2}$,\nthe mass flux passing through $2\\pi rT$ is\n\\begin{eqnarray}\n F_{2}=2\\pi r \\rho_{2}TV_{\\rm h}(r).\n \\label{eq:f2}\n\\end{eqnarray}\nThese fluxes,\n$F_{1}$ and $F_{2}$,\nare assumed to be conserved.\nTherefore,\nfrom Equations (\\ref{eq:f1}) and (\\ref{eq:f2}),\nthe upflow velocity is\n\\begin{eqnarray}\n V_{\\rm up}=\\frac{2\\rho_{2}}{\\rho_{1}}\\frac{rTV_{\\rm h}(r)}{L^{2}}.\n \\label{eq:vup1}\n\\end{eqnarray}\n\nFrom the observational study,\nthe horizontal speed is\n$V_{\\rm h}\\sim 1\\ {\\rm km\\ s}^{-1}$\nat $r=5000\\ {\\rm km}$.\nHere we assume that\n(a) the plasma density is almost\nuniform\naround the photosphere,\ni.e., $\\rho_{1}\\sim \\rho_{2}$,\n(b) the thickness is about\nthe local pressure scale height,\n$T\\sim 200\\ {\\rm km}$,\nand (c) the size of the upflow\nis $4000\\ {\\rm km}$\n(the smallest distance\nbetween the blue and red patterns\nin Figure \\ref{fig:slice}),\ni.e., $L\\sim 2000\\ {\\rm km}$.\nUnder these assumptions,\nEquation (\\ref{eq:vup1}) gives\n$V_{\\rm up}=0.5\\ {\\rm km\\ s}^{-1}$.\nThe time gap\nbetween the 
HDF\nappearance\nand the flux emergence\nwas observed to be $100\\ {\\rm min}$.\nTherefore,\nthe depth from which\nthe apex of the magnetic flux rose\nafter it decelerated\nis estimated to be\n$\\sim 3000\\ {\\rm km}$,\nif the flux tube rises\nat the same rate\nas the upflow.\n\nIn this section,\nfor simplicity,\nwe assumed that\nthe apex of the rising flux is circular,\nand that the outflow velocity $V_{\\rm h}$\nis only a function of $r$.\nFrom Figure \\ref{fig:evolution},\nhowever,\nit seems that the HDF is not axisymmetric\nbut is stronger\nin the direction of flux emergence\n(the northwest-southeast slit in this figure).\nThis property is consistent\nwith our preceding numerical results;\nthe photospheric plasma flow\nis found to be\nalong the direction of flux emergence\n\\citep[see][Fig. 4]{tor12}.\nMoreover,\nin that simulation,\nthe twist of the rising flux tube\nis strong\nand the magnetic field\nat the tube's surface\nis almost perpendicular\nto the axis of the tube.\nIn the later phase of\nthe target AR of this Paper,\nthe separation of\nthe positive and negative polarities\nshifted to the northeast-southwest direction,\ni.e., perpendicular to the diagonal line\nin Figure \\ref{fig:evolution}.\nTaking into account\nthe previous numerical results,\nand considering that\nthe observed NE-SW direction indicates\nthe axis of the flux tube\nthat forms this AR,\nwe infer that the twist\nof this flux tube is tight,\nand therefore the flow\nis in the NW-SE direction.\n\n\n\\subsection{Relations with Recent Observations:\n HDF as a precursor\n \\label{sec:seismology}}\n\nUsing SOHO\/MDI,\n\\citet{gri07} observed NOAA AR 10488\nand found that\nupflows of matter\nwith a high velocity\n($\\gtrsim 0.4\\ {\\rm km\\ s}^{-1}$)\npreceded flux emergences\nby 8 and 13 min.\nThus,\nthe last $\\sim 10$ min\nof the divergent Doppler pattern\nobserved in our study,\nwhich lasted for 100 min,\nmay contain\nan upward motion.\nHowever,\nfor most of the period,\nthe 
flow is expected\nto remain horizontal.\nNote that\nthe upflow velocity of\n$\\gtrsim 0.4\\ {\\rm km\\ s}^{-1}$\nreported\nby \\citet{gri07}\nmay be the speed\nof a magnetic flux\nrising in the photosphere.\nAs for the velocity\n($V_{\\rm up}=0.5\\ {\\rm km\\ s}^{-1}$)\nestimated in Section \\ref{sec:model},\nthis value indicates\nthe emergence speed\nof a magnetic flux\nin the uppermost convection zone.\n\nBy means of time-distance helioseismology,\n\\citet{ilo11} detected\nstrong acoustic travel-time anomalies\nas deep as 65 Mm,\n1 to 2 days\nbefore the flux rate reaches its peak,\nand (in most cases)\na few hours before\nthe start of\nthe flux appearance\nat the surface\n\\citep[see also][]{kos08,kos09}.\nThese anomalies are\nconsidered to be\nsigns of the rising\nmagnetic flux.\nTaking account\nof our numerical simulations\n\\citep[e.g.][]{tor12},\nit is natural\nto interpret\nthis helioseismic anomaly\nas resulting\nfrom an effect\nsimilar to the plasma accumulation;\nthe external medium\nmay be perturbed or compressed\nby the rising motion\nof the magnetic flux.\nThe importance\nof the helioseismic anomaly\nin \\citet{ilo11}\nand of the HDF in our study\nis that\nthese phenomena occur\nprior to the flux emergence\nat the photosphere.\nThat is,\nthey are\nprecursors\nof the flux emergence.\nBy combining the two types\nof observations,\nsunspot appearances\nmay be predicted\nin the near future.\n\n\n\\subsection{Further Emergence to the Upper Atmosphere\n \\label{sec:further}}\n\nIn Section \\ref{sec:chromosphere},\nwe found that the H$\\alpha$ brightenings\n(plages) were located\nover the positive and negative pores\nin the photosphere.\nThis indicates that\nthe brightenings\nare caused by the plasma\nflowing down along magnetic loops\nthat connect the photospheric magnetic elements\n\\citep[see][Figure 10]{shi89}.\nThe appearance of the chromospheric source\nwas at 03:20 UT\non June 11,\nwhile the flux emergence\nwas at 03:06 UT.\nIf we assume the H$\\alpha$ formation 
height\nto be $2000\\ {\\rm km}$,\nthe mean rise velocity of the magnetic field is\n$\\sim 2.5\\ {\\rm km\\ s}^{-1}$\n($2000\\ {\\rm km}$ traversed in the 14-min delay).\nThis value is smaller than\nthe observed speed\nof the chromospheric arch filament system (AFS)\nof $\\sim 20\\ {\\rm km\\ s}^{-1}$\n\\citep[e.g.][]{bru67},\nwhich implies that\nthe actual rise speed\nis faster than $2.5\\ {\\rm km\\ s}^{-1}$\nand that it takes some time\nto create the H$\\alpha$ plage\nafter the flux reaches\nthe chromospheric height.\n\n\n\\section{Summary\\label{sec:summary}}\n\nIn this Paper,\nwe have observed\nthe horizontal divergent flow (HDF)\nprior to the flux emergence\nby using SDO\/HMI Dopplergrams and magnetograms.\nThe presence of the HDF\nwas predicted\nby our preceding numerical simulations\n\\citep[e.g.][]{tor12}.\nHMI's continuous observation\nof the whole solar disk provides\nthe means to analyze\nthe early stage\nof the flux emergence.\nA summary of the observations\nis given\nin Table \\ref{tab:summary}.\n\nFirst, we made time-slices of\nthe Dopplergrams and LoS magnetograms\nof NOAA AR 11081.\nFrom the magnetic slice,\nwe found that\nthe magnetic elements\nof positive and negative polarities\nseparated from each other.\nThe apparent speed\nof a single element was,\nat first, $1.2\\ {\\rm km\\ s}^{-1}$.\nThe speed then dropped\nto $0.4\\ {\\rm km\\ s}^{-1}$\nand the elements gathered\nto create stronger pores\nof $>200\\ {\\rm G}$.\nIn the Doppler slice,\na pair of blue and red patterns\nwas observed to separate,\nslightly earlier than\nthe flux emergence,\nand the blue (red) pattern\nwas located disk-centerward (limbward).\nThis indicates that\nthe HDF\nappeared prior to the flux emergence.\nAccording to our previous numerical experiments,\nthe outflow is mainly horizontal\nduring the period\nfrom the appearance of the outflow\nto the emergence of the magnetic flux.\n\nSecondly,\nwe evaluated the times of the HDF\nappearance\nand the flux emergence.\nTo determine these times\nwith significance,\nwe studied the temporal 
changes\nof the Doppler and magnetic patterns\nfrom those of the quiet Sun,\nand defined them as\nthe times when each profile exceeded\none standard deviation\nof its quiet-Sun profile.\nAs a result,\nthe Doppler profile was found to\ndeviate from the quiet-Sun profile\nat 01:23 UT, 2010 June 11,\nwhile the magnetic profile\ndeviated at 03:06 UT.\nTherefore,\nthe time difference was\nabout 100 minutes.\nAlso, by considering the heliocentric angle,\nthe horizontal speed of\nthe HDF in this time gap\nwas estimated to be\n$0.6$--$1.5\\ {\\rm km\\ s}^{-1}$,\nup to $2.3\\ {\\rm km\\ s}^{-1}$.\n\nThe creation of the HDF\nis due to the plasma accumulated\non the apex of the flux tube\nduring its ascent\nin the convection zone.\nThis accumulation occurs\nbetween the flattened apex\nof the rising flux tube\nand the subadiabatically stratified photosphere.\nThe compressed plasma\nescapes horizontally\naround the photosphere,\nwhich was observed\nin this Paper.\nAfter the magnetic flux\nis sufficiently intensified,\nthe magnetic buoyancy instability\nis triggered\nand the magnetic field resumes rising\ninto the upper atmosphere,\nwhich was also seen\nas a flux emergence\nin this Paper.\nTherefore, the time difference\nof $\\sim 100$ min\nmay reflect\nthe latency\nduring which\nthe flux is waiting\nfor the instability onset. 
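The chain of order-of-magnitude estimates summarized above (the Doppler-to-horizontal conversion $V_{\rm h}=V_{\rm D}/\sin\theta$, the mass-flux relation $V_{\rm up}=2\rho_{2}rTV_{\rm h}/(\rho_{1}L^{2})$, and the depth obtained from the 100-min time gap) can be cross-checked in a few lines. The following Python sketch is illustrative only, not part of the analysis pipeline; it uses only the numerical values quoted in this Paper.

```python
import math

# Heliocentric angle of the emerging region, from the text
theta = math.radians(40.0)

# (1) Horizontal speed from the Doppler velocity: V_h = V_D / sin(theta)
v_dopp_typ = (0.4, 1.0)   # km/s, typical residual range
v_dopp_max = 1.5          # km/s
v_h = tuple(v / math.sin(theta) for v in v_dopp_typ)  # cf. 0.6--1.5 km/s in the text
v_h_max = v_dopp_max / math.sin(theta)                # cf. 2.3 km/s in the text

# (2) Upflow speed from mass-flux conservation with rho1 ~ rho2:
#     V_up = 2 * r * T * V_h / L^2
r, T, L, v_h_obs = 5000.0, 200.0, 2000.0, 1.0  # km, km, km, km/s
v_up = 2.0 * r * T * v_h_obs / L**2            # km/s

# (3) Depth of the decelerated flux: rise at V_up during the 100-min gap
dt = 100.0 * 60.0      # s; 1 km/s * 1 s = 1 km, so units cancel
depth = v_up * dt      # km

print(f"V_h = {v_h[0]:.2f}-{v_h[1]:.2f} km/s, max {v_h_max:.2f} km/s")
print(f"V_up = {v_up:.2f} km/s, depth = {depth:.0f} km")
```

Running the sketch reproduces $V_{\rm up}=0.5\ {\rm km\ s}^{-1}$ and a depth of $3000\ {\rm km}$, matching the estimates in Section \ref{sec:model}.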
\n\nApplying a simple model of the horizontal flow\nand the corresponding upflow beneath the surface,\nwe speculated that the depth of the magnetic flux\nis about $3000\ {\rm km}$.\nPreviously,\nSOHO\/MDI found that an upflow preceded the flux emergence\nby about 10 minutes \citep{gri07}.\nThis implies that the last $\sim 10$ min\nof the divergent outflow\nmay include an upward motion.\nEven so, for most of the period,\nthe outflow remains horizontal.\n\nMoreover,\nusing H$\alpha$ images taken by SMART,\nwe studied the chromospheric response\nto the flux emergence at the photosphere.\nThe time-slice showed a pair of H$\alpha$ plages,\nwhich started from 03:20 UT,\nthat is, $\sim 14$ min after the flux emergence.\nThese brightenings were located\njust above the photospheric pores.\nTherefore, we speculated that\nthese brightenings are caused by\nthe plasma precipitating along the magnetic fields\nthat connect the photospheric pores of both polarities.\n\nThe time gap between the HDF occurrence\nand the flux emergence\nwill be investigated in our future numerical study.\nAs for the observational study,\na statistical analysis of HDFs would be the next target.\nAnother important aspect of observing HDFs is that\nthis phenomenon can be considered a precursor,\nwhich may allow us to predict sunspot formation\nseveral hours in advance.\n\n\n\n\n\acknowledgments\n\nWe thank the SDO\/HMI team\nfor data support and useful discussions.\nS.T. thanks Dr. A. 
Kosovichev\nfor arranging his stay\nat Stanford University.\nThis work was supported\nby the JSPS Institutional Program\nfor Young Researcher Overseas Visits,\nand by the Grant-in-Aid for JSPS Fellows.\nWe are grateful to the GCOE program instructors\nof the University of Tokyo\nfor proofreading\/editing assistance.\nWe also appreciate\nthe thorough and helpful comments\nby the anonymous referee.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\section{\label{sec:intro}INTRODUCTION}\nSupersymmetry (SUSY) is one of the most compelling theories for physics beyond the Standard Model (SM)~\cite{susy}. It predicts a new symmetry between bosons and fermions such that, for every SM particle, a superpartner should exist with a spin value differing by one half unit. This hypothesis has strong theoretical and experimental implications. On the theory side, it naturally solves the hierarchy problem~\cite{hierarchy}: the Higgs mass diverges under radiative corrections if the SM is assumed to be valid up to the Planck scale. In addition, SUSY makes the unification of forces at a Grand Unification Scale (GUT)~\cite{GUT} possible. On the experimental side, the existence of several new particles, including a dark matter candidate under certain conditions, is predicted. If no particular fine tuning is introduced in the theory, these particles should be light enough to be produced at the current hadron colliders.\n\nSince the mechanism that breaks SUSY is unknown, more than 100 new parameters are introduced in the Minimal Supersymmetric extension of the Standard Model (MSSM) to induce a soft breaking of the symmetry~\cite{mssm}. To reduce them to a more manageable set, different approaches are typically considered. The so-called ``top-down'' approach makes some assumptions at the GUT scale, and the phenomenology at the electroweak scale is predicted via renormalization group equations. 
CMSSM~\\cite{CMSSM} or GMSB~\\cite{GMSB} are among the models most commonly used in this context. Alternatively, one can follow a ``bottom-up'' approach in which different phenomenological assumptions are made at the electroweak scale to simplify the number of particles expected and their relationships. Finally, limits can also be given generically as the product of cross section, efficiency and acceptance ($\\sigma\\cdot\\epsilon\\cdot A$). In this case, it is worth mentioning that this value is provided as an upper limit on the effective cross section given the luminosity and the number of expected and observed events, without any attempt of correcting for the experimental constraints.\n\nThe Tevatron and the LHC hadron colliders are actively looking for signs of SUSY and, in their absence, constraining further the SUSY parameter space beyond the LEP legacy~\\cite{LEPsusy}. Two multipurpose experiments are collecting data at each of the colliders: ATLAS and CMS at the LHC and CDF and D\\O~at the Tevatron. The LHC, being a proton-proton collider currently operating at a center-of-mass energy of 7~TeV, is particularly sensitive to colored SUSY particles such as squarks and gluinos (the superpartners of the quarks and gluons, respectively), even with relatively low luminosity. The Tevatron, with a center-of-mass energy of 1.96~TeV, was the first machine establishing limits beyond LEP constraints in pair production of SUSY particles. Nowadays, it profits from the large dataset of proton-antiproton collisions to search for non-colored SUSY particles and direct production of third generation squarks, establishing the most stringent limits up to date on these processes.\n\nThe SUSY searches are generically classified in $R$-parity conserving (RPC) or violating (RPV) analyses. $R$-parity~\\cite{Rparity} is a symmetry postulated to avoid some leptonic and baryonic number violating terms appearing in the SUSY superpotential. 
If $R$-parity is conserved, SUSY particles will always be produced in pairs and will decay in cascades until the Lightest Supersymmetric Particle (LSP) is produced. This particle is stable and constitutes a dark matter candidate; it escapes detection, producing a characteristic signature of large momentum imbalance in the transverse plane (\met). In contrast, RPV signatures are mostly characterized by the possibility of reconstructing mass resonances from SUSY particles decaying entirely into SM particles. \n\nA comprehensive overview of all the different searches carried out at the Tevatron and the LHC experiments is beyond the scope of this document. The reader is referred to the dedicated pages of the experiments for further information~\cite{wwwexp}. The rest of the document briefly describes the techniques and results of the different RPC and RPV searches carried out by the experiments at the Tevatron and the LHC colliders. Other more exotic scenarios, such as displaced vertices or $R$-hadrons, and results from indirect searches, in which experiments look for deviations in rare SM processes to constrain the SUSY parameter space, are not considered in this document.\n\n\n\n\n\section{\label{sec:RPC}RPC ANALYSES}\nThe most general signature of RPC processes is the presence of large \met. In addition, the SUSY cascade decays of the initially produced particles can be long or short and can include different numbers and flavors of leptons\footnote{Throughout this document, hadronically-decaying taus are considered as jets unless otherwise stated.} and jets. This rich phenomenology is used by the experiments to define dedicated searches and to control different types of backgrounds.\n\n\subsection{\label{sec:nolep}Searches without Leptons}\nThe strong production of SUSY particles typically involves a relatively large number of jets and \met. 
This is one of the most characteristic signatures of SUSY models, which is why searches without leptons are the most sensitive to a large variety of scenarios. By vetoing leptons, the SM backgrounds are dominated by QCD multijet processes, which have extremely large cross sections but a very small $\epsilon\cdot A$ when large \met~is required. This situation is very difficult to model with Monte Carlo (MC) simulations, and different data-driven strategies are needed. Other important backgrounds are \ensuremath{t\bar t}, $W$+jet and $Z$+jet production, the latter constituting an irreducible background when the $Z$ decays invisibly.\n\nATLAS carried out a search~\cite{ATLAS_0lep} with 1.04\invfb of integrated luminosity using the \ensuremath{m_{\mathrm{eff}}}~quantity, defined as the scalar sum of the \pt~of the jets and the \met. In order to maximize the sensitivity of the analysis to a variety of models, five different signal regions are defined, requiring different inclusive jet multiplicities (from $\geq 2$ to $\geq 4$, with a leading jet of $\pt>130$~GeV and subleading jets of $\pt>40$~GeV), $\met>130$~GeV and different \ensuremath{m_{\mathrm{eff}}}~thresholds ranging from 500 to 1100~GeV. The event selections reduce the QCD multijet contribution by requiring the \met~to be large relative to the hadronic activity and no jet to be aligned with the \met~in the azimuthal plane. For each signal region, five control regions enhancing different backgrounds are defined. The QCD multijet background is estimated using a completely data-driven technique, which consists of generating pseudo-events by smearing the jets according to their response function, derived in a region of low \met~significance. This estimation is normalized appropriately in a region where at least one of the jets is aligned with the \met. 
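For reference, the effective mass used to define these signal regions can be written explicitly as\n\begin{equation}\n  m_{\mathrm{eff}} = \sum_{i=1}^{N_{\rm jet}} p_T^{(i)} + E_T^{\rm miss},\n\end{equation}\nwhere the sum runs over the selected jets and $E_T^{\rm miss}$ denotes the \met.\n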
The rest of the backgrounds are estimated using MC or data-driven approaches in the control regions, and then a MC-driven transfer function is used to estimate the contribution in the signal region. A global likelihood fit combines all this information and takes into account the correlations between the uncertainties. No significant deviations from the SM expectations are found and limits are derived. Figure~\ref{fig:ATLAS_0lep} shows the 95\% CL limits in a model with simplified phenomenology, in which all SUSY particles except for the squarks of the first and second generation and the gluinos are set in the 5~TeV range. In this way, the colored SUSY particles produced are forced to decay directly to the LSP, which is considered massless. These results significantly extend previous limits and are valid up to an LSP mass of 200~GeV.\n\n\begin{figure}[h!]\n \includegraphics[width=75mm]{ATLAS_0lep_interp}%\n \caption{\label{fig:ATLAS_0lep} ATLAS 95\% CL limits on gluino and squark masses derived from the search without leptons in a simplified model containing only gluinos, squarks from the first and second generation and a massless LSP. Previous limits are also shown for reference.}\n\end{figure}\n\nA search aiming at large jet multiplicities was also carried out by ATLAS with 1.34\invfb~\cite{ATLAS_multijets}. In this case, signal regions are defined by six, seven or eight jets with \pt~thresholds ranging from 55 to 80~GeV. The main background contribution is from QCD multijet production, which is controlled by using the fact that $\met\/\sqrt{\HT}$ (with \HT~being the scalar \pt~sum of the jets) is invariant with jet multiplicity. This assumption was validated in many different control regions. The rest of the backgrounds are estimated using MC and validated in dedicated control regions requiring one muon. 
No significant deviations are found, and gluino masses below 520~GeV (680~GeV under the assumption that $\msq = 2\cdot \mgl$) are excluded at 95\% CL in a CMSSM model with $\tan\beta=10$, $A_0=0$ and $\mu>0$.\n\nCMS also carried out a series of searches aiming at the same signature but with a special focus on topological variables to discriminate against backgrounds. In the following, two of them are described: the $\alpha_T$~\cite{CMS_alphaT} and Razor~\cite{CMS_Razor} searches. The $\alpha_T$ variable~\cite{alphaT} is defined as the ratio between the \pt~of the second leading jet and the transverse mass of the two leading jets. In back-to-back topologies, such as QCD multijet production, this ratio shows a strong cutoff at 0.5, providing a good handle to discriminate against this type of background. In the case of more than two jets in the event, a two-jet topology is achieved by clustering the jets that are relatively close in pseudo-rapidity and azimuthal distance, using a dedicated algorithm. The CMS analysis uses 1.1\invfb of data and exploits the fact that $R_{\alpha_T}$, the ratio between the numbers of events with $\alpha_T>0.55$ and $\alpha_T<0.55$, is flat versus \HT~for the SM background. This information is combined with some data-driven predictions in a global likelihood fit. The experiment uses multiple \HT~bins to maximize the sensitivity, and good agreement between data and expectations is found in all of them. 
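For completeness, the $\alpha_T$ variable described above can be written explicitly, following the definition of~\cite{alphaT}, as\n\begin{equation}\n  \alpha_T = \frac{E_T^{j_2}}{M_T}, \qquad M_T = \sqrt{\Big(\sum_{i=1,2} E_T^{j_i}\Big)^2 - \Big(\sum_{i=1,2} p_x^{j_i}\Big)^2 - \Big(\sum_{i=1,2} p_y^{j_i}\Big)^2},\n\end{equation}\nso that a perfectly measured back-to-back dijet event gives $\alpha_T=0.5$, jet mismeasurements push $\alpha_T$ below 0.5, and only events with genuine \met~can reach values well above 0.5.\n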
This result significantly extends the previous limits, produced with only $35\invpb$ of data, and is interpreted in a CMSSM benchmark scenario with $\tan\beta=10$, $\mu>0$ and $A_0=0$, as shown in Figure~\ref{fig:CMS_tanb10}, together with results from other searches.\n\n\begin{figure}[h!]\n \includegraphics[width=85mm]{CMS_SUSY_2011Limits_tanb10}%\n \caption{\label{fig:CMS_tanb10} CMS 95\% CL exclusion limits from many different searches in a CMSSM scenario with $\tan\beta=10$, $\mu>0$ and $A_0=0$.}\n\end{figure}\n\nThe Razor quantity~\cite{Razor} is also exploited by CMS in a dedicated analysis with 35\invpb of data. This search clusters the jets until a dijet topology is obtained, and then the system is boosted back to the center-of-mass frame. The $M_R$ quantity is defined as the momentum of the jets in this system, where both jets have equal momentum since the pair-produced SUSY particles have the same mass. This variable is built only from energy and $z$-momentum components and has the property of peaking at the mass difference between the produced particles and the invisible particles that escape detection, with a width related to the initial boost from radiation. In this way, the traditional search for an excess in the tails of some kinematic distributions can be converted into a bump-hunting search. The transverse version of this quantity, $M_{RT}$, is also defined and enters the razor variable definition, $R=M_{RT}\/M_R$. In this way, $R$ is dimensionless and combines longitudinal and transverse information. The analysis performs a fit to evaluate the different backgrounds using an {\it ansatz} determined in dedicated control regions. 
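In the notation of the razor literature~\cite{Razor}, these two quantities can be written explicitly (reproduced here for illustration) as\n\begin{equation}\n  M_R = \sqrt{(E_{j_1}+E_{j_2})^2 - (p_z^{j_1}+p_z^{j_2})^2}, \qquad\n  M_{RT} = \sqrt{\frac{E_T^{\rm miss}\,(p_T^{j_1}+p_T^{j_2}) - \vec{E}_T^{\rm miss}\cdot(\vec{p}_T^{\,j_1}+\vec{p}_T^{\,j_2})}{2}},\n\end{equation}\nwhere $j_1$ and $j_2$ denote the two clustered mega-jets and $E_T^{\rm miss}$ denotes the \met.\n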
The signal region is defined as $R>0.5$ and $M_R>500$~GeV, where $5.5\pm 1.4$ events are expected, in agreement with the 7 events observed.\n\n\n\subsection{\label{sec:onelep}Searches with One Lepton}\nRequiring the presence of at least one lepton in the event reduces the yield of some types of background processes, such as QCD multijet production, and makes the analysis sensitive to SUSY cascade decays involving leptons. ATLAS developed a search with 1.04\invfb~\cite{ATLAS_1lep} of data in which four signal regions are defined, with three or four jets in the final state and different kinematic thresholds, in order to increase the sensitivity to a generic set of models. The transverse mass between the lepton and the \met, together with the \ensuremath{m_{\mathrm{eff}}}~quantity, now with the lepton included in the definition, are exploited to increase the sensitivity. The QCD multijet contribution is assessed in a completely data-driven manner using a matrix method~\cite{ATLAS_MM}. The rest of the SM backgrounds are predicted using MC normalized to data in dedicated control regions and multiplied by a MC-driven transfer factor to estimate the corresponding contribution in the signal region. The different results and their uncertainties are finally combined in an overall likelihood fit and found to be compatible with the observed number of events. These null results are interpreted in different models, such as the one shown in Figure~\ref{fig:ATLAS_1lep}, where 95\% CL limits are derived in a simplified topology in which only the gluino, the LSP and an intermediate chargino are relevant. The colored scale indicates the cross sections excluded for any beyond-SM process with a similar topology, and the lines indicate the expected and observed exclusions in the MSSM case. 
CMS has also recently released a one-lepton analysis~\cite{CMS_1lep} with 1.1\invfb of data, in which no deviation from the SM expectations is found, and the results are interpreted in the context of the CMSSM, as shown in Figure~\ref{fig:CMS_tanb10}. \n\n\begin{figure}[h!]\n \includegraphics[width=75mm]{ATLAS_1lep_interp}%\n \caption{\label{fig:ATLAS_1lep} ATLAS excluded cross sections at 95\% CL from a dedicated one-lepton analysis for processes in which gluinos are pair-produced and each of them decays into a quark and a chargino, subsequently producing a real or virtual $W$ and the LSP. The chargino mass is fixed such that $x = (m_{\ch} - m_{\mathrm{LSP}})\/(\mgl - m_{\mathrm{LSP}})=1\/2$. The solid and dashed lines are the exclusion limits when the MSSM scenario is considered.}\n\end{figure}\n\n\n\subsection{\label{sec:twolep}Searches with Two Leptons}\nSearches with two identified leptons in the final state are also sensitive to strong production processes. Different cases, depending on whether the leptons have opposite sign (OS), same sign (SS), different flavor (DF), same flavor (SF) or combinations like OSSF, can be addressed and lead to different background estimation techniques.\n\nCDF developed a SS dilepton analysis~\cite{CDF_SS}, using 6.1\invfb of data, aiming at squark or gluino pair production with an intermediate neutralino and chargino decaying via a real or virtual $W$ or $Z$ boson. Background yields are dominated by processes containing real leptons (dibosons) and lepton misidentification from jets ($W$+jet and \ensuremath{t\bar t}) or conversions ($Z\/\gamma^*$ and \ensuremath{t\bar t}). No deviations from the SM expectations are found.\n\nCMS also developed a SS dilepton analysis with a null result, using 0.98\invfb~\cite{CMS_SS} of integrated luminosity. In this case, different flavor combinations (including taus) are considered, together with several \pt, \HT~and \met~thresholds. 
For each of the cases, a dedicated data-driven technique is used to estimate the different background contributions. The results are interpreted in terms of limits in the CMSSM scenario, as shown in Figure~\ref{fig:CMS_tanb10}. \n\nCMS has also released results with 0.98\invfb of integrated luminosity in the dilepton OS channel, using two different approaches~\cite{CMS_OS}. The first one investigates the presence of an excess in the OSSF combination. In SUSY, cascades such as $\tilde\chi_2^0\to l\tilde l\to ll\tilde\chi_1^0$ are expected, and the invariant mass of the OSSF leptons produced in this way would form a characteristic kinematic edge that relates to the mass difference between the SUSY particles. Thus, unbinned maximum likelihood fits are performed in control and signal regions, defined respectively as $100<\HT<300$~GeV and $\HT>300$~GeV. As shown in Figure~\ref{fig:CMS_OSSF}, good agreement with the expectation is observed. The other approach follows a canonical counting experiment, with two different signal regions defined at high \met~or \HT~and with three different data-driven methods to estimate the backgrounds. Good agreement between observed and expected yields is found in all cases, and limits are also derived in the context of the CMSSM, as shown in Figure~\ref{fig:CMS_tanb10}.\n\nATLAS has also recently released results for OS, SS and OSSF dilepton combinations with 1\invfb~\cite{ATLAS_2l} of data. For the OS (SS) analyses, three (two) signal regions are defined, at least one of which requires large \met~and imposes no jet requirement. For the OSSF combination, an excess of SF over DF events is tested against a background-only hypothesis calculated with pseudo-experiments, taking into account the different uncertainties. 
In all cases, no excess is observed with respect to the SM expectations.\n\n\begin{figure}[h!]\n \includegraphics[width=70mm]{CMS_OSSF_fit}%\n \caption{\label{fig:CMS_OSSF}Results of the maximum likelihood fit to the dilepton mass distribution for events in the CMS OSSF signal region.}\n\end{figure}\n\n\subsection{\label{sec:multilep}Searches with Multiple Leptons}\nAnalyses requiring three leptons in the final state are particularly sensitive to the production of uncolored particles such as a chargino and a neutralino, which may decay via virtual $W$ or $Z$ bosons or via sleptons, if kinematically allowed. SM backgrounds producing three leptons and significant \met~in the final state are small and mostly reduce to diboson production and \ensuremath{t\bar t}~with a lepton from the semileptonic decay of a $b$-hadron. This final state has been considered the golden signature for SUSY searches at the Tevatron due to its particularly favorable signal-to-background ratio. Thus, although with a data sample of $\sim 1$\invfb~the LHC may soon become as powerful as the Tevatron, the currently most sensitive searches for these processes have been performed at CDF and D\O.\n\nD\O~developed a search with 2.3\invfb of integrated luminosity in four different channels, combining electrons and muons with an isolated track and with taus~\cite{D0_trilep}. The trigger performance establishes the minimum possible \pt~thresholds of the objects: $\pt>(12, 8)$~GeV for the two-lepton triggers and 15~GeV for the single-muon trigger, needed for the tau case. Two different \pt~selections per channel are implemented. An extensive set of cuts exploiting kinematic information, such as invariant masses, \HT~and angular distributions, is applied in each of the different channels, aiming at reducing the dominant backgrounds. 
No significant deviation from the background expectation is observed in any of the selections.\n\n\begin{figure}[h!]\n \includegraphics[width=80mm]{CDF_trilep_mumu_track_met.pdf}\n \caption{\label{fig:CDF_trilep}Distribution of \met~in one of the signal regions of the CDF analysis with two muons and a track.}\n\end{figure}\n\nCDF recently updated~\cite{CDF_trilep} their previous study on trileptons, considering 5.8\invfb of data and eight different exclusive channels that combine two electrons or two muons with a third object, which can be an electron, a muon, a tau or a track, in \pt~ranges between 5 and 20~GeV. In order to control the description of the different backgrounds, mostly dominated by Drell-Yan with a misidentified jet, 24 (40) control regions were defined in the dilepton-and-track (trilepton) case. As shown in Figure~\ref{fig:CDF_trilep} for the dimuon-and-track selection, no significant deviation from the SM expectations is observed. CDF excludes at 95\% CL chargino masses below 168~GeV in a CMSSM scenario with \ensuremath{m_{0}}=60~GeV, $\tan\beta=3$, $A_0=0$ and $\mu>0$. This limit is similar to the one obtained by D\O.\n \n\n\subsection{\label{sec:bjets}Searches with $b$-jet Tagging}\nSUSY particles of the third generation, such as the stop, the sbottom and the stau, could have significantly lower masses than the rest of the SUSY particles due to the mixing between the weak left- and right-handed eigenstates.\n\nSearches for the direct production of sbottoms at CDF, using 2.65\invfb~\cite{CDF_sbottom} of data, and D\O, using 5.2\invfb~\cite{D0_sbottom} of data, focus on the simplified case of $\tilde b \to b+\tilde\chi_1^0$. The final-state signature of two $b$-jets~and \met~is exploited by requiring one or two $b$-tagged jets, a lepton veto and some dedicated kinematic variables to reduce the top and QCD multijet backgrounds. 
One loose and one tight selection are imposed by both experiments in order to enhance the sensitivity to different $\tilde b - \tilde\chi_1^0$ mass differences. Since no deviations from the expectations are observed, sbottom masses up to approximately 230--250~GeV are excluded when the LSP mass is below 70~GeV. \n\nD\O~recently published a search for direct stop production with 5.4\invfb~\cite{D0_stop} of integrated luminosity. The stop can decay into many different final states, depending on its own mass and those of other SUSY particles such as charginos, neutralinos and sleptons. In this analysis, the targeted scenario is a decay via a sneutrino: $\tilde t\bar{\tilde t}\to(b e\tilde\nu)(\bar b\mu\tilde\nu)$. The main backgrounds for OSDF dileptons are $Z\to\tau\tau$, dibosons and dileptonic top. A discriminant based on a linear combination of different variables is built, and two selections optimized for small and large stop-sneutrino mass differences are considered. Since the data are found to be in agreement with the SM, limits on the stop mass as a function of the sneutrino mass are derived, significantly extending the previous results, as shown in Figure~\ref{fig:D0_stop}.\n\nAs in the case of direct gaugino production, with a dataset of 1\invfb~the LHC is not yet as sensitive as the Tevatron in searches for the direct production of third-generation particles. Instead, ATLAS developed an analysis with 0.83\invfb of integrated luminosity targeting gluino-mediated production of sbottoms, which has a larger cross section and provides a striking signature of four $b$-jets~and \met~\cite{ATLAS_glsb}. The gluino is assumed to decay via an on-shell or off-shell sbottom to the LSP, and all other SUSY particles are assumed to be decoupled. Four different signal regions are defined, requiring either one or two $b$-tagged jets and \ensuremath{m_{\mathrm{eff}}}~thresholds of 500 or 700~GeV. 
A lepton veto is also applied, and the QCD multijet background is determined in a fully data-driven way, as in the ATLAS search without leptons described in Section~\ref{sec:nolep}. Other SM backgrounds are evaluated using MC and validated with semi-data-driven estimations by requiring one lepton. No significant deviations are observed, and these null results are interpreted in different theoretical models. Figure~\ref{fig:ATLAS_glsbottom} shows the extension of the limits with respect to the Tevatron and to the previous ATLAS results with only 35\invpb of integrated luminosity, in the scenario in which the gluino is heavier than the sbottom and all the other SUSY particles are set at a higher scale, except for the neutralino, which has a mass of 60~GeV.\n\n\begin{figure}[h!]\n \includegraphics[width=72mm]{D0_stop}%\n \caption{\label{fig:D0_stop} Observed and expected 95\% CL exclusion regions on the scalar top mass for different sneutrino mass values in the direct stop search performed by D\O. The shaded band around the expected limit shows the effect of the scalar top quark pair production cross section uncertainty. Limits from previous analyses are also shown for reference. }\n\end{figure}\n\n\begin{figure}[h!]\n \includegraphics[width=75mm]{ATLAS_glsb}%\n \caption{\label{fig:ATLAS_glsbottom} Exclusion limits at 95\% CL in the gluino-sbottom mass plane for the ATLAS gluino-mediated sbottom production analysis. Here, the neutralino mass is set to 60~GeV, and other limits are shown for reference, including the direct sbottom constraints from the Tevatron in the same scenario.}\n\end{figure}\n\nIn addition, ATLAS performed a gluino-mediated stop search with 1.03\invfb~\cite{ATLAS_glst} of integrated luminosity. In this case, the gluino is forced to decay to the LSP via an on-shell or off-shell stop. In the former case, the stop decays into $b\tilde\chi_1^\pm$ or $t\tilde\chi_1^0$, depending on the mass. 
The search is performed by requiring four jets, one lepton and at least one $b$-tagged jet, as well as large \met, \ensuremath{m_{\mathrm{eff}}}~and transverse mass between the lepton and the \met. The SM expectation, estimated via fully or semi-data-driven techniques, is $54.9\pm 13.6$ events, and 74 events are observed in data. Gluino masses below approximately 500~GeV are excluded, with a small dependence on the stop mass.\n\n\subsection{\label{sec:photon}Searches with Photons}\nOne of the most favorable SUSY models with photons in the final state is GMSB~\cite{GMSB}. In this model, SUSY particles acquire masses via gauge interactions, and these masses are proportional to the breaking scale $\Lambda$. In this context, the gravitino is always the LSP, and different types of next-to-LSP (NLSP) can be considered. In the case of a $\tilde\chi_1^0$ NLSP that is mostly bino\footnote{The SUSY partner of the U(1) gauge boson\label{fn:bino}}, the predominant decay is to a photon and a gravitino, yielding a diphoton and \met~signature. Backgrounds to this signature can be classified into QCD ``instrumental'' backgrounds (mainly from diphoton, photon+jet and dijet production), electroweak ``genuine'' backgrounds ($\gamma+(W\to e\nu)$) and irreducible backgrounds ($(Z\to\nu\nu)+\gamma\gamma$ and $(W\to l\nu)+\gamma\gamma$). The former two can be treated using data-driven techniques, while the latter is usually small and assessed using MC predictions.\n\nAll four experiments performed a search for this final state using very similar techniques and reported null results. The Tevatron searches focused on the GMSB SPS8 scenario~\cite{SPS8}, which is dominated by gaugino pair production. D\O, with 6.3\invfb of data, excluded $\tilde\chi_1^0$ masses below 175~GeV~\cite{D0_photons}, and CDF, with a smaller dataset of 2.6\invfb, constrained the NLSP masses also as a function of the NLSP lifetime~\cite{CDF_photons}. 
Since the LHC is more sensitive to strong production, the experiments targeted the Generalized Gauge Mediated (GGM) model~\cite{GGM}, in which the constraints at the GUT scale are relaxed to allow almost arbitrary values of the squark and gluino masses. Both ATLAS~\cite{ATLAS_photons} and CMS~\cite{CMS_photons}, with approximately 1\invfb of data, excluded squark (gluino) masses below $\sim 700$ ($\sim 800-900$)~GeV, when assuming all other SUSY particles to be at higher scales. In addition, as shown in Figure~\ref{fig:ATLAS_SPS8}, ATLAS produced for the first time exclusion limits in the SPS8 scenario, extending the D\O~limits by $\sim 30$~GeV in the $\tilde\chi_1^0$ mass.\n\n\begin{figure}%\n \includegraphics[width=70mm]{ATLAS_SPS8_interp}%\n \caption{\label{fig:ATLAS_SPS8}ATLAS expected and observed 95\% CL upper limits on the SPS8 production cross section as a function of $\Lambda$ and of the lightest chargino and neutralino masses.}\n\end{figure}\n\n\n\section{\label{sec:RPV}RPV ANALYSES}\n$R$-parity violating terms in the SUSY Lagrangian are strongly constrained by experimental limits (e.g.\ the proton lifetime)~\cite{Rparity}. Experiments usually assume all couplings to be zero except for the least constrained ones, such as $\lambda'_{311}$ and $\lambda_{312}$, where the indices refer to the family and the couplings are described in the superpotential as $\lambda_{ijk}\hat L_i \hat L_j \hat E_k +\lambda'_{ijk}\hat L_i \hat Q_j \hat D_k$. 
Searches in RPV scenarios focus on finding a resonance produced by the decay of SUSY particles into SM particles.\n\n\begin{figure}[h!]\n \includegraphics[width=80mm]{D0_snutau}%\n \caption{\label{fig:D0_snutau}Invariant mass of the $e\mu$ final states for different SM processes and for two signal samples used as reference in the D\O~stau neutrino search.}\n\end{figure}\n\n\subsection{\label{sec:snutau}Searches for Scalar Tau Neutrino}\nA search for an RPV scalar tau neutrino decaying to an electron and a muon was carried out at D\O~using a data sample of 5.3\invfb~\cite{D0_staunu}. After requiring exactly one electron and one muon and applying cuts to reduce the contamination from jets faking leptons, no evidence of a mass resonance peak is found, as shown in Figure~\ref{fig:D0_snutau}. A similar analysis, but requiring opposite-sign leptons and using some different background techniques, was performed by ATLAS with 0.87\invfb~of data~\cite{ATLAS_staunu}. No deviation from the SM was found, and the limits are translated into a plane of the $\tilde\nu_\tau$ production coupling ($\lambda'_{311}$) against the $\tilde\nu_\tau$ mass for different values of the decay coupling ($\lambda_{312}$), as shown in Figure~\ref{fig:ATLASD0_RPV}. These limits exemplify the current complementarity between the experiments, since D\O~is more competitive at lower masses, whereas ATLAS is more sensitive at higher masses.\n\n\begin{figure}[h!]\n \includegraphics[width=85mm]{ATLAS_snutau}%\n \caption{\label{fig:ATLASD0_RPV}Upper 95\% CL limits on the $\lambda'_{311}$ coupling as a function of the $\tilde\nu_\tau$ mass for three values of $\lambda_{312}$. 
Regions above the curves are excluded by either ATLAS or D\\O~scalar tau neutrino searches.}\n\\end{figure}\n\n\\subsection{\\label{sec:jetresonance}Searches for Jet Resonances}\nBoth the CDF (with 3.2\\invfb of data)~\\cite{CDF_3jet} and CMS (with 35\\invpb of data)~\\cite{CMS_3jet} collaborations performed a search for gluino pair production with each gluino decaying into three jets. The search for two 3-jet resonances in a 6-jet final state is performed by exploiting the kinematic relationship between the jet triplet scalar \\pt~and the invariant mass of the three jets. In this way, the experiments manage to reduce the combinatorics and reject the QCD multijet backgrounds, as shown in Figure~\\ref{fig:jetresonance}. The complementarity between the experiments allows full coverage of a mass range from 77 to 500~GeV. With this technique, CDF excludes RPV gluino masses below 144~GeV (a $2~\\sigma$ excess is found around the top mass) and CMS excludes gluino masses between 200 and 280~GeV (a $1.9~\\sigma$ excess is found at 380~GeV).\n\n\\begin{figure}[h!]\n \\includegraphics[width=85mm]{CMS_3jetresonance}%\n \\caption{\\label{fig:jetresonance}Simulated triplet jet invariant mass versus the triplet scalar \\pt~of all possible combinations for a 250~GeV gluino mass. All triplets falling to the right of the red dashed line pass the final selection. In the inset, the combinations before and after the selection are shown.}\n\\end{figure}\n\n\\section{\\label{sec:summary}SUMMARY AND OUTLOOK}\nSearches for supersymmetry have been carried out at the Tevatron and the LHC colliders. Thanks to the complementarity between the machines, many different final states and mass ranges have been carefully scrutinized. 
Since no significant deviations from the SM predictions have been found, the vast parameter space available for SUSY has been substantially reduced, and the most probable scenarios predicted by electroweak precision tests are now excluded or constrained by the new stringent limits. The question of whether SUSY really exists, and whether it is within the reach of current collider experiments, is becoming ever more pressing.\n\nOne of the great virtues of SUSY is the stabilization of the electroweak sector. The radiative corrections to the Higgs mass require a relatively low stop mass in order to avoid excessive fine tuning. This also means that gluino masses should be relatively light, since they contribute to the stop mass corrections. Thus, in order to preserve naturalness arguments for SUSY, two main scenarios can be envisioned: one is the existence of heavy squarks, an intermediate gluino, and light stops and gauginos, and the other is the presence of a SUSY spectrum compressed into a narrow range of masses, which would evade the current searches at colliders and would also mean that the SUSY breaking scale resides at relatively low energies. 
Both scenarios are still possible and will probably determine the roadmap of the searches in the coming years, at least until the LHC is able to reach the nominal 14~TeV center-of-mass energy and provide a more conclusive answer to the current open questions in our understanding of the universe.\n\n\n\\bigskip\n\\begin{acknowledgments}\nThe author would like to thank the organizers for their hospitality and their commitment to make this conference a successful event.\n\n\\end{acknowledgments}\n\n\\bigskip\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThis paper presents an integrated deep-learning-based system, contingent on monocular images and fixed single-beam echo-sounder (SBES) measurements, for navigating an underwater robot in unknown 3D environments with obstacles.\n\nObstacle avoidance is fundamental for Autonomous Underwater Vehicles (AUVs) to safely explore the largely unmapped underwater realms (e.g., coral reefs, shipwrecks).\nHowever, the underwater environment itself poses unique challenges in regards to safe navigation, which is still an open problem for AUVs~\\cite{petillot2019underwater}.\nThere are limited sensors and positioning systems (e.g., GPS) that accurately measure the surroundings and operate underwater, thus preventing the use of well-established navigation methods~\\cite{pfrunder2017real} that were originally designed for ground vehicles with sensors like LiDAR.\nIn addition, the sensor configurations in low-cost AUVs, equipped with monocular camera, inexpensive IMU, compass, and fixed SBES, bear their own individual drawbacks, such as no scale information and drifting\/uncertain measurements.\nThese challenges make the classic methods for obstacle avoidance and navigation in unknown environments -- i.e., those which (1) estimate the geometry of the space using sensors with direct~\\cite{engel2017direct} or indirect~\\cite{campos2021orb, rahman2019iros-svin2} state estimation methods and (2) 
apply specific behaviors or planning in the partial map (e.g., Vector Field Histogram~\\cite{Panagou2014}, Dynamic Window Approach~\\cite{fox1997dynamic}) -- not directly applicable in underwater scenarios.\n\n\\begin{wrapfigure}[19]{R}{0.48\\textwidth}\n\\vspace{-2.5em}\n\\includegraphics[width=0.48\\textwidth]{figs\/beauty_obstacle_avoidance_pengzhi_3.pdf}\\vspace{-1.2em}\n\\caption{How to guide an underwater robot to 3D waypoints given only monocular images, fixed echo-sounder range measurements, and a localization system, but \\emph{no map}, while also avoiding obstacles?}\n\\label{fig:beauty}\n\\end{wrapfigure}\n\nWith recent advances in deep reinforcement learning (DRL)~\\cite{kober2013reinforcement,mnih2015human}, several end-to-end methods based on deep neural networks, mapping raw images directly to control outputs, have emerged.\nThese end-to-end methods -- typically tasked with endlessly navigating or reaching a visual target -- demonstrated good performance for ground robots in unknown environments~\\cite{xie2018wheels}. In comparison, underwater domains pose problems for learning-based vision navigation due to a more complex image formation model that results in, e.g., backscattering and light attenuation.\n\nThis paper proposes a goal-oriented end-to-end DRL navigation approach, given that classical planning methods are not straightforward to apply as they require accurate maps, which are difficult to obtain due to the underwater perception challenges described above. In particular, we design the first multi-modal end-to-end underwater navigation system for unstructured 3D environments for which no map is available, based on Proximal Policy Optimization (PPO)~\\cite{wijmans2019ddppo}, which allows for a continuous action space. The provided inputs are goal positions, estimated depth images, and range measurements from the fixed SBES. 
The monocular camera and fixed SBES keep the AUV's cost low, while exploiting and complementing the individual sensors' strengths -- i.e., the large field of view of the monocular camera, which can provide relative scene depth, and the absolute range measurement from the SBES.\nWe also propose a method to mitigate the sim-to-real gap problem by incorporating domain randomization into our system. We generated realistic simulated environments with different underwater visibility and randomized training environments, enhancing the model's robustness to the changing visual conditions of real underwater domains.\nExtensive experimental analysis, with tests and ablation studies of the proposed navigation system, was conducted both in simulation and in the real world. Results demonstrated high safety and efficiency compared to traditional navigation baselines and other sensor\/model configurations, as well as reliable transferability to new environments.\n\n\n\\section{Related Work}\\label{sec:relatedwork}\nObstacle avoidance and navigation without a prior map have been studied starting with wheeled mobile robots equipped with bumpers and sonar sensors~\\cite{choset2005principles} and later branching off into different environments and sensor configurations.\nFor underwater domains, one of the main challenges is the limited choice of sensors.\nWhile some underwater LiDAR solutions are available~\\cite{mcleod2013autonomous}, they are expensive (US\\$100,000 or more) and bulky -- requiring a laser scanner and a camera. In addition, there is a lack of global positioning systems, and acoustic-based positioning systems are affected by noise, making mapping underwater challenging~\\cite{petillot2019underwater}.\nOur goal is to enable navigation for low-cost AUVs. 
Therefore, in the following, we discuss applications using sensors (i.e., SBES, cameras) that are typically configured on low-cost underwater robots.\n\n\nIn practice, many underwater navigation systems depend on acoustic, inertial, and magnetic sensors \\cite{kinsey2006navigation,williams2001navigation,paull2013auv}.\nFor example, Calado \\textit{et al.}~\\cite{calado2011obstacle} proposed a method where the robot used an SBES to detect obstacles and construct a map of them.\nHowever, an SBES can only provide a single fixed distance measurement and has high uncertainty given the wide beam cone -- around \\ang{30}. \nTo infer more about a complex scene, the robot must frequently turn in multiple directions, which negatively affects navigation efficiency. \nAlternatively, multi-beam and mechanical scanning sonars can cover a larger field of view~\\cite{petillot2001underwater}. \nHern\\'{a}ndez \\textit{et al.}~\\cite{hernandez2015online} used a multi-beam sonar to simultaneously build an occupancy map of the environment and generate collision-free paths to the goals. Grefstad \\textit{et al.}~\\cite{grefstad2018navigation} proposed a navigation and collision avoidance method using a mechanically scanning sonar for obstacle detection.\nHowever, a scanning sonar takes a few seconds to scan a 360$^{\\circ}$ view. \nThe acoustic sensors' accuracy depends on the environment structure and the type of reflections that arise. In addition, multi-beam and mechanical scanning sonars are significantly more expensive than monocular cameras and SBES (on the order of $>$US\\$10k vs.\\ US\\$10 - US\\$100). \n\nWhile cameras have been shown to provide dense real-time information about the surroundings out of the water~\\cite{liu2015learning}, there are fewer underwater obstacle avoidance methods that use cameras. The underwater domain indeed poses significant challenges, including light attenuation and scattering. 
\nMost work considers reactive controls, i.e., no goal is specified. \nRodr{\\'\\i}guez-Teiles \\textit{et al.}~\\cite{rodriguez2014vision} segmented RGB images to determine the direction for escape. Drews-Jr \\textit{et al.}~\\cite{drews2016dark} estimated a relative depth map using the underwater dark channel prior and used that estimate to determine the action. \nThere have been recent efforts in 3D trajectory optimization for underwater robots.\nXanthidis \\textit{et al.}~\\cite{xanthidis2020navigation} proposed a navigation framework for AUV planning in cases when a map is known or when a point cloud provided by a visual-inertial SLAM system~\\cite{rahman2019iros-svin2} is available. Our proposed method navigates the robot to 3D waypoints without an explicit representation of the environment.\n\nRecently, deep learning (DL) methods have been shown to work well with underwater robots.\nManderson \\textit{et al.}~\\cite{manderson2018vision} proposed a convolutional neural network that takes RGB images as input and outputs unscaled, relative path changes for AUV driving. The network was trained with human-labeled data, with each image associated with desired changes in yaw and\/or pitch to avoid obstacles and explore interesting regions. \nLater it was extended with a conditional-learning-based method for navigating to sparse waypoints, while covering informative trajectories and avoiding obstacles~\\cite{manderson2020vision}. Our proposed method does not require human-labeled data.\n\nAmidst the progress in DRL, there is more research on robots operating out of water with monocular cameras.\nSome of these methods addressed the problem of safe endless 2D navigation without specifying any target location. \nXie \\textit{et al.}~\\cite{xie2017monocular} trained a Double Deep Q-network to avoid obstacles in simulated worlds and tested it on a wheeled robot. 
Kahn \\textit{et al.}~\\cite{kahn2018self} proposed a generalized computation graph for robot navigation that can be trained with fewer samples by subsuming value-based model-free and model-based learning. \nOther works provided the goal as a target image instead of a location~\\cite{zhu2017target,devo2020towards,wu2020towards}. \nSome methods, based on an end-to-end network, guided the robot to the goal using LiDAR or RGB-D cameras~\\cite{pfeiffer2017perception,xie2018wheels,zhang2017deep, liang2021crowd} and the goal's relative position for path planning. \nRecently, a DD-PPO based method was used to navigate a robot in an unknown indoor (simulated) environment, using an RGB-D camera, GPS, and compass~\\cite{wijmans2019ddppo}. Our method is based on PPO, with the additional challenge of not having depth information directly from the camera.\n\nNevertheless, due to the difficulties of applying DRL in real-world environments, most works performed training in simulation.\nHowever, policies learned in simulated environments may not transfer well to the real-world environment, due to the reality (sim-to-real) gap~\\cite{tobin2017domain}.\nTo address this, several methods utilized domain randomization, where parameters of the simulated world were varied so that the learned policies remained robust in the real-world domain.\nFor example, Sadeghi and Levine~\\cite{sadeghi2016cad2rl} proposed a DRL approach for indoor flight collision avoidance trained only in CAD simulation that was able to generalize to the real world by highly randomizing the simulator's rendering settings. 
%\n\nOur approach draws from the advances in DRL: we design an end-to-end pipeline for low-cost underwater robot navigation to address the underwater challenges, combining multiple sensors and applying domain randomization.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[trim={0cm .4cm 0cm 0cm}, clip, width=.9\\textwidth]{figs\/flowchart.png}\n \\vspace{-1em}\n \\caption{\\textit{Flowchart for the Proposed End-to-End Underwater 3D Navigation System.}\n The pipeline includes two stages: a depth prediction module (DPT) followed by a decision-making module (PPO). During training, at each episode $i$, the robot is deployed in a randomized simulated environment. The predicted depth map $o^\\textrm{imageDepth}_t$ of the raw RGB image $o^\\textrm{imageRGB}_t$, relative goal position $o^\\textrm{goal}_t$, echo-sounder reading $o^\\textrm{range}_t$, and previously executed action $a_{t-1}$ are stacked with the past $k$ observations from the previous time steps to feed into the PPO network (solid lines). The robot performs the action sampled from the output policy distribution. New observations (dashed lines) are then obtained for computing the next action at time step $t+1$. \n During real-world deployment, DPT's computationally less expensive counterpart MiDaS was used as the depth prediction module for real-time inference.\n }\n \\label{fig:system_overview}\n \\vspace{-15pt}\n\\end{figure*}\n\n\n\\section{Approach}\\label{sec:approach}\nThe problem considered in this paper is as follows: an underwater robot deployed in an unknown environment needs to navigate to a goal location $G \\in \\mathbb{R}^3$, minimizing the travel time, while avoiding collisions with obstacles. 
\n\nTo develop a mapless navigation solution for low-cost robots, we consider an underwater thruster-vectored robot that has an inexpensive sensor suite composed of: (1) a monocular camera, (2) a SBES placed below the camera and looking forward, (3) a compass, (4) pressure sensor for water depth, and (5) a (noisy) localization system. Selecting this sensor configuration allows us to exploit the larger field of view (FOV) covered by the camera while obtaining absolute front distance estimates with the fixed SBES.\n\nFor a general solution, robust to noise and changing visual conditions, we approach the real-time 3D navigation problem by devising an end-to-end system ( see \\fig{fig:system_overview} ) based on a neural network for dense depth prediction from monocular images and on a deep reinforcement learning method that takes as input %\nthe sensor suite data and outputs vertical and steering commands. %\nWe consider a window of prior measurements and executed actions given the absence of prior knowledge of the environment. 
\n\nIn the remainder of this section, we describe in detail the RL approach, the depth prediction network, and how to address the sim-to-real gap.\n\n\\subsection{Multi-Modal Deep Reinforcement Learning Navigation}\nGiven an unknown environment, the navigation problem can be formulated as a Partially Observable Markov Decision Process (POMDP), defined with a 6-tuple: state space $S$ that cannot be directly observed by the robot, action space $A$ modifying the current state of the robot, observation space $\\Omega$, a state-transition model $T$, the observation probability distribution $O$, and a reward function $R$ which returns the reward after a state transition.\n\n\\textbf{Observation space.} The observation $O_t$ at time step $t$ consists of: (1) the predicted depth image $o^\\textrm{imageDepth}_t \\in \\mathbb{R}^{128\\times160}$; (2) an SBES range measurement $o^\\textrm{range}_t \\in \\mathbb{R}$; (3) the current relative goal position $o^\\textrm{goal}_t \\in \\mathbb{R}^3$ -- specifically, $[D^h_t, D^v_t, \\theta^h_t]^\\top$, where $D^h_t$, $D^v_t$ are robot's current horizontal, vertical distances to the goal and $\\theta^h_t$ represents the relative yaw heading difference; and (4) the past executed actions $o^\\textrm{action}_t \\in \\mathbb{R}^2$. We stack observations considering a time window $k$ to capture the robot's progress towards the goal and to avoid obstacles that left the periphery view. In experiments, model using 5 time steps (decision period lasts 0.5 second for each step) showed good performance without adding too much computational expense.\n\n\\textbf{Action space.} The action space is $a_t = [v_t,\\omega_t] \\in \\mathbb{R}^2$, where $v_t$ is the vertical linear velocity and $\\omega_t$ is the yaw angular velocity. To generalize the applicability of the learned behavior to different robots, we consider the actions to be in a range of $[-1.0, 1.0]$ which will be linearly mapped to the range of velocities of a specific robot. 
Note that while we could include the horizontal forward linear velocity, we decided to keep it constant to facilitate surveying missions that require the same velocity to collect consistent high-quality measurements. \n\n\nThe action is then given by the policy:\n\\begin{equation}\n\\small\n a_t = \\pi(O_t)%\n %\n\\end{equation}\nThe goal is to find the optimal policy $\\pi^*$ which maximizes the navigation policy's expected return over a sequence $\\tau$ of observations, actions, and rewards:\n\\begin{equation}\n\\small\n \\pi^* = \\argmax_\\pi \\mathbb{E}_{r\\sim p(\\tau|\\pi)}\\Big[\\sum\\gamma^t r_t\\Big]\n\\end{equation}\n\\noindent where $\\gamma \\in [0,1.0]$ is the discount factor. The optimal policy would translate in a path that is safe and minimizes the time it takes to travel to the goal.\n\n\n\n\n\n\n\n\\textbf{Reward function.} Our reward function $r_t$ at time $t$ encodes the objectives to stay not too close to any obstacle ($r^{\\textrm{obs}}_t$) and to reach the goal area as soon as possible ($r^{\\textrm{goal}}_t$). \n\nWhen the robot is close to an obstacle, it will compute a negative reward: %\n\\begin{equation}\n\\small\n r_t^{\\textrm{obs}} = \n \\left\\{\n \\begin{array}{lr}\n %\n \n -r_{\\textrm{crash}}, & d_t^h < \\delta_h \\lor\\,d_t^v < \n \\delta_v \\lor\\,d_t^{\\textrm{sur}} < \\delta_v\\\\\n -s_0(2\\delta_h - d_t^h), & \\delta_h \\leq d_t^h < 2\\delta_h \\\\\n 0 & \\textrm{otherwise}\n \\end{array}\n \\right.\n\\end{equation}\nwhere $\\delta_h$, $\\delta_v$ represent the thresholds for the distances of the robot to the closest obstacle $d_t^h$, $d_t^v$ -- horizontally or vertically, respectively. 
\nWe also check the distance to the water surface $d_t^{\\textrm{sur}}$, as there might be surface obstacles that cannot be detected given the sensor configuration of the robot.\nThe threshold values $\\delta_h$, $\\delta_v$ should account for the robot's size and turning radius.\nWhen any of the constraints are met -- i.e., the robot is too close to an obstacle or the surface -- the current episode terminates with a large negative constant reward $-r_{\\textrm{crash}}$.\nIn addition, to promote safety, a penalty for motions within a range $[\\delta_h, 2\\delta_h)$ of distance to nearby obstacles is given according to the current distance.\nOtherwise, if the robot is far from the obstacles, no negative reward is applied.\n\nTo guide the robot towards the goal both horizontally and vertically, we split the goal-based reward into two parts.\nFirst, the horizontal goal-based reward:\n\\begin{equation}\n\\small\n r_t^{\\textrm{goalh}} = \n \\left\\{\n \\begin{array}{lr}\n -s_1|\\theta_t^h|, & \\Delta_h < D_t^{h} \\\\\n r_{\\textrm{success}} - s_2|\\theta_t^h|, & \\textrm{otherwise}\n \\end{array}\n \\right.\n\\end{equation}\nIf the robot's horizontal distance to the goal $D_t^h$ is greater than a threshold $\\Delta_h$, then the penalty is based on the robot's orientation to the goal -- i.e., a robot already facing the goal gets a smaller penalty, as the constant forward velocity will ensure a shorter arrival time.\nOtherwise, if the robot is within the goal area, there is a positive reward with a preference for the robot's orientation towards the goal.\n\nLikewise, the vertical goal-based reward:\n\\begin{equation}\n\\small\n r_t^{\\textrm{goalv}} = \n \\left\\{\n \\begin{array}{lr}\n s_3|\\dot{D}_t^v|, & \\dot{D}_t^v \\leq 0 \\land\\,\\Delta_h < D_t^h \\\\\n - s_3|\\dot{D}_t^v|, & \\dot{D}_t^v > 0 \\land\\,\\Delta_h < D_t^h \\\\\n - s_4|D_t^v|, & \\textrm{otherwise}\n \\end{array}\n \\right.\n\\end{equation}\nWhen the robot is not near the goal, 
the vertical goal-based reward is a positive value if the change in vertical distance over time $\\dot{D}_t^v$ is negative or 0 -- i.e., the robot is getting closer to the target depth.\nOn the contrary, it is a negative value if the change is positive -- i.e., the robot is getting farther from the target depth.\nOtherwise, if the robot is within the goal area, the negative reward is relative to the distance to the target depth.\nThis split (horizontal and vertical) of the goal reward showed better stability in experiments than a single combined goal reward, potentially due to the separate focus on two mostly independent actions.\n\nThe above obstacle- and goal-based rewards conflict with each other; they could lead to oscillations at local optima when an obstacle is nearby.\nThus, we devised a priority-based strategy (when the robot is not in the goal area) that focuses on moving away from the obstacle by scaling $r_t^{\\textrm{goalh}}$:\n\\begin{equation}\n\\small\n \\begin{array}{lr}\n r_t^{\\textrm{goalh}} \\ *\\!= \n s_5(d_t^h - \\delta_h)\/\\delta_h, & \\Delta_h < D_t^{h} \\land \\delta_h \\leq d_t^h < 2\\delta_h \\label{con:priority}\n \\end{array}\n\\end{equation}\n\nIn all the reward equations, $s_0, \\ldots, s_5$ are positive scaling factors. Intuitively, they are set so that the rewards are on an appropriate scale for balanced training performance. \n\nFinally, the collective reward at time $t$ is obtained as:\n\\begin{equation}\n\\small\n r_t = r^{\\textrm{obs}}_t + r^{\\textrm{goalh}}_t + r^{\\textrm{goalv}}_t\n\\end{equation}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[trim={0cm 0.cm 0.cm 0},clip,width=0.45\\textwidth]{figs\/network_structure.pdf}\n \\includegraphics[width=0.4\\textwidth]{figs\/training_env.png}\n \\vspace{-1em}\n \\caption{(left) \\textit{Network Architecture.} Predicted depth images are processed by three convolutional layers (orange). 
Its output is flattened and concatenated with feature vectors (green) representing the stacked relative goal positions, echo-sounder readings, and past actions. The final fully-connected layer outputs a navigation policy and state value. (right) \\textit{Top View of the Training Env.} Our model was trained in the above simulated environment in area A (an inside area with fewer obstacles and smaller space) and area B (an outside area with more obstacles and larger space).}\n \\label{fig:training_setting}\n \\vspace{-2em}\n\\end{figure}\n\n\\textbf{Network architecture.}\nThe network structure depicted in \\fig{fig:training_setting}(left) illustrates how we integrate the information vectors from the sensors. First, the stacked predicted depth images are processed by three convolutional layers; then the flattened output $\\in \\mathbb{R}^{512}$ is concatenated with processed feature vectors consisting of the stacked relative goal positions $\\in \\mathbb{R}^{96}$, SBES readings $\\in \\mathbb{R}^{32}$, and past actions $\\in \\mathbb{R}^{64}$. Specifically, the combined echo-sounder readings provide an implicit scale for the relative depth prediction without requiring calibration. The network produces a navigation policy and a state value. \n\n\\subsection{Image Depth Prediction Network} \\label{Combined Perception Inputs}\n\nAccurate image depth prediction is important for our navigation pipeline to work.\nPrevious work used ground truth simulated depth images with Gaussian noise as input for training and applied depth estimation during deployment~\\cite{xie2017monocular}.\nHowever, this broadens the sim-to-real gap, as real-world noise in depth predictions is more complex than the implemented simulated noise models~\\cite{sweeney2019supervised}.\nInstead, we utilized one of the latest monocular depth prediction networks, the Dense Prediction Transformer (DPT)~\\cite{ranftl2021vision}, which has an encoder-decoder design and applies a transformer as the encoder's main building block. 
We selected DPT over other deep neural networks for depth prediction for its state-of-the-art performance in single-view depth estimation and its robustness across diverse environments.\n\n\\subsection{Transferable Model} \n\nDRL often has the problem of generalization: models trained in one domain fail to transfer to other domains even if there are only small differences between the domains~\\cite{cobbe2019quantifying}. \nUnlike in-air images, images taken underwater look drastically different across various environments due to the more complex lighting and backscattering effects~\\cite{akkaynak2018underwater}.\nThus, training the model in a single fixed environment would lead to over-fitting to that environment's visual conditions. \nOne solution is to retrain the depth prediction network with an existing underwater image depth dataset, which, however, is not available. Another solution is to enhance the input underwater images to their approximate in-air counterparts~\\cite{roznere2019color, akkaynak2018underwater}.\nYet, most image enhancement techniques require difficult-to-retrieve information (e.g., water attenuation coefficients, depth maps).\n\nOur approach is to integrate underwater features into the simulation used for training.\nWe modified an existing underwater simulator framework for games to create the training and testing simulations for our proposed approach. The framework contains custom shaders that incorporate a light transmission model to simulate underwater optical effects, thus providing a good amount of realism. \n\n\\textbf{Domain randomization.} \nWe integrated domain randomization to generate underwater environments with different visual conditions, thus enabling transferability. 
%\nIn particular, at the start of every training episode, we randomize the underwater visibility -- i.e., how quickly visibility degrades over distance.\nVisibility was selected as it significantly impacts the relative depth estimation, thus affecting to a large extent how the robot perceives its surroundings.\n\nWe decided not to apply domain adaptation~\\cite{peng2020learning} -- i.e., the process of learning different environment encodings and corresponding adapted policies during training, so that during testing the best environment encoding is found along with the corresponding adapted policy -- because searching for the best environment encoding is not very practical for underwater deployments.\nFor instance, the search would require robot motions towards obstacles to identify the (potentially changing) visibility features of the specific environment. \n\n\\textbf{Multi-scenario training.}\nWe built the simulated training environment with the Unity Engine\\footnote{\\scriptsize \\url{http:\/\/www.unity.com\/}}. \nWe generated two activity areas to represent two classes of environments that an AUV might encounter: \\textit{A} -- a small area with fewer obstacles, and \\textit{B} -- a big cluttered area with obstacles at various positions and heights (see \\fig{fig:training_setting}(right)).\nIn each training episode, the robot's starting pose and goal location are randomly reset in the environment.\nThis exposure to different training scenarios makes it more likely that the learned policy can handle more complex environments~\\cite{tobin2017domain}. 
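The episode-level randomization described above can be sketched as follows. This is an illustrative outline, not the authors' Unity code: the visibility range matches the training settings reported in the experiments, while the goal-depth range and the `start_area` key are hypothetical placeholders standing in for the simulator's reset parameters:

```python
import random

# Visibility range (in meters) used during training, per the experiments.
VISIBILITY_RANGE_M = (3.0, 39.0)

def randomized_episode_config(rng=random):
    """Sample a new environment configuration for one training episode."""
    return {
        # How far the robot can see before backscatter washes out the scene.
        "visibility_m": rng.uniform(*VISIBILITY_RANGE_M),
        # Start area and goal are also re-sampled every episode; the areas
        # correspond to the paper's training regions A and B.
        "start_area": rng.choice(["A", "B"]),
        # Hypothetical goal-depth range for illustration only.
        "goal_depth_m": rng.uniform(0.5, 5.0),
    }

random.seed(0)
cfg = randomized_episode_config()
```

Sampling a fresh configuration at every episode, rather than once per training run, is what prevents the policy from over-fitting to a single visual condition.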
%\n\n\\section{Experimental Results}\nWe trained and performed experiments in simulation, in the real world with a vector-thruster underwater robot, and with underwater datasets to validate our DRL-based multi-modal sensor navigation system.\nWe performed comparisons and ablation studies with other methods.\nOur framework is publicly available\\footnote{\\scriptsize\\url{https:\/\/github.com\/dartmouthrobotics\/deeprl-uw-robot-navigation}}.\n\n\\subsection{Training Experimental Settings}\nOur model was first trained and tested on a workstation with two 12GB NVIDIA 2080Ti GPUs.\nIt was implemented with PyTorch and the Adam optimizer~\\cite{kingma2014adam}.\n\nIn simulation, the robot's forward velocity, vertical velocity range, and yaw angular velocity range were set to \\SI{0.345}{m\/s}, \\SIrange{-0.23}{0.23}{m\/s}, and \\SIrange[parse-numbers = false]{-\\text{$\\pi$}\/6}{\\text{$\\pi$}\/6}{rad\/s}, respectively.\nWhile the training environment allows for higher velocities, we chose low velocities to avoid any ``jerky'' motion that could happen with the AUV at high speed. The camera's horizontal and vertical FOVs were set to \\ang{80} and \\ang{64}. 
The simulated echo-sounder's max detection range was set to \\SI{4}{m}; these settings are all consistent with the real-world sensor configuration.\nThe simulation environments' visibility value was randomly chosen within the range of \\SIrange{3}{39}{m}.\n\nWe trained for 250 iterations -- each with at least 2048 time steps -- and observed that the reward stabilized after around 120 iterations (learning rate of 3e-5).\nThe constant and threshold values for the reward function -- i.e., $r_{\\textrm{success}}$, $r_{\\textrm{crash}}$, $\\Delta_h$, $\\delta_h$, and $\\delta_v$ -- were set to $10$, $10$, \\SI{0.6}{m}, \\SI{0.5}{m}, and \\SI{0.3}{m}, while the scaling factors $s_0, s_1, \\ldots, s_5$ were set to $2.0$, $0.1$, $1.0$, $1.0$, $8.0$, $1.0$.\n\n\\subsection{Performance Comparison with Different Sensor Configurations}\n\\vspace{-0.5em}\n\\begin{figure}[t]\n \\centering\n \\resizebox{\\columnwidth}{!}{\n \\begin{tabular}{cccc}\n \\includegraphics[height=.5in, valign=c,trim={0cm 0.7cm 3.5cm 2.3cm},clip]{figs\/test_with_different_configurations2.png}\n \\includegraphics[height=.5in, valign=c,trim={4.6cm 0.7cm 4.95cm 1.3cm},clip]{figs\/withbug2.png}\n & \\includegraphics[height=.5in, valign=c,trim={5.2cm 0.7cm 4.9cm 1.3cm},clip]{figs\/withoutechosounder.png}\n & \\includegraphics[height=.5in, valign=c,trim={5.2cm 0.7cm 4.9cm 1.3cm},clip]{figs\/withechosounder.png} \n \\end{tabular}\n }\n \\caption{\\textit{Partial top view of runs in Cluttered Env. (left): Bug2 (second), Our Model w\/o SBES (third), and Our Model w\/ SBES (right).} Legend: robot's start pose (green dot); obstacles (black dots); waypoints to reach in order (circled numbers). 
%\n %\n }\n \\label{fig:Waypoint_tests_trajectories}\n \\vspace{-2em}\n\\end{figure}\n\n\n\\begin{table*}[b]\n\\centering\n\\caption{\\textit{Waypoint Tests Results.} 10 runs for each of the three methods: Bug2 with multi-beam sonar, our model trained without fixed single-beam echo-sounder, and our proposed model.\nThe travel time average and standard deviation (in seconds) of successful runs for each waypoint were calculated, as well as the overall success ratio to reach all five waypoints.\n}\n\\label{Quantitative_Analysis_Waypoints}\n\\resizebox{\\textwidth}{!}{\n \\begin{tabular}{cccccccc}\n \\toprule\n \\multirow{2}*{Method} & \\multirow{2}*{Sensors} & \\multicolumn{5}{c}{Traveling Time\/s (less is better)} & Success Ratio \\\\\\cline{3-7}\n & & $wp1$ & $wp2$ & $wp3$ & $wp4$ & $wp5$ & (higher is better) \\\\\n \\midrule\n %\n Bug2 & MBS & 57.6 $\\pm$ 0.3 & 66.95 $\\pm$ 0.15 & 41.15 $\\pm$ 0.45 & 69.8 $\\pm$ 0.9 & 77.65 $\\pm$ 0.45 & \\textbf{100\\%} \\\\\n Ours\\ w\/o\\ SBES & Monocular Camera & 51.8 $\\pm$ 5.94 & 56.5 $\\pm$ 2.09 & 35.62 $\\pm$ 8.07 & 47.0 $\\pm$ 2.03 & 76.0 $\\pm$ 2.21 & 40\\% \\\\\n Ours\\ w\/ SBES & Monocular Camera \\& SBES & \\textbf{38.35 $\\pm$ 0.45} & \\textbf{49.8 $\\pm$ 0.78} & \\textbf{29.3 $\\pm$ 0.78} & \\textbf{44.3 $\\pm$ 0.6} & \\textbf{67.25 $\\pm$ 0.6} & \\textbf{100\\%} \\\\\\bottomrule\n \\end{tabular}\n}\n\n\\end{table*}\n\n\nWe first tested the efficiency of our proposed multi-modal low-cost navigation approach against a traditional metric-based goal-oriented navigation method that does not require any map, given that no map of the underwater environment is available. In particular, we selected Bug2 algorithm given its guarantees on the path length. To have Bug2 work effectively, we employed a multi-beam sonar (MBS), a common but expensive sensor for underwater obstacle avoidance, which emits multiple beams in a plane with a typical horizontal FOV of $120^{\\circ}$. 
%\nWe also considered our model trained without the echo-sounder as an ablation study to observe the effect of the SBES. %\n\nWe generated a test environment in simulation with multiple obstacles. The robot's task was to navigate to five randomly set consecutive waypoints.\nWe set all waypoints at the same depth, as typical navigation with an MBS involves the robot first arriving at the target depth and then navigating along the 2D plane.\n\n\n\n\\fig{fig:Waypoint_tests_trajectories} shows the trajectories of the three navigation methods and \\tab{Quantitative_Analysis_Waypoints} reports the quantitative results measured in terms of traveling time and success ratio. \nOur proposed system with an inexpensive monocular camera and SBES achieved the highest navigation efficiency with safety comparable to $\\textrm{Bug2}$ with MBS. While the $\\textrm{Bug2}$ trajectory appeared not to be affected by noise, it spent the longest navigation time, especially when moving along the obstacles. \nNote that the echo-sounder played a fundamental role in safe navigation. If the echo-sounder was excluded, the model relied solely on relative monocular image depth estimation to detect surrounding obstacles. As a result, at times the chosen action might be too conservative, leading to sub-optimal paths in terms of distance, or too aggressive, increasing the likelihood of collision. 
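The reward constants reported in the training settings can be made concrete with a short sketch. This is a hypothetical minimal form: the success bonus, crash penalty, thresholds $\delta_h$ and $\delta_v$, and scaling factor $s_0$ use the values listed above, but the dense progress term and the exact way the terms combine are assumptions of this sketch, not the paper's definition.

```python
# Hypothetical sketch of a per-step waypoint-navigation reward using the
# constants reported in the training settings (r_success = 10,
# r_crash = 10, delta_h = 0.5 m, delta_v = 0.3 m, s0 = 2.0).
# The dense progress term below is illustrative only.

R_SUCCESS = 10.0   # terminal bonus on reaching the waypoint
R_CRASH = 10.0     # magnitude of the terminal collision penalty
DELTA_H = 0.5      # horizontal success threshold (m)
DELTA_V = 0.3      # vertical success threshold (m)
S0 = 2.0           # scaling factor for the progress term

def step_reward(prev_dist, dist, horiz_err, vert_err, crashed):
    """Reward for one transition of the episode."""
    if crashed:
        return -R_CRASH
    if horiz_err < DELTA_H and vert_err < DELTA_V:
        return R_SUCCESS
    # Dense shaping: reward the reduction of the distance to the goal.
    return S0 * (prev_dist - dist)
```

A shaping term of this kind is what makes conservative detours (which reduce the distance slowly) score worse than direct paths, matching the traveling-time comparison above.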
\n\n\n \n\n\n\n\n\n\n\\subsection{Ablation Study with Transferability Tests} \\label{Ablation Study with Transferability Tests}\n\\vspace{-0.5em}\nTo show the transferability of our proposed model to different environments and visibilities, we performed an ablation study with the same hyper-parameters and protocols, but considering the following combinations of training settings in a simulated underwater environment:\n(1) \\textbf{\\textit{Rand}}: proposed domain randomization, (2) \\textbf{\\textit{No Rand (Water)}}: fixed underwater visibility (approximately \\SI{11}{m}), and (3) \\textbf{\\textit{No Rand (Air)}}: no underwater features. \nTo first exhibit the models' generalizability, another simulated environment\\footnote{\\scriptsize\\url{https:\/\/github.com\/Scrawk\/Ceto}} was employed for testing. With different materials, textures, lighting, and custom shaders, it had a different visual appearance compared to the training environment. In this environment, the models were tested in three different scenes, constructed to resemble possible underwater obstacles present in the real world, such as natural structures (Scene1), submerged wrecks (Scene2), and man-made structures (Scene3). 
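The visibility randomization in the \textit{Rand} setting can be sketched as follows. The 3--39 m range matches the training settings; the exponential fog model and the 2\% contrast floor used to convert visibility into a renderer-side density are assumptions of this sketch.

```python
import math
import random

# Minimal sketch of per-episode visibility randomization: a maximum
# visibility is drawn uniformly from the 3-39 m range used in training
# and converted to an exponential fog density for the renderer.
# The fog model and the 2% contrast floor are assumptions.

VIS_MIN, VIS_MAX = 3.0, 39.0

def sample_episode_visibility(rng=random):
    """Maximum visibility (m) drawn at the start of each episode."""
    return rng.uniform(VIS_MIN, VIS_MAX)

def fog_density(visibility, contrast_floor=0.02):
    """Density d such that exp(-d * visibility) == contrast_floor,
    i.e., objects at the visibility limit retain 2% contrast."""
    return -math.log(contrast_floor) / visibility
```

Re-drawing the visibility every episode is what exposes the policy to the blurry, medium, and clear conditions evaluated in the transferability tests.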
\n\n\n\\begin{figure*}[t]\n \\centering\n \\resizebox{\\textwidth}{!}{\n \\begin{tabular}{ccccccc}\n \\textbf{Scenes} & \\multicolumn{2}{c}{\\textbf{\\SI[detect-weight=true]{8}{m}}} & \\multicolumn{2}{c}{\\textbf{\\SI[detect-weight=true]{12}{m}}} &\\multicolumn{2}{c}{\\textbf{\\SI[detect-weight=true]{20}{m}}} \\\\ \n \\textbf{Scene1}\n & \\includegraphics[height=1in,%\n valign=c]{figs\/rgb_env1_V_L.png} \n & \\includegraphics[height=1.2in,%\n valign=c,trim={3cm 1cm 3cm 3cm},clip]{figs\/rand_3_3000.png}\n & \\includegraphics[height=1in,%\n valign=c]{figs\/rgb_env1_V_M.png} \n & \\includegraphics[height=1.2in,%\n valign=c,trim={3cm 1cm 3cm 3cm},clip]{figs\/rand_3_2000.png} \n & \\includegraphics[height=1in,%\n valign=c]{figs\/rgb_env1_V_H.png} \n & \\includegraphics[height=1.2in,%\n valign=c,trim={3cm 1.5cm 3cm 1.5cm},clip]{figs\/rand_3_1000.png}\\\\ \n \\textbf{Scene2}\n & \\includegraphics[height=1in,%\n valign=c]{figs\/rgb_env2_V_L.png} \n & \\includegraphics[height=1.2in, %\n valign=c,trim={3cm 0.8cm 3cm 3cm},clip]{figs\/rand_5_3000.png} \n & \\includegraphics[height=1in,%\n valign=c]{figs\/rgb_env2_V_M.png} \n & \\includegraphics[height=1.2in, %\n valign=c,trim={3cm 0.8cm 3cm 3cm},clip]{figs\/rand_5_2000.png} \n & \\includegraphics[height=1in,%\n valign=c]{figs\/rgb_env2_V_H.png} \n & \\includegraphics[height=1.2in, %\n valign=c,trim={3cm 0.8cm 3cm 3.2cm},clip]{figs\/rand_5_1000.png} \\\\ \n \\textbf{Scene3}\n & \\includegraphics[height=1in,%\n valign=c]{figs\/rgb_env3_V_L.png} \n & \\includegraphics[height=1.2in, %\n valign=c,trim={4cm 1cm 3cm 3.2cm},clip]{figs\/rand_6_3000.png}\n & \\includegraphics[height=1in,%\n valign=c]{figs\/rgb_env3_V_M.png} \n & \\includegraphics[height=1.2in,%\n valign=c,trim={4cm 1cm 3cm 3.2cm},clip]{figs\/rand_6_2000.png}\n & \\includegraphics[height=1in,%\n valign=c]{figs\/rgb_env3_V_H.png} \n & \\includegraphics[height=1.2in,%\n valign=c,trim={4cm 1cm 3cm 3.2cm},clip]{figs\/rand_6_1000.png} \\\\ \n \\end{tabular}\n }\n 
\\caption{\\textit{Example of Trajectories in Different Scenes with Different Training.} Legend: robot's initial position and goal waypoint (green and red dots); robot collision (red ``X''); obstacles (approximated with polygons in the plots for simplicity).\n }\n \\label{figure of 3D trajectories}\n \\vspace{-2em}\n\\end{figure*}\n\n\n\n\\begin{table*}\n\\centering\n\\caption{\\textit{Quantitative Results for Transferability Tests.} 10 runs for the three models in three scenes with different visual conditions. %\nNote: N\/A means the method failed to reach the goal during the runs and bold means the best result.\n}\n\\label{Transferability Comparison Tests}\n\\resizebox{\\textwidth}{!}{\n \\begin{tabular}{ccccccccccc}\n \\toprule\n \\multirow{2}*{Method} & & \\multicolumn{3}{c}{$\\textrm{Scene1}$} & \\multicolumn{3}{c}{$\\textrm{Scene2}$} & \\multicolumn{3}{c}{$\\textrm{Scene3}$} \\\\\\cline{3-5}\\cline{6-8}\\cline{9-11}\n & & Blurry & Medium & Clear & Blurry & Medium & Clear & Blurry & Medium & Clear\\\\\n \\midrule\n %\n & reward & 5.74 $\\pm$ 2.17 & 6.5 $\\pm$ 5.95 & 28.14 $\\pm$ 2.85 & 0.43 $\\pm$ 2.26 &\n 10.93 $\\pm$ 11.31 & 12.05 $\\pm$ 8.92 & 24.64 $\\pm$ 10.19 & 20.58 $\\pm$ 13.7\n & 29.18 $\\pm$ 8.01\n \\\\\n No\\ Rand\\ (Air) & success & 0\\% & 10\\% & \\textbf{100\\%} & 0\\% &\n 40\\% & 50\\% & 70\\% & 60\\% & 90\\%\n \\\\ \n & trav. time & N\/A & 70.0 & 67.2 $\\pm$ 0.84 & N\/A &\n \\textbf{53.12 $\\pm$ 0.65} & 55.2 $\\pm$ 2.84 & 63.29 $\\pm$ 0.88 & 66.5 $\\pm$ 4.53\n & 66.11 $\\pm$ 1.07\n \\\\ \\hline\n & reward & \\textbf{25.27 $\\pm$ 8.42} & 18.35 $\\pm$ 11.18 & 13.46 $\\pm$ 14.51 & 2.19 $\\pm$ 1.78 &\n -1.58 $\\pm$ 5.94 & 15.04 $\\pm$ 10.6 & 18.03 $\\pm$ 11.32 & 30.14 $\\pm$ 7.5\n & 29.42 $\\pm$ 3.27\n \\\\ \n No\\ Rand\\ (Water) & success & \\textbf{90\\%} & 90\\% & 40\\% & 0\\% &\n 10\\% & 70\\% & 60\\% & \\textbf{90\\%} & \\textbf{100\\% }\n \\\\\n & trav. 
time & 70.5 $\\pm$ 4.93 & 88.17 $\\pm$ 18.36 & 69.25 $\\pm$ 1.35 & N\/A &\n 115.0 & 59.79 $\\pm$ 8.25 & 71.42 $\\pm$ 6.9 & 73.39 $\\pm$ 2.63\n & 65.35 $\\pm$ 0.78\n \\\\ \\hline\n \n \n \n & reward & 24.66 $\\pm$ 9.3 & \\textbf{28.39 $\\pm$ 2.26} & \\textbf{29.56 $\\pm$ 2.58} & \\textbf{21.68 $\\pm$ 9.61} &\n \\textbf{23.36 $\\pm$ 7.49} & \\textbf{24.86 $\\pm$ 2.92} & \\textbf{29.17 $\\pm$ 11.34} & \\textbf{30.26 $\\pm$ 9.25}\n & \\textbf{36.26 $\\pm$ 0.83}\n \\\\\n Rand & success & \\textbf{90\\%} & \\textbf{100\\%} & \\textbf{100\\%} & \\textbf{80\\%} &\n \\textbf{90\\%} & \\textbf{100\\%} & \\textbf{80\\%} & \\textbf{90\\%} & \\textbf{100\\%}\n \\\\\n & trav. time & \\textbf{67.56 $\\pm$ 0.44} & \\textbf{68.45 $\\pm$ 0.72} & \\textbf{67.05 $\\pm$ 1.27} & \\textbf{52.0 $\\pm$ 0.35} &\n 53.44 $\\pm$ 1.23 & \\textbf{50.75 $\\pm$ 0.46} & \\textbf{60.75 $\\pm$ 0.56} & \\textbf{62.56 $\\pm$ 0.98}\n & \\textbf{61.05 $\\pm$ 0.57}\n \\\\\n \\bottomrule\n \\end{tabular}\n}\n\n\\end{table*}\n\n\nWe considered three visibility scenarios: blurry, medium, and relatively clear, with maximum visibility ranges of \\SI{8}{m}, \\SI{12}{m}, and \\SI{20}{m}, respectively. \n\\fig{figure of 3D trajectories} shows snapshots of each scene and the resulting trajectories in some sample runs. \n\n\\textbf{Comparison metrics.} \nThe following metrics were used to compare the three methods' performances (see \\tab{Transferability Comparison Tests}):\n\\begin{itemize}\n \\item [1)] \n Rewards (higher is better): cumulative reward average and standard deviation over $10$ runs,\n \\item [2)]\n Success Ratio (higher is better): number of times the robot reached the goal with no collision over $10$ runs, %\n \\item [3)]\n Travel Time (less is better): average and standard deviation traveling time ($s$). Failed runs were not considered.\n\\end{itemize}\n\nFrom the results, training with underwater features has the highest gain. 
Adding domain randomization further improves the cumulative rewards, success rate, and travel time.\nModels trained without randomization had not encountered varied visual conditions and thus explored a limited observation space. Accordingly, they would not be easily applicable to different visibility conditions and are more vulnerable to noise, especially in low-visibility environments where depth estimations are inaccurate. Scene3 in particular was challenging with blurry visibility, due to the narrow passage between the logs. \n\n\n\n\n\n\n\n\n\\subsection{Performance Demonstration in Real-World Environment} \\label{Performance Demonstration in Real-World Environment}\n\n\nWe conducted real-world experiments with a BlueROV2 in a swimming pool. \nThe robot was equipped with a Sony IMX322LQJ-C camera\\footnote{\\scriptsize\\url{https:\/\/www.bluerobotics.com\/store\/sensors-sonars-cameras\/cameras\/cam-usb-low-light-r1\/}} \nwith a resolution of 5 MP and horizontal and vertical FOVs of \\ang{80} and \\ang{64}.\nThe fixed SBES has a \\ang{30} beam width and a maximum range set to \\SI{4}{m}.\nThe robot's (noisy) pose was provided by an on-board compass, a water-pressure sensor to recover water depth, and a short baseline acoustic positioning system (SBL)\\footnote{\\scriptsize\\url{https:\/\/waterlinked.github.io\/explorer-kit\/introduction\/}}. %\nA \\SI{2.8}{GHz} Intel i7 laptop with an Nvidia Quadro M1200 was used for running the inference network through the Robot Operating System (ROS). 
For real-time inference, DPT was replaced with its computationally less expensive counterpart MiDaS~\\cite{ranftl2019towards} as our depth prediction network -- about 0.08 seconds per inference.\n\\begin{figure}\n    \\centering\n    \\resizebox{\\textwidth}{!}{\n    \\begin{tabular}{c c c c}\n         \\includegraphics[height=.5in,trim={.3cm .4cm .3cm .6cm}, clip,%\n         valign=c]{figs\/pool_plot_0.png}\n         & \\includegraphics[height=.5in,%\n         trim={.3cm .4cm .3cm .6cm}, clip,valign=c]{figs\/pool_real_0.png} \n         \\includegraphics[height=.5in,trim={3cm .4cm 1.5cm 3cm}, clip,%\n         valign=c]{figs\/pool_plot_1.png}\n         & \\includegraphics[height=.5in,%\n         valign=c]{figs\/pool_real_1.png} \\\\\n         \\includegraphics[height=.4in,trim={.3cm .4cm .3cm .6cm}, clip,%\n         valign=c]{figs\/pool_plot_2.png}\n         & \\includegraphics[height=.5in,trim={.3cm .4cm .3cm .6cm}, clip,%\n         valign=c]{figs\/pool_real_2.png} \n         \\includegraphics[height=.5in,trim={.3cm .4cm .3cm .6cm}, clip,height=.6in,%\n         valign=c]{figs\/pool_plot_3.png}\n         & \\includegraphics[height=.5in,%\n         valign=c]{figs\/pool_real_3.png}\n    \\end{tabular}\n    }\n    \\vspace{-1em}\n    \\caption{\\textit{Pool Experiment.} Navigation trajectories with localization noise smoothing (legend: start and goal, green and red dots; obstacles, cuboids) and images from the robot's camera. Red arrows point to the approximate goal locations behind the boxes.}\n    \\label{fig:table_of_paths_and_images}\n    \\vspace{-1em}\n\\end{figure}\n\nThe swimming pool was about \\SI{20}{m} by \\SI{7}{m} in size with a shallow (\\SI{1}{m}) and deep (\\SI{3}{m}) end, and a slope in the middle. Two black boxes (approximate size: 0.8 x 0.5 x 0.3 m)\nwere placed in two different configurations: side by side as a large obstacle and with a \\SI{1}{m} separation to create a channel. 
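The pool trajectories in the figure above were plotted with localization noise smoothing. A minimal jump-rejection filter of the kind commonly applied to acoustic positioning fixes is sketched here; the 1 m gate matches the position jumps observed with the SBL, but the exact filter used for plotting is an assumption.

```python
# Minimal sketch of localization noise smoothing for SBL fixes:
# fixes that jump more than max_jump meters from the current estimate
# are rejected, and accepted fixes are exponentially smoothed.
# The gate (1 m) and smoothing weight are illustrative choices.

def smooth_fixes(fixes, max_jump=1.0, alpha=0.5):
    """Return one smoothed (x, y, z) estimate per input fix."""
    est = None
    out = []
    for fix in fixes:
        if est is None:
            est = fix  # first fix initializes the estimate
        else:
            dist = sum((a - b) ** 2 for a, b in zip(fix, est)) ** 0.5
            if dist <= max_jump:  # otherwise treat the fix as an outlier
                est = tuple(alpha * a + (1 - alpha) * b
                            for a, b in zip(fix, est))
        out.append(est)
    return out
```

Note that a filter like this only affects plotting and goal-relative bookkeeping; as stated below, obstacle avoidance itself does not rely on the absolute position estimate.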
\n\n\nThe resulting paths and reference images are shown in \\fig{fig:table_of_paths_and_images}.\nOur proposed navigation approach successfully drove the BlueROV2 to different 3D waypoints, avoiding obstacles by going around, above, or through a channel.\nWe observed that the SBL provided noisier position information than in simulation -- at times the robot's location jumped up to a meter. %\nWhile the noise affected the calculation of the relative position to the goal, our approach does not depend on the absolute robot location to infer obstacle distance, so the robot was still able to avoid obstacles.\n\n\\vspace{-0.5em}\n\\subsection{Action Prediction from Static Underwater Images}\n\n\nWe also tested paired image and SBES reading data from past field trials (in the Caribbean Sea and a lake) as input to our model for action prediction.\n\\fig{fig:table_of_oceanic_image_depth_estimation} shows a sample of such images with corresponding depth predictions, goal locations, and predicted actions.\nAs expected, with obstacles nearby the predicted action prioritized obstacle avoidance, steering the robot away; otherwise, the action's direction pointed towards the goal. 
This qualitative test demonstrates our model's generalizability to real-world applications.\n\\begin{figure}[t]\n \\centering\n \\begingroup\n\\renewcommand{\\arraystretch}{4} %\n\\resizebox{\\textwidth}{!}{\n \\begin{tabular}{c c c c c c c}\n \\includegraphics[width=.29\\columnwidth, valign=c]{figs\/lake_left.pdf}\n & \\includegraphics[width=.29\\columnwidth, valign=c]{figs\/lake_center.pdf}\n & \\includegraphics[width=.29\\columnwidth, valign=c]{figs\/lake_right.pdf} \n \\includegraphics[width=.27\\columnwidth, valign=c]{figs\/reef_left.pdf}\n & \\includegraphics[width=.27\\columnwidth, valign=c]{figs\/reef_center.pdf}\n & \\includegraphics[width=.27\\columnwidth, valign=c]{figs\/reef_right.pdf} \n \\\\\\hline\n \\includegraphics[width=.29\\columnwidth, valign=c]{figs\/depth_1594226769-743939.png}\n & \\includegraphics[width=.29\\columnwidth, valign=c]{figs\/depth_1594226904-039137.png}\n & \\includegraphics[width=.29\\columnwidth, valign=c]{figs\/depth_1594226900-919820.png}\n \\includegraphics[width=.29\\columnwidth, valign=c]{figs\/depth_left_reef.png}\n & \\includegraphics[width=.29\\columnwidth, valign=c]{figs\/depth_front-reef.png}\n & \\includegraphics[width=.29\\columnwidth, valign=c]{figs\/depth_right_reef.png}\n \\end{tabular}\n }\n \\endgroup\n \\vspace{-1em}\n \\caption{\\textit{Single Image Action and Depth Prediction.} 1st row: images from Lake Sunapee and Caribbean Sea. 2nd row: their respective depth predictions. Direction and magnitude of the action predicted (red arrow); approximate goal location (yellow arrow).}\n \\vspace{-2em}\n \\label{fig:table_of_oceanic_image_depth_estimation}\n \\end{figure}\n\n\n\n\n\n\\section{Conclusion and Future Work}\\label{sec:conclusion}\n\nWe presented the first 3D map-less underwater navigation approach, based on Proximal Policy Optimization Network (PPO) and domain randomization, for low-cost underwater robots with a monocular camera and a fixed single-beam echo-sounder. 
By choosing deep reinforcement learning over classic methods, we were able to address the intrinsic challenges of seamless underwater navigation (e.g., the lack of low-cost, efficient sensors and the difficulty of generating a map given noisy positioning and perception data). We validated our approach with several comparisons and ablation studies in different simulated environments, as well as with real-world validation in a swimming pool and on static underwater images. Results showed that the robot is able to navigate to arbitrary 3D goals while avoiding obstacles inferred from estimated depth images and sonar readings.\n\nIn the future, we will investigate explicit sensor fusion of camera and SBES data to achieve better depth prediction with absolute scale, e.g., early fusion~\\cite{roznere2020iros}, as well as of controller and SBL data. In addition, we will consider the generation of more complex environments, other real-world experiments, and the design of integrated models for different sensor configurations (e.g., stereo cameras) and dynamic models to adapt our method to heterogeneous underwater robots.\n\n\n\n\n\n{\n\\footnotesize\n\\vspace{-1em}\n\\section*{Acknowledgments}\n\\vspace{-1em}\nWe thank Devin Balkcom for access to the pool for experiments, and Bo Zhu, Mary Flanagan, and Sukdith Punjasthitkul for GPU access. This work is supported in part by the Burke Research Initiation Award and NSF CNS-1919647, 2024541, 2144624, OIA-1923004. 
\n}\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\subsection{}\\label{subsection:1.1}\n\nWe study global\/local Weyl modules for toroidal Lie algebras and an affine analog of current Lie algebras.\nThe notion of Weyl modules for affine Lie algebras was introduced by Chari-Pressley in \\cite{MR1850556} as a family of integrable highest weight modules with a universal property.\nLater, Chari-Loktev initiated in \\cite{MR2271991} the study of Weyl modules for current Lie algebras in a graded setting.\nThe graded characters of local Weyl modules for current Lie algebras have been studied by many authors.\nThey are now known to coincide with Macdonald polynomials specialized at $t=0$, a.k.a.\\ $q$-Whittaker functions (Chari-Loktev~\\cite{MR2271991}, Fourier-Littelmann~\\cite{MR2323538}, Naoi~\\cite{MR2855081}, Sanderson~\\cite{MR1771615}, Ion~\\cite{MR1953294}, Lenart-Naito-Sagaki-Schilling-Shimozono~\\cite{MR3674171}).\n\nToroidal Lie algebras are a natural generalization of affine Lie algebras.\nFor a finite-dimensional simple Lie algebra $\\frg$, the corresponding toroidal Lie algebra $\\tor$ is defined as the universal central extension of the double loop Lie algebra $\\frg \\otimes \\bbC[s^{\\pm 1}, t^{\\pm 1}]$ with the degree operators.\nWe can also consider a Lie algebra $\\tor^+$ which is defined by replacing $\\bbC[s^{\\pm 1}, t^{\\pm 1}]$ with $\\bbC[s, t^{\\pm 1}]$.\nSee Section~\\ref{subsection:toroidal} for precise definitions.\nWe expect that the characters of Weyl modules for $\\tor$ and $\\tor^+$ produce a very interesting class of special functions.\nIn this article, we study the first nontrivial example: the Weyl module associated with the level one dominant integral weight.\n\nA big difference between the toroidal and the affine Lie algebra is the structure of their centers.\nThe toroidal Lie algebra without the degree operators has an infinite-dimensional center, 
while the center of the affine Lie algebra is one-dimensional.\nThe Weyl modules are examples of modules over the toroidal Lie algebra on which the action of the center does not factor through a finite-dimensional quotient.\nWe note that Chari-Le have studied in \\cite{MR2017585} local Weyl modules for a quotient of the toroidal Lie algebra.\nThe resulting quotient is an extension of the double loop Lie algebra by a two-dimensional center with the degree operators.\nIn particular, the Weyl modules considered in this article are possibly bigger than those studied in \\cite{MR2017585} (see \\ref{subsection:1.3} below).\n\n\\subsection{}\\label{subsection:1.2}\n\nLet us summarize the contents and results of the article.\nIn Section~\\ref{section:Preliminaries}, we introduce the main object: the toroidal Lie algebra $\\tor$.\nWe also introduce an affine analog of the current Lie algebra, which is denoted by $\\tor^+$.\nThen we recall their basic properties.\nAmong other things, a certain automorphism of $\\tor$ will play an important role.\nThe ring $\\bbC[s^{\\pm 1}, t^{\\pm 1}]$ admits an $\\mathrm{SL}_2(\\mathbb{Z})$-action by coordinate changes.\nThis action naturally induces automorphisms of $\\tor$.\nWe denote by $S$ the automorphism corresponding to the $S$-transformation.\n\nIn Section~\\ref{section:Weyl modules}, we define the global and the local Weyl modules following \\cite{MR1850556}, \\cite{MR2271991}, \\cite{MR2102326}, \\cite{MR2718936}, \\cite{MR2017585}.\nThe global Weyl module $\\glob(\\Lambda)$ for $\\tor$ is attached to each dominant integral weight $\\Lambda$ of the affine Lie algebra. 
\nWe identify the endomorphism ring of $\\glob(\\Lambda)$ with a symmetric Laurent polynomial ring $A(\\Lambda)$ in Proposition~\\ref{prop:endomorphism} and define the local Weyl module $\\loc(\\Lambda,\\mathbf{a})$ for each maximal ideal $\\mathbf{a}$ of $A(\\Lambda)$.\nThe argument is similar to the known one for the affine and the current Lie algebras.\nThe global\/local Weyl modules $\\glob^+(\\Lambda)$ and $\\loc^+(\\Lambda,\\mathbf{a})$ for $\\tor^+$ are similarly defined.\nWe prove in Proposition~\\ref{prop:weight} a finiteness property for weight spaces of the Weyl modules.\nBy this property, the characters of the local Weyl modules are well-defined.\nThis result has been established for the case of the affine Lie algebra in \\cite{MR1850556} and for a quotient of the toroidal Lie algebra in \\cite{MR2017585}. \nWe remark that we need to investigate the action of the infinite-dimensional center, which is not treated in \\cite{MR2017585}.\nThen we turn to a special case where $\\Lambda$ is of level one.\nBy the diagram automorphism, we can reduce the general level one case to that for the basic level one weight $\\Lambda_0$.\nTherefore we only consider the case of $\\Lambda_0$ in the sequel.\nWe give an upper bound for the graded character of the level one local Weyl module $\\loc^+(\\Lambda_0,0)$ over $\\tor^+$ in Proposition~\\ref{prop:upper_bound}.\n\nIn Section~\\ref{section:Vertex operator construction}, we prove an isomorphism between the level one global Weyl module $\\glob(\\Lambda_0)$ over the toroidal Lie algebra $\\tor$ and the twist of a module $\\bbV(0)$ by the automorphism $S^{-1}$, where $\\bbV(0)$ has been constructed in works of Moody-Eswara Rao-Yokonuma~\\cite{MR1066569}, Iohara-Saito-Wakimoto~\\cite{MR1688100} and Eswara Rao \\cite{MR3076215}.\nThis is our main theorem.\n\n\\begin{thm}[Theorem~\\ref{thm:main}]\nWe have an isomorphism\n\\[\n\t\\glob(\\Lambda_0) \\stackrel{\\cong}{\\longrightarrow} (S^{-1})^*\\bbV(0)\n\\]\nof 
$\\tor$-modules.\n\\end{thm}\n\nAs a byproduct, we prove that the upper bound in Proposition~\\ref{prop:upper_bound} indeed gives the characters of the level one local Weyl modules (see Section~\\ref{subsection:Characters} for the definition of $\\ch_p$ and $\\ch_{p,q}$).\n\\begin{cor}[Corollary~\\ref{cor:character}]\nWe have\n\\[\n\t\\ch_{p} \\loc(\\Lambda_0,a) = \\ch_{p} \\loc^+(\\Lambda_0,a) = \\ch_p L(\\Lambda_0) \\left( \\prod_{n>0} \\dfrac{1}{1-p^n} \\right)\n\\]\nfor $a \\in \\bbC^{\\times}$ and\n\\[\n\t\\ch_{p,q} \\loc^+(\\Lambda_0,0) = \\ch_p L(\\Lambda_0) \\left( \\prod_{n>0} \\dfrac{1}{1-p^n q} \\right).\n\\]\nHere $L(\\Lambda_0)$ is the level one integrable irreducible module of the affine Lie algebra with highest weight $\\Lambda_0$. \n\\end{cor}\n\n\\subsection{}\\label{subsection:1.3}\n\nLet us give two comments regarding other works.\nThe first one is for \\cite{MR2017585} mentioned earlier.\nIn \\cite{MR2017585}, Chari-Le have studied local Weyl modules for some quotients of $\\tor$ and $\\tor^+$.\nThey have proved that the level one local Weyl modules in their setting are irreducible and are isomorphic to the evaluation modules \\cite[Theorem~4]{MR2017585}.\nHence we see by our results that the level one local Weyl modules for $\\tor$ and $\\tor^+$ are bigger than those studied in \\cite{MR2017585}.\nWe remark that one of our results (Proposition~\\ref{prop:upper_bound}) gives an alternative proof of \\cite[Theorem~4]{MR2017585}.\n\nThe second one is for \\cite{MR3908899}.\nIn \\cite[Theorem~3.8]{MR3908899}, Tsymbaliuk has proved that the level one Fock representation of Saito-Takemura-Uglov \\cite{MR1603798} and Feigin-Jimbo-Miwa-Mukhin \\cite{MR3023228} over the quantum toroidal algebra of type A is isomorphic to a twist of the vertex representation of Saito \\cite{MR1617066}.\nHere the twist is given by an automorphism analogous to $S^{-1}$ which has been constructed by Miki \\cite{MR1693755}.\nThis result motivated the present work.\nIn the 
situation of \\cite{MR3908899}, both the Fock and the vertex representations are known to be irreducible, and hence the isomorphism can be checked by comparing their highest weights.\nThus, although the calculation of $S^{-1}$ in the quantum toroidal case is much more involved, the argument to show the isomorphism is simple.\nIt is an interesting problem to establish results analogous to those of this article for quantum toroidal algebras and affine Yangians.\n\n\\subsection*{Acknowledgments}\nThe author is grateful to Ryo Sato, who pointed out that the result of \\cite{MR3076215} can be used to improve this work.\nHe would also like to thank Yoshihisa Saito and Kentaro Wada for helpful discussions. \nThis work was supported by JSPS KAKENHI Grant Numbers 17H06127 and 18K13390.\n\n\\section{Preliminaries}\\label{section:Preliminaries}\n\n\\subsection{Simple Lie algebras}\n\nLet $\\frg$ be a finite-dimensional simple Lie algebra over $\\bbC$ with a fixed Cartan subalgebra $\\frh$.\nWe also fix a Borel subalgebra containing $\\frh$.\nThe index set of simple roots is denoted by $I$.\nLet $\\alpha_i$ ($i \\in I$) be the simple roots.\nWe denote by $\\Delta$, $\\Delta^+$, $\\Delta^-$ the sets of roots, positive roots, negative roots, respectively.\nLet $\\frg_{\\alpha}$ ($\\alpha \\in \\Delta$) be the corresponding root space and put $\\frg_0 = \\frh$.\nThe highest root is denoted by $\\theta$.\n\nLet $(\\,,\\,)$ be a nondegenerate invariant symmetric bilinear form on $\\frg$.\nWe denote by the same letter the bilinear form on $\\frh^*$ induced from $(\\,,\\,)$ and normalize them by $(\\theta,\\theta)=2$.\nPut $d_i = (\\alpha_i,\\alpha_i)\/2$.\nWe fix Chevalley generators $e_i, f_i, h_i$ ($i \\in I$) so that $(e_i,f_i)=d_i^{-1}$ and $h_i = [e_i,f_i]$.\nWe also fix root vectors $e_{\\theta} \\in \\frg_{\\theta}$ and $f_{\\theta} \\in \\frg_{-\\theta}$ so that $(e_{\\theta},f_{\\theta})=1$.\nWe denote by $h_{\\alpha} \\in \\frh$ the coroot corresponding to $\\alpha \\in 
\\Delta$.\nThe root lattice $Q$ is defined by $Q=\\bigoplus_{i \\in I} \\bbZ \\alpha_i$.\n\n\\subsection{Toroidal Lie algebras}\\label{subsection:toroidal}\n\nThe universal central extension of the Lie algebra $\\frg \\otimes \\bbC[s^{\\pm 1},t^{\\pm 1}]$ is given by\n\\[\n\t\\frg \\otimes \\bbC[s^{\\pm 1},t^{\\pm 1}] \\oplus \\Omega_{\\bbC[s^{\\pm 1},t^{\\pm 1}]} \/ \\Ima d.\n\\]\nHere $\\Omega_A$ for a commutative $\\bbC$-algebra $A$ denotes the module of differentials, and $d \\colon A \\to \\Omega_A$ the differential map.\nThe Lie bracket is given by\n\\[\n\t[x \\otimes a, y \\otimes b] = [x,y] \\otimes ab + (x,y) (da)b.\n\\]\nSee \\cite[Section~2]{MR1066569} for details.\n\nWe put\n\\[\n\tc(k,l) = \\begin{cases}\n\t\ts^k t^{l-1} dt & \\text{if } k \\neq 0,\\\\\n\t\ts^{-1} t^l ds & \\text{if } k = 0\n\t\\end{cases}\n\\]\nfor $(k,l) \\in \\bbZ^2 \\setminus \\{(0,0)\\}$ and $c_s = s^{-1} ds$, $c_t = t^{-1} dt$. \nThen $\\Omega_{\\bbC[s^{\\pm 1},t^{\\pm 1}]} \/ \\Ima d$ has a $\\bbC$-basis $c(k,l)$ with $(k,l) \\in \\bbZ^2 \\setminus \\{(0,0)\\}$, $c_s$, $c_t$.\nWe can explicitly describe the Lie bracket as follows:\n\\begin{equation}\n\t\\begin{split}\n\t\t&[x \\otimes s^k t^l, y \\otimes s^m t^n] \\\\\n\t\t&= \\begin{cases}\n\t\t\t[x,y] \\otimes s^{k+m} t^{l+n} + (x,y) \\dfrac{lm-kn}{k+m} c(k+m,l+n) & \\text{if } k+m \\neq 0,\\\\\n\t\t\t[x,y] \\otimes t^{l+n} + (x,y) k c(0,l+n) & \\text{if } k+m = 0 \\text{ and } l+n \\neq 0,\\\\\n\t\t\t[x,y] \\otimes 1 + (x,y) ( k c_s + l c_t ) & \\text{if } k+m = 0 \\text{ and } l+n = 0.\n\t\t\\end{cases}\\label{eq:bracket}\n\t\\end{split}\n\\end{equation}\nWe add the degree operators $d_s$, $d_t$ to this central extension and define the toroidal Lie algebra $\\tor$ by\n\\[\n\t\\tor = \\frg \\otimes \\bbC[s^{\\pm 1},t^{\\pm 1}] \\oplus \\bigoplus_{(k,l) \\in \\bbZ^2 \\setminus \\{(0,0)\\}} \\bbC c(k,l) \\oplus \\bbC c_s \\oplus \\bbC c_t \\oplus \\bbC d_s \\oplus \\bbC d_t,\n\\]\nwhere the additional commutation relations are 
as follows:\n\\begin{gather*}\n\t[d_s, x \\otimes s^k t^l] = k x \\otimes s^k t^l, \\quad [d_t, x \\otimes s^k t^l] = l x \\otimes s^k t^l, \\\\\n\t[d_s, c(k,l)] = k c(k,l), \\quad [d_t, c(k,l)] = l c(k,l),\\\\\n\t[d_s,c_s]=[d_t,c_s]=[d_s,c_t]=[d_t,c_t]=[d_s,d_t]=0.\n\\end{gather*}\n\n\\begin{rem}\nNote that we have\n\\[\n\tc(k,l) = \\begin{cases}\n\t\t(-k\/l) s^{k-1} t^{l} ds & \\text{if } k \\neq 0,\\\\\n\t\ts^{-1} t^l ds & \\text{if } k = 0\n\t\\end{cases}\n\\]\nfor $l \\neq 0$.\nIn particular, $c(k+1,l)$ is a nonzero multiple of $s^{k} t^{l} ds$ if $l \\neq 0$. \nWe will use this fact throughout the article.\n\\end{rem}\n\nLet $\\tor'$ be the Lie subalgebra of $\\tor$ without $d_s$:\n\\[\n\t\\tor' = \\frg \\otimes \\bbC[s^{\\pm 1},t^{\\pm 1}] \\oplus \\bigoplus_{(k,l) \\in \\bbZ^2 \\setminus \\{(0,0)\\}} \\bbC c(k,l) \\oplus \\bbC c_s \\oplus \\bbC c_t \\oplus \\bbC d_t.\n\\]\nWe also consider the following Lie subalgebra $\\tor^+$ of $\\tor$:\n\\[\n\t\\tor^+ = \\frg \\otimes \\bbC[s,t^{\\pm 1}] \\oplus \\bigoplus_{\\substack{k \\geq 1\\\\l \\in \\bbZ}} \\bbC c(k,l) \\oplus \\bbC c_t \\oplus \\bbC d_t.\n\\]\nThe Lie algebra $\\tor^+$ is the semidirect product of the universal central extension of $\\frg \\otimes \\bbC[s,t^{\\pm 1}]$ and the 1-dimensional abelian Lie algebra $\\bbC d_t$.\nIt is an affine analog of the current Lie algebra $\\frg \\otimes \\bbC[s]$ and has a $\\bbZ_{\\geq 0}$-graded Lie algebra structure by assigning\n\\[\n\t\\deg (x \\otimes s^k t^l) = k \\ (x \\in \\frg),\\quad \\deg c(k,l) = k \\ (k \\geq 1, l \\in \\bbZ),\\quad \\deg c_t = \\deg d_t = 0.\n\\]\n\n\\begin{rem}\nLater we will study graded $\\tor^+$-modules.\nIt is equivalent to considering modules of $\\tor^+ \\oplus \\bbC d_s$. 
\n\\end{rem}\n\nThe toroidal Lie algebra $\\tor$ contains two Lie subalgebras $\\aff^{(s)}$ and $\\aff^{(t)}$ isomorphic to the affine Lie algebra associated with $\\frg$:\n\\[\n\t\\aff^{(s)} = \\frg \\otimes \\bbC[s^{\\pm 1}] \\oplus \\bbC c_s \\oplus \\bbC d_s, \\quad \\aff^{(t)} = \\frg \\otimes \\bbC[t^{\\pm 1}] \\oplus \\bbC c_t \\oplus \\bbC d_t.\n\\]\nNote that $\\tor^+$ contains $\\aff^{(t)}$.\nWe have\n\\[\n\t\\tor = \\left(\\aff^{(t)}\\right)' \\otimes \\bbC[s^{\\pm 1}] \\oplus \\bigoplus_{\\substack{k \\in \\bbZ\\\\l \\neq 0}} \\bbC c(k,l) \\oplus \\bbC c_s \\oplus \\bbC d_s \\oplus \\bbC d_t,\n\\]\n\\[\n\t\\tor^+ = \\left(\\aff^{(t)}\\right)' \\otimes \\bbC[s] \\oplus \\bigoplus_{\\substack{k \\geq 1\\\\l \\neq 0}} \\bbC c(k,l) \\oplus \\bbC d_t,\n\\]\nwhere $\\left(\\aff^{(t)}\\right)' = \\frg \\otimes \\bbC[t^{\\pm 1}] \\oplus \\bbC c_t$.\nHere, the elements $c(k,0)=s^k t^{-1} dt$ are regarded as $c_t \\otimes s^k \\in \\left(\\aff^{(t)}\\right)' \\otimes s^k$.\n\n\\begin{rem}\\label{rem:CL}\nChari-Le~\\cite{MR2017585} have studied a version of toroidal Lie algebras which is the quotient of $\\tor$ modulo the elements $c(k,l)$ with $l \\neq 0$, namely, it is equal to\n\\[\n\t\\frg \\otimes \\bbC[s^{\\pm 1},t^{\\pm 1}] \\oplus \\bigoplus_{k \\neq 0} \\bbC c(k,0) \\oplus \\bbC c_s \\oplus \\bbC c_t \\oplus \\bbC d_s \\oplus \\bbC d_t\n\t=\\left(\\aff^{(t)}\\right)' \\otimes \\bbC[s^{\\pm 1}] \\oplus \\bbC c_s \\oplus \\bbC d_s \\oplus \\bbC d_t\n\\]\nas a $\\bbC$-vector space.\n\\end{rem}\n\nWe introduce presentations of $\\tor$ and $\\tor^+$.\nPut $\\affI = I \\sqcup \\{0\\}$.\nLet $(a_{ij})_{i,j \\in \\affI}$ be the Cartan matrix of $\\aff^{(t)}$ and set $d_0 = 1$.\n\\begin{dfn}\nLet $\\frt$ be the Lie algebra generated by $e_{i,k}$, $f_{i,k}$, $h_{i,k}$ ($i \\in \\affI$, $k \\in \\bbZ$), $c_s$, $d_s$, $d_t$ subject to the following defining relations:\n\\begin{gather*}\n\tc_s :\\text{central}, \\quad [h_{i,k},h_{j,l}]=d_j^{-1} a_{ij} k 
\\delta_{k+l,0} c_s, \\quad [e_{i,k},f_{j,l}]=\\delta_{ij} \\left( h_{i,k+l} + d_i^{-1} k \\delta_{k+l,0} c_s \\right),\\\\\n\t[h_{i,k},e_{j,l}] = a_{ij} e_{j,k+l}, \\quad [h_{i,k},f_{j,l}] = -a_{ij} f_{j,k+l},\\\\\n\t[e_{i,k},e_{i,l}] = 0, \\quad [f_{i,k},f_{i,l}] = 0,\\\\\n\t(\\ad e_{i,0})^{1-a_{ij}} e_{j,k} = 0, \\quad (\\ad f_{i,0})^{1-a_{ij}} f_{j,k} = 0, \\quad (i \\neq j)\\\\\n\t[d_s, e_{i,k}] = k e_{i,k}, \\quad [d_s, f_{i,k}] = k f_{i,k}, \\quad [d_s, h_{i,k}] = k h_{i,k},\\\\\n\t[d_t, e_{i,k}] = \\delta_{i,0} e_{i,k}, \\quad [d_t, f_{i,k}] = -\\delta_{i,0} f_{i,k}, \\quad [d_t, h_{i,k}] = 0,\\\\\n\t[d_s,d_t]=0.\n\\end{gather*}\n\\end{dfn}\n\n\\begin{dfn}\nLet $\\frs$ be the Lie algebra generated by $e_{i,k}$, $f_{i,k}$, $h_{i,k}$ ($i \\in \\affI$, $k \\in \\bbZ_{\\geq 0}$), $d_t$ subject to the following defining relations:\n\\begin{gather*}\n\t[h_{i,k},h_{j,l}]=0, \\quad [e_{i,k},f_{j,l}]=\\delta_{ij} h_{i,k+l},\\\\\n\t[h_{i,k},e_{j,l}] = a_{ij} e_{j,k+l}, \\quad [h_{i,k},f_{j,l}] = -a_{ij} f_{j,k+l},\\\\\n\t[e_{i,k},e_{i,l}] = 0, \\quad [f_{i,k},f_{i,l}] = 0,\\\\\n\t(\\ad e_{i,0})^{1-a_{ij}} e_{j,k} = 0, \\quad (\\ad f_{i,0})^{1-a_{ij}} f_{j,k} = 0, \\quad (i \\neq j)\\\\\n\t[d_t, e_{i,k}] = \\delta_{i,0} e_{i,k}, \\quad [d_t, f_{i,k}] = -\\delta_{i,0} f_{i,k}, \\quad [d_t, h_{i,k}] = 0.\n\\end{gather*}\n\\end{dfn}\n\n\\begin{thm}[\\cite{MR1066569} Proposition~3.5, \\cite{GRW} Proposition~4.4]\nWe have an isomorphism of Lie algebras $\\frt \\to \\tor$ such that\n\\begin{gather*}\n\te_{i,k} \\mapsto \\begin{cases}\n\t\te_i \\otimes s^k & \\text{if } i \\in I, \\\\\n\t\tf_{\\theta} \\otimes s^k t & \\text{if } i =0,\n\t\\end{cases}\\quad \n\tf_{i,k} \\mapsto \\begin{cases}\n\t\tf_i \\otimes s^k & \\text{if } i \\in I, \\\\\n\t\te_{\\theta} \\otimes s^k t^{-1} & \\text{if } i =0,\n\t\\end{cases}\\\\\n\th_{i,k} \\mapsto \\begin{cases}\n\t\th_i \\otimes s^k & \\text{if } i \\in I, \\\\\n\t\t-h_{\\theta} \\otimes s^k + s^k t^{-1} dt & \\text{if } i 
=0,\n\t\\end{cases}\\quad c_s \\mapsto c_s,\\quad d_s \\mapsto d_s,\\quad d_t \\mapsto d_t.\n\\end{gather*}\nMoreover this restricts to an isomorphism $\\frs \\to \\tor^+$.\n\\end{thm}\n\nUnder the isomorphism, the elements $e_{i,0}, f_{i,0}, h_{i,0}$ are in the Lie subalgebra $\\aff^{(t)}$ and identified with its Chevalley generators.\nWe sometimes denote them by $e_{i}, f_{i}, h_{i}$.\nNote that $e_{i,k}$, $f_{i,k}$, $h_{i,k}$ ($i \\in I$, $k \\in \\bbZ$), $c_s$, $d_s$ generate the Lie subalgebra $\\aff^{(s)}$ of $\\frt \\cong \\tor$.\n\nWe introduce notions for the affine Lie algebra $\\aff^{(t)}$.\nLet $\\affn^{(t)}$ be the Lie subalgebra of $\\aff^{(t)}$ generated by $e_i$ ($i \\in \\affI$), and $\\affnbar^{(t)}$ that generated by $f_i$ ($i \\in \\affI$).\nSet\n\\[\n\t\\affh^{(t)} = \\frh \\oplus \\bbC c_t \\oplus \\bbC d_t.\n\\]\nThe generator of imaginary roots is denoted by $\\delta$.\nWe put $\\alpha_0 = -\\theta + \\delta$ so that $\\alpha_i$ ($i \\in \\affI$) forms simple roots of $\\aff^{(t)}$.\nWe denote by $\\affDelta$, $\\affDelta^+$ the sets of roots, positive roots, respectively.\nLet $\\left(\\aff^{(t)}\\right)_{\\alpha}$ ($\\alpha \\in \\affDelta)$ be the corresponding root space.\nThe coroot is defined by $h_{\\beta+l\\delta}=h_{\\beta}+lc_t$ for $\\beta \\in \\Delta \\cup \\{0\\}$ and $l \\in \\bbZ$.\nWe set $\\affQ = \\bigoplus_{i \\in \\affI} \\bbZ \\alpha_i$ and $\\affQ^+ = \\sum_{i \\in \\affI} \\bbZ_{\\geq 0} \\alpha_i$. 
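As a sanity check on the isomorphism $\frt \to \tor$ above (a routine computation; here we assume the invariant form is normalized so that $(e_{\theta}, f_{\theta}) = 1$), one can verify the defining relation $[e_{0,k}, f_{0,l}] = h_{0,k+l} + k \delta_{k+l,0} c_s$ directly from the bracket formula (\ref{eq:bracket}):

```latex
% Verifying [e_{0,k}, f_{0,l}] = h_{0,k+l} + k \delta_{k+l,0} c_s under the
% assignment e_{0,k} -> f_\theta \otimes s^k t, f_{0,l} -> e_\theta \otimes s^l t^{-1};
% \overline{\omega} denotes the class of a 1-form modulo exact forms.
\begin{align*}
	[f_{\theta} \otimes s^k t, e_{\theta} \otimes s^l t^{-1}]
	&= [f_{\theta}, e_{\theta}] \otimes s^{k+l}
	   + (f_{\theta}, e_{\theta})\, \overline{\left( d(s^k t) \right) s^l t^{-1}} \\
	&= -h_{\theta} \otimes s^{k+l}
	   + \overline{k\, s^{k+l-1}\, ds} + \overline{s^{k+l}\, t^{-1}\, dt} \\
	&= -h_{\theta} \otimes s^{k+l} + s^{k+l} t^{-1} dt + k\, \delta_{k+l,0}\, c_s,
\end{align*}
```

which is precisely the image of $h_{0,k+l} + d_0^{-1} k \delta_{k+l,0} c_s$ since $d_0 = 1$; here we use that $\overline{s^m\, ds} = \delta_{m,-1}\, c_s$.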
\n\nWe say that an element $\\Lambda$ of $\\Hom_{\\bbC} (\\affh^{(t)},\\bbC)$ is a dominant integral weight of $\\aff^{(t)}$ if $\\langle h_i, \\Lambda\\rangle \\in \\bbZ_{\\geq 0}$ holds for any $i \\in \\affI$.\nIn this article, they are further assumed to satisfy $\\langle d_t, \\Lambda\\rangle =0$ for simplicity.\nDefine the fundamental weights $\\Lambda_i$ ($i \\in \\affI$) by $\\langle h_j , \\Lambda_i \\rangle = \\delta_{ij}$ and $\\langle d_t, \\Lambda_i \\rangle = 0$.\nWe denote by $L(\\Lambda)$ the irreducible $\\aff^{(t)}$-module with highest weight $\\Lambda$.\nWe will use the symbol $L(\\Lambda)^{(s)}$ for the irreducible $\\aff^{(s)}$-module with highest weight $\\Lambda$.\n\n\\subsection{Triangular decomposition}\n\nLet $\\torn$ be the Lie subalgebra of $\\tor$ generated by $e_{i,k}$ ($i \\in \\affI$, $k \\in \\bbZ$), and $\\tornbar$ that generated by $f_{i,k}$ ($i \\in \\affI$, $k \\in \\bbZ$).\nSet\n\\begin{equation*}\n\t\\begin{split}\n\t\t\\torh &= \\frh \\otimes \\bbC[s^{\\pm 1}] \\oplus \\displaystyle\\bigoplus_{k \\neq 0} \\bbC c(k,0) \\oplus \\bbC c_s \\oplus \\bbC c_t \\oplus \\bbC d_s \\oplus \\bbC d_t \\\\\n\t\t&= \\left(\\frh \\oplus \\bbC c_t\\right) \\otimes \\bbC[s^{\\pm 1}] \\oplus \\bbC c_s \\oplus \\bbC d_s \\oplus \\bbC d_t.\n\t\\end{split}\n\\end{equation*}\n\n\\begin{prop}\nWe have\n\\[\n\t\\torn = \\affn^{(t)} \\otimes \\bbC[s^{\\pm 1}] \\oplus \\displaystyle\\bigoplus_{\\substack{k \\in \\bbZ \\\\ l \\geq 1}} \\bbC c(k,l),\\quad\n\t\\tornbar = \\affnbar^{(t)} \\otimes \\bbC[s^{\\pm 1}] \\oplus \\displaystyle\\bigoplus_{\\substack{k \\in \\bbZ \\\\ l \\leq -1}} \\bbC c(k,l).\n\\]\n\\end{prop}\n\n\\begin{proof}\nDenote by $\\torn'$ and $\\tornbar'$ the right-hand sides.\nThen we see by the formula of the Lie bracket (\\ref{eq:bracket}) that $\\torn \\supset \\torn'$ and $\\tornbar \\supset \\tornbar'$.\nWe also see that $\\tornbar + \\torh + \\torn = \\tornbar \\oplus \\torh \\oplus \\torn$.\nSince we have $\\tor = \\tornbar' 
\\oplus \\torh \\oplus \\torn'$, the assertion holds.\n\\end{proof}\n\nIn this article, we call\n\\[\n\t\\tor = \\tornbar \\oplus \\torh \\oplus \\torn\n\\]\nthe triangular decomposition of $\\tor$.\n\nIn $\\tor^+$, the elements $e_{i,k}$ ($i \\in \\affI$, $k \\in \\bbZ_{\\geq 0}$) generate \n\\[\n\t\\torn \\cap \\tor^+ = \\affn^{(t)} \\otimes \\bbC[s] \\oplus \\displaystyle\\bigoplus_{\\substack{k \\geq 1 \\\\ l \\geq 1}} \\bbC c(k,l),\n\\]\nand $f_{i,k}$ ($i \\in \\affI$, $k \\in \\bbZ_{\\geq 0}$) generate \n\\[\n\t\\tornbar \\cap \\tor^+ = \\affnbar^{(t)} \\otimes \\bbC[s] \\oplus \\displaystyle\\bigoplus_{\\substack{k \\geq 1 \\\\ l \\leq -1}} \\bbC c(k,l).\n\\]\nFurther set\n\\[\n\t\\torh' = \\torh \\cap \\tor' = \\left(\\frh \\oplus \\bbC c_t\\right) \\otimes \\bbC[s^{\\pm 1}] \\oplus \\bbC c_s \\oplus \\bbC d_t.\n\\]\n\n\\subsection{Automorphisms}\\label{subsection:auto}\n\nLet $S$ be the ring automorphism of $\\bbC[s^{\\pm 1},t^{\\pm 1}]$ defined by $s \\mapsto t$, $t \\mapsto s^{-1}$.\nIt naturally induces a Lie algebra automorphism of $\\tor$ which is denoted by the same letter $S$.\nLater we will rather use its inverse $S^{-1}$.\nIt corresponds to the assignment $s \\mapsto t^{-1}$, $t \\mapsto s$.\nIn particular we have $S^{-1}(c(k,l)) = c(l,-k)$, $S^{-1}(c_s) = -c_t$ and $S^{-1}(c_t) = c_s$.\n\nWe introduce Lie algebra automorphisms $T_0$ and $T_{\\theta}$ of $\\tor$ by\n\\[\n\tT_0 = \\exp\\ad e_0 \\circ \\exp\\ad (-f_0) \\circ \\exp\\ad e_0,\n\\]\n\\[\n\tT_{\\theta} = \\exp\\ad e_{\\theta} \\circ \\exp\\ad (-f_{\\theta}) \\circ \\exp\\ad e_{\\theta}.\n\\]\nWe can regard them as automorphisms of $\\tor^+$ by restriction.\n\n\\begin{lem}\\label{lem:induction}\nWe have $e_{\\theta} \\otimes s^k t^l = T_0 T_{\\theta} (e_{\\theta} \\otimes s^k t^{l+2})$.\n\\end{lem}\n\n\\begin{proof}\nBy a direct calculation.\nWe use the following:\n\\begin{align*}\n\tT_{\\theta} (e_{\\theta} \\otimes s^k t^{l+2}) &= - f_{\\theta} \\otimes s^k t^{l+2},\\\\\n\t\\exp\\ad e_0 
(f_{\\theta} \\otimes s^k t^{l+2}) &= f_{\\theta} \\otimes s^k t^{l+2},\\\\\n\t\\exp\\ad (-f_0) (f_{\\theta} \\otimes s^k t^{l+2}) &= f_{\\theta} \\otimes s^k t^{l+2} - (h_{\\theta} \\otimes s^k t^{l+1}-s^kt^ldt) - e_{\\theta} \\otimes s^k t^{l},\\\\\n\t\\exp\\ad e_0 (h_{\\theta} \\otimes s^k t^{l+1}) &= h_{\\theta} \\otimes s^k t^{l+1} + 2 f_{\\theta} \\otimes s^k t^{l+2},\\\\\n\t\\exp\\ad e_0 (e_{\\theta} \\otimes s^k t^{l}) &= e_{\\theta} \\otimes s^k t^{l} - h_{\\theta} \\otimes s^k t^{l+1} + s^k t^l dt - f_{\\theta} \\otimes s^k t^{l+2}.\n\\end{align*}\n\\end{proof}\n\nLet $M$ be a module of $\\mathcal{A}=\\tor,$ $\\tor',$ or $\\tor^+$ and assume that $M$ is integrable as a $\\aff^{(t)}$-module.\nThen $T_0, T_{\\theta} \\in \\Aut M$ are similarly defined.\nMoreover they satisfy\n\\[\n\tT_0(xv) = T_0(x)T_0(v), \\quad T_{\\theta}(xv) = T_{\\theta}(x)T_{\\theta}(v)\n\\]\nfor $x \\in \\mathcal{A}$ and $v \\in M$.\n\nThe Lie algebra automorphism $\\tau_a$ ($a \\in \\bbC$) of $\\tor^+$ is induced from the map $s \\mapsto s+a$.\n\n\\subsection{Characters}\\label{subsection:Characters}\n\nLet $M$ be a module of $\\mathcal{A}=\\tor,$ $\\tor',$ or $\\tor^+$ and regard it as a $\\aff^{(t)}$-module by restriction.\nFor $\\lambda \\in \\frh^*$ and $m \\in \\bbC$, let $M_{\\lambda-m\\delta}$ be the corresponding weight space.\nIn this article, we always assume that any $\\aff^{(t)}$-module $M$ has the weight space decomposition and $M_{\\lambda-m\\delta}=0$ unless $m \\in \\bbZ$.\n\nWe define the $p$-character $\\ch_p M$ of $M$ by\n\\[\n\t\\ch_p M = \\sum_{\\substack{\\lambda \\in \\frh^*\\\\ m \\in \\bbZ}} (\\dim M_{\\lambda-m\\delta}) e^{\\lambda} p^{m}\n\\]\nif it is well-defined.\nThis is nothing but the ordinary $\\aff^{(t)}$-character with $p=e^{-\\delta}$. 
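As an illustration of the $p$-character, take $\frg = \mathfrak{sl}_2$ with simple root $\alpha$. The classical Frenkel--Kac realization of the basic representation of $\aff^{(t)}$ then gives (in our convention $e^{\Lambda_0} = 1$, since $\Lambda_0$ restricts to $0$ on $\frh$):

```latex
% The extreme vector attached to n\alpha has weight \Lambda_0 + n\alpha - n^2\delta,
% and the Heisenberg part contributes the factor \prod_{m \geq 1} (1-p^m)^{-1}.
\[
	\ch_p L(\Lambda_0)
	= \frac{\displaystyle\sum_{n \in \bbZ} e^{n\alpha}\, p^{n^2}}
	       {\displaystyle\prod_{m \geq 1} (1 - p^m)}.
\]
```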
\nLet $M$ be a graded $\\tor^+$-module and $M_{\\lambda-m\\delta} = \\bigoplus_{n \\in \\bbZ} M_{\\lambda-m\\delta}[n]$ the decomposition of the weight space into graded pieces.\nWe define the $(p,q)$-character $\\ch_{p,q} M$ of $M$ by\n\\[\n\t\\ch_{p,q} M = \\sum_{\\substack{\\lambda \\in \\frh^*\\\\ m,n \\in \\bbZ}} (\\dim M_{\\lambda-m\\delta}[n]) e^{\\lambda} p^{m} q^{n}\n\\] \nif it is well-defined.\nFor two formal sums \n\\[\n\tf = \\sum_{\\substack{\\lambda \\in \\frh^*\\\\ m \\in \\bbZ}} f_{\\lambda,m} e^{\\lambda} p^{m}, \\quad g = \\sum_{\\substack{\\lambda \\in \\frh^*\\\\ m \\in \\bbZ}} g_{\\lambda,m} e^{\\lambda} p^{m} \\quad (f_{\\lambda,m}, g_{\\lambda,m} \\in \\bbZ),\n\\] \nwe say $f \\leq g$ if $f_{\\lambda,m} \\leq g_{\\lambda,m}$ holds for all $\\lambda$ and $m$.\nWe define an inequality $\\leq$ for \n\\[\n\tf = \\sum_{\\substack{\\lambda \\in \\frh^*\\\\ m,n \\in \\bbZ}} f_{\\lambda,m,n} e^{\\lambda} p^{m}q^{n}, \\quad g = \\sum_{\\substack{\\lambda \\in \\frh^*\\\\ m,n \\in \\bbZ}} g_{\\lambda,m,n} e^{\\lambda} p^{m}q^{n} \\quad (f_{\\lambda,m,n}, g_{\\lambda,m,n} \\in \\bbZ)\n\\] \nsimilarly.\n\n\\section{Weyl modules}\\label{section:Weyl modules}\n\n\\subsection{Definitions of global\/local Weyl modules}\n\n\\begin{dfn}\nLet $\\Lambda$ be a dominant integral weight of $\\aff^{(t)}$.\nThe global Weyl module $\\glob(\\Lambda)$ for $\\tor$ with highest weight $\\Lambda$ is the $\\tor$-module generated by $v_{\\Lambda}$ subject to the following defining relations:\n\\begin{gather*}\n\te_{i,k} v_{\\Lambda} = 0\\ (i \\in \\affI, k \\in \\bbZ),\\quad h v_{\\Lambda} = \\langle h,\\Lambda \\rangle v_{\\Lambda}\\ (h \\in \\affh^{(t)}),\\quad\tf_i^{\\langle h_i,\\Lambda \\rangle + 1} v_{\\Lambda} = 0\\ (i \\in \\affI), \\label{eq:global1} \\\\\n\t\tc_s v_{\\Lambda} = d_s v_{\\Lambda} = 0. 
\\label{eq:global2}\n\\end{gather*}\nThe global Weyl module $\\glob^+(\\Lambda)$ for $\\tor^+$ with highest weight $\\Lambda$ is the $\\tor^+$-module generated by $v_{\\Lambda}^+$ subject to the following defining relations:\n\\[\n\te_{i,k} v_{\\Lambda}^+ = 0\\ (i \\in \\affI, k \\in \\bbZ_{\\geq 0}),\\quad h v_{\\Lambda}^+ = \\langle h,\\Lambda \\rangle v_{\\Lambda}^+\\ (h \\in \\affh^{(t)}),\\quad \tf_i^{\\langle h_i,\\Lambda \\rangle + 1} v_{\\Lambda}^+ = 0\\ (i \\in \\affI).\n\\]\n\\end{dfn}\n\nWe describe the endomorphism rings of $\\glob(\\Lambda)$ and $\\glob^{+}(\\Lambda)$.\nThe following argument is the same as in the case of the affine and the current Lie algebras.\nWe give some details for completeness.\n\n\\begin{lem}\nWe have an action $\\varphi$ of $U(\\torh')$ on each weight space $\\glob(\\Lambda)_{\\Lambda-\\beta}$ $(\\beta \\in \\affQ^{+})$ defined by\n\\[\n\t\\varphi(a) (X v_{\\Lambda} ) = X (a v_{\\Lambda})\n\\]\nfor $a \\in U(\\torh')$ and $X \\in U(\\tor')$.\n\\end{lem}\n\n\\begin{proof}\nTo see that the action is well-defined, we need to check that $X v_{\\Lambda}=0$ implies $X (a v_{\\Lambda})=0$.\nBy the same argument as \\cite[3.4]{MR2718936}, we can show that if $v$ satisfies the relations \n\\[\n\te_{i,k} v = 0\\ (i \\in \\affI, k \\in \\bbZ),\\ h v = \\langle h,\\Lambda \\rangle v\\ (h \\in \\affh^{(t)}),\\ f_i^{\\langle h_i,\\Lambda \\rangle + 1} v = 0\\ (i \\in \\affI),\\ c_s v = 0,\n\\]\nthen so does $a v$.\nThis completes the proof.\n\\end{proof}\n\nLet $\\Ann v_{\\Lambda}$ be the annihilator ideal of $U(\\torh')$ and set\n\\[\n\t\\tilde{A}(\\Lambda) = U(\\torh') \/ \\Ann v_{\\Lambda}.\n\\]\nSince the action $\\varphi$ of $\\torh'$ factors through an abelian Lie algebra $\\torh' \/ \\bbC c_s \\oplus \\bbC d_t$, $\\tilde{A}(\\Lambda)$ is a commutative algebra.\n\n\\begin{lem}\\label{lem:highest_weight_space}\nThe action map\n\\[\n\t\\tilde{A}(\\Lambda) \\to \\glob(\\Lambda)_{\\Lambda}, \\quad a \\mapsto a v_{\\Lambda}\n\\]\ngives an 
isomorphism of $\\bbC$-vector spaces.\n\\end{lem}\n\n\\begin{proof}\nThe well-definedness and the injectivity immediately follow from the definition of $\\tilde{A}(\\Lambda)$.\nThe surjectivity holds since we have $\\glob(\\Lambda)_{\\Lambda} = U(\\torh') v_{\\Lambda}$.\n\\end{proof}\n\n\\begin{lem}\nThe natural map\n\\[\n\t\\tilde{A}(\\Lambda) \\to \\End_{\\tor'} \\glob(\\Lambda), \\quad a \\mapsto \\varphi(a)\n\\]\ngives an isomorphism of $\\bbC$-algebras.\n\\end{lem}\n\n\\begin{proof}\nBy the definition of $\\tilde{A}(\\Lambda)$, we have a natural injective algebra homomorphism\n\\[\n\t\\tilde{A}(\\Lambda) \\to \\End_{\\tor'} \\glob(\\Lambda), \\quad a \\mapsto \\varphi(a).\n\\]\nWe also have a natural $\\bbC$-linear map\n\\[\n\t\\End_{\\tor'} \\glob(\\Lambda) \\to \\glob(\\Lambda)_{\\Lambda}, \\quad f \\mapsto f(v_{\\Lambda})\n\\]\nand this is injective since $\\glob(\\Lambda)$ is generated by $v_{\\Lambda}$.\nThe composite of the maps\n\\[\n\t\\tilde{A}(\\Lambda) \\hookrightarrow \\End_{\\tor'} \\glob(\\Lambda) \\hookrightarrow \\glob(\\Lambda)_{\\Lambda}\n\\]\nis given by $a \\mapsto a v_{\\Lambda}$.\nSince this map is bijective by Lemma~\\ref{lem:highest_weight_space}, the two injective maps are bijective.\n\\end{proof}\n\nWrite $\\Lambda = \\sum_{i \\in \\affI} m_i \\Lambda_i$ with the fundamental weights $\\Lambda_i$ and $m_i \\in \\bbZ_{\\geq 0}$.\nWe define $A(\\Lambda)$ by\n\\[\n\tA(\\Lambda) = \\bigotimes_{i \\in \\affI} \\bbC[z_{i,1}^{\\pm 1}, \\ldots, z_{i,m_i}^{\\pm 1}]^{\\frakS_{m_i}},\n\\]\t\nthe symmetric Laurent polynomial algebra associated with $\\Lambda$.\n\n\\begin{prop}\nThe assignment\n\\[\n\t\\sum_{m=1}^{m_i} z_{i,m}^k \\mapsto h_{i,k}\n\\]\ngives an isomorphism $A(\\Lambda) \\cong \\tilde{A}(\\Lambda)$ of $\\bbC$-algebras.\n\\end{prop}\n\n\\begin{proof}\nThe well-definedness and the surjectivity of the map is proved in the same way as \\cite[Proposition~1.1 (i), (iv), (v)]{MR1850556}.\n\nWe follow the argument in \\cite[5.6]{MR3384485} 
to show the injectivity.\nTake a nonzero element $a$ of $A(\\Lambda)$ and fix a maximal ideal $\\mathfrak{m}$ which does not contain $a$.\nAssume that $\\glob(\\Lambda) \\otimes_{A(\\Lambda)} A(\\Lambda) \/ \\mathfrak{m}$ is nonzero.\nThen the image of $a$ in $A(\\Lambda) \/ \\mathfrak{m}$ acts on $\\glob(\\Lambda) \\otimes_{A(\\Lambda)} A(\\Lambda) \/ \\mathfrak{m}$ by a nonzero scaler.\nHence we conclude that $a$ acts on $\\glob(\\Lambda)$ nontrivially and the map $A(\\Lambda) \\to \\tilde{A}(\\Lambda) \\cong \\End_{\\tor'}\\glob(\\Lambda)$ is shown to be injective.\n\nThus it is enough to show that $\\glob(\\Lambda) \\otimes_{A(\\Lambda)} A(\\Lambda) \/ \\mathfrak{m}$ is nonzero.\nWe denote by $\\bar{p}_{k}^{(i)}$ ($i \\in \\affI$, $k \\in \\bbZ$) the image of the power some function $p_{k}^{(i)} = \\sum_{m=1}^{m_i} z_{i,m}^k$ in $A(\\Lambda)\/\\mathfrak{m}$.\nWe can choose a set of nonzero complex numbers $\\{ a_{i,m} \\}$ satisfying\n\\[\n\t\\sum_{m=1}^{m_i} a_{i,m}^k = \\bar{p}_{k}^{(i)}\n\\]\nunder an identification $A(\\Lambda)\/\\mathfrak{m} \\cong \\bbC$.\nFor each $a \\in \\bbC^{\\times}$, we have the evaluation map\n\\[\n\t\\ev_a \\colon \\tor' \\to \\aff^{(t)} \n\\]\ndefined as the composite of\n\\[\n\t\\tor' \\to \\tor' \/ \\bigoplus_{\\substack{k \\in \\bbZ\\\\ l \\neq 0}} \\bbC c(k,l) \\oplus \\bbC c_s \\cong \\left( \\aff^{(t)} \\right)' \\otimes \\bbC[s^{\\pm 1}] \\oplus \\bbC d_t\n\\]\nand the evaluation at $s=a$.\nThen we have a nonzero $\\tor'$-module homomorphism\n\\[\n\t\\glob(\\Lambda) \\otimes_{A(\\Lambda)} A(\\Lambda) \/ \\mathfrak{m} \\to \\bigotimes_{i \\in \\affI} \\bigotimes_{m=1}^{m_i} \\ev_{a_{i,m}}^{*}L(\\Lambda_i)\n\\]\nassigning $v_{\\Lambda} \\otimes 1$ to the tensor product of highest weight vectors.\nThis proves the assertion.\n\\end{proof}\n\nWe have a completely analogous story for the global Weyl module $\\glob^+(\\Lambda)$ over $\\tor^+$ if we replace $A(\\Lambda)$ with\n\\[\n\tA^+(\\Lambda) = \\bigotimes_{i \\in \\affI} 
\\bbC[z_{i,1}, \\ldots, z_{i,m_i}]^{\\frakS_{m_i}}.\n\\]\nWe can summarize the discussion so far as follows.\n\n\\begin{prop}\\label{prop:endomorphism}\nWe have $\\End_{\\tor'} \\glob(\\Lambda) \\cong A(\\Lambda)$ and $\\End_{\\tor^+} \\glob^+(\\Lambda) \\cong A^+(\\Lambda)$.\n\\end{prop}\n\nFor a maximal ideal $\\mathbf{a}$ of $A = A(\\Lambda)$ or $A^+(\\Lambda)$, we denote by $\\bbC_{\\mathbf{a}}$ the corresponding one-dimensional module $A\/\\mathbf{a}$.\n\n\\begin{dfn}\nWe call\n\\[\n\t\\loc(\\Lambda,\\mathbf{a}) = \\glob(\\Lambda) \\otimes_{A(\\Lambda)} \\bbC_{\\mathbf{a}}, \\quad \\loc^+(\\Lambda,\\mathbf{a}) = \\glob^+(\\Lambda) \\otimes_{A^+(\\Lambda)} \\bbC_{\\mathbf{a}}\n\\]\nthe local Weyl modules for $\\tor'$ and $\\tor^+$, respectively.\n\\end{dfn}\nWe denote the images of $v_{\\Lambda}$ and $v_{\\Lambda}^+$ in the local Weyl modules by $v_{\\Lambda,\\mathbf{a}}$ and $v_{\\Lambda,\\mathbf{a}}^+$.\n\n\\begin{rem}\nThe global\/local Weyl modules for $\\tor$ and $\\tor^+$ can be regarded as a sort of highest weight modules with respect to their triangular decompositions:\n\\[\n\t\\tor = \\tornbar \\oplus \\torh \\oplus \\torn, \\quad \\tor^+ = \\left( \\tornbar \\cap \\tor^+ \\right) \\oplus \\left( \\torh \\cap \\tor^+ \\right) \\oplus \\left( \\torn\\cap \\tor^+ \\right).\n\\]\n\\end{rem}\n\n\\subsection{Finiteness of weight spaces}\\label{subsection:finiteness_property}\n\nThe goal of this subsection is to prove the following.\n\n\\begin{prop}\\label{prop:weight}\n\\begin{enumerate}\n\\item\nEvery weight space $\\glob(\\Lambda)_{\\Lambda-\\beta}$ is finitely generated over $A(\\Lambda)$.\nEvery weight space $\\loc(\\Lambda,\\mathbf{a})_{\\Lambda-\\beta}$ is finite-dimensional.\n\\item\nEvery weight space $\\glob^+(\\Lambda)_{\\Lambda-\\beta}$ is finitely generated over $A^+(\\Lambda)$.\nEvery weight space $\\loc^+(\\Lambda,\\mathbf{a})_{\\Lambda-\\beta}$ is finite-dimensional.\n\\item\nWe have $\\loc(\\Lambda,\\mathbf{a}) = U(\\tor^+) 
v_{\\Lambda,\\mathbf{a}}$. \n\\end{enumerate}\n\\end{prop}\n\nWe begin by proving the following lemma.\n\n\\begin{lem}\\label{lem:single}\nLet $\\Lambda$ be a dominant integral weight of $\\aff^{(t)}$.\n\\begin{enumerate}\n\\item\nFor each positive root $\\beta \\in \\affDelta^+$, there exists a nonnegative integer $N(\\beta)$ satisfying the following: we have\n\[\n\t(X_{-\\beta} \\otimes s^{k}) v_{\\Lambda} \\in \\sum_{m=0}^{N(\\beta)} (X_{-\\beta} \\otimes s^{m}) A(\\Lambda) v_{\\Lambda}\n\]\nfor any root vector $X_{-\\beta}$ of $\\affnbar^{(t)}$ corresponding to a negative root $-\\beta$ and any $k$.\n\n\\item\nFor each positive integer $l >0$, there exists a nonnegative integer $N_l$ satisfying the following: we have\n\[\n\tc(k,-l) v_{\\Lambda} \\in \\sum_{m=1}^{N_l} c(m,-l) A(\\Lambda) v_{\\Lambda} + \\sum_{m=0}^{N_l} \\left( \\left( \\aff^{(t)} \\right)_{-l\\delta} \\otimes s^m\\right) A(\\Lambda) v_{\\Lambda} \n\]\nfor any $k$.\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\nThe assertion (i) is proved in the same way as \\cite[Proposition~3.2 and Corollary~3.1]{MR2017585}. 
\n\nWe prove (ii).\nTake an arbitrary element $\\alpha$ of $\\Delta^+$ and fix root vectors $x_{\\alpha} \\in \\frg_{\\alpha}$ and $x_{-\\alpha} \\in \\frg_{-\\alpha}$ satisfying $(x_{\\alpha},x_{-\\alpha})=1$.\nThen we have\n\\begin{equation*}\n\t\\begin{split}\n\t\t(s^{k} t^{-l} ds)v_{\\Lambda} &= \\left( [x_{\\alpha} \\otimes s, x_{-\\alpha} \\otimes s^{k}t^{-l}] - h_{\\alpha} \\otimes s^{k+1} t^{-l} \\right) v_{\\Lambda} \\\\\n\t\t&= (x_{\\alpha} \\otimes s) (x_{-\\alpha} \\otimes s^{k}t^{-l}) v_{\\Lambda} - (h_{\\alpha} \\otimes s^{k+1} t^{-l}) v_{\\Lambda}.\n\t\\end{split}\n\\end{equation*} \nWe have\n\\[\n\t(x_{\\alpha} \\otimes s) (x_{-\\alpha} \\otimes s^{k}t^{-l}) v_{\\Lambda} \\in (x_{\\alpha} \\otimes s) \\sum_{m=0}^{N(\\alpha + l\\delta)} (x_{-\\alpha} \\otimes s^{m} t^{-l}) A(\\Lambda) v_{\\Lambda}\n\\]\nby (i).\nThe right-hand side is equal to\n\\[\n\t\\sum_{m=0}^{N(\\alpha + l\\delta)} (h_{\\alpha} \\otimes s^{m+1} t^{-l} + s^m t^{-l} ds ) A(\\Lambda) v_{\\Lambda} = \\sum_{m=1}^{N(\\alpha + l\\delta)+1} (h_{\\alpha} \\otimes s^{m} t^{-l} + c(m,-l) ) A(\\Lambda) v_{\\Lambda}.\n\\]\nWe have\n\\[\n\t(h_{\\alpha} \\otimes s^{k+1} t^{-l}) v_{\\Lambda} \\in \\sum_{m=0}^{N(l\\delta)} (h_{\\alpha} \\otimes s^{m} t^{-l}) A(\\Lambda) v_{\\Lambda}\n\\]\nagain by (i).\nHence we conclude that\n\\[\n\t(s^{k} t^{-l} ds) v_{\\Lambda} \\in \\sum_{m=1}^{N_l} c(m,-l) A(\\Lambda) v_{\\Lambda} + \\sum_{m=0}^{N_l} \\left( \\left( \\aff^{(t)} \\right)_{-l\\delta} \\otimes s^m\\right) A(\\Lambda) v_{\\Lambda}\n\\]\nif we put $N_l = \\max(N(l\\delta),N(\\alpha+l\\delta)+1)$.\n\\end{proof}\n\nThe following proposition is an analog of \\cite[Proposition~1.2]{MR1850556} for the case of the affine Lie algebra and of \\cite[Proposition~3.2 and Corollary~3.1]{MR2017585} for the quotient of $\\tor$ modulo the elements $c(k,l)$ with $l \\neq 0$ (cf.\\ Remark \\ref{rem:CL}).\n\n\\begin{prop}\\label{prop:span}\nFor each positive root $\\beta_j \\in \\affDelta^+$ and each positive 
integer $l >0$, there exist nonnegative integers $N(\\beta_j)$ and $N_l$ such that the weight space $\\glob(\\Lambda)_{\\Lambda-\\beta}$ for $\\beta \\in \\affQ^+$ is spanned by elements of the form\n\\begin{equation}\n\t(X_{-\\beta_1} \\otimes s^{k_1}) \\cdots (X_{-\\beta_a} \\otimes s^{k_a}) \\left( \\prod_{j=1}^{b} c(m_j,-l_j) \\right) A(\\Lambda) v_{\\Lambda}, \\label{eq:span}\n\\end{equation}\nwhere each $X_{-\\beta_{j}}$ is a root vector of $\\affnbar^{(t)}$ corresponding to a negative root $-\\beta_j$ and each $l_j$ is a positive integer, satisfying $\\beta = \\sum_{j=1}^a \\beta_j + \\left(\\sum_{j=1}^b l_j \\right) \\delta$ and $0 \\leq k_j \\leq N(\\beta_j)$, $1 \\leq m_j \\leq N_{l_j}$.\nA similar statement also holds for $\\glob^+(\\Lambda)_{\\Lambda-\\beta}$. \n\\end{prop}\n\n\\begin{proof}\nBy the PBW theorem, we see that $\\glob(\\Lambda)_{\\Lambda-\\beta}$ is spanned by elements of the form (\\ref{eq:span}) without any conditions on $k_j$ and $m_j$.\nThen we use Lemma~\\ref{lem:single} to show the assertion by induction on $a+b$. 
\n\\end{proof}\n\nProposition~\\ref{prop:weight} now follows from Proposition~\\ref{prop:span}.\nWe also have the following.\n\n\\begin{prop}\\label{prop:character}\nLet $\\mathbf{a}$ be a maximal ideal of $A(\\Lambda)$ and regard it also as a maximal ideal of $A^{+}(\\Lambda)$.\nThen we have $\\ch_p \\loc^+(\\Lambda,\\mathbf{a}) \\geq \\ch_p \\loc(\\Lambda,\\mathbf{a})$.\n\\end{prop}\n\\begin{proof}\nWe have a $\\tor^+$-homomorphism $\\loc^+(\\Lambda,\\mathbf{a}) \\to \\Res \\loc(\\Lambda,\\mathbf{a})$ assigning $v_{\\Lambda,\\mathbf{a}}^+ \\mapsto v_{\\Lambda,\\mathbf{a}}$.\nIt is surjective by Proposition~\\ref{prop:weight} (iii).\n\\end{proof}\n\n\\subsection{Upper bound for the level one Weyl module}\n\nIn this subsection, we consider the case $\\Lambda=\\Lambda_0$.\nThe ring $A(\\Lambda_0)$ is identified with $\\bbC[z^{\\pm 1}]$ and the action on $\\glob(\\Lambda_0)$ is given by \n\[\n\tz^k (X v_{\\Lambda_0}) = X (h_{0,k} v_{\\Lambda_0})\n\]\nfor $X \\in U(\\tor')$.\nThis identification induces $A^+(\\Lambda_0) = \\bbC[z]$.\n\n\\begin{lem}\\label{lem:h_{i,k}}\nWe have $h_{i,k} v_{\\Lambda_0} = 0$ for $i \\in I$ and $k \\in \\bbZ$.\n\\end{lem}\n\n\\begin{proof}\nThe defining relations $e_{i,k} v_{\\Lambda_0}=0$ and $f_i v_{\\Lambda_0} = 0$ for $i \\in I$ imply the assertion. 
\n\\end{proof}\n\nRecall that $\\sum_{i \\in \\affI} h_{i,k} = s^k t^{-1} dt$.\nBy Lemma~\\ref{lem:h_{i,k}}, we see that the action of $A(\\Lambda_0)$ on $\\glob(\\Lambda_0)$ is given by $z^k \\mapsto s^k t^{-1} dt$.\nIn particular, $z$ acts by $c(1,0)=st^{-1}dt$.\n\nWe have defined the local Weyl modules $\\loc(\\Lambda_0,a)$ for $a \\in \\bbC^{\\times}$ and $\\loc^+(\\Lambda_0,a)$ for $a \\in \\bbC$ by\n\[\n\t\\loc(\\Lambda_0,a) = \\glob(\\Lambda_0) \\otimes_{A(\\Lambda_0)} \\bbC_a, \\quad \\loc^+(\\Lambda_0,a) = \\glob^+(\\Lambda_0) \\otimes_{A^+(\\Lambda_0)} \\bbC_a.\n\]\n\\begin{prop}\\label{prop:independent}\nThe $p$-character $\\ch_p \\loc^+(\\Lambda_0,a)$ is independent of $a \\in \\bbC$.\n\\end{prop}\n\n\\begin{proof}\nThe defining relations of $\\loc^+(\\Lambda_0,a)$ are given by\n\\begin{gather*}\n\t(\\torn \\cap \\tor^+) v_{\\Lambda_0,a}^+ = 0,\\quad h_{i,k} v_{\\Lambda_0,a}^+ = \\delta_{i,0} a^k v_{\\Lambda_0,a}^+ \\ (i \\in \\affI, k \\geq 0), \\quad d_t v_{\\Lambda_0,a}^+ = 0,\\\\\n\tf_0^2 v_{\\Lambda_0,a}^+ = 0,\\quad f_i v_{\\Lambda_0,a}^+ = 0 \\ (i \\in I). \n\\end{gather*}\nHence we have $\\tau_a^*\\loc^+(\\Lambda_0,0) \\cong \\loc^+(\\Lambda_0,a)$, where $\\tau_a$ is the automorphism of $\\tor^+$ defined in Section~\\ref{subsection:auto}.\nThis proves the assertion. \n\\end{proof}\n\nWe put \n\[\n\tW(\\Lambda_0)=\\loc^+(\\Lambda_0,0) = \\glob^+(\\Lambda_0) \\otimes_{A^+(\\Lambda_0)} \\bbC_0\n\]\nand denote its highest weight vector $v_{{\\Lambda_0},0}^+$ by $v_0$.\nThis $W(\\Lambda_0)$ is regarded as a graded $\\tor^+$-module by setting $\\deg v_0 = 0$. 
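The grading is easy to make explicit on the central elements: since $\deg c(1,-l) = 1$ and $[d_t, c(1,-l)] = -l\, c(1,-l)$, each vector $c(1,-l)\, v_0$ ($l \geq 1$) lies in the graded weight space $W(\Lambda_0)_{\Lambda_0 - l\delta}[1]$. More generally:

```latex
% A monomial in the central modes c(1,-l) = (1/l) t^{-l} ds and its contribution
% to the (p,q)-character of W(\Lambda_0) (l_1, ..., l_b \geq 1):
\[
	c(1,-l_1) \cdots c(1,-l_b)\, v_0 \in W(\Lambda_0)_{\Lambda_0 - (l_1 + \dots + l_b)\delta}[b],
\]
```

so such a monomial contributes $p^{l_1 + \dots + l_b} q^{b}$ to $\ch_{p,q} W(\Lambda_0)$; these monomials account for the factor $\prod_{n>0}(1-p^n q)^{-1}$ in the upper bound of Proposition~\ref{prop:upper_bound} below.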
\n\n\\begin{lem}\\label{lem:f}\nWe have $f_{i,k} v_0 = 0$ for any $i \\in \\affI$ and $k \\geq 1$.\n\\end{lem}\n\n\\begin{proof}\nThe assertion for $i \\in I$ follows from $f_i v_0 =0$ and $h_{i,k} v_0 =0$.\nThe assertion for $i = 0$ follows from\n\\[\n\t0 = e_{0,k} f_0^2 v_0 = [e_{0,k}, f_0^2] v_0 = (-2f_{0,k} + 2 f_0 h_{0,k}) v_0 \n\\]\nand $h_{0,k} v_0 =0$ for $k \\geq 1$.\n\\end{proof}\n\n\\begin{lem}\\label{lem:key}\nLet $k \\geq 1$.\nWe have\n\\begin{enumerate}\n\\item\n\\[\n\t(e_{\\theta} \\otimes s^k t^{-l}) v_0 = \\begin{cases}\n\t\t0 & \\text{if } l \\leq k,\\\\\n\t\t\\displaystyle\\sum_{m=1}^{l-k} c(k,-l+m) (e_{\\theta} \\otimes t^{-m}) v_0 & \\text{if } l > k,\n\t\\end{cases}\n\\]\n\\item\n\\[\n\t(s^k t^{-l} ds) v_0 = \\begin{cases}\n\t\t0 & \\text{if } l \\leq k,\\\\\n\t\t\\displaystyle\\sum_{m=1}^{l-k} c(k,-l+m) (t^{-m}ds) v_0 & \\text{if } l > k.\n\t\\end{cases}\n\\]\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\nWe prove the assertions (i) and (ii) by induction on $l$.\n\nFor $l \\leq 0$, $e_{\\theta} \\otimes s^k t^{-l}$ is an element of $\\torn \\cap \\tor^+$, hence it kills $v_0$.\nFor $l = 1$, $e_{\\theta} \\otimes s^k t^{-1} = f_{0,k}$ kills $v_0$ by Lemma~\\ref{lem:f}.\nThen we have\n\\begin{equation*}\n\t\\begin{split}\n\t\t(s^k t^{-l} ds)v_0 = \\left( [f_{\\theta} \\otimes s, e_{\\theta} \\otimes s^k t^{-l}] - [f_{\\theta}, e_{\\theta} \\otimes s^{k+1}t^{-l}] \\right) v_0 =0\n\t\\end{split}\n\\end{equation*}\nfor $l \\leq 1$.\nWe thus have proved (i) and (ii) for $l \\leq 1$.\n\nLet $l \\geq 2$.\nWe assume the assertions (i) and (ii) for all $l' < l$.\nBy Lemma~\\ref{lem:induction}, we have\n\\begin{equation}\n\t\\begin{split}\n\t\t(e_{\\theta} \\otimes s^k t^{-l}) v_0 &= T_0 T_{\\theta} \\left( (e_{\\theta} \\otimes s^k t^{-l+2}) T_{\\theta}^{-1} T_0^{-1} v_0 \\right) \\\\\n\t\t&= T_0 T_{\\theta} \\left( (e_{\\theta} \\otimes s^k t^{-l+2}) T_{\\theta}^{-1} (f_0 v_0) \\right) \\\\\n\t\t&= T_0 T_{\\theta} \\left( (e_{\\theta} \\otimes s^k 
t^{-l+2}) T_{\\theta}^{-1} (f_0) v_0 \\right) \\\\\n\t\t&= T_0 T_{\\theta} \\left( T_{\\theta}^{-1}(f_0)(e_{\\theta} \\otimes s^k t^{-l+2}) v_0 + [e_{\\theta} \\otimes s^k t^{-l+2}, T_{\\theta}^{-1} (f_0)] v_0 \\right). \\label{eq:induction}\n\t\\end{split}\n\\end{equation}\nWe have\n\\begin{equation*}\n\t\\begin{split}\n\t\t[e_{\\theta} \\otimes s^k t^{-l+2}, T_{\\theta}^{-1} (f_0)] &= [e_{\\theta} \\otimes s^k t^{-l+2}, -f_{\\theta} \\otimes t^{-1}] \\\\\n\t\t&=- \\left( [e_{\\theta} \\otimes s^k t^{-l+1}, f_{\\theta}] + c(k,-l+1) \\right) \\\\\n\t\t&= [f_{\\theta}, e_{\\theta} \\otimes s^k t^{-l+1}] - c(k,-l+1).\n\t\\end{split}\n\\end{equation*}\nPut\n\\[\n\tA= T_{\\theta}^{-1}(f_0)(e_{\\theta} \\otimes s^k t^{-l+2}) v_0, \\quad B= f_{\\theta}(e_{\\theta} \\otimes s^k t^{-l+1}) v_0. \n\\]\nThen (\\ref{eq:induction}) is equal to $T_0 T_{\\theta}(A+B-c(k,-l+1)v_0)$.\nBy the induction assumption, we have\n\\[\n\tA= T_{\\theta}^{-1}(f_0) \\sum_{m=1}^{l-2-k} c(k,-l+2+m) (e_{\\theta} \\otimes t^{-m}) v_0,\n\\]\n\\begin{equation*}\n\t\\begin{split}\n\t\tB= f_{\\theta} \\sum_{m=1}^{l-1-k} c(k,-l+1+m) (e_{\\theta} \\otimes t^{-m}) v_0 = f_{\\theta} \\sum_{m=0}^{l-2-k} c(k,-l+2+m) (e_{\\theta} \\otimes t^{-m-1}) v_0.\n\t\\end{split}\n\\end{equation*}\nThen (\\ref{eq:induction}) is equal to\n\\begin{multline}\n\t\tT_0 T_{\\theta} \\Bigg( \\sum_{m=1}^{l-2-k} c(k,-l+2+m) \\Big( T_{\\theta}^{-1}(f_0) (e_{\\theta} \\otimes t^{-m}) + f_{\\theta} (e_{\\theta} \\otimes t^{-m-1}) \\Big) v_0 \\\\\n\t\t+ c(k,-l+2) f_{\\theta} (e_{\\theta} \\otimes t^{-1}) v_0 - c(k,-l+1) v_0 \\Bigg) \\label{eq:induction2}\n\\end{multline}\nif $l \\geq k+2$ and to $T_0 T_{\\theta}(- c(k,-l+1) v_0)$ if $l \\leq k+1$.\n\nWe prove (i) for $l$.\nFirst consider the case $l \\leq k$.\nIn this case, we have\n\\begin{equation*}\n\t(e_{\\theta} \\otimes s^k t^{-l}) v_0 = T_0 T_{\\theta}(- c(k,-l+1) v_0) = \\dfrac{k}{-l+1} T_0 T_{\\theta}( (s^{k-1} t^{-(l-1)} ds) v_0) = 0\n\\end{equation*}\nby the induction 
assumption.\nHence (i) holds for $l$.\nNext consider the case $l = k+1$.\nIn this case, we have\n\\begin{equation*}\n\t(e_{\\theta} \\otimes s^k t^{-l}) v_0 = T_0 T_{\\theta}(- c(k,-l+1) v_0) = - c(k,-l+1) T_0 T_{\\theta}(v_0).\n\\end{equation*}\nSince we have $T_0T_{\\theta} (v_0)=-f_0 v_0 = -(e_{\\theta} \\otimes t^{-1})v_0$, (i) holds for $l=k+1$.\nFinally consider the case $l \\geq k+2$.\nThe equality (\\ref{eq:induction}) is valid even for $k=0$ and hence we have\n\[\n\t(e_{\\theta} \\otimes t^{-m-2}) v_0 = T_0 T_{\\theta} \\Bigg( \\Big( T_{\\theta}^{-1} (f_0) (e_{\\theta} \\otimes t^{-m}) + f_{\\theta} (e_{\\theta} \\otimes t^{-m-1}) \\Big) v_0 \\Bigg)\n\]\nfor each $m$.\nThis implies that (\\ref{eq:induction2}) is equal to\n\\begin{multline*}\n\t\t\\sum_{m=1}^{l-2-k} c(k,-l+2+m) (e_{\\theta} \\otimes t^{-m-2}) v_0\\\\\n\t\t+ c(k,-l+2) T_0 T_{\\theta} ( f_{\\theta} (e_{\\theta} \\otimes t^{-1}) v_0) + c(k,-l+1) (e_{\\theta} \\otimes t^{-1}) v_0.\n\\end{multline*}\nSince we can easily show $T_0 T_{\\theta} ( f_{\\theta} (e_{\\theta} \\otimes t^{-1}) v_0) = (e_{\\theta} \\otimes t^{-2})v_0$, (i) is proved for $l$.\n\nWe prove (ii) for $l$.\nBy (i), we have\n\\begin{equation*}\n\t\\begin{split}\n\t\t&(s^k t^{-l} ds)v_0 = \\left( [f_{\\theta} \\otimes s, e_{\\theta} \\otimes s^k t^{-l}] - [f_{\\theta}, e_{\\theta} \\otimes s^{k+1}t^{-l}] \\right) v_0\\\\\n\t\t&= (f_{\\theta} \\otimes s) \\sum_{m=1}^{l-k} c(k,-l+m) (e_{\\theta} \\otimes t^{-m}) v_0 - f_{\\theta} \\sum_{n=1}^{l-(k+1)} c(k+1,-l+n) (e_{\\theta} \\otimes t^{-n}) v_0 \n\t\\end{split}\n\\end{equation*}\nif $l > k$ and $(s^k t^{-l} ds)v_0 = 0$ otherwise.\nTherefore we may assume $l > k$. 
\nWe have\n\\begin{equation*}\n\t\\begin{split}\n\t\t(f_{\\theta} \\otimes s) (e_{\\theta} \\otimes t^{-m}) v_0 &= [f_{\\theta} \\otimes s,e_{\\theta} \\otimes t^{-m}]v_0 \\\\\n\t\t&= \\left( [f_{\\theta}, e_{\\theta} \\otimes s t^{-m}] + t^{-m}ds \\right) v_0 \\\\\n\t\t&= f_{\\theta} (e_{\\theta} \\otimes s t^{-m}) v_0 + (t^{-m}ds) v_0 \\\\\n\t\t&= f_{\\theta} \\sum_{n=1}^{m-1} c(1,-m+n)(e_{\\theta} \\otimes t^{-n}) v_0 + (t^{-m}ds) v_0.\n\t\\end{split}\n\\end{equation*}\nWe claim that\n\\[\n\t\\sum_{m=1}^{l-k} c(k,-l+m) \\sum_{n=1}^{m-1} c(1,-m+n)(e_{\\theta} \\otimes t^{-n}) v_0 = \\sum_{n=1}^{l-(k+1)} c(k+1,-l+n)(e_{\\theta} \\otimes t^{-n}) v_0\n\\]\nholds.\nIndeed this equality is obtained by applying $h_{\\theta} \\otimes s$ to both sides of (i).\nHence we conclude\n\\begin{equation*}\n\t\\begin{split}\n\t\t(s^k t^{-l}ds) v_0 &= \\sum_{m=1}^{l-k} c(k,-l+m) \\Bigg( f_{\\theta} \\sum_{n=1}^{m-1} c(1,-m+n)(e_{\\theta} \\otimes t^{-n}) v_0 + (t^{-m}ds) v_0 \\Bigg)\\\\\n\t\t&\\qquad - f_{\\theta} \\sum_{n=1}^{l-(k+1)} c(k+1,-l+n) (e_{\\theta} \\otimes t^{-n}) v_0 \\\\\n\t\t&= \\sum_{m=1}^{l-k} c(k,-l+m) (t^{-m}ds) v_0.\n\t\\end{split}\n\\end{equation*}\n\\end{proof}\n\nWe define the subalgebra $\\bar{C}$ of $U(\\tor^+)$ to be generated by $c(k,-l)$ ($k \\geq 1$, $l \\geq 1$).\nLet $\\bar{C}_1$ be the subalgebra of $\\bar{C}$ generated by $c(1,-l)$ ($l \\geq 1$).\n\n\\begin{lem}\\label{lem:degree_one}\nWe have $\\bar{C} v_0 = \\bar{C}_1 v_0$. 
\n\\end{lem}\n\n\\begin{proof}\nSuppose $k \\geq 1$ and $l \\geq 1$.\nWe rewrite Lemma~\\ref{lem:key} (ii) as\n\\[\n\t(s^{k} t^{-l} ds) v_0 = \\begin{cases}\n\t\t0 & \\text{if } l \\leq k,\\\\\n\t\t\\displaystyle\\sum_{m=1}^{l-k} \\dfrac{k}{l-m} (s^{k-1} t^{-l+m} ds) (t^{-m}ds) v_0 & \\text{if } l > k.\n\t\\end{cases}\n\\]\nThis implies that the action of $c(k+1,-l) = ((k+1)\/l) s^{k}t^{-l} ds$ on $v_0$ is written in terms of a polynomial in $c(1,-m) = (1\/m)t^{-m} ds$ with $m \\geq 1$.\n\\end{proof}\n\n\\begin{lem}\\label{lem:key2}\nWe have\n\\[\n\t\\left(\\affnbar^{(t)} \\otimes s\\bbC[s]\\right) v_0 \\subset \\bar{C}_1 U(\\affnbar^{(t)}) v_0.\n\\]\n\\end{lem}\n\n\\begin{proof}\nNote that we have\n\\begin{equation*}\n\t\\affnbar^{(t)} \\otimes s^k = \\bigoplus_{\\substack{\\alpha \\in \\Delta^+ \\cup \\{0\\}\\\\ l \\geq 1}} \\frg_{\\alpha} \\otimes s^k t^{-l} \\oplus \\bigoplus_{\\substack{\\alpha \\in \\Delta^- \\\\ l \\geq 0}} \\frg_{\\alpha} \\otimes s^k t^{-l}.\n\\end{equation*}\nSuppose $k \\geq 1$.\nWe show\n\\begin{equation}\n\t(x \\otimes s^k t^{-l}) v_0 \\in \\bar{C}_1 U(\\affnbar^{(t)}) v_0 \\label{eq:contain}\n\\end{equation}\nfor\n\\begin{itemize}\n\\item\n$x \\in \\frg_{\\alpha}$ ($\\alpha \\in \\Delta^+ \\cup \\{0\\}$) and $l \\geq 1$;\n\n\\item\n$x \\in \\frg_{\\alpha}$ ($\\alpha \\in \\Delta^-$) and $l \\geq 0$.\n\\end{itemize}\nLemma~\\ref{lem:key} (i) and \\ref{lem:degree_one} imply (\\ref{eq:contain}) for $x=e_{\\theta}$ and $l \\geq 1$.\nThen we obtain (\\ref{eq:contain}) for $x \\in \\frg_{\\alpha}$ ($\\alpha \\in \\Delta^+$) and $l \\geq 1$ by successively applying $f_i$'s ($i \\in I$) to $(e_{\\theta} \\otimes s^k t^{-l}) v_0$.\nWe obtain (\\ref{eq:contain}) for $x = h_i$ ($i \\in I$) and $l \\geq 1$ by applying $f_i$ to $(e_{i} \\otimes s^k t^{-l}) v_0$.\nWe show (\\ref{eq:contain}) for $x \\in \\frg_{\\alpha}$ ($\\alpha \\in \\Delta^-$) and $l \\geq 0$.\nThe case $l=0$ is immediate from Lemma~\\ref{lem:f}.\nAssume $l \\geq 1$.\nWe use 
$[h_{\\alpha} \\otimes s^k t^{-l}, x] = 2 x \\otimes s^k t^{-l}$ and $x v_0 = 0$ to deduce\n\\[\n\t(x \\otimes s^k t^{-l}) v_0 = -\\dfrac{1}{2} x(h_{\\alpha} \\otimes s^k t^{-l}) v_0 \\in x \\bar{C}_1 U(\\affnbar^{(t)}) v_0 \\subset \\bar{C}_1 U(\\affnbar^{(t)}) v_0.\n\\]\n\\end{proof}\n\n\\begin{prop}\\label{prop:upper_bound}\nWe have\n\\[\n\tW(\\Lambda_0) = \\bar{C}_1 U(\\affnbar^{(t)}) v_0.\n\\]\nIn particular, we have an inequality\n\\[\n\t\\ch_{p,q} W(\\Lambda_0) \\leq \\ch_p L(\\Lambda_0) \\displaystyle\\prod_{n > 0} \\dfrac{1}{1-p^n q}.\n\\]\n\\end{prop}\n\n\\begin{proof}\nLet $N$ be the $\\bbC$-span of monomials in $\\affnbar^{(t)} \\otimes s\\bbC[s]$.\nThen the PBW theorem and Lemma~\\ref{lem:degree_one} imply\n\\[\n\tW(\\Lambda_0) = U(\\tornbar \\cap \\tor^+)v_0 = \\bar{C}_1 U(\\affnbar^{(t)}) N v_0.\n\\]\nSince $\\affnbar^{(t)} \\otimes s\\bbC[s]$ is $\\ad \\affnbar^{(t)}$-invariant modulo central elements, we prove the assertion by Lemma~\\ref{lem:key2} and \\ref{lem:degree_one}. 
\n\\end{proof}\n\n\\begin{rem}\nWe will show in Corollary~\\ref{cor:character} that the equality\n\\[\n\t\\ch_{p,q} W(\\Lambda_0) = \\ch_p L(\\Lambda_0) \\displaystyle\\prod_{n > 0} \\dfrac{1}{1-p^n q}\n\\]\nholds.\n\\end{rem}\n\n\\begin{rem}\nBy Proposition~\\ref{prop:character}, \\ref{prop:independent} and \\ref{prop:upper_bound}, we have an inequality\n\\[\n\t\\ch_{p} \\loc(\\Lambda_0,a) \\leq \\ch_p L(\\Lambda_0) \\displaystyle\\prod_{n > 0} \\dfrac{1}{1-p^n}.\n\\]\nWe will show in Corollary~\\ref{cor:character} that the equality holds.\nIn fact, we can directly prove this inequality for $\\ch_{p} \\loc(\\Lambda_0,a)$ by a similar calculation for $\\loc(\\Lambda_0,a)$ instead of $W(\\Lambda_0)$.\nMore precisely, we can show $\\loc(\\Lambda_0,a) = \\bar{C}_1 U(\\affnbar^{(t)}) v_{\\Lambda_0,a}$.\nMoreover, we can show that\n\\[\n\t\\loc(\\Lambda_0,a) = \\bar{C}_0 U(\\affnbar^{(t)}) v_{\\Lambda_0,a}\n\\]\nalso holds, where $\\bar{C}_0$ is the subalgebra of $U(\\tor')$ generated by $c(0,-l)$ ($l \\geq 1$).\n\nHere we gave the calculation for $W(\\Lambda_0)$ for two reasons:\n\\begin{enumerate}\n\\item\nwe are interested in the $(p,q)$-characters of the graded local Weyl modules for $\\tor^+$;\n\n\\item\nthe calculation for $W(\\Lambda_0)$ is easier than that for $\\loc(\\Lambda_0,a)$.\n\\end{enumerate}\n\\end{rem}\n\n\\section{Vertex operator construction and Weyl modules}\\label{section:Vertex operator construction}\n\n\\subsection{Heisenberg Lie algebras}\\label{subsection:Heisenberg}\n\nWe assume that $\\frg$ is of type ADE in Section~\\ref{subsection:Heisenberg} and \\ref{subsection:vertex}.\nRecall that $\\affQ = \\bigoplus_{i \\in \\affI} \\bbZ \\alpha_i$ is the root lattice of $\\aff^{(t)}$.\nWe fix a bimultiplicative 2-cocycle $\\ve \\colon \\affQ \\times \\affQ \\to \\{\\pm 1\\}$ satisfying\n\\[\n\t\\ve(\\alpha,\\alpha) = (-1)^{(\\alpha,\\alpha)\/2}, \\quad \\ve(\\alpha,\\beta)\\ve(\\beta,\\alpha) = (-1)^{(\\alpha,\\beta)}, \\quad 
\\ve(\\alpha,\\delta)=1\n\\]\nas in \\cite[Section~4]{MR1066569}.\nLet $\\bbC[\\affQ]$ be the group algebra of $\\affQ$ with a $\\bbC$-basis denoted by $e^{\\alpha}$ ($\\alpha \\in \\affQ$).\nWe make $\\bbC[\\affQ]$ into a $\\bbC[\\affQ]$-module via $\\ve$, that is, we define $e^{\\alpha} \\cdot e^{\\beta} = \\ve(\\alpha,\\beta)e^{\\alpha+\\beta}$. \nWe denote by $\\bbC_{\\ve}[\\affQ]$ this module.\nWe define an action of $h \\in \\affh^{(t)}$ on $\\bbC_{\\ve}[\\affQ]$ by $h \\cdot e^{\\alpha} = \\langle h, \\alpha \\rangle e^{\\alpha}$.\n\nThe toroidal Lie algebra $\\tor$ contains a Heisenberg Lie algebra \n\\[\n\t\\calH = \\displaystyle\\bigoplus_{\\substack{i \\in \\affI\\\\k \\neq 0}} \\bbC h_{i,k} \\oplus \\bbC c_s.\n\\]\nDefine the Fock representation $\\affF$ of $\\calH$ by\n\\[\n\t\\affF = U(\\calH) \/ \\sum_{\\substack{i \\in \\affI\\\\ k >0}}U(\\calH) h_{i,k} + U(\\calH)(c_s-1).\n\\]\nWe set\n\\[\n\t\\bbV(0) = \\affF \\otimes \\bbC_{\\ve}[\\affQ].\n\\]\nDefine the degree on $\\bbV(0)$ by $\\deg e^{\\alpha}= (\\alpha,\\alpha)\/2$ and $\\deg h_{i,k}=k$.\nThen we regard $\\bbV(0)$ as a module of $\\torh = \\calH \\oplus \\affh^{(t)} \\oplus \\bbC d_s$ via the actions of $\\calH$ and $\\affh^{(t)}$ on $\\affF$ and $\\bbC_{\\ve}[\\affQ]$ respectively, and so that $d_s$ counts the degree.\n\nSimilarly we define $\\mathcal{F}$ to be the Fock representation for a Heisenberg Lie subalgebra\n\\[\n\t\\displaystyle\\bigoplus_{\\substack{i \\in I\\\\k \\neq 0}} \\bbC h_{i,k} \\oplus \\bbC c_s\n\\]\nof $\\aff^{(s)}$.\n\n\\subsection{Vertex representations}\\label{subsection:vertex}\n\nFor each $\\alpha \\in \\affDelta$, we set\n\\[\n\tX(\\alpha,u) = u^{(\\alpha,\\alpha)\/2} \\left( e^{\\alpha} u^{h_{\\alpha}} \\right) \\exp\\left( \\sum_{k>0} \\dfrac{h_{\\alpha} \\otimes s^{-k}}{k} u^{k} \\right) \\exp\\left( -\\sum_{k>0} \\dfrac{h_{\\alpha} \\otimes s^{k}}{k} u^{-k} \\right)\n\\]\nas an element of $( \\End_{\\bbC} \\bbV(0) )[[u^{\\pm1}]]$.\nHere $u^{h_{\\alpha}}$ acts 
by\n\\[\n\tu^{h_{\\alpha}} \\cdot e^{\\beta} = u^{(\\alpha,\\beta)} e^{\\beta}.\n\\]\nDefine $X_{k}(\\alpha)$ by the expansion\n\\[\n\tX(\\alpha,u) = \\sum_{k \\in \\bbZ} X_k(\\alpha) u^{-k}.\n\\]\n\n\\begin{thm}[\\cite{MR1066569} Proposition~4.3]\\label{thm:MEY}\nWe can extend the action of $\\torh = \\calH \\oplus \\affh^{(t)} \\oplus \\bbC d_s$ to $\\tor$ on $\\bbV(0)$ by\n\\[\n\te_{i,k} \\mapsto X_{k}(\\alpha_i), \\quad f_{i,k} \\mapsto X_{k}(-\\alpha_i).\n\\]\n\\end{thm}\n\nWe denote by $\\tau$ the action of $c(0,1)$ on $\\bbV(0)$.\nThen by \\cite[(4.1) and Proposition~5.3 (ii)]{MR1066569}, the action of $c(0,k)$ for $k \\neq 0$ is given by $\\tau^k$.\nThe subalgebra of $\\End_{\\bbC} \\bbV(0)$ generated by $\\tau^k$ ($k \\in \\bbZ$) is isomorphic to the Laurent polynomial algebra $\\bbC[\\tau^{\\pm 1}]$. \n\nWe denote by $\\delta(k)$ the action of $c(k,0)$ on $\\bbV(0)$ for $k<0$.\nThey freely generate a polynomial subalgebra of $\\End_{\\bbC} \\bbV(0)$ and we denote it by $D$. \nWe have an isomorphism of $\\bbC$-vector spaces \n\\[\n\t\\affF \\cong \\mathcal{F} \\otimes D.\n\\]\n\n\\begin{prop}[\\cite{MR1066569} Lemma~5.6]\\label{prop:freeness_vertex_rep}\nThe multiplication map gives an isomorphism\n\\[\n\t\\bbV(0) \\cong \\mathcal{F} \\otimes \\bbC_{\\ve}[Q] \\otimes D \\otimes \\bbC[\\tau^{\\pm 1}]\n\\]\nof $\\bbC$-vector spaces.\nIn particular, $\\bbV(0)$ is free over $\\bbC[\\tau^{\\pm 1}]$.\n\\end{prop}\n\nThe $\\aff^{(s)}$-submodule $\\mathcal{F} \\otimes \\bbC_{\\ve}[Q]$ is known to be isomorphic to the level one integrable irreducible $\\aff^{(s)}$-module $L(\\Lambda_0)^{(s)}$ with highest weight $\\Lambda_0$ by Frenkel-Kac \\cite{MR595581}. 
\nHence it has the following defining relations:\n\\begin{gather}\n\t(f_{\\theta} \\otimes s) (1 \\otimes e^0) = 0,\\quad e_i (1 \\otimes e^0) = 0 \\ (i \\in I), \\label{eq:Frenkel-Kac1}\\\\\n\tc_s (1 \\otimes e^0) = 1 \\otimes e^0,\\quad h_i (1 \\otimes e^0) = 0 \\ (i \\in I),\\quad d_s (1 \\otimes e^0) = 0,\\label{eq:Frenkel-Kac2}\\\\\n\t(e_{\\theta} \\otimes s^{-1})^2 (1 \\otimes e^0) = 0,\\quad f_i (1 \\otimes e^0) = 0 \\ (i \\in I).\\label{eq:Frenkel-Kac3}\n\\end{gather}\nWe will determine the defining relations of $\\bbV(0)$ as a $\\tor$-module as a main result of this article.\n\n\\subsection{General construction}\n\nWe review the construction of $\\tor$-modules given by Iohara-Saito-Wakimoto~\\cite{MR1688100} and Eswara Rao~\\cite{MR3076215}.\nAssume that $\\frg$ is an arbitrary simple Lie algebra.\nLet $D$ be the polynomial algebra generated by the elements $\\delta(k)$ ($k < 0$). \nFor a given smooth $\\aff^{(s)}$-module $M$, we will define a $\\tor$-module structure on\n\\[\n\tM \\otimes D \\otimes \\bbC[\\tau^{\\pm 1}]\n\\]\nas follows.\nFor an element $x$ of $\\frg$, we put $x(u) = \\sum_{k \\in \\bbZ} (x \\otimes s^k) u^{-k}$.\nDefine a formal series $\\Delta_l(u)$ for each $l \\in \\bbZ$ by\n\\[\n\t\\Delta_l(u) = \\exp \\left( \\sum_{k > 0} \\dfrac{l \\delta(-k)}{k} u^{k} \\right).\n\\]\nWe make $D$ into a graded algebra by $\\deg \\delta(k) = k$ and let $d^{(D)}$ be the operator which counts the degree on $D$.\nWe make $\\bbC[\\tau^{\\pm 1}]$ into a graded algebra by $\\deg \\tau = 1$ and let $d^{(\\tau)}$ be the operator which counts the degree on $\\bbC[\\tau^{\\pm 1}]$.\n\n\\begin{thm}[\\cite{MR1688100} Lemma~2.1, \\cite{MR3076215} Theorem~4.1]\\label{thm:ISW-E}\nLet $M$ be a smooth $\\aff^{(s)}$-module.\nThe assignment\n\\[\n\t\\sum_{k \\in \\bbZ} (x \\otimes s^k t^l) u^{-k} \\mapsto x(u) \\otimes \\Delta_l(u) \\otimes \\tau^l\n\\]\nfor $x \\in \\frg,$\n\\[\n\t\\sum_{k \\in \\bbZ} (s^{k-1} t^l ds) u^{-k} \\mapsto c_s \\otimes \\Delta_l(u) 
\\otimes \\tau^l, \\quad \n\ts^{k} t^{-1} dt \\mapsto \\begin{cases}\n\t\t\\id \\otimes \\delta(k) \\otimes \\id & \\text{ if } k < 0,\\\\\n\t\t0 & \\text{ if } k \\geq 0,\n\t\\end{cases}\n\\]\n\\[\n\td_s \\mapsto d_s \\otimes \\id \\otimes \\id + \\id \\otimes d^{(D)} \\otimes \\id, \\quad d_t \\mapsto \\id \\otimes \\id \\otimes d^{(\\tau)}\n\\]\ngives a $\\tor$-module structure on $M \\otimes D \\otimes \\bbC[\\tau^{\\pm 1}]$.\n\\end{thm}\n\n\\begin{rem}\nLet us give a remark on the results of \\cite{MR1688100} and \\cite{MR3076215} stated above.\nIn \\cite{MR1688100}, the authors consider a Lie algebra bigger than $\\tor$ and the module they construct is bigger than $M \\otimes D \\otimes \\bbC[\\tau^{\\pm 1}]$.\nIf one restricts the action to $\\tor$, one can take $M \\otimes D \\otimes \\bbC[\\tau^{\\pm 1}]$ as a $\\tor$-submodule.\nMoreover, although they assume that $\\frg$ is of type ADE in \\cite{MR1688100}, the construction does not need the assumption.\nThis construction of $\\tor$-modules was later generalized in \\cite{MR3076215} to some Lie superalgebras. \n\\end{rem}\n\nTake $M$ as the level one integrable irreducible $\\aff^{(s)}$-module $L(\\Lambda_0)^{(s)}$ with highest weight $\\Lambda_0$ and set\n\\[\n\t\\bbV(0) = L(\\Lambda_0)^{(s)} \\otimes D \\otimes \\bbC[\\tau^{\\pm 1}].\n\\]\nThis definition is compatible with the construction given in Section~\\ref{subsection:Heisenberg} and \\ref{subsection:vertex} if $\\frg$ is of type ADE.\nIndeed, the definition of the vertex operator $X(\\alpha,u)$ implies that\n\\[\n\tX(\\beta+l\\delta,u) = \\begin{cases}\n\t\tX(\\beta,u) \\otimes \\Delta_l(u) \\otimes \\tau^l & \\text{if } \\beta \\in \\Delta,\\\\\n\t\t\\id \\otimes \\Delta_l(u) \\otimes \\tau^l & \\text{if } \\beta = 0,\n\t\\end{cases}\n\\]\nwhen we write $\\alpha \\in \\affDelta$ as $\\alpha = \\beta + l\\delta$ with $\\beta \\in \\Delta \\cup \\{0\\}$ and $l \\in \\bbZ$.\n\nLet $v^{(s)}$ be a highest weight vector of $L(\\Lambda_0)^{(s)}$. 
\nWe generalize the relations given in (\\ref{eq:Frenkel-Kac1}), (\\ref{eq:Frenkel-Kac2}), (\\ref{eq:Frenkel-Kac3}).\n\n\\begin{lem}\\label{lem:highest}\nWe have\n\\begin{gather}\n\t(f_{\\theta} \\otimes s) (v^{(s)} \\otimes 1 \\otimes 1) = 0,\\quad e_i (v^{(s)} \\otimes 1 \\otimes 1) = 0 \\ (i \\in I), \\label{eq:Frenkel-Kac1new}\\\\\n\tc_s (v^{(s)} \\otimes 1 \\otimes 1) = v^{(s)} \\otimes 1 \\otimes 1, \\quad h_i (v^{(s)} \\otimes 1 \\otimes 1) = 0 \\ (i \\in I),\\quad d_s (v^{(s)} \\otimes 1 \\otimes 1) = 0,\\label{eq:Frenkel-Kac2new}\\\\\n\t(e_{\\theta} \\otimes s^{-1})^2 (v^{(s)} \\otimes 1 \\otimes 1) = 0,\\quad f_i (v^{(s)} \\otimes 1 \\otimes 1) = 0 \\ (i \\in I).\\label{eq:Frenkel-Kac3new}\n\\end{gather}\n\\end{lem}\n\n\\begin{proof}\nThese are direct consequences of the definition of the action and the relations in $L(\\Lambda_0)^{(s)}$.\n\\end{proof}\n\n\\begin{lem}\\label{lem:vertex}\nWe have $\\aff^{(t)} (v^{(s)} \\otimes 1 \\otimes 1) = 0$.\n\\end{lem}\n\n\\begin{proof}\nWe have $\\frg (v^{(s)} \\otimes 1 \\otimes 1) = (\\frg v^{(s)}) \\otimes 1 \\otimes 1 = 0$.\nTo see the action of $e_0 = f_{\\theta} \\otimes t$, consider the assignment\n\\[\n\t\\sum_{k \\in \\bbZ} (f_{\\theta} \\otimes s^k t) u^{-k} \\mapsto f_{\\theta} (u) \\otimes \\Delta_1(u) \\otimes \\tau.\n\\]\nExpand $\\Delta_1(u) = \\sum_{k \\geq 0} \\Delta_1^{(-k)} u^k$.\nThen the action of $e_0 = f_{\\theta} \\otimes t$ is given by $\\sum_{k \\geq 0} (f_{\\theta}\\otimes s^k) \\otimes \\Delta_1^{(-k)} \\otimes \\tau$.\nSince we have $(f_{\\theta}\\otimes s^k) v^{(s)} = 0$ for $k \\geq 0$, we have $e_0(v^{(s)} \\otimes 1 \\otimes 1)=0$.\nSimilarly the action of $f_0 = e_{\\theta} \\otimes t^{-1}$ is given by $\\sum_{k \\geq 0} (e_{\\theta}\\otimes s^k) \\otimes \\Delta_{-1}^{(-k)} \\otimes \\tau^{-1}$, hence it acts on $v^{(s)} \\otimes 1 \\otimes 1$ by $0$.\nWe have $c_t (v^{(s)} \\otimes 1 \\otimes 1) = 0$ and $d_t (v^{(s)} \\otimes 1 \\otimes 1) = 0$ by the definition of the action of 
$c_t$ and $d_t$.\n\\end{proof}\n\n\\subsection{Isomorphisms}\n\nWe define a $\\tor$-module $\\bbV$ by the pull-back of $\\bbV(0)$ via the automorphism $S^{-1}$, that is, $\\bbV = (S^{-1})^*\\bbV(0)$.\nDenote the vector of $\\bbV$ corresponding to $v^{(s)} \\otimes 1 \\otimes 1 \\in \\bbV(0)$ by $\\bfv$.\n\nThe action of $c(1,0)$ on $\\bbV$ corresponds to $\\tau^{-1}$ on $\\bbV(0)$ via $S^{-1}$ since $S^{-1}(c(1,0)) = c(0,-1)$.\nWe regard $\\bbV$ as a module over $A(\\Lambda_0)=\\bbC[z^{\\pm 1}]$ via $z \\mapsto c(1,0)$ and then $\\bbV$ becomes a free $A(\\Lambda_0)$-module by Proposition~\\ref{prop:freeness_vertex_rep}.\nWe put $\\bbV_a = \\bbV \\otimes_{A(\\Lambda_0)} \\bbC_a$ for $a \\in \\bbC^{\\times}$.\nThis $\\bbV_a$ is a $\\tor'$-module.\nThe character of $\\bbV_a$ is given as follows.\n\n\\begin{prop}\\label{prop:character_V}\nWe have $\\ch_p \\bbV_a = \\ch_p L(\\Lambda_0) \\displaystyle\\prod_{n > 0} \\dfrac{1}{1-p^n}$.\n\\end{prop}\n\n\\begin{proof}\nThe assertion obviously follows from the construction of the action of $\\tor$ on $\\bbV(0) = L(\\Lambda_0)^{(s)} \\otimes D \\otimes \\bbC[\\tau^{\\pm 1}]$. \n\\end{proof}\n\nLet us study the relation between the level one global Weyl module $\\glob(\\Lambda_0)$ and $\\bbV$. \n\n\\begin{lem}\\label{lem:relation}\nWe have \n\\[\n\th_{i,k} \\bfv = \\begin{cases} 0 & \\text{if } i \\in I, \\\\ z^k \\bfv & \\text{if } i=0 \\end{cases}\n\\]\nfor any $k \\in \\bbZ$.\t\n\\end{lem}\n\n\\begin{proof}\nWe have \n\\[\n\tS^{-1}(h_{i,k}) = \\begin{cases} h_i \\otimes t^{-k} & \\text{if } i \\in I, \\\\ s^{-1} t^{-k} ds - h_{\\theta} \\otimes t^{-k} & \\text{if } i=0. 
\\end{cases}\n\\]\nBy Lemma~\\ref{lem:vertex}, we have $(h_i \\otimes t^{-k}) (v^{(s)} \\otimes 1 \\otimes 1) = (h_{\\theta} \\otimes t^{-k}) (v^{(s)} \\otimes 1 \\otimes 1) =0$.\nSince we have $(s^{-1} t^{-k} ds) (v^{(s)} \\otimes 1 \\otimes 1) = \\tau^{-k} (v^{(s)} \\otimes 1 \\otimes 1)$ and $\\tau^{-1}$ corresponds to $z$, the assertion is proved.\n\\end{proof}\n\n\\begin{lem}\\label{lem:surjection}\nWe have a surjective homomorphism $\\glob(\\Lambda_0) \\to \\bbV$ of modules over both $\\tor$ and $A(\\Lambda_0)$.\n\\end{lem}\n\n\\begin{proof}\nThe equalities (\\ref{eq:Frenkel-Kac1new}), (\\ref{eq:Frenkel-Kac2new}), (\\ref{eq:Frenkel-Kac3new}) are equivalent to\n \\begin{gather*}\n\te_i \\bfv = 0 \\ (i \\in \\affI), \\\\\n\tc_t \\bfv = \\bfv, \\quad h_i \\bfv = 0 \\ (i \\in I),\\quad d_t \\bfv = 0,\\\\\n\tf_0^2 \\bfv = 0,\\quad f_i \\bfv = 0 \\ (i \\in I).\n\\end{gather*}\nMoreover we have \n\\begin{align*}\n\tc_s \\bfv &= S^{-1}(c_s)(v^{(s)} \\otimes 1 \\otimes 1) = c_t (v^{(s)} \\otimes 1 \\otimes 1) = 0,\\\\\n\td_s \\bfv &= S^{-1}(d_s)(v^{(s)} \\otimes 1 \\otimes 1) = d_t (v^{(s)} \\otimes 1 \\otimes 1) = 0\n\\end{align*}\nby Lemma~\\ref{lem:vertex}.\nWe need to check $e_{i,k} \\bfv = 0$ for $i \\in \\affI$ and $k \\in \\bbZ$.\nThis follows from $e_i \\bfv = 0$ and Lemma~\\ref{lem:relation}.\n\\end{proof}\n\nBy Lemma~\\ref{lem:surjection}, we have a surjective $\\tor'$-homomorphism $\\loc(\\Lambda_0,a) \\to \\bbV_a$ for every $a \\in \\bbC^{\\times}$. 
\nHence we have inequalities of the characters\n\\begin{equation}\n\t\\ch_p \\loc^+(\\Lambda_0,a) \\geq \\ch_p \\loc(\\Lambda_0,a) \\geq \\ch_p \\bbV_a \\label{eq:inequality}\n\\end{equation}\nby Proposition~\\ref{prop:character}.\n\n\\begin{thm}\\label{thm:main}\nWe have isomorphisms\n\\[\n\t\\glob(\\Lambda_0) \\stackrel{\\cong}{\\longrightarrow} \\bbV, \\qquad \\loc(\\Lambda_0,a) \\stackrel{\\cong}{\\longrightarrow} \\bbV_a\n\\]\nof modules over $\\tor$ and $\\tor'$ respectively.\n\\end{thm}\n\n\\begin{proof}\nFirst we prove the isomorphism $\\loc(\\Lambda_0,a) \\cong \\bbV_a$.\nWe have\n\\begin{equation}\n\t\\ch_p \\loc^+(\\Lambda_0,a) = \\ch_p W(\\Lambda_0) \\leq \\ch_p L(\\Lambda_0) \\prod_{n>0} \\dfrac{1}{1-p^n} = \\ch_p \\bbV_a \\label{eq:inequality2}\n\\end{equation}\nby Proposition~\\ref{prop:independent}, \\ref{prop:upper_bound}, \\ref{prop:character_V}.\nThen the inequalities (\\ref{eq:inequality}) and (\\ref{eq:inequality2}) imply $\\ch_p \\loc(\\Lambda_0,a) = \\ch_p \\bbV_a$.\nThis shows that the surjective homomorphism $\\loc(\\Lambda_0,a) \\to \\bbV_a$ is an isomorphism for every $a \\in \\bbC^{\\times}$.\nNext we prove the isomorphism $\\glob(\\Lambda_0) \\cong \\bbV$.\nSince $\\bbV$ is a free $A(\\Lambda_0)$-module, we can take a splitting of the exact sequence\n\\[\n\t0 \\to \\Ker \\to \\glob(\\Lambda_0) \\to \\bbV \\to 0\n\\]\nof $A(\\Lambda_0)$-modules.\nThe isomorphism $\\loc(\\Lambda_0,a) \\cong \\bbV_a$ implies $\\Ker \\otimes_{A(\\Lambda_0)} \\bbC_a = 0$ for every $a \\in \\bbC^{\\times}$.\nThen by Nakayama's lemma, we see that $\\Ker = 0$ and obtain the isomorphism $\\glob(\\Lambda_0) \\cong \\bbV$.\n\\end{proof}\n\n\\begin{cor}\\label{cor:character}\nWe have\n\\[\n\t\\ch_{p} \\loc(\\Lambda_0,a) = \\ch_{p} \\loc^+(\\Lambda_0,a) = \\ch_p L(\\Lambda_0) \\left( \\prod_{n>0} \\dfrac{1}{1-p^n} \\right)\n\\]\nfor $a \\in \\bbC^{\\times}$ and\n\\[\n\t\\ch_{p,q} W(\\Lambda_0) = \\ch_p L(\\Lambda_0) \\left( \\prod_{n>0} \\dfrac{1}{1-p^n q} 
\\right).\n\\]\n\\end{cor}\n\n\\begin{proof}\nThe equalities for the $p$-characters are verified in the proof of Theorem~\\ref{thm:main}.\nThe equality for the $(p,q)$-character follows from that for the $p$-character and Proposition~\\ref{prop:upper_bound}. \n\\end{proof}\n\n\\newcommand{\\etalchar}[1]{$^{#1}$}\n\\def\\cprime{$'$}\n\\providecommand{\\bysame}{\\leavevmode\\hbox to3em{\\hrulefill}\\thinspace}\n\\providecommand{\\MR}{\\relax\\ifhmode\\unskip\\space\\fi MR }\n\\providecommand{\\MRhref}[2]{%\n \\href{http:\/\/www.ams.org\/mathscinet-getitem?mr=#1}{#2}\n}\n\\providecommand{\\href}[2]{#2}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{introduction}\nRecently, significant public attention has been drawn to the consequences of achieving human-level \nartificial intelligence. While there have been small communities analyzing the long-term impact of AI \nand related technologies for decades, these forecasts were made before the many recent \nbreakthroughs that have dramatically accelerated the pace of research in areas as diverse as robotics, \ncomputer vision, and autonomous vehicles, to name just a few \\cite{bostrom2014superintelligence, \nshanahan2015technological, chalmers2010singularity}. \\\\\n\nMost researchers and industrialists view advances in artificial intelligence as having the potential to be \noverwhelmingly beneficial to humanity. Medicine, transportation, and fundamental scientific research \nare just some of the areas that are actively being transformed by advances in artificial intelligence. On \nthe other hand, issues of privacy and surveillance, access and inequality, or economics and policy are \nalso of utmost importance and are distinct from the specific technical challenges posed by most \ncutting-edge research problems \\cite{tegmark2015open, russell2015research}. 
\\\\\n\nIn the context of AI forecasting, one set of issues stands apart, namely, the consequences of artificial \nintelligence whose capacities vastly exceed that of human beings. Some researchers have argued that \nsuch a ``superintelligence'' poses distinct problems from the more modest AI systems described above. \nIn particular, the emerging discipline of AI safety has focused on issues related to the potential \nconsequences of mis-specifying goal structures for AI systems which have significant capacity to exert \ninfluence on the world. From this vantage point, the fundamental concern is that deviations from \n``human-compatible values'' in a superintelligent agent could have significantly detrimental \nconsequences \\cite{bostrom2014superintelligence}. \\\\\n\nOne strategy that has been advocated for addressing safety concerns related to superintelligence is \nOracle AI, that is, an AI system that only answers questions. In other words, an Oracle AI does not \ndirectly influence the world in any capacity except via the user of the system. Because an Oracle AI \ncannot directly take physical action except by answering questions posed by the system's operator, \nsome have argued that it may provide a way to bypass the immediate need for solving the ``value \nalignment problem'' and would itself be a powerful resource in enabling the safe design of autonomous, \ndeliberative superintelligent agents \\cite{armstrong2012thinking, bostrom2014superintelligence, \nfallenstein2015reflective, armstrong2017, armstrong2017oracle}. \\\\\n\nA weaker notion of the term oracle, what we call a \\emph{domain-specific oracle}, refers to a modular \ncomponent of a larger AI system that is queried for domain-specific tasks. 
In this article, we view \ncomputer algebra systems as primitive domain-specific oracles for mathematical \ncomputation which are likely to become quite powerful on the time horizons on which many expect \nsuperintelligent AI systems to be developed \\cite{muller2016future, 2017arXiv170508807G}. \nUnder the assumption that math oracles prove to be \nuseful in the long-term development of AI systems, addressing well-defined architectural \nproblems with CASs and their integration with interactive theorem provers provides a concrete \nset of research problems that align with long-term issues in AI safety. In addition, such systems may also\nbe useful in proving the functional correctness of other aspects of an AI architecture. In Section \n\\ref{metascience}, we briefly discuss the unique challenges in allocating resources for AI safety \nresearch. In Section \\ref{oracle-overview}, we briefly summarize the motivation for developing\noracles in the context of AI safety and give an overview of safety risks and control strategies \nwhich have been identified for superintelligent oracle AIs. \nIn Section \\ref{oracle} we analyze contemporary question answering systems and argue that \nin contrast to computer algebra systems, current consumer-oriented, NLP-based systems are poor \ncandidates for rigorous analysis as oracles. In Section \\ref{itp-cas}, we review the differences between \ntheorem provers and computer algebra systems, efforts at integrating the two, and known architectural \nproblems with CASs. We close with a list of additional research projects related to mathematical computation \nwhich may be of interest to scientists conducting research in AI safety. \n\n\\section{Metascience of AI Safety Research}\\label{metascience}\nFrom a resource allocation standpoint, AI safety poses a unique set of challenges. Few areas of \nacademic research operate on such long and potentially uncertain time horizons. 
This is not to say that \nacademia does not engage in long-term research. Research in quantum gravity, for example, is \napproaching nearly a century's worth of effort in theoretical physics \\cite{Rovelli:2008}. However, the \nkey difference between open-ended, fundamental research in the sciences or humanities and AI safety \nis the possibility of negative consequences, indeed significant ones, of key technological \nbreakthroughs taking place without corresponding advances in frameworks for safety \n\\cite{bostrom2014superintelligence, russell2016should}. \\\\\n\nThese issues have been controversial, largely due to disagreement over the time-horizons for achieving \nhuman-level AI and the subsequent consequences \\cite{muller2016future, 2017arXiv170508807G}. \nSpecifically, the notion of an ``intelligence explosion,'' whereby the intelligence of software systems \ndramatically increases due to their capacity to model and re-write their own source code, has yet to receive \nadequate scientific scrutiny and analysis \\cite{linstone2014singularity}. \\\\\n\nWe affirm the importance of AI safety research and also agree with those who have cautioned against \nproceeding down speculative lines of thinking that lack precision. \nOur perspective in this article is that it is \npossible to fruitfully discuss long-term issues related to AI safety while maintaining a connection to \npractical research problems. To some extent, our goal is similar in spirit to the widely discussed \nmanuscript ``Concrete Problems in AI Safety'' \\cite{amodei2016concrete}. However, we aim to be a bit \nbolder. While the authors of ``Concrete Problems'' state at the outset that their analysis will set \naside questions related to superintelligence, our goal is to explicitly tackle superintelligence-related \nsafety concerns. 
We believe that there are areas of contemporary research that overlap \nwith novel ideas and concepts that have arisen among researchers who have purely focused on \nanalyzing the consequences of AI systems whose capacities vastly exceed those of human beings. \\\\\n\nTo be clear, we do not claim that the strategy of searching for pre-existing research objectives that align\nwith the aims of superintelligence theory is sufficient to cover the full spectrum of issues identified by \nAI safety researchers. There is no doubt that the prospect of superintelligence raises entirely new \nissues that have no context in contemporary research. However, considering how young the field is, \nwe believe that the perspective adopted in this article is a down-to-earth and moderate stance to take \nwhile the field is in a critical growth phase and a new culture is being created. \\\\\n\nThis article focuses on one area of the AI safety landscape, Oracle AI. We identify a set of concrete software \nprojects that relate to more abstract, conceptual ideas from AI safety, to bridge the gap between \npractical contemporary challenges and longer term concerns which are of an uncertain time horizon. \nIn addition to providing concrete problems for researchers and engineers to tackle, we hope\nthis discussion will be a useful introduction to the concept of Oracle AI for newcomers to the subject. \nWe state at the outset that within the context of Oracle AI, our analysis is limited in scope to systems \nwhich perform mathematical computation, and not to oracles in general. Nonetheless, considering how \nlittle effort has been directed at the \nsuperintelligence control problem, we are confident that there is low-hanging fruit in addressing these \nmore general issues which are awaiting discovery. \n\n\\section{Brief Overview of Oracle AI}\\label{oracle-overview}\nAs described above, an Oracle AI is a system which only answers questions. 
Although the term has traditionally\nbeen used to denote fully general AI systems, we can also imagine oracles whose capabilities are more limited. \nThe latter systems are what we refer to as \\emph{domain-specific oracles}. AI safety researchers have \nargued that superintelligent oracles have the potential to help address safety concerns of autonomous, \ndeliberative superintelligent AI systems, as an Oracle AI can influence the world only through \nthe user of the system. Consequently, an Oracle AI itself may be in a position to assist researchers in\nsolving difficult technical and conceptual problems related to the design of independent, deliberative agents. \nYet superintelligent oracle AIs pose their own distinct set of safety concerns. \nThe problems identified in \\cite{armstrong2012thinking}, for example, presume a superintelligent AI \nsystem which, while limited in its ability to interact with the world, may nonetheless have the potential to \nnegatively impact the world by manipulating the user of the system. As Armstrong, Bostrom, and Sandberg write:\n\n\\begin{quote}\n{\\small\nWe are all vulnerable to the OAI's social engineering. An advanced OAI can literally offer us the world. \nGenuine altruism is no defence against this; it can offer a world free of poverty, hunger, war, disease or \ndeath, as well as all personal trinkets we could want. It could guess the weaknesses of each individual, \nand find the right arguments to convince us that granting it power or liberty is the moral and profitable \nthing to do. It will be able to play on rivalries, institutional weaknesses and ethical dilemmas with more \nvirtuosity than any human could. \\cite{armstrong2012thinking}\n}\n\\end{quote}\n\nGiven the potential dangers of human manipulation by an OAI, researchers have searched for control strategies\nto mitigate such risks. 
A few examples of control methods that have been proposed are:\n\\begin{itemize}\n\\item \\textbf{Physical security}- physically containing the OAI system\n\\item \\textbf{Limiting or eliminating external manipulators}- like the previous security measure, \nprecluding the OAI from physically manipulating the world in any direct manner\n\\item \\textbf{Epistemic capability control}- the OAI's output could be restricted even further beyond \nquestion answering, for example, restricting it to answer ``yes\/no'' questions\n\\item \\textbf{Virtual worlds}- the OAI could be run in a simulated world so that even if the system's \ncapabilities were expanded, its behavior could be observed and modeled. \n\\item \\textbf{Resetting the OAI after each question}- this would prevent the OAI from engaging in long-term \nsocial engineering by manipulating the answers to the users' questions\n\\end{itemize}\n\nAlthough the capacities of domain-specific oracles are more limited, they can nonetheless pose safety risks. \nArchitectural deficiencies in such oracles might be exploited by a larger AI system to \nmanipulate the human user. It could give answers which are difficult to verify and which allow the OAI to \nexecute complex and intricate plans unbeknownst to the user. Therefore, while flaws in domain-specific oracles are not\ninherently risky if used solely in their domain of applicability, they may very well be dangerous as part of \na larger system with more general capabilities. Though not a ``control strategy'' in the narrowest sense, \ncreating ``robust'' domain-specific oracles is an\nimportant objective in designing safe OAIs. Furthermore,\nensuring the robustness of domain-specific subsystems might mitigate the need for stronger control strategies,\nas the OAI would have fewer weaknesses to exploit. \\\\\n\nIt should go without saying that the arguments presented above are highly schematic and do not depend\non specific technologies. 
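Some of the control strategies above can be expressed, very schematically, in software. The following toy sketch (all class and function names are our own hypothetical inventions, and no claim is made that it captures the difficulty of containing a genuinely superintelligent system) illustrates two of the listed strategies: resetting the oracle after each question, and epistemic capability control via a yes/no restriction.

```python
# Toy sketch of two oracle control strategies; all names are hypothetical.

class CountingOracle:
    """Stand-in for a stateful oracle: it remembers past questions and
    could, in principle, condition its answers on that history."""
    def __init__(self):
        self.history = []

    def answer(self, question):
        self.history.append(question)
        return len(self.history)

def resetting_oracle(make_oracle):
    """Control strategy: build a fresh oracle for every question, so no
    internal state (hence no long-term plan) survives between queries."""
    def ask(question):
        return make_oracle().answer(question)
    return ask

def yes_no_only(ask):
    """Epistemic capability control: collapse any answer to 'yes'/'no'."""
    def restricted(question):
        return "yes" if ask(question) else "no"
    return restricted

ask = resetting_oracle(CountingOracle)
assert ask("q1") == 1 and ask("q2") == 1   # no state carries over
assert yes_no_only(ask)("q3") == "yes"
```

Trivial as it is, the sketch makes one point concrete: these control strategies are properties of the architecture wrapping the oracle, not of the oracle itself.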
To our knowledge, there is very limited work on translating analyses of \nsuperintelligent oracle AIs into the concrete language of modern artificial intelligence \n\cite{armstrong2016safely, armstrong2017, armstrong2017oracle}. Our goal in this manuscript is in this spirit, that is, to\nanchor schematic, philosophical arguments in practical, contemporary research. To do so, we will narrow our focus\nto the mathematical domain. In the remainder of the article, we will use the \nterm oracle in the more limited sense of a domain-specific subsystem, and in particular, oracles for performing \nmathematical computations. We hope that the analysis presented here will be of intrinsic value in \ndeveloping robust math oracles, as well as provide some intuition and context for identifying \nconcrete problems relevant to developing safe, superintelligent oracle AI systems. \n\n\section{Are there contemporary systems which qualify as oracles?}\label{oracle}\nThe obvious class of contemporary systems which would seem to qualify as oracles are question \nanswering systems (QASs). As we stated above, a basic criterion characterizing oracles is that their \nfundamental mode of interaction is answering questions posed by a user, or answering domain-specific queries \nas part of a larger AI system. \\\n\nContemporary QASs are largely aimed at using natural language processing techniques to answer \nquestions pertaining to useful facts about the world such as places, movies, historical figures, and so \non. An important point to make about QASs is the highly variable nature of the underlying technology. \nFor instance, IBM's original Watson system, which competed in Jeopardy, was developed prior to the \nrecent advances in deep learning which have fundamentally transformed areas ranging from computer \nvision, to speech recognition, to natural language processing \cite{ferrucci2010building}. 
In this \nparticular task, the system was nonetheless able to perform at a level beyond that of the most \naccomplished human participants. The introduction of ``info panes'' into popular search \nengines, on the other hand, has been based on more recent machine learning technology, and indeed, \nthese advances are also what power the latest iterations of the Watson system \n\cite{watson_upgrade}. On the other end of the spectrum is Wolfram $\vert$ Alpha, which is also a question \nanswering system, but which is architecturally centered around a large, curated repository of structured \ndata, rather than datasets of unstructured natural language \cite{wolfram_QAS}. \\\n\nWhile these systems are currently useful for humans in navigating the world, planning social outings, \nand arriving at quick and useful answers to ordinary questions, it is not clear that they will remain useful \nin quite the same capacity many years from now, or as standalone components of superintelligent AI \nsystems. Although the underlying techniques of deep learning or NLP are of fundamental interest in \ntheir own right, the fact that these systems are QASs at all seems to be more of an artifact of their utility for \nconsumers. \\\n\nAnother important observation about contemporary QASs is that much of their underlying NLP-based \narchitecture can be replaced by taking advantage of structured data, as the example of Wolfram $\vert$ Alpha \ndemonstrates. For the other NLP or machine learning based \nsystems, the underlying technology can be used as part of larger, semi-automated pipelines to turn \nunstructured data from textual sources into structured data. 
Once again, this fact simply underscores \nthat contemporary QASs are not particularly appealing model systems to analyze from the Oracle AI \nsafety perspective.\\footnote{We emphasize that our argument that\ncontemporary QASs are not good candidates for analysis as Oracle AIs is not an argument \nagainst the traditional formulation of Oracle AI as a tool for AI safety. We fully expect significant \nbreakthroughs to be made in advancing the theory and practice of oracle-based\ntechniques for AI safety and we hope that this manuscript will provide some motivation \nto pursue such research. Rather, our point is that when viewing\ncontemporary systems from the lens of superintelligence, there seems little reason to believe that \ncurrent NLP-based QASs will remain sufficiently architecturally stable to be used as standalone components \nin AI systems many years from now. On the other hand, there are certainly important \\emph{present-day} problems \nto examine when evaluating the \nbroader impact of QASs, such as bias in NLP systems, overgeneralization, and privacy, to name just a \nfew. Some of these issues overlap with the set of problems identified in \\cite{amodei2016concrete} as \nexamples of concrete problems in AI safety. In addition, we are beginning to see conferences \ndevoted to contemporary ethical issues raised by machine learning. See, for example, the workshop \n\\href{https:\/\/www.aclweb.org\/portal\/content\/first-workshop-ethics-natural-language-processing}{Ethics in \nNatural Language Processing}.}\n\n\\subsection{Computer Algebra and Domain-Specific Oracles for Mathematical Computation}\nThe question answering systems described above all rely on natural language processing to varying \ndegrees. In addition, their domain of applicability has tended towards ``ordinary'' day-to-day knowledge \nuseful to a wide array of consumers. Another type of question answering system is a computer algebra \nsystem (CAS). 
Computer algebra has traditionally referred to systems for computing specific results to \nspecific mathematical equations, for example, computing derivatives and integrals, group-theoretic \nquantities, etc. In a sense, we can think of computer algebra as a set of algorithms for performing what \nan applied mathematician or theoretical physicist might work out with paper and pencil. Indeed, some of \nthe early work in computer algebra came from quantum field theory---one of the first computer algebra \nsystems was Veltman's \emph{Schoonschip} for performing field-theoretic computations that led to the \ntheory of electroweak unification \cite{Schoonschip}. \\\n\nAs computer algebra systems have grown in popularity, their functionality has expanded substantially\nto cover a wide range of standard computations in mathematics and theoretical physics, including differentiation,\nintegration, matrix operations, manipulation of symbolic expressions, symbolic substitution, algebraic equation solving,\nlimit computation, and many others. Computer algebra systems typically run in a \texttt{read, evaluate, print} loop (\texttt{repl}), \n and in the research and education context, their popularity has also grown as a result of the notebook model pioneered\nby the \emph{Mathematica} system, allowing for computations in CASs to closely mimic the sequential, paper-and-pencil\nwork of mathematicians and theoretical physicists. \\\n\nIn assessing the long-term utility of CASs, it is important to note that there is little reason to believe that\ncomputer algebra will be subsumed by other branches of AI research such as machine learning. Indeed, \nrecent research has \ndemonstrated applications of machine learning to both computer algebra and theorem proving (which \nwe discuss in more detail below), via algorithm selection in the former case \cite{huang2016machine} \nand proof assistance in the latter \cite{irving2016deepmath, komendantskaya2012machine}. 
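The standard computations listed above (differentiation, integration, equation solving, limits, matrix operations) can be made concrete with a minimal sketch in the open-source SymPy library, which we use here purely for illustration; it stands in for any of the CASs discussed in this article:

```python
import sympy as sp

x = sp.symbols('x')

# Differentiation and integration
assert sp.diff(x * sp.sin(x), x) == x * sp.cos(x) + sp.sin(x)
assert sp.integrate(sp.cos(x), x) == sp.sin(x)

# Algebraic equation solving
assert sp.solve(x**2 - 5*x + 6, x) == [2, 3]

# Limit computation
assert sp.limit(sp.sin(x) / x, x, 0) == 1

# Matrix operations
assert sp.Matrix([[1, 2], [3, 4]]).det() == -2
```

In an interactive CAS session, each of these would be a single step of the read-evaluate-print loop, mirroring one line of a paper-and-pencil derivation.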
While \ncertainly not as visible as machine learning, computer algebra and theorem proving are very much \nactive and deep areas of research which are also likely to profit from advances in other fields of \nartificial intelligence, as opposed to being replaced by them \\cite{bundy_et_al:DR:2012:3731}. \nOn the time horizons on which we are likely to \nsee human-level artificial intelligence and beyond, we can expect that these systems will become quite \npowerful, and possess capabilities that \nmay be useful in the construction of more general AI systems. Therefore, it is worth examining such \nsystems from the perspective of AI \nsafety.\n\n\\subsection{Briefly Clarifying Nomenclature}\nBefore proceeding, we want to explicitly describe issues relating to nomenclature that have arisen in the \ndiscussion thus far, and state our choices for terminology. Given that the phrase ``Oracle AI'' has \nbecome common usage in the AI safety community, we will continue to use this phrase, with the first \nword capitalized, as well as the acronym OAI. Where clarification is needed, we may also use the full \nphrase ``superintelligent oracle AI,'' without capitalization. \\\\\n\nFor more modest use cases of the word oracle, we will either refer to ``domain-specific oracles,'' or state the \ndomain of knowledge where the oracle is applicable. We can, at the very least in the abstract, consider \nextending this terminology to other domains such as ``physics oracles,'' ``cell biology oracles,'' or \n``ethics oracles'' and so on. Therefore, the remainder of the article \nwill be concerned with safety and robustness issues in the design of ``math oracles.''\n\n\\section{Robust Computer Algebra and Integrated Theorem Proving}\\label{itp-cas}\n\\begin{quote}\n{\\small \\emph{Today we should consider as a standard feature much closer interaction between proof \nassistance and computer algebra software. 
Several areas can benefit from this, including specification \nof interfaces among components, certification of results and domains of applicability, justification of \noptimizations and, in the other direction, use of efficient algebra in proofs.}\\\\ -\n\textbf{Stephen Watt in \emph{On the future of computer algebra systems at the threshold of 2010}}}\n\end{quote}\n\nAs we described above, computer algebra systems can be thought of as question answering systems \nfor a subset of mathematics. A related set of systems are interactive proof assistants or interactive \ntheorem provers (ITPs). While ITPs are also systems for computer-assisted mathematics, they serve a \ndifferent mathematical context: computations in which one wishes to construct a proof of a general kind of \nstatement. In other words, rather than computing specific answers to specific questions, ITPs are used \nto show that candidate mathematical structures (or software systems) possess certain properties. \\ \n\nIn a sense, the \ndistinction between theorem proving and computer algebra should be viewed as a historical anomaly. \nFrom the perspective of philosophical and logical efforts in the early 20th century that led to the \n``mechanization of mathematics,'' the distinction between computing the $n^{th}$ Laguerre polynomial \nand constructing a proof by induction might have been viewed as rather artificial, although with the \nbenefit of hindsight we can see that the two types of tasks are quite different in practice \n\cite{beeson2004mechanization}. \\\n\nThe role of ITPs in the research world is very different from that of CASs. Whereas CASs allow researchers\nto perform difficult computations that would be impossible with paper and pencil, constructing proofs using \nITPs is often more difficult than even the most rigorous methods of pure mathematics. 
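The divide just described, computing the $n^{th}$ Laguerre polynomial versus constructing a proof by induction, is easy to exhibit. The "specific answer" side is a one-line CAS call, sketched here in SymPy (our illustrative choice, not a system discussed above):

```python
import sympy as sp

x = sp.symbols('x')

# A CAS computes the n-th Laguerre polynomial for a *specific* n in one call:
L2 = sp.laguerre(2, x)
assert sp.expand(L2 - (x**2/2 - 2*x + 1)) == 0

# By contrast, a *general* statement quantified over all n (e.g. a property
# of the Laguerre recurrence, proven by induction) lies outside this
# computational model: it calls for an interactive theorem prover.
```

The comment at the end marks exactly where the CAS model stops and the ITP model begins.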
In broad terms, the \noverhead of using ITPs to formalize theorems arises from the fact that proofs in these systems must proceed\nstrictly from a set of formalized axioms so that the system can verify each computation. Consequently, ITPs\n(and related systems, such as automatic theorem provers) are largely used for verifying properties of \nmission-critical software systems which require a high degree of assurance, or for hardware verification,\nwhere mistakes can lead to costly recalls \cite{seL4, kaivola2009replacing, fix2008fifteen, kern1999formal, Kropf}. \\\n\nAs the quotation above suggests, many academic researchers view the integration of interactive proof \nassistants and computer algebra systems as desirable, and there have been numerous efforts over the \nyears at exploring possible avenues for achieving this objective \cite{Ballarin, HOLCAS, Watt, \nTheorema} (a more complete list is given below). By integrating theorem proving with computer \nalgebra, we would be opening up a wealth of potentially interoperable algorithms that have to date \nremained largely unintegrated. To cite one such example, in \cite{MapleIsabelle}, the authors have \ndeveloped a framework for exchange of information between the Maple computer algebra system and \nthe Isabelle interactive theorem prover. They show a simple problem involving the proof of an \nelementary polynomial identity that could be solved with the combined system, but in neither system \nalone (see Fig. \ref{fig:maple_isabelle}). \\\n\n\begin{figure*}[h]\n\begin{center}\n\includegraphics[width=\textwidth]{maple_isabelle}\n\caption{\label{fig:maple_isabelle}Example of a polynomial identity proven by integrating the Maple \ncomputer algebra system with Isabelle. 
Maple's simplifier is used for expanding polynomials---a \npowerful complement to the theorem proving architecture of Isabelle which allows for the setup of a \nproof by induction.}\n\\end{center}\n\\end{figure*}\n\nWe cite this example to demonstrate how a simply stated elementary problem cannot be solved in \nexisting environments for either computer algebra or proof assistance. The computer algebra system \ndoes not have the capacity for structural induction and theorem provers generally have rather weak \nexpression simplifiers. There are numerous examples such as this one in the academic literature. \\\\\n\nAnother key difference between CASs and ITPs is the architectural soundness of the respective \nsystems. As we will discuss below, computer algebra systems have well-defined architectural \ndeficiencies, which while not a practical issue for the vast majority of use cases, pose problems for their \nintegration with theorem provers, which by their nature, are designed to be architecturally sound. In the \ncontext of superintelligent AI systems, the architectural problems of CASs are potential points of \nweakness that could be exploited for malicious purposes or simply lead to unintended and detrimental consequences. \nTherefore, we use the phrase ``robust \ncomputer algebra'' to refer to CASs which lack the problems \nthat have been identified in the research literature. In the section below, we combine the discussion \nof robust computer algebra and integration with interactive theorem provers, as there is a spectrum of \napproaches which address both of these issues to varying degrees. \n\n\\subsection{A Taxonomy of Approaches}\nThere are many possible avenues to tackle the integration of theorem provers with computer algebra \nsystems. 
We give four broad categories characterizing such integration efforts\footnote{This classification \nwas first described by Kaliszyk and Wiedijk \cite{HOLCAS} in a paper arguing for an architecture which \nwe list as the fourth category given above.}: \n\n\begin{enumerate}\n\item \textbf{Theorem provers built on top of computer algebra systems:} These include Analytica, \nTheorema, RedLog, and logical extensions to the Axiom system \cite{clarke1992analytica, Theorema, \ndolzmann1997redlog, jenks2013axiomtm, poll1998adding}.\n\item \textbf{Frameworks for mathematical exchange between the two systems:} This category includes \nMathML, OpenMath, OMSCS, MathScheme, and Logic Broker \cite{miner2005importance, \nbuswell2004open, calmet2004toward, carette2011mathscheme, armando2000towards}. \n\item \textbf{``Bridges'' or ``ad-hoc'' information exchange solutions:} The pairs of systems in this \ncategory include bridges combining PVS, HOL, or Isabelle with Maple, NuPRL with Weyl, Omega with \nMaple\/GAP, Isabelle with Summit, and most recently, Lean with \emph{Mathematica} \n\cite{MapleIsabelle, adams2001computer, harrison1998skeptic, \nballarin1995theorems, jackson1994exploring, siekmann2002proof, ballarin1999pragmatic, \nLeanMathematica2017}. The \nexample given above, bridging Isabelle and Maple, belongs to this category.\n\item \textbf{Embedding a computer algebra system inside a proof assistant:} This is the approach \ntaken by Kaliszyk and Wiedijk in the HOLCAS system. In their system, all expressions have precise \nsemantics, and the proof assistant proves the correctness of each simplification made by the computer \nalgebra system \cite{HOLCAS}.\n\end{enumerate}\n\nOne primary aspect of integration that differentiates these approaches is the degree of trust the \ntheorem prover places in the computer algebra system. 
Computer algebra systems give the false \nimpression of being monolithic systems with globally well-defined semantics. In reality, they are large \ncollections of algorithms which are neatly packaged into a unified interface. Consequently, there are \noften corner cases where the lack of precise semantics can lead to erroneous solutions. Consider the \nfollowing example: \n\n\begin{figure}[h]\n\begin{center}\n\frame{\includegraphics[scale=.32]{solve_macsyma}}\n\caption{\label{fig:solve-error-simp}Example of an incorrect solution to a simple polynomial equation by \na computer algebra system.}\n\end{center}\n\end{figure}\n\nThe system incorrectly gives 1 as a solution, even though the given expression has an indeterminate \nvalue for $x = 1$. This occurs because the expression is treated as a fraction of polynomials and is first \nsimplified before the solve operation is applied. In other words, the semantics relating \nthe solver module and the simplifier are unclear, which leads to an incorrect result. \\\n\nAnother simple example is the following integral:\n\begin{figure}[h]\n\begin{center}\n\includegraphics[scale=.70]{evaluation_substitution}\n\caption{\label{fig:solve-error-noncomm}A problem arising in symbolic integration due to the non-commutativity \nof evaluation and substitution.}\n\end{center}\n\end{figure}\n\nMaking the substitution $n = -1$ gives an indeterminate result, while it is clear by inspection that the \nsolution to the integral for $n = -1$ is simply $\ln(x)$. This belongs to a class of problems known as the \n\emph{specialization problem}, namely that expression evaluation and variable substitution do not \ncommute \cite{Ballarin}. So while we have seen above that theorem proving can benefit tremendously \nfrom the wealth of algorithms for expression simplification and mathematical knowledge in computer \nalgebra, there is the potential cost of compromising the reliability of the combined system. 
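The non-commutativity behind the integral example is easy to reproduce directly. The following SymPy sketch (again, an illustration of the phenomenon, not the system shown in the figures) demonstrates the specialization problem:

```python
import sympy as sp

x, n = sp.symbols('x n')

# The textbook antiderivative of x**n, valid only when n != -1:
naive = x**(n + 1) / (n + 1)

# Substituting n = -1 *after* integrating yields an undefined value (1/0):
assert naive.subs(n, -1) == sp.zoo   # zoo is SymPy's complex infinity

# Substituting n = -1 *before* integrating gives the correct answer:
assert sp.integrate(1/x, x) == sp.log(x)
```

Some modern systems mitigate this particular case by returning a piecewise antiderivative that carves out $n = -1$, but the underlying hazard, that evaluation and substitution need not commute, is generic.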
As a possible\napplication to current research in AI safety, consider the decision-theoretic research agenda \nfor the development of safe, superintelligent AI systems outlined in \cite{yudkowsky2013tiling, lavictoire2015introduction, \nbarasz2014robust, fallenstein2014problems, soares2015toward}. If we require formal guarantees of \ncorrectness at any point in a sequence of computations in which computer algebra is used, current systems \nwould be unable to provide the necessary framework for constructing such a proof.\n\n\subsubsection{Qualitatively Certified Computations} \nIn our taxonomy of approaches to bridging theorem provers with computer algebra, we described how a \nkey distinction was the degree of trust that the theorem prover places in the computer algebra system. \nFor instance, approaches which build theorem provers on top of computer algebra systems do not \naddress the architectural issues with CASs. They are integrative, but not more sound. On the other \nextreme, building a computer algebra system on top of a theorem prover allows for a degree of trust \nthat is on par with that of the theorem prover itself. However, this approach has the distinct \ndisadvantage that computer algebra systems represent many hundreds of man-years' worth of effort. \\\n\nThe more intermediate approaches involving common languages for symbolic exchange or ad-hoc bridges \nbring to light an important notion in the spectrum of provable safety, namely \nthe ability to assign probabilities for the correctness of computations. In \cite{garrabrant2016logical}, \nthe authors present an algorithm for assigning probabilities to any statement in a formal language. We \nmight ask what significantly weaker strategies with a similar goal might look like. \nInterfaces between theorem provers and computer algebra systems provide a concrete example where \nwe can ask a question along these lines. 
Fundamentally, in such an interface, the computer algebra \nsystem is the weaker link and should decrease our confidence in the final result. But by how much? \nFor instance, in the example given in Figure \ref{fig:maple_isabelle}, how should we revise our \nconfidence in the result knowing that polynomial simplification was conducted within a computer algebra \nsystem? \\\n\nIt is worth asking for simple answers to this question that do not require major theoretical advances to \nbe made. For instance, we might imagine curating information from computer algebra experts about \nknown weaknesses, and use this information to simply give a qualitative degree of confidence in a \ngiven result. Or, for example, in a repository of formal proofs generated using integrated systems, steps \nof the proof that require computer algebra can be flagged and also assigned a qualitative measure of \nuncertainty. \\\n\nThe relationship between this highly informal method of giving qualitative certification to computations \nand the formal algorithm developed in \cite{garrabrant2016logical} can be compared to the relationship between existing \ntechniques in the software industry for ensuring correctness. On the one hand, unit testing is a \ntheoretically trivial, yet quite powerful practice, something along the lines of automated checklists for \nsoftware. The complexities of modern software would be impossible to handle without extensive \nsoftware testing frameworks \cite{Beck2002, Osherove2013, maximilien2003assessing, \nerdogmus2005effectiveness, sarma2016unit}. On the other hand, formal verification can provide substantially stronger \nguarantees, yet is a major undertaking, and the correctness proofs are often significantly more \ndemanding to construct than the software itself. 
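The unit-testing end of this spectrum translates naturally to the CAS setting: one cheap (and admittedly incomplete) guard on a symbolic transformation is a numeric spot check, evaluating the expression before and after the transformation at sample points. A hypothetical SymPy sketch:

```python
import random
import sympy as sp

x = sp.symbols('x')

# A symbolic transformation whose output we want to sanity-check:
original = (x**2 - 1) / (x - 1)
simplified = sp.cancel(original)   # cancels the common factor, giving x + 1
assert simplified == x + 1

# Unit-test-style numeric spot checks at random sample points,
# steering clear of the removable singularity at x = 1:
for _ in range(10):
    v = random.uniform(2.0, 10.0)
    assert abs(float(original.subs(x, v)) - float(simplified.subs(x, v))) < 1e-9
```

Like unit tests, such checks can catch gross errors (including simplifier-induced failures of the kind shown earlier) but prove nothing; they sit at the opposite end of the assurance spectrum from formal verification.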
Consequently, as discussed in Section \ref{itp-cas},\nformal verification is much less frequently used in industry, typically only in exceptional circumstances \nwhere high guarantees of correctness are required, or for \nhardware verification \cite{seL4, kaivola2009replacing, fix2008fifteen, kern1999formal, Kropf}. \\\n\nIntegrated systems for computer algebra and theorem proving give rise to a quite interesting (and \nperhaps ironic) opportunity to pursue simple strategies for giving qualitative estimates for the \ncorrectness of a computation.\n\n\subsubsection{Logical Failures and Error Propagation}\nAs the examples described above demonstrate, errors in initial \ncalculations may very well propagate and give rise to nonsensical results. As AI systems capable of performing\nmathematical computation become increasingly sophisticated and embedded as part of design workflows\nfor science and engineering (beyond what we see today), we could imagine such errors being quite costly\nand difficult to debug. In the case of a \nsuperintelligent AI system, more concerning scenarios would be if systematic errors in computer \nalgebra could be exploited for adversarial purposes or if they led to unintentional accidents on a large scale.\\\n\nThe issue of error propagation is another example of a concrete context for pursuing simple strategies \nfor assigning qualitative measures of certainty to computations performed by integrated theorem \nproving \/ computer algebra systems. For instance, we may be less inclined to trust a result in which the \ncomputer algebra system was invoked early on in a computation as opposed to later. With curated data \nfrom computer algebra experts on the reliability or failure modes of various algorithms, we might also \nchain together these informal estimates to arrive at a single global qualitative estimate. 
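As a minimal sketch of what such bookkeeping might look like (the names, levels, and combination rule here are entirely our own hypothetical choices, not an existing framework), one could tag each step of a derivation with a qualitative confidence and chain the tags pessimistically:

```python
from dataclasses import dataclass

# Hypothetical qualitative confidence levels, ordered weakest to strongest.
LEVELS = {"low": 0, "medium": 1, "high": 2}

@dataclass
class Step:
    description: str
    uses_cas: bool      # did this step invoke the computer algebra system?
    confidence: str     # "low", "medium", or "high"

def overall_confidence(steps):
    # Chain the informal per-step estimates pessimistically:
    # the weakest link determines the global qualitative estimate.
    return min(steps, key=lambda s: LEVELS[s.confidence]).confidence

proof = [
    Step("set up induction in the ITP", uses_cas=False, confidence="high"),
    Step("expand polynomial via the CAS simplifier", uses_cas=True, confidence="medium"),
    Step("discharge the base case in the ITP kernel", uses_cas=False, confidence="high"),
]
assert overall_confidence(proof) == "medium"
```

A refinement along the lines suggested above would discount CAS-dependent steps more heavily when they occur early in a computation, since their errors have more opportunity to propagate.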
If multiple \nsystems were to be developed independently, or which were based on fundamentally different \narchitectures, we might also be significantly more confident in a result which could be verified by two \nseparate systems. \n\n\\subsubsection{Additional Topics}\nSome related ideas merit investigation in the broader context of mathematical computation:\n\\begin{itemize}\n\\item \\textbf{Integrating SMT solvers with interactive theorem provers:} Satisfiability modulo theories \n(SMT) solvers are an important element of automated reasoning and there have been efforts \nanalogous to those described above to bridge SMT solvers with interactive theorem provers \n\\cite{keller2013matter, armand2011modular}. \\\\\n\\item \\textbf{Identifying the most important \/ widely used algorithms in computer algebra:} Computer \nalgebra systems have grown to become massive collections of algorithms extending into domains well \noutside of the realm of mathematics. If the purely mathematical capacities of CASs prove to be \nuseful in future AI systems, it would be valuable to rank order algorithms by their popularity or \nimportance.\\\\\n\nOne approach would be to do basic textual analysis of the source code from GitHub or StackExchange. \nThis would also allow for more targeted efforts to directly address the issues with soundness in core \nalgorithms such as expression simplification or integration. In the context of the HOLCAS system \ndescribed above, for example, it would be valuable to have rough estimates for the number of man-hours \nrequired to implement a minimal CAS with the most widely used functionality on top of a theorem \nprover. \\\\\n\\item \\textbf{Proof checkers for integrated systems:}\nProof checkers are important tools in the landscape of formal verification and theorem proving. 
Indeed, \nas it is often much less computationally expensive to verify the correctness of a proof than to generate it \nfrom scratch, the availability of proof checkers for the widely used interactive theorem provers is one \nreason we can be confident in the correctness of formal proofs \n\\cite{harrison2006towards,pollack1998believe}. \\\\\n\nAs we described above, strategies for integrating computer algebra with theorem provers can \npotentially result in a combined system which is less trustworthy than the theorem prover alone. \nTherefore, the availability of proof checkers for combined systems would be a valuable resource in \nverifying proof correctness, and in certain mathematical domains, potentially provide an avenue for \nsurmounting the need to directly make the CAS itself more architecturally robust. \\\\\n\nThe development of integrated proof checkers is likely to be a substantial undertaking and require novel \narchitectures for integrating the core CAS and ITP systems distinct from what has been described \nabove. However, it is a largely unexplored topic that merits further investigation. \\\\\n\n\\item \\textbf{Analyzing scaling properties of algorithms for computer algebra and theorem proving as a \nfunction of hardware resources:}\nThe premise of the analysis presented above is that CASs (and integrated theorem proving) are likely to \nremain sufficiently architecturally stable and useful on a several decade time-horizon in the construction \nof AI \nsystems. On the other hand, as we argued earlier, it is much less clear that the same will be true of the \nmost visible, NLP-based, consumer-oriented question answering systems. To make these arguments \nmore rigorous, it would be valuable to develop quantitative predictions of what the capabilities will be of \nexisting algorithms for computer algebra and theorem proving when provided with substantially \nexpanded hardware resources. 
For instance, we might examine problems in mathematics or theoretical physics for \nwhich na\\\"{i}ve solutions in CASs are intractable with current resources, but which may be feasible with \nfuture hardware. \\\\\n\n\\item \\textbf{The cognitive science of computer algebra:}\nWhat role has computer algebra played in theoretical physics and mathematics? How has it influenced \nthe thinking process of researchers? Has computer algebra simply been a convenience that has shifted \nthe way problems are solved, or has it fundamentally enabled new problems to be solved that would \nhave been completely intractable otherwise? \\\\\n\nThe cognitive science of mathematical thought is a substantial topic which overlaps with many \nestablished areas of research \\cite{hardy1946psychology, dehaene2011number, drijvers2005computer, \ndrijvers2002learning, lakoff2000mathematics}. However, a systematic review of research in mathematics and theoretical \nphysics since the advent of computer algebra and its role in the mathematical thought process is an \nunderexplored topic. It would be an interesting avenue to pursue in understanding the role that CASs, \nITPs, and integrated systems may come to play in superintelligence, particularly in the case of neuromorphic\nsystems that have been modeled after human cognition. These questions also relate to \nunderstanding the scaling properties of CAS and theorem proving algorithms \nas well as cataloguing the most widely used algorithms in computer algebra. \n\n\\end{itemize}\n\n\\section{Conclusion}\nThe aim of this article has been to examine pre-existing research objectives in computer science and \nrelated disciplines which align with problems relevant to AI safety, thereby providing concrete, practical \ncontext for problems which are otherwise of a longer time horizon than most research. 
In particular, we \nfocused on the notion of ``Oracle AI'' as used in the AI safety community, and observed that the word \noracle has two meanings in the context of superintelligent AI systems. One usage refers to a \nsubsystem of a larger AI system queried for domain-specific tasks, and the other to superintelligent AI \nsystems restricted to only answer questions. \\\\\n\nWe examined contemporary question answering systems (QASs) and argued that due to their \narchitectural heterogeneity, consumer-oriented, NLP-based systems do not readily lend themselves to \nrigorous analysis from an AI safety perspective. On the other hand, we identified computer algebra \nsystems (CASs) as concrete, if primitive, examples of domain-specific oracles. We examined well-known architectural\ndeficiencies with CASs identified by the theorem proving community and argued that the integration of \ninteractive theorem provers (ITPs) with CASs, an objective that has been an area of research in the \nrespective communities for several decades, provides a set of research problems and practical software \nprojects related to the development of powerful and robust math oracles on a multi-decade time horizon. \nIndependent of their role as domain-specific oracles, such systems may also prove to be useful tools for \nAI safety researchers in proving the functional correctness of other components of an AI architecture. \nNatural choices of systems to use would be interfaces for the Wolfram Language, the most widely \nused computer algebra system, with one of the HOL family of theorem provers or Coq, \nboth of which have substantial repositories of formalized proofs \n\\cite{wolfram2015elementary, paulson1989foundation, paulson1994isabelle, bertot2013interactive}, \nor a more modern ITP such as Lean \\cite{de2015lean, LeanMathematica2017}. 
\\\n\nRather than representing a bold and profound new agenda, we view these projects as being\nconcrete and achievable goals that may pave the way to more substantial research directions. \nBecause the topics we have discussed have a long and rich academic history, there are a number \nof ``shovel-ready'' projects appropriate for students anywhere from undergraduates to PhD students \nand beyond. Good undergraduate research projects would probably start with some basic data science \nto catalogue core computer algebra algorithms by their usage and popularity. From there, it would be \nuseful to have an estimate of what certified implementations of these algorithms would entail, \nwhether formally verified implementations, or along the lines of Kaliszyk and Wiedijk's HOLCAS \nsystem where the CAS is built on top of a theorem prover. Also useful would \nbe a systematic study of the role that computer algebra has played in mathematics and theoretical physics. \nThis would have some interesting overlap with cognitive psychology, and these three projects \ntogether would make for an approachable undergraduate thesis, or a beginning project for a \ngraduate student. A solid PhD thesis devoted to the topic of Oracle AI might involve tackling \napproaches to oracles stemming from reinforcement learning (RL) \cite{armstrong2016safely, armstrong2017},\nas well as more advanced theorem proving and CAS-related topics such as investigating \nthe development of a hybrid architecture that would allow for proof-checking. \nA student who worked on these projects for several years would develop a unique \nskill set spanning philosophy, machine learning, theorem proving, and computer algebra. \\\n\nIn the context of superintelligent oracle AIs which may possess the ability to manipulate\na human user, we differentiate between addressing architectural \nor algorithmic deficiencies in subsystems versus general control methods or containment strategies. 
\nGiven that strong mathematical capabilities\nare likely to be useful in the construction of more general AI systems, designing robust CASs \n(and any other domain-specific oracle)\nis an important counterpart to general control strategies, as the top-level AI system will have fewer loopholes to exploit. \nControlling OAIs poses a distinct set of challenges for which concrete mathematical \nanalysis is in its infancy \\cite{armstrong2016safely, armstrong2017, armstrong2017oracle}. Nonetheless, considering \nhow little attention has been given to the superintelligence control problem in general, we are optimistic \nabout the potential to translate the high-level analyses of OAIs that have arisen in the AI safety \ncommunity into the mathematical and software frameworks of modern artificial intelligence. \n\n\\section*{Acknowledgements}\nWe would like to thank Stuart Armstrong, David Kristoffersson, Marcello Herreshoff, \nMiles Brundage, Eric Drexler, Cristian Calude, and several anonymous reviewers \nfor insightful discussions and feedback on the manuscript. We would also like to thank\nthe guest editors of \\emph{Informatica}, Ryan Carey, Matthijs Maas, Nell Watson,\nand Roman Yampolskiy, for organizing this special issue. \n\n\\bibliographystyle{ieeetr}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}