{"text":"\n\\section{Introduction}\n\\label{sec:introduction}\nToday we know of almost 1800 validated exoplanets and more than 4000 exoplanet \ncandidates. Among these, the transiting exoplanets (TEPs) are essential in our \nexploration and understanding of the physical properties of exoplanets. While radial\nvelocity observations alone only allow us to estimate the minimum mass of a planet, we \ncan combine them with transit observations for a more comprehensive study of the physical \nproperties of the planet. \nWhen a planet transits, we can determine the inclination of the orbit and the radius of the\nplanet, allowing us to break the mass degeneracy and, along with the mass, determine\nthe mean density of the planet. The mean density of a planet offers us insight into its \ninterior composition, and although there is an inherent degeneracy \narising from the fact that planets of different compositions can have identical masses\nand radii, this information allows us to map the diversity and distribution of exoplanets\nand even put constraints on models of planetary structure and formation theories.\n\nThe occurrence rate of hot Jupiters in the Solar neighbourhood is around 1\\% \n(\\citealt{udrysantos:2007}; \\citealt{wright:2012}). With a transit probability\nof about $\\sim$10$\\%$, roughly one thousand stars need to be monitored in \nphotometry to find just a single hot Jupiter. Therefore, the majority of the known \ntransiting hot Jupiters have been discovered by photometric wide field surveys \ntargeting tens of thousands of stars per night. \n\nIn this paper we present the discovery of HAT-P-55b{} by the Hungarian-made \nAutomated Telescope Network (HATNet; see \\citealt{bakos:2004:hatnet}), a \nnetwork of six small fully automated wide field telescopes of which four \nare located at the Fred Lawrence Whipple Observatory in Arizona,\nand two are located at the Mauna Kea Observatory in Hawaii. Since HATNet \nsaw first light in 2003, it has searched for TEPs around bright stars (V $\\lesssim$ 13) \ncovering about 37\\% of the Northern sky, and discovered approximately 25\\%\nof the known transiting hot Jupiters. \n\nThe layout of the paper is as follows. In Section 2 we present the different\nphotometric and spectroscopic observations that lead to the detection and\ncharacterisation of HAT-P-55b{}. In Section 3 we derive the stellar and planetary\nparameters. Finally, we discuss the characteristics of HAT-P-55b{} in Section 4. \n\n\\section{Observations}\n\\label{sec:obs}\nThe general observational procedure used by HATNet to\ndiscover TEPs has been described in detail in previous\npapers (e.g. \\citealt{bakos:2010:hat11}; \\citealt{latham:2009:hat8}).\nIn this section we present the specific details for the discovery and\nfollow-up observations of HAT-P-55b{}.\n\n\\subsection{Photometry}\n\\label{sec:photometry}\n\n\\subsubsection{Photometric detection}\n\\label{sec:detection}\nHAT-P-55b{} was initially identified as a candidate transiting exoplanet \nbased on photometric observations made by the HATNet survey \n\\citep{bakos:2004:hatnet} in 2011. The observations of \nHAT-P-55{} were made on nights between February and August \nwith the HAT-5 telescope at the Fred Lawrence Whipple \nObservatory (FLWO) in Arizona, and on nights between May and August \nwith the HAT-8 telescope at Mauna Kea Observatory in Hawaii. \nBoth telescopes used a Sloan \\band{r} filter. 
\begin{figure}[h]
\plotone{fig1.eps}
\caption[]{
 HATNet light curve{} of HAT-P-55\ phase folded with the transit period.
 The top panel shows the unbinned light curve,
 while the bottom panel shows the region zoomed-in on the transit, with
 dark filled circles for the light curve binned in phase with a
 binsize of 0.002. The solid line represents the best-fit light
 curve model.
\label{fig:hatnet}}
\end{figure}

\subsubsection{Photometric follow-up}
\label{sec:phfu}
We performed photometric follow-up observations of HAT-P-55{}
using the KeplerCam CCD camera on the {1.2 m} telescope at
the FLWO, observing a transit ingress on the night of 23 May 2013,
and a full transit on the night of 7 April 2014. Both transits were
observed using a Sloan $i$-band filter. For the first event
we obtained 230 images with a median cadence of 64s, and
for the second event we obtained 258 images with a median
cadence of 67s.

The results were reduced to light curves following the procedure of
\citet{bakos:2010:hat11}, and EPD and TFA were performed to
remove trends simultaneously with the light curve modelling. The
individual photometric follow-up measurements for HAT-P-55{}
are listed in Table~\ref{tab:phfu}, and the folded light curves
together with our best-fit transit light curve model are presented
in Figure~\ref{fig:lc}.

Subtracting the transit signal from the HATNet light curve, we
used the BLS method to search for additional transit signals and
found none. A Discrete Fourier Transform also revealed no other
periodic signals in the data.

\begin{figure}[]
\plotone{fig2.eps}
\caption{
 Unbinned transit light curves{} for HAT-P-55, acquired with KeplerCam at
 the \mbox{FLWO 1.2\,m}{} telescope. The light curves have been EPD- and
 TFA-processed, as described in \citet{bakos:2010:hat11}.
 The solid lines represent the best fit from the global
 modeling described in \refsecl{analysis}. Residuals from the fit
 are displayed at the bottom of the figure.
 The error bars
 represent the photon and background shot noise, plus the readout
 noise.
}
\label{fig:lc}
\end{figure}

\ifthenelse{\boolean{emulateapj}}{
  \begin{deluxetable*}{lrrrrr}
}{
  \begin{deluxetable}{lrrrrr}
}
\tablewidth{0pc}
\tablecaption{
  Differential photometry of
  HAT-P-55\label{tab:phfu}.
}
\tablehead{
  \colhead{BJD\tablenotemark{a}} &
  \colhead{Mag\tablenotemark{b}} &
  \colhead{\ensuremath{\sigma_{\rm Mag}}} &
  \colhead{Mag(orig)\tablenotemark{c}} &
  \colhead{Filter} &
  \colhead{Instrument} \\
  \colhead{\hbox{~~~~(2,400,000$+$)~~~~}} &
  \colhead{} &
  \colhead{} &
  \colhead{} &
  \colhead{} &
  \colhead{}
}
\startdata
\input{phfu_tab_short.tex}
\enddata
\tablenotetext{a}{
  Barycentric Julian Date calculated directly from UTC, {\em
  without} correction for leap seconds.
}
\tablenotetext{b}{
  The out-of-transit level has been subtracted. These magnitudes
  have been subjected to the EPD and TFA procedures, carried out
  simultaneously with the transit fit for the follow-up data. For
  HATNet this filtering was applied {\em before} fitting for the
  transit.
}
\tablenotetext{c}{
  Raw magnitude values after correction using comparison stars, but
  without application of the EPD and TFA procedures. This is only
  reported for the follow-up light curves.
}
\tablecomments{
  This table is available in a machine-readable form in the online
  journal. A portion is shown here for guidance regarding its form
  and content.
}
\ifthenelse{\boolean{emulateapj}}{
  \end{deluxetable*}
}{
  \end{deluxetable}
}

\ifthenelse{\boolean{emulateapj}}{
  \begin{deluxetable*}{lrrrrrr}
}{
  \begin{deluxetable}{lrrrrrr}
}
\tablewidth{0pc}
\tablecaption{
  Relative radial velocities, and bisector span measurements of HAT-P-55.
  \label{tab:rvs}
}
\tablehead{
  \colhead{BJD\tablenotemark{a}} &
  \colhead{RV\tablenotemark{b}} &
  \colhead{\ensuremath{\sigma_{\rm RV}}\tablenotemark{c}} &
  \colhead{BS} &
  \colhead{\ensuremath{\sigma_{\rm BS}}} &
  \colhead{Phase} &
  \colhead{Instrument}\\
  \colhead{\hbox{(2,456,000$+$)}} &
  \colhead{(\ensuremath{\rm m\,s^{-1}})} &
  \colhead{(\ensuremath{\rm m\,s^{-1}})} &
  \colhead{(\ensuremath{\rm m\,s^{-1}})} &
  \colhead{(\ensuremath{\rm m\,s^{-1}})} &
  \colhead{} &
  \colhead{}
}
\startdata
\input{rvtable.tex}
\enddata
\tablenotetext{a}{
  Barycentric Julian Date calculated directly from UTC, {\em
  without} correction for leap seconds.
}
\tablenotetext{b}{
  The zero-point of these velocities is arbitrary. An overall offset
  $\gamma_{\rm rel}$ fitted to these velocities in \refsecl{analysis}
  has {\em not} been subtracted.
}
\tablenotetext{c}{
  Internal errors excluding the component of astrophysical jitter
  considered in \refsecl{analysis}.
}
\ifthenelse{\boolean{emulateapj}}{
  \end{deluxetable*}
}{
  \end{deluxetable}
}

\subsection{Spectroscopy}
\label{sec:hispec}
We performed spectroscopic follow-up observations of HAT-P-55{}
to rule out false positives and to determine the RV variations
and stellar parameters. Initial reconnaissance observations were
carried out with the Tillinghast Reflector Echelle Spectrograph (TRES;
\citealt{furesz:2008}) at the FLWO. We obtained 2 spectra near opposite
quadratures on the nights of 4 and 31 October 2012.
Using the Stellar Parameters Classification
method (SPC; see \citealt{buchhave:2012}), we determined the initial RV measurements
and stellar parameters. We found a mean absolute RV of $-9.42$ km s$^{-1}$
with an rms of 48 m s$^{-1}$, which is consistent with no detectable
RV variation. The stellar parameters, including the effective temperature
\ensuremath{T_{\rm eff\star}} = 5800 $\pm$ 50 K, surface gravity \ensuremath{\log{g_{\star}}} = 4.5 $\pm$ 0.1
(log cgs) and projected rotational velocity \ensuremath{v \sin{i}} = 5.0 $\pm$ 0.4 km s$^{-1}$,
correspond to those of a G2 dwarf.

High-resolution spectroscopic observations were then carried out
with the SOPHIE spectrograph mounted on the 1.93 m
telescope at Observatoire de Haute-Provence (OHP)
\citep{perruchot:2011, bouchy:2013}, and with the FIES spectrograph
mounted on the 2.6 m Nordic Optical Telescope \citep{djupvik:2010}. We
obtained 6 SOPHIE spectra on nights between 3 June and 12 June 2013,
and 10 FIES spectra on nights between 15 May and 26 August 2013.

We reduced and extracted the spectra and derived radial velocities
and spectral line bisector span (BS) measurements following the
method of \citet{boisse:2012:hat4243} for the SOPHIE data and the
method of \citet{buchhave:2010:hat16} for the FIES data. The final RV
data and their errors are listed for both instruments in
Table~\ref{tab:rvs}, and the folded RV data together with our best-fit
orbit and corresponding residuals and bisectors are presented in
Figure~\ref{fig:rvbis}. To avoid underestimating the BS uncertainties
we base them directly on the RV uncertainties, setting them equal to
twice the RV uncertainties. At first glance there does seem to be a
slight hint of variation of the BSs in phase with the RVs, suggesting
there might be a blend. This is not the case, however, as we will show
in our detailed blend analysis in Section 3.

We applied the SPC method to the FIES spectra to
determine the final spectroscopic parameters of HAT-P-55{}. The
values were calculated using a weighted mean, taking into account
the cross correlation function (CCF) peak height. The results are
shown in Table~\ref{tab:stellar}.

\setcounter{planetcounter}{1}
\begin{figure} [h]
\plotone{fig3.eps}
\caption{
  {\em Top panel:} RV measurements from NOT~2.6\,m/FIES (filled
  circles) and OHP~1.93\,m/SOPHIE (open triangles) for
  \hbox{HAT-P-55} shown as a function of orbital phase, along with
  our best-fit circular model (solid line; see
  \reftabl{planetparam}). Zero phase corresponds to the time of
  mid-transit. The center-of-mass velocity has been subtracted.
  {\em Second panel:} RV residuals from our best-fit circular
  model. The error bars include a ``jitter'' component
  (\hatcurRVjitterA\,\ensuremath{\rm m\,s^{-1}}\ and \hatcurRVjitterB\,\ensuremath{\rm m\,s^{-1}}\ for FIES and
  SOPHIE respectively) added in quadrature to the formal errors (see
  \refsecl{hispec}). The symbols are as in the upper panel.
  {\em Third panel:} Bisector spans (BS), adjusted to have a median of 0.
  {\em Bottom panel:} RV residuals from our best-fit circular model vs. BS.
  There is no sign of correlation.
  Note the different vertical scales of the panels.
}
\label{fig:rvbis}
\end{figure}

\section{Analysis}
\label{sec:analysis}
In order to rule out the possibility that HAT-P-55{} is a blended stellar
eclipsing binary system, and not a transiting planet system, we
carried out a blend analysis following \cite{hartman:2012:hat39hat41}.
We find that a single star with a transiting planet fits the light curves
and catalog photometry better than models involving a stellar eclipsing
binary blended with light from a third star. While it is possible to marginally
fit the photometry using a G star eclipsed by a late M dwarf that is blended
with another bright G star, simulated spectra for this scenario are obviously
composite and show large (multiple \ensuremath{\rm km\,s^{-1}}) bisector span and RV variations
that are inconsistent with the observations. Based on this analysis we
conclude that HAT-P-55{} is not a blended stellar eclipsing binary system, and
is instead best explained as a transiting planet system. We also consider
the possibility that HAT-P-55{} is a planetary system with a low-mass stellar
companion that has not been spatially resolved. The constraint on this
scenario comes from the catalog photometric measurements, based on
which we can exclude a physical companion star with a mass greater
than $0.7$\,\ensuremath{M_\sun}. Any companion would dilute both the photometric transit
and the radial velocity orbit. The maximum dilution allowed by the photometry
would increase the planetary radius by $\sim$15\%.

We analyzed the system following the procedure of \citet{bakos:2010:hat11}
with modifications described in \cite{hartman:2012:hat39hat41}. In short, we
(1) determined the stellar atmospheric parameters of the
host star HAT-P-55{} by applying the SPC method to the FIES
spectra; (2) used a Differential Evolution Markov-Chain Monte Carlo
procedure to simultaneously model the RVs and the light curves, keeping
the limb darkening coefficients fixed to those of the \cite{claret:2004} tabulations;
(3) used the spectroscopically inferred effective temperatures
and metallicities of the star, the stellar densities determined from the light
curve modeling, and the Yonsei-Yale theoretical stellar evolution models
\citep{yi:2001} to determine the stellar mass, radius and age, as well as
the planetary parameters (e.g. mass and radius) which depend on the
stellar values (Figure~\ref{fig:iso}).

\begin{figure}[h]
\plotone{fig4.eps}
\caption[]{
  Comparison between the measured values of \ensuremath{T_{\rm eff\star}}\ and
  \ensuremath{\rho_\star}\ (from SPC applied to the FIES spectra, and from our
  modeling of the light curves and RV data, respectively), and the
  Y$^{2}$ model isochrones from \citet{yi:2001}. The best-fit
  values, and approximate 1$\sigma$ and 2$\sigma$ confidence
  ellipsoids are shown. The Y$^{2}$ isochrones are shown for ages of
  0.2\,Gyr, and 1.0 to 14.0\,Gyr in 1\,Gyr increments.
\label{fig:iso}}
\end{figure}
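For context, the stellar density used here as a luminosity indicator follows
directly from the transit observables through Kepler's third law; assuming
$\ensuremath{M_{p}} \ll \ensuremath{M_\star}$ and a circular orbit,
\begin{equation}
\ensuremath{\rho_\star} \simeq \frac{3\pi}{G P^{2}} \left( \ensuremath{a/\rstar} \right)^{3},
\end{equation}
so the period $P$ and the purely photometric quantity \ensuremath{a/\rstar}\
constrain \ensuremath{\rho_\star}\ independently of the spectroscopy.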
\ifthenelse{\boolean{emulateapj}}{
  \begin{deluxetable*}{lcr}
}{
  \begin{deluxetable}{lcr}
}
\tablewidth{0pc}
\tabletypesize{\scriptsize}
\tablecaption{
  Stellar Parameters for HAT-P-55{}
  \label{tab:stellar}
}
\tablehead{
  \multicolumn{1}{c}{~~~~~~~~Parameter~~~~~~~~} &
  \multicolumn{1}{c}{Value} &
  \multicolumn{1}{c}{Source}
}
\startdata
\noalign{\vskip -3pt}
\sidehead{Identifying Information}
~~~~R.A. (h:m:s) & \hatcurCCra{} & 2MASS\\
~~~~Dec. (d:m:s) & \hatcurCCdec{} & 2MASS\\
~~~~GSC ID & \hatcurCCgsc{} & GSC\\
~~~~2MASS ID & \hatcurCCtwomass{} & 2MASS\\
~~~~HTR ID & HTR 287-004 & HATNet\\
\sidehead{Spectroscopic properties}
~~~~$\ensuremath{T_{\rm eff\star}}$ (K)\dotfill & \ifthenelse{\equal{\hatcurSMEversion}{i}}{\hatcurSMEiteff}{\hatcurSMEiiteff}{} & SPC \tablenotemark{a}\\
~~~~$\ensuremath{\rm [Fe/H]}$\dotfill & \ifthenelse{\equal{\hatcurSMEversion}{i}}{\hatcurSMEizfeh}{\hatcurSMEiizfeh}{} & SPC \\
~~~~$\ensuremath{v \sin{i}}$ (\ensuremath{\rm km\,s^{-1}})\dotfill & \ifthenelse{\equal{\hatcurSMEversion}{i}}{\hatcurSMEivsin}{\hatcurSMEiivsin}{} & SPC \\
~~~~$\gamma_{\rm RV}$ (\ensuremath{\rm km\,s^{-1}})\dotfill& \hatcurTRESgamma{} & TRES \\
\sidehead{Photometric properties}
~~~~$B$ (mag)\dotfill & \hatcurCCtassmB{} & APASS \\
~~~~$V$ (mag)\dotfill & \hatcurCCtassmv{} & APASS \\
~~~~$I$ (mag)\dotfill & \hatcurCCtassmI{} & TASS \\
~~~~$g$ (mag)\dotfill & \hatcurCCtassmg{} & APASS \\
~~~~$r$ (mag)\dotfill & \hatcurCCtassmr{} & APASS \\
~~~~$i$ (mag)\dotfill & \hatcurCCtassmi{} & APASS \\
~~~~$J$ (mag)\dotfill & \hatcurCCtwomassJmag{} & 2MASS \\
~~~~$H$ (mag)\dotfill & \hatcurCCtwomassHmag{} & 2MASS \\
~~~~$K_s$ (mag)\dotfill & \hatcurCCtwomassKmag{} & 2MASS \\
\sidehead{Derived properties}
~~~~$\ensuremath{M_\star}$ ($\ensuremath{M_\sun}$)\dotfill & \hatcurISOmlong{} & Isochrones+\rhostar{}+SPC \tablenotemark{b}\\
~~~~$\ensuremath{R_\star}$ ($\ensuremath{R_\sun}$)\dotfill & \hatcurISOrlong{} & Isochrones+\rhostar{}+SPC \\
~~~~$\ensuremath{\log{g_{\star}}}$ (cgs)\dotfill & \hatcurISOlogg{} & Isochrones+\rhostar{}+SPC \\
~~~~$\ensuremath{L_\star}$ ($\ensuremath{L_\sun}$)\dotfill & \hatcurISOlum{} & Isochrones+\rhostar{}+SPC \\
~~~~$M_V$ (mag)\dotfill & \hatcurISOmv{} & Isochrones+\rhostar{}+SPC \\
~~~~$M_K$ (mag,ESO{})\dotfill& \hatcurISOMK{} & Isochrones+\rhostar{}+SPC \\
~~~~Age (Gyr)\dotfill & \hatcurISOage{} & Isochrones+\rhostar{}+SPC \\
~~~~$A_{V}$ (mag) \tablenotemark{c}\dotfill & \hatcurXAv{} & Isochrones+\rhostar{}+SPC\\
~~~~Distance (pc)\dotfill & \hatcurXdistred{} & Isochrones+\rhostar{}+SPC\\
~~~~$\log{R'_\textrm{HK}}$\dotfill & $-5.0 \pm 0.1$ & Boisse et al. 2010\\
\enddata
\tablenotetext{a}{
  SPC = ``Stellar Parameter Classification'' method based on
  cross-correlating high-resolution spectra against synthetic
  templates \citep{buchhave:2012}. These parameters rely primarily
  on SPC, but have a small dependence also on the iterative analysis
  incorporating the isochrone search and global modeling of the
  data, as described in the text.
}
\tablenotetext{b}{
  Isochrones+\rhostar{}+SPC = Based on the Y$^{2}$ isochrones
  \citep{yi:2001}, the stellar density used as a luminosity indicator, and the
  SPC results.
}
\tablenotetext{c}{ Total \band{V} extinction to the star determined
  by comparing the catalog broad-band photometry listed in the table
  to the expected magnitudes from the
  Isochrones+\rhostar{}+SPC model for the star. We use the
  \citet{cardelli:1989} extinction law.
}
\ifthenelse{\boolean{emulateapj}}{
  \end{deluxetable*}
}{
  \end{deluxetable}
}
\ifthenelse{\boolean{emulateapj}}{
  \begin{deluxetable*}{lc}
}{
  \begin{deluxetable}{lc}
}
\tabletypesize{\scriptsize}
\tablecaption{Parameters for the transiting planet HAT-P-55b{}.\label{tab:planetparam}}
\tablehead{
  \multicolumn{1}{c}{~~~~~~~~Parameter~~~~~~~~} &
  \multicolumn{1}{c}{Value \tablenotemark{a}}
}
\startdata
\noalign{\vskip -3pt}
\sidehead{Light curve{} parameters}
~~~$P$ (days) \dotfill & $\hatcurLCP{}$ \\
~~~$T_c$ (${\rm BJD}$)
      \tablenotemark{b} \dotfill & $\hatcurLCT{}$ \\
~~~$T_{14}$ (days)
      \tablenotemark{b} \dotfill & $\hatcurLCdur{}$ \\
~~~$T_{12} = T_{34}$ (days)
      \tablenotemark{b} \dotfill & $\hatcurLCingdur{}$ \\
~~~$\ensuremath{a/\rstar}$ \dotfill & $\hatcurPPar{}$ \\
~~~$\ensuremath{\zeta/\rstar}$ \tablenotemark{c} \dotfill & $\hatcurLCzeta{}$\phn \\
~~~$\ensuremath{R_{p}}/\ensuremath{R_\star}$ \dotfill & $\hatcurLCrprstar{}$ \\
~~~$b^2$ \dotfill & $\hatcurLCbsq{}$ \\
~~~$b \equiv a \cos i/\ensuremath{R_\star}$
      \dotfill & $\hatcurLCimp{}$ \\
~~~$i$ (deg) \dotfill & $\hatcurPPi{}$\phn \\

\sidehead{Limb-darkening coefficients \tablenotemark{d}}
~~~$c_{1,i}$ (linear term) \dotfill & $\hatcurLBii{}$ \\
~~~$c_{2,i}$ (quadratic term) \dotfill & $\hatcurLBiii{}$ \\
~~~$c_{1,r}$ \dotfill & $\hatcurLBir{}$ \\
~~~$c_{2,r}$ \dotfill & $\hatcurLBiir{}$ \\

\sidehead{RV parameters}
~~~$K$ (\ensuremath{\rm m\,s^{-1}}) \dotfill & $\hatcurRVK{}$\phn\phn \\
~~~$e$ \tablenotemark{e} \dotfill & $\ensuremath{<0.139}{}$ \\
~~~RV jitter NOT~2.6\,m/FIES (\ensuremath{\rm m\,s^{-1}}) \tablenotemark{f} \dotfill & \hatcurRVjitterA{} \\
~~~RV jitter OHP~1.93\,m/SOPHIE (\ensuremath{\rm m\,s^{-1}}) \dotfill & \hatcurRVjitterB{} \\

\sidehead{Planetary parameters}
~~~$\ensuremath{M_{p}}$ ($\ensuremath{M_{\rm J}}$) \dotfill & $\hatcurPPmlong{}$ \\
~~~$\ensuremath{R_{p}}$ ($\ensuremath{R_{\rm J}}$) \dotfill & $\hatcurPPrlong{}$ \\
~~~$C(\ensuremath{M_{p}},\ensuremath{R_{p}})$
      \tablenotemark{g} \dotfill & $\hatcurPPmrcorr{}$ \\
~~~$\ensuremath{\rho_{p}}$ (\ensuremath{\rm g\,cm^{-3}}) \dotfill & $\hatcurPPrho{}$ \\
~~~$\log g_p$ (cgs) \dotfill & $\hatcurPPlogg{}$ \\
~~~$a$ (AU) \dotfill & $\hatcurPParel{}$ \\
~~~$T_{\rm eq}$ (K) \tablenotemark{h} \dotfill & $\hatcurPPteff{}$ \\
~~~$\Theta$ \tablenotemark{i} \dotfill & $\hatcurPPtheta{}$ \\
~~~$\langle F \rangle$ ($10^{9}$\ensuremath{\rm erg\,s^{-1}\,cm^{-2}}) \tablenotemark{j}
      \dotfill & $\hatcurPPfluxavg{}$ \\ [-1.5ex]
\enddata
\tablenotetext{a}{
  The adopted parameters assume a circular orbit. Based on the
  Bayesian evidence ratio we find that this model is strongly
  preferred over a model in which the eccentricity is allowed to
  vary in the fit.
  For each parameter we give the median value and
  68.3\% (1$\sigma$) confidence intervals from the posterior
  distribution.
}
\tablenotetext{b}{
  Reported times are in Barycentric Julian Date calculated directly
  from UTC, {\em without} correction for leap seconds.
  \ensuremath{T_c}: Reference epoch of mid transit that
  minimizes the correlation with the orbital period.
  \ensuremath{T_{14}}: total transit duration, time
  between first to last contact;
  \ensuremath{T_{12}=T_{34}}: ingress/egress time, time between first
  and second, or third and fourth contact.
}
\tablenotetext{c}{
  Reciprocal of the half duration of the transit used as a jump
  parameter in our MCMC analysis in place of $\ensuremath{a/\rstar}$. It is
  related to $\ensuremath{a/\rstar}$ by the expression $\ensuremath{\zeta/\rstar} = \ensuremath{a/\rstar}
  (2\pi(1+e\sin \omega))/(P \sqrt{1 - b^{2}}\sqrt{1-e^{2}})$
  \citep{bakos:2010:hat11}.
}
\tablenotetext{d}{
  Values for a quadratic law, adopted from the tabulations by
  \cite{claret:2004} according to the spectroscopic (SPC) parameters
  listed in \reftabl{stellar}.
}
\tablenotetext{e}{
  The 95\% confidence upper-limit on the eccentricity from a model
  in which the eccentricity is allowed to vary in the fit.
}
\tablenotetext{f}{
  Error term, either astrophysical or instrumental in origin, added
  in quadrature to the formal RV errors for the listed
  instrument. This term is varied in the fit assuming a prior inversely
  proportional to the jitter.
}
\tablenotetext{g}{
  Correlation coefficient between the planetary mass \ensuremath{M_{p}}\ and
  radius \ensuremath{R_{p}}\ determined from the parameter posterior distribution
  via $C(\ensuremath{M_{p}},\ensuremath{R_{p}}) = <(\ensuremath{M_{p}} - <\ensuremath{M_{p}}>)(\ensuremath{R_{p}} -
  <\ensuremath{R_{p}}>)>/(\sigma_{\ensuremath{M_{p}}}\sigma_{\ensuremath{R_{p}}})$ where $< \cdot >$ is the
  expectation value operator, and $\sigma_x$ is the standard
  deviation of parameter $x$.
}
\tablenotetext{h}{
  Planet equilibrium temperature averaged over the orbit, calculated
  assuming a Bond albedo of zero, and that flux is reradiated from
  the full planet surface.
}
\tablenotetext{i}{
  The Safronov number is given by $\Theta = \frac{1}{2}(V_{\rm
  esc}/V_{\rm orb})^2 = (a/\ensuremath{R_{p}})(\ensuremath{M_{p}} / \ensuremath{M_\star} )$
  \citep[see][]{hansen:2007}.
}
\tablenotetext{j}{
  Incoming flux per unit surface area, averaged over the orbit.
}
\ifthenelse{\boolean{emulateapj}}{
  \end{deluxetable*}
}{
  \end{deluxetable}
}

We conducted the analysis inflating the SOPHIE and FIES RV uncertainties
by adding a ``jitter'' term in quadrature to the formal uncertainties. This was
done to accommodate the larger-than-expected scatter of the RV
observations around the best-fit model. Independent jitter terms were used
for each instrument, as it is not clear whether the jitter is instrumental or
astrophysical in origin. The jitter term was allowed to vary in the fit, yielding a
$\chi^2$ per degree of freedom of unity for the RVs in the best-fit model.
The median values for the jitter are \hatcurRVjitterB\,\ensuremath{\rm m\,s^{-1}}\ for the SOPHIE
observations and \hatcurRVjitterA\,\ensuremath{\rm m\,s^{-1}}\ for the FIES observations. This
suggests that either the formal uncertainties of the FIES instrument were
overestimated, or that the jitter from the SOPHIE instrument is not from
the star but from the instrument itself.
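Concretely, the per-point uncertainties adopted in the fit take the usual
quadrature form
\begin{equation}
\sigma_{i}^{2} = \sigma_{{\rm formal},i}^{2} + \sigma_{\rm jitter}^{2},
\end{equation}
with an independent $\sigma_{\rm jitter}$ for each instrument.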
The analysis was done twice: fixing the eccentricity to zero, and allowing
it to vary. Computing the Bayesian evidence for each model, we found
that the fixed circular model is preferred by a factor of $\sim$500. Therefore
the circular orbit model was adopted. The $95\%$ confidence upper
limit on the eccentricity is $e \ensuremath{<0.139}{}$.

The best-fit models are presented in Figures \ref{fig:hatnet}, \ref{fig:lc} and
\ref{fig:rvbis}, and the resulting derived stellar and planetary parameters
are listed in Tables \ref{tab:stellar} and \ref{tab:planetparam}, respectively.
We find that the star HAT-P-55{} has a mass of $\hatcurISOmlong{}$ $\ensuremath{M_\sun}$
and a radius of $\hatcurISOrlong{}$ $\ensuremath{R_\sun}$, and that its planet HAT-P-55b{}
has a period of \hatcurLCP\ days, a mass of $\hatcurPPmlong{}$
$\ensuremath{M_{\rm J}}$ and a radius of $\hatcurPPrlong{}$ $\ensuremath{R_{\rm J}}$.

\section{Discussion}
\label{sec:discussion}
We have presented the discovery of a new transiting planet,
HAT-P-55b, and provided a precise characterisation of its
properties. HAT-P-55b\ is a moderately inflated $\sim$0.5 \ensuremath{M_{\rm J}}\ planet,
similar in mass, radius and equilibrium temperature to
HAT-P-1b \citep{bakos:2007:hat1}, WASP-34b \citep{smalley:2011:wasp34}, and
HAT-P-25b \citep{quinn:2012:hat25}.

With a visual magnitude of V = 13.21, HAT-P-55\ is among
the faintest transiting-planet host stars discovered by a wide-field
ground-based transit survey: to date, a total of 11 transiting-planet host
stars with V $>$ 13 have been discovered by wide-field ground-based transit
surveys, the faintest being HATS-6 with V = 15.2 \citep{hartman:2014:hats6}.
Of course, V $>$ 13 is only faint by the standards of surveys like HATNet
and WASP; most of the hundreds of transiting planets found by deeper surveys
such as OGLE, CoRoT and Kepler have host stars fainter than HAT-P-55. It is
worth noting that despite the relative faintness of HAT-P-55, the mass and
radius of HAT-P-55b\ have been measured to better than 10\% precision
(relative to the precision of the stellar parameters) using modest-aperture
facilities. This achievement was possible because the relatively large size
of the planet compared to its host star provides a strong and therefore
easy-to-measure signal. In comparison, only about 140 of the 1175 known TEPs
have masses and radii measured to better than 10\% precision.

\acknowledgements

HATNet operations have been funded by NASA grants NNG04GN74G and
NNX13AJ15G. Follow-up of HATNet targets has been partially supported
through NSF grant AST-1108686. G.\'A.B., Z.C. and K.P. acknowledge
partial support from NASA grant NNX09AB29G. K.P. acknowledges support
from NASA grant NNX13AQ62G. G.T. acknowledges partial support from NASA
grant NNX14AB83G. D.W.L. acknowledges partial support from NASA's Kepler
mission under Cooperative Agreement NNX11AB99A with the Smithsonian
Astrophysical Observatory.
D.J. acknowledges the
Danish National Research Foundation (grant number DNRF97) for partial support.

Data presented in this paper are based on observations obtained at the HAT
station at the Submillimeter Array of SAO, and the HAT station at the
Fred Lawrence Whipple Observatory of SAO. Data are also based on observations
with the Fred Lawrence Whipple Observatory 1.5m and 1.2m telescopes of SAO.
This paper presents observations made with the Nordic Optical Telescope,
operated on the island of La Palma jointly by Denmark, Finland, Iceland,
Norway, and Sweden, in the Spanish Observatorio del Roque de los
Muchachos of the Instituto de Astrof\'isica de Canarias.
The authors thank all the staff of Haute-Provence Observatory for their
contribution to the success of the ELODIE and SOPHIE projects and their
support at the 1.93-m telescope. The research leading to these results has
received funding from the European Community's Seventh Framework
Programme (FP7/2007-2013) under grant agreement number RG226604
(OPTICON). The authors wish to recognize and acknowledge the very significant
cultural role and reverence that the summit of Mauna Kea has always had within
the indigenous Hawaiian community. We are most fortunate to have the
opportunity to conduct observations from this mountain.

\bibliographystyle{apj}

\section{Introduction}

Accurate 3D information is vital for many computer vision tasks such as AR/VR
and robotic navigation. While today's LiDAR sensors can acquire reliable 3D
measurements from the surrounding environment, the resulting depth image is
still very sparse compared to a medium-resolution RGB image (about 3\~{}4\%
density). Depth completion refers to the task of estimating a dense depth image
from a sparse depth input, aiming to provide a more complete 3D description of
the environment. With the rapid development of deep learning, great progress
has been made in this field. The state-of-the-art methods can produce a dense
depth map at around 750mm root mean square error (RMSE) on the KITTI
benchmark\cite{uhrig2017sparsity}.

Despite the low RMSE achieved, many defects still exist in the prediction
results. Especially when we observe the completed depth map in 3D space, the
point cloud is usually polluted by many outliers, which appear as long-tail or
mixed-depth confusion areas, as shown in Fig \ref{fig1:deeplidar1}(e). This
phenomenon is partly attributable to the sparsity of the ground truth. Taking
the popular KITTI depth completion dataset as an example, the average density
of the ground truth is only about 14\%, which means most pixels cannot be
sufficiently supervised during training. Similarly, when evaluating the
prediction, only the pixels with ground truth are evaluated, so the RMSE metric
computed at such sparse positions tends to be over-optimistic. Meanwhile, it is
more difficult to apply an effective structural constraint on the output given
only sparse ground truth. Insufficient structural regularization usually leads
to blurry depth predictions on object boundaries.

In this paper, a novel real-time pseudo-depth guided depth completion method is
proposed. We believe a dense reference depth map is essential for producing a
high-quality dense prediction. We generate a pseudo depth map from the sparse
input by traditional morphological operations.
Although not accurate, the dense pseudo depth contains much structural
information about the scene and can be very helpful for this task.
Specifically, the pseudo depth map works in our model in three aspects: (1) the
entire model can be transformed into a residual structure, predicting only the
residual depth upon the pseudo map; (2) some erroneous points caused by the
penetration problem in the raw sparse input can be eliminated; (3) a strong
structural loss enforcing sharp object boundaries can be applied. Besides the
novel model for depth completion, two new evaluation metrics based on RMSE,
\textit{i}.\textit{e}., RMSE\_GT+ and RMSE\_Edge, are also proposed to better
represent the true quality of the prediction. The former evaluates the
prediction result on a denser ground truth, while the latter focuses on the
edge quality of the prediction. Experimental results on the KITTI depth
completion dataset demonstrate the superior performance of our model. An
example of our result is shown in Fig \ref{fig1:pointcloud}. To explore the
benefits of utilizing the predicted dense depth in downstream robotic tasks, we
further apply our results to 3D object detection and RGB-D SLAM. As expected,
great improvements are achieved compared with the sparse input or the dense
depth from the baseline algorithm.

In summary, the contributions of this paper are:

\begin{itemize}
\item We propose a novel pseudo-depth guided depth completion neural network.
It is designed to predict a residual upon the pseudo depth, making the
prediction more stable and accurate. Data rectification and the 3D structure
constraint are also guided by the dense pseudo depth.
\item We propose two new metrics for better evaluating the quality of the
predicted dense depth. RMSE\_GT+ computes the depth error on a carefully
complemented ground truth, while RMSE\_Edge evaluates the accuracy on edge
areas of the depth map. Both metrics are more sensitive than RMSE. Combining
these metrics, a more comprehensive evaluation of the quality can be obtained.
\item The entire network is implemented end-to-end and evaluated on the KITTI
dataset. Experimental results show that our model achieves performance
comparable to the state-of-the-art methods at the highest frame rate of 50Hz.
Further experiments on using the dense depth prediction for object detection
and SLAM tasks also verify the significant quality improvement of our method
and demonstrate the potential of using high-quality depth completion in related
downstream tasks.
\end{itemize}

\section{Related Works}
Some early depth completion methods rely on template dictionaries to
reconstruct the dense depth, using for example compressive
sensing\cite{hawe2011dense} or a wavelet-contourlet
dictionary\cite{liu2015depth}. IP-basic\cite{ku2018defense} proposed a series
of morphological operations such as dilation and hole filling to densify sparse
depth in real time. Despite their fast speed, these traditional methods obtain
only limited performance.

Recently, deep learning approaches are leading the mainstream of depth
completion. In \cite{6618993}, repetitive structures are used to identify
similar patches across different scales to perform depth completion.
A sparsity-invariant CNN architecture is proposed in \cite{uhrig2017sparsity},
which explicitly considers the location of missing data during the convolution
operations.

Incorporating RGB images into the depth completion task is helpful, since RGB
images contain abundant scene information that can effectively guide the
completion of the sparse depth
\cite{qiu2019deeplidar}\cite{ma2019self}\cite{zhong2019deep}\cite{lee2020deep}.
Sparse-to-dense\cite{ma2019self} proposes a self-supervised network which
exploits a photometric loss between sequential images.
CFCNet\cite{zhong2019deep} learns to capture the semantically correlated
features between RGB and depth information. CG-Net\cite{lee2020deep} proposes a
cross guidance module to fuse the multi-modal features from RGB and LiDAR.

Some methods rely on iterative Spatial Propagation Networks (SPN) to better
treat the difficulties caused by the sparsity and irregular distribution of the
input. A convolutional spatial propagation network is first developed in
CSPN\cite{cheng2018depth} to learn the affinity matrix for depth prediction.
CSPN++\cite{CSPNplus} improves this baseline by learning adaptive convolutional
kernel sizes and the number of iterations in a gated network.
DSPN\cite{xu2020deformable} utilizes deformable convolutions in a spatial
propagation network to adaptively generate receptive fields.
NLSPN\cite{park2020non} expands the propagation into non-local fields and
estimates non-local neighbors' affinities for each pixel. Despite impressive
results, these methods have to be trained and propagated iteratively, which
makes them hard to implement end-to-end and to run in real time.

Due to the sparsity of the input and the ground truth, ambiguous depth
predictions on object boundaries or backgrounds are very common for image-view
based depth completion. Some works induce a spatial point cloud as an extra
view for effective feature extraction
\cite{learning2019yun}\cite{hekmatian2019conf}\cite{depth-coefficients-for-depth-completion}.
UberATG\cite{learning2019yun} applies 2D and 3D convolutions on depth pixels
and 3D points respectively to extract joint features.
Conf-Net\cite{hekmatian2019conf} generates a high-confidence point cloud by
predicting a dense depth error map and filtering out low-confidence points.
DC-Coef \cite{depth-coefficients-for-depth-completion} transforms continuous
depth regression into predicting discrete depth bins and applies a
cross-entropy loss to improve the quality of the point cloud.
PwP\cite{yan2019completion} generates the normal view via principal component
analysis (PCA) on a set of neighboring 3D points from the sparse ground truth.

\begin{figure*}[h]
  \centering
  \includegraphics[width=16cm]{image/visio/pipeline.pdf}
  \caption{\textbf{The network structure of our DenseLiDAR model.} Given a pair
  of sparse depth and RGB image, a dense pseudo depth map is first computed by
  morphological processing. With the reference of the pseudo depth, the raw
  sparse depth input can further be rectified before being fed into the
  network. The network backbone employs only one DCU\cite{qiu2019deeplidar},
  taking the RGB image and the concatenation of the rectified sparse and pseudo
  depth images as input and producing a depth residual upon the pseudo depth.
  During training, both a depth loss and a structural loss are applied to
  regularize the prediction.}
  \label{fig2:pipeline}
\end{figure*}

Some methods adopt pre-trained modules from external training data in their
networks to achieve better performance.
RGB-GC\cite{van2019sparse} uses a semantic segmentation model pre-trained on
the Cityscapes dataset\cite{cordts2016cityscapes}.
SSDNet\cite{zou2020simultaneous} and RSDCN\cite{zou2020rsdcn} exploit the
Virtual KITTI dataset\cite{gaidon2016virtual} to learn the semantic labels of
each pixel in the dense depth. DeepLiDAR\cite{qiu2019deeplidar} jointly learns
the depth completion and normal prediction tasks, where the normal prediction
module is pre-trained on a synthetic dataset generated from the CARLA
simulator\cite{2017CARLA}.

In contrast to the methods above, this paper is characterized by employing a
pseudo depth map, which can be easily generated by simple morphological
operations, to guide and regularize the depth completion task. Our model is
built upon a simple network backbone and can be trained on the KITTI depth
completion dataset without any pre-training. High accuracy and real-time
performance are achieved thanks to the effective guidance of the pseudo depth.

\section{Method}

In this section, we first introduce the detailed structure of our DenseLiDAR
model, including the residual structure, the raw data rectification and the
loss function of the network. Then two new evaluation metrics are proposed to
better evaluate the quality of the prediction. The overall structure of the
proposed network is illustrated in Fig \ref{fig2:pipeline}.

\subsection{Pseudo Depth Based Residual Prediction}

Most depth completion methods often produce ambiguous depth in transition areas
between foreground and background. This is partly due to the sparsity of the
input and the properties of the 2D convolutions used in the network. It is hard
to regress sharp depth changes (\textit{e}.\textit{g}., regressing a '1' among
eight '0's in a 3x3 grid), since 2D convolutions and L2 loss functions tend to
regress mid-depth values in edge areas
\cite{depth-coefficients-for-depth-completion}. We seek a new way of depth
regression that can not only predict accurate depth values, but also produce
sharp object boundaries. Our solution is to transform the depth completion task
from an absolute value regression into a residual depth regression problem. We
first produce a dense pseudo depth map for a given sparse depth input by a
traditional method, then take it as a reference to construct a residual
structure for the network, as shown in Fig \ref{fig2:pipeline}.

\textbf{Pseudo Depth Map.} The pseudo depth map is generated from the sparse
input by fast morphological steps such as kernel-based dilation, small hole
closure and large hole filling \cite{ku2018defense} (see the sketch at the end
of this subsection). The resulting dense pseudo depth map contains sharp edges
and rich structural information, which makes it very suitable as a coarse
reference for the final output.

\textbf{Residual-based Regression.} As suggested in Fig \ref{fig2:pipeline},
our network does not directly regress the absolute depth value of each pixel,
but predicts the residual depth upon the dense pseudo depth map. Compared with
the former solution, residual-based regression has two advantages:

1) The output is pre-normalized for each pixel. In the conventional solution,
pixels with larger depth tend to be penalized more due to the L2 loss used in
training. By predicting the residual depth, the unbalanced loss distribution
caused by absolute depth values is largely mitigated.

2) Given the coarse pseudo depth, boundary pixels with large residuals are
focused on more by the L2 loss, which is helpful for producing more accurate
predictions on object boundaries. In other words, it makes the prediction more
structure-adaptive and decreases mixed-depth errors.
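The following is a minimal sketch of the morphological densification step
referenced above, in the spirit of IP-basic \cite{ku2018defense}. The kernel
sizes, the 100\,m depth cap and the depth-inversion trick are illustrative
assumptions rather than our exact pipeline settings.

\begin{verbatim}
import cv2
import numpy as np

def pseudo_depth(sparse, max_depth=100.0):
    """Densify a sparse depth image (0 = no measurement)."""
    d = sparse.astype(np.float32).copy()
    valid = d > 0.1
    # Invert valid depths so that dilation favours the closer surface.
    d[valid] = max_depth - d[valid]
    kernel = np.ones((5, 5), np.uint8)
    d = cv2.dilate(d, kernel)                         # fill small gaps
    d = cv2.morphologyEx(d, cv2.MORPH_CLOSE, kernel)  # close small holes
    empty = d < 0.1                                   # large holes left over
    wide = cv2.dilate(d, np.ones((13, 13), np.uint8))
    d[empty] = wide[empty]
    d[d > 0.1] = max_depth - d[d > 0.1]               # undo the inversion
    return d
\end{verbatim}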
\begin{figure}[!t]
  \centering
  \includegraphics[width=0.45\textwidth]{image/visio/penetration.pdf}
  \caption{
  \textbf{Penetration problem in projecting 3D points into a depth map.}
  Because of the displacement between the LiDAR and the camera, some distant
  points (marked blue) may penetrate through close objects (marked red) in the
  projected depth image, resulting in some erroneous depth pixels (marked
  purple) on the close object.}
  \label{fig3:penetration}
\end{figure}

\subsection{Data Rectification}

\textbf{Penetration Problem.} Due to the displacement between the RGB camera
and the LiDAR, projecting sparse LiDAR points into the image view may cause
mixed-depth pixels between foreground and background. Those ambiguous areas are
mainly around object boundaries, where distant points may penetrate through and
appear within the close object in the projected depth image, as illustrated in
Fig \ref{fig3:penetration}.

We seek to solve the penetration problem at the source with the help of the
dense pseudo depth map. Compared to the raw sparse depth, the pseudo map is
relatively immune to the mixed-depth problem because small ``holes'' caused by
mixed-depth pixels are filled by nearby foreground pixels during the dilation
operation. Therefore, we can rectify the raw sparse input by removing those
pixels whose difference from the pseudo depth is larger than a threshold, as
sketched below. Ablation studies in the Experiments section will show the
effectiveness of this rectification strategy.

Concatenated with the pseudo depth map, the rectified sparse depth as well as
the RGB image is fed into the network, which is composed of a single Depth
Completion Unit (DCU) \cite{qiu2019deeplidar}, to produce the residual dense
depth prediction.
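A minimal sketch of this rectification step, continuing the listing above, is
given below; the threshold value is an assumption, as the exact input-stage
threshold is a tunable choice.

\begin{verbatim}
import numpy as np

def rectify(sparse, pseudo, tau=1.0):
    """Drop sparse pixels that disagree with the pseudo reference."""
    out = sparse.copy()
    bad = (sparse > 0) & (np.abs(sparse - pseudo) > tau)
    out[bad] = 0.0   # mark mixed-depth pixels as missing
    return out
\end{verbatim}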
\subsection{Loss Functions}
In order to pay more attention to shape and structure, besides the standard
depth loss we define a structural loss that penalizes depth distortions on
scene details, which usually correspond to the boundaries of objects in the
scene. Such losses are popular in learning tasks where dense supervision is
available, \textit{e}.\textit{g}., single image depth
estimation\cite{hu2019revisiting}\cite{alhashim2018high}\cite{Ummenhofer_2017}.
As the ground truth depth is also sparse and lacks structure and shape details,
we rely on a pseudo ground truth (GT) map for structural supervision. The
pseudo GT map follows the same computing process as the pseudo depth map, but
is generated from the sparse ground truth.

For training our network, we define the total loss $L$ as the weighted sum of
two loss functions, as shown in Eq (\ref{eq_loss}):

\begin{equation}
L(D,\hat{D}) = L_{depth}(D,\hat{D}+\tilde{D}) + \lambda L_{structural}(\overline{D},\hat{D}+\tilde{D})
\label{eq_loss}
\end{equation}

where $D$, $\tilde{D}$, $\overline{D}$ and $\hat{D}$ are the ground truth
depth, the dense pseudo depth, the dense pseudo GT and the residual depth
prediction, respectively. The first loss term $L_{depth}$ is the pixel-wise L2
loss defined on the residual depth:

\begin{equation}
L_{depth}(D,\hat{D}+\tilde{D}) = \frac{1}{n} \sum_{i=1}^n \left( D_i - \tilde{D}_i - \hat{D}_i \right)^2
\label{eq_depth}
\end{equation}

The second loss term $L_{structural}$ is composed of two losses commonly used
in image reconstruction tasks, \textit{i}.\textit{e}., $L_{grad}$ and
$L_{SSIM}$, as shown in Eqs (\ref{eq_structural})--(\ref{eq_ssim}). The former
enforces smoothness of the depth based on the gradient $\nabla$ of the depth
image. The latter is the structural similarity constraint
SSIM\cite{wang2004image}, which consists of luminance, contrast and structure
differences to the pseudo GT map.

\begin{equation}
L_{structural}(\overline{D},\hat{D}+\tilde{D}) = L_{grad}(\overline{D},\hat{D}+\tilde{D}) + L_{SSIM}(\overline{D},\hat{D}+\tilde{D})
\label{eq_structural}
\end{equation}
\begin{equation}
L_{grad}(\overline{D},\hat{D}+\tilde{D}) = \frac{1}{n} \sum_{i=1}^n |\nabla_x (\overline{D}_i,\hat{D}_i+\tilde{D}_i)| + |\nabla_y (\overline{D}_i,\hat{D}_i+\tilde{D}_i)|
\label{eq_grad}
\end{equation}
\begin{equation}
L_{SSIM}(\overline{D},\hat{D}+\tilde{D}) = \frac{1}{2} \left( 1 - SSIM(\overline{D},\hat{D}+\tilde{D}) \right)
\label{eq_ssim}
\end{equation}

We simply set $\lambda = 1$ in Eq (\ref{eq_loss}) throughout the experiments.
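A minimal PyTorch sketch of this training objective follows. The sparse-GT
validity mask and the $3\times3$ SSIM window are our assumptions (with sparse
ground truth, the average in Eq (\ref{eq_depth}) is implicitly taken over valid
pixels), and finite differences stand in for $\nabla$.

\begin{verbatim}
import torch
import torch.nn.functional as F

def ssim(x, y, C1=0.01 ** 2, C2=0.03 ** 2):
    # Single-scale SSIM over 3x3 windows; inputs are (B, 1, H, W).
    # C1/C2 assume inputs roughly normalized to [0, 1].
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    var_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + C1) * (2 * cov + C2)) / \
        ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
    return s.mean()

def total_loss(residual, pseudo, gt, pseudo_gt, lam=1.0):
    pred = pseudo + residual                 # final dense prediction
    mask = gt > 0                            # L_depth only where GT exists
    l_depth = ((pred - gt)[mask] ** 2).mean()
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    l_grad = (dx(pred) - dx(pseudo_gt)).abs().mean() + \
             (dy(pred) - dy(pseudo_gt)).abs().mean()
    l_ssim = 0.5 * (1.0 - ssim(pred, pseudo_gt))
    return l_depth + lam * (l_grad + l_ssim)
\end{verbatim}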
\begin{figure*}[t]
  \centering
  \subfloat[RMSE\_GT+]{
    \centering
    \label{fig4:gt+}
    \includegraphics[width=0.2\textwidth]{image/4/gt+.pdf} }
  \subfloat[Edge areas used by RMSE\_Edge]{
    \centering
    \label{fig4:edge}
    \includegraphics[width=0.6\textwidth]{image/4/edge.pdf} }
  \caption{\textbf{Illustration of GT+ and high-gradient areas in the pseudo
  depth map.} In (a), from top to bottom and left to right: the RGB crop of two
  cyclists, the corresponding crop in the GT, the raw sparse depth and our
  complemented GT+. In (b), two examples of the edge areas used by RMSE\_Edge.
  From top to bottom: RGB image, pseudo ground truth depth and high-gradient
  areas in the pseudo depth map (marked white).}
  \label{fig4:newmetrics}
\end{figure*}

\subsection{New Evaluation Metrics}

While RMSE and MAE are effective metrics for evaluating a dense depth output,
they cannot well represent the true quality of the prediction in terms of 3D
shape and structure, as previously illustrated in Fig \ref{fig1:pointcloud}.
This is mainly due to the following two reasons: (1) the excessive deletion of
pixels on small and moving objects when constructing the ground truth; (2) the
lack of quality evaluation on object boundaries. For each sparse LiDAR scan in
the KITTI dataset\cite{uhrig2017sparsity}, the ground truth for depth
completion is generated by accumulating multiple LiDAR sweeps and projecting
them into the image space. Most of the possible mixed-depth areas are simply
removed instead of rectified\cite{uhrig2017sparsity}. In order to remove
outliers caused by occlusion, dynamic objects or measurement artifacts, most
points on small or moving objects influenced by the accumulation are further
removed. This results in a quasi-dense ground truth focusing more on the ground
and static backgrounds. Better metrics capable of representing the structure
and shape details of the scene are therefore highly demanded.

\textbf{RMSE\_GT+.} We seek to supplement the ground truth with the rectified
sparse depth from the current frame. The rectified sparse depth is free from
the penetration problem and still contains many measurement points on small or
moving objects. This supplemented ground truth is called Ground Truth+ (GT+).
It contains more effective pixels than the original one, as shown in
Fig \ref{fig4:gt+}. We use the root mean square error between the prediction
and GT+, namely RMSE\_GT+, to evaluate the quality of the scene structure.

\textbf{RMSE\_Edge.} As illustrated in Fig \ref{fig1:deeplidar1}(e), RMSE over
the entire image cannot well represent the prediction quality on object
boundaries. To focus more on the edge areas of the depth map, we set up a
metric evaluating the accuracy only on those edge pixels. We take the mean
value of the gradient of the pseudo depth map as a threshold to locate the edge
pixels, as shown in Fig \ref{fig4:edge}. The RMSE\_Edge metric is defined as
the RMSE between the predicted depth and GT+ on those edge pixels. Note that
both RMSE\_GT+ and RMSE\_Edge are used for evaluation purposes only; a minimal
implementation sketch of both metrics is given after the ablation study below.

\section{Experiments}

Extensive experiments on a public dataset are conducted to verify the
effectiveness of our method, as well as the validity of the proposed new
metrics.

\subsection{Setup}

We evaluate our model on the publicly available KITTI depth completion
dataset\cite{uhrig2017sparsity}, which contains 86,898 frames for training,
1,000 frames for validation and another 1,000 frames for testing. During
training, we ignored regions without LiDAR points (\textit{i}.\textit{e}. the
top 100 rows) and center-cropped 512$\times$256 patches as training examples.
We train the network for 25 epochs on two NVIDIA 2080Ti GPUs with a batch size
of 12. We use the Adam optimizer\cite{2014Adam} and set the initial learning
rate to 0.001, which is decayed by 0.5 every 5 epochs after the first 10
epochs.
The network is trained from scratch without introducing any additional data.

\subsection{Comparative Results}

\begin{table*}[t]
  \caption{\textbf{Performance comparison on the KITTI benchmark} (units in \emph{mm}).}
  \label{table:test}
  \begin{center}
  \scalebox{1.0}{
  \begin{tabular}{c|c|c|c|c|c|c}
  \toprule
  Method & Realtime & RMSE & MAE & iRMSE & iMAE & FPS\\
  \midrule
  B-ADT \cite{9130033} & No & 1480.36 & 298.72 & 4.16 & 1.23 & 8.3 \\
  CSPN\cite{cheng2018depth} & No & 1019.64 & 279.46 & 2.93 & 1.15 & 1 \\
  DC\_coef\cite{depth-coefficients-for-depth-completion} & No & 965.87 & 215.75 & 2.43 & 0.98 & 6.7 \\
  SSGP \cite{schuster2021ssgp} & No & 838.22 & 244.70 & 2.51 & 1.09 & 7.1\\
  CG\_Net \cite{lee2020deep} & No & 807.42 & 253.98 & 2.73 & 1.33 & 5\\
  CSPN++\cite{CSPNplus} & No & 743.69 & 209.28 & 2.07 & 0.90 & 5 \\
  NLSPN\cite{park2020non} & No & \bf 741.68 & \bf 199.59 & \bf 1.99 & \bf 0.84 & 4.5 \\
  \midrule
  pNCNN\cite{Eldesokey_2020_CVPR} & Yes & 960.05 & 251.77 & 3.37 & 1.05 & \bf 50 \\
  Sparse to Dense\cite{ma2019self} & Yes & 954.36 & 288.64 & 3.21 & 1.35 & 25 \\
  PwP\cite{yan2019completion} & Yes & 777.05 & 235.17 & 2.42 & 1.13 & 10 \\
  DeepLiDAR\cite{qiu2019deeplidar} & Yes & 758.38 & 226.50 & 2.56 & 1.15 & 14.3 \\
  UberATG\cite{learning2019yun} & Yes & \bf 752.88 & 221.19 & 2.34 & 1.14 & 11.1 \\
  \midrule
  Ours & Yes & 755.41 & \bf 214.13 & \bf 2.25 & \bf 0.96 & \bf 50 \\
  \bottomrule
  \end{tabular}}
  \end{center}
\end{table*}

Table \ref{table:test} shows the comparative results of our model on the KITTI
depth completion benchmark, where the evaluation is carried out automatically
on the KITTI testing server. Our method achieves state-of-the-art performance
at the highest frame rate of 50 Hz. Except for RMSE, which is very close to
that of \cite{learning2019yun}, our method ranks first among all real-time
methods on every metric. Compared with the latest non-real-time works,
\textit{e}.\textit{g}., CSPN++\cite{CSPNplus} and NLSPN\cite{park2020non}, our
model achieves close performance but runs 10 times faster, which creates more
possibilities for downstream tasks.

\subsection{Evaluation on New Metrics}

Before using the new metrics, we first verify the quality and credibility of
the rectified sparse depth and the complemented ground truth (GT+). Towards
this goal, we follow the process used to create the ground truth of the KITTI
depth completion dataset\cite{uhrig2017sparsity}, which exploits the manually
cleaned KITTI 2015 stereo disparity maps\cite{Menze2018JPRS} as reference. The
disparity maps are transformed into depth values via stereo triangulation
($z = f b / d$ for focal length $f$, baseline $b$ and disparity $d$) using the
calibration files provided by KITTI.
The evaluation results are listed in Table \ref{table:2015}, where ``KITTI
outliers'' is defined as the ratio of pixels whose depth error is larger than 3
meters and whose relative depth error is larger than 5\%, following
\cite{uhrig2017sparsity}.

\begin{table}[t]
\caption{\textbf{Evaluation of Depth Maps Using the KITTI 2015 Training
Dataset.} All metrics are in \emph{mm}.}
\label{table:2015}
\begin{center}
\scalebox{1.0}{
\begin{tabular}{c|c|c|c|c}
\toprule
Method & Density & MAE & RMSE & KITTI Outliers\\
\midrule
Sparse LiDAR & 3.99\% & 509.04 & 2544.62 & 2.12\% \\
Rectified Sparse & 3.54\% & \bf 255.66 & \bf 643.66 & \bf 0.87\% \\
\midrule
GroundTruth\cite{uhrig2017sparsity} & 14.43\% & 388.28 & 938.00 & 2.33\% \\
GroundTruth+ & 15.80\% & \bf 371.92 & \bf 923.12 & \bf 2.21\% \\
\bottomrule
\end{tabular}}
\end{center}
\end{table}

From Table \ref{table:2015} we can see that both the rectified sparse depth and
GT+ have better quality than their counterparts on all three metrics (MAE, RMSE
and KITTI outliers). For the rectified sparse depth, after removing the
mixed-depth pixels with the help of the pseudo depth map, most points are
preserved with a significant quality improvement. GT+, which fuses the
rectified sparse depth into the original ground truth, contains about 1.50\%
more high-confidence points and fewer outliers than the original ground truth.
Therefore, GT+ is credible for evaluating the quality of the dense prediction.

\subsection{Ablation Studies}

In order to investigate the impact of different modules on the final result, we
conduct ablation studies on the KITTI validation set. The results are shown in
Table \ref{table:module}, where Baseline, Rectification, StructuralLoss and
Residual represent the backbone with a single DCU, rectification of the sparse
depth, training with the structural loss, and predicting the residual depth,
respectively.

\begin{table}[t]
  \caption{\textbf{Ablation Study on the KITTI Validation Set} (mm).}
  \label{table:module}
  \begin{center}
  \scalebox{0.7}{
  \begin{tabular}{c|c|c|c|c|c|c}
  \toprule
  Baseline & Rectification & StructuralLoss & Residual & RMSE & RMSE\_GT+ & RMSE\_Edge \\
  \midrule
  \checkmark & & & & 829.87 & 1596.22 & 2794.98 \\
  \checkmark & \checkmark & & & 822.10 & 1572.31 & 2367.21 \\
  \checkmark & \checkmark & \checkmark & & 810.38 & 1392.73 & 2271.23 \\
  \checkmark & \checkmark & \checkmark & \checkmark & \bf 795.97 & \bf 1335.20 & \bf 2171.76 \\
  \bottomrule
  \end{tabular}}
  \end{center}
\end{table}

As shown in Table \ref{table:module}, adding each design leads to a performance
gain. The final complete model performs best on all three metrics. Compared
with the baseline, our final model improves by about 4.08\% in RMSE, 16.40\% in
RMSE\_GT+ and 22.30\% in RMSE\_Edge, respectively.
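For clarity, the following is a minimal sketch of how the two proposed metrics
can be computed. Treating rows and columns symmetrically in the gradient and
restricting edge pixels to positions where GT+ is defined are our reading of
the definitions above.

\begin{verbatim}
import numpy as np

def rmse(pred, ref, mask):
    diff = (pred - ref)[mask]
    return float(np.sqrt((diff ** 2).mean()))

def rmse_gt_plus(pred, gt_plus):
    # Evaluate wherever the complemented ground truth is defined.
    return rmse(pred, gt_plus, gt_plus > 0)

def rmse_edge(pred, gt_plus, pseudo):
    gy, gx = np.gradient(pseudo)      # image-space depth gradients
    grad = np.hypot(gx, gy)
    edge = grad > grad.mean()         # mean gradient as the threshold
    return rmse(pred, gt_plus, edge & (gt_plus > 0))
\end{verbatim}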
\n\n\n\\begin{figure}[t]\n \\centering\n \\subfloat[Before Post-processing]{\n \\centering\n \\label{fig7:withoutpp}\n \\includegraphics[width=0.2\\textwidth]{image\/7\/wopp.pdf} }\n \\subfloat[After Post-processing]{\n \\centering\n \\label{fig7:withpp}\n \\includegraphics[width=0.2\\textwidth]{image\/7\/withpp.pdf} }\n \\caption{\\textbf{Illustration of the dense point cloud before\/after post-processing.}}\n \\label{fig7:pp}\n\\end{figure}\n\n\\begin{figure*}[t]\n \\centering\n \\subfloat{\n \\centering\n \\includegraphics[width=0.3\\textwidth]{image\/6\/134.pdf} }\n \\subfloat{\n \\centering\n \\includegraphics[width=0.3\\textwidth]{image\/6\/145.pdf} }\n \\subfloat{\n \\centering\n \\includegraphics[width=0.3\\textwidth]{image\/6\/254.pdf} }\n\n \\caption{\\textbf{Qualitative results on the KITTI 3D Object Detection Dataset.} Each column shows the RGB image, the predicted dense depth before and after post-processing, and the 3D object detection results. Ground truth and predictions are marked by blue and red boxes, respectively.}\n \\label{fig6:3ddetection}\n\\end{figure*}\n\n\\subsection{Generalization Studies}\n\\subsubsection{Sparsity variation}\n\nTo verify the generalization of our method on LiDAR data with different degrees of sparsity, we conduct ablation experiments on LiDAR scans with 64, 32 and 16 lines, respectively. Due to the lack of depth completion datasets produced with real 32- and 16-line LiDARs, we synthesize them by subsampling the KITTI 64-line LiDAR data. To keep the same scanning pattern as real LiDARs, the subsampling is carried out on a range-view image with the elevation and azimuth angles as the two coordinate axes. The downsampled 3D points are then projected onto the RGB image plane to produce sparse depth images; a minimal sketch of this subsampling is given below. We retrain the network for each data modality before evaluation, and the errors of the completed depth are listed in Table \\ref{table:sparsedepth}. \n\n\\begin{table}[t]\n \\caption{\\textbf{Sparsity Experiments. (mm)}}\n \\label{table:sparsedepth}\n \\begin{center}\n \\scalebox{0.9}{\n \\begin{tabular}{c|c|c|c|c} \n \\toprule\n LiDAR resolution & RMSE & MAE & RMSE\\_GT+ & RMSE\\_Edge\\\\ \n \\midrule\n 64 lines(Baseline) & 829.87 & 240.32 & 1596.22 & 2794.98 \\\\\n 64 lines(Ours) & 795.97 & 213.64 & 1335.20 & 2171.76 \\\\\n 32 lines(Baseline) & 2031.23 & 597.12 & 3494.53 & 5153.33 \\\\\n 32 lines(Ours) & 1780.55 & 540.28 & 2833.96 & 4348.86 \\\\\n 16 lines(Baseline) & 2357.61 & 695.28 & 4072.46 & 5743.28 \\\\\n 16 lines(Ours) & 1989.48 & 592.78 & 3256.02 & 4588.20 \\\\ \n \\bottomrule\n \\end{tabular}}\n \n \\end{center}\n\\end{table}\n\nFor all of the data modalities, our DenseLiDAR performs much better than the baseline method, and the performance gap widens as the input depth becomes sparser. On the other hand, sparser depth input does bring challenges to the completion task. As shown in Table \\ref{table:sparsedepth}, 32-line depth input results in about three times larger errors on all metrics than 64-line data, and 16-line input further widens the gap.
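\n\nThe snippet below sketches the range-view subsampling (a minimal sketch with empirical elevation binning; real sensors have per-beam elevation tables, and the helper name is ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef subsample_scan_lines(points, n_lines=64, keep_every=2):\n    # points: (N, 3) LiDAR returns. Bin each point into one of\n    # n_lines rows of a range-view image by elevation angle,\n    # then keep every keep_every-th row: keep_every=2 turns\n    # 64-line data into 32 lines, keep_every=4 into 16 lines.\n    x, y, z = points[:, 0], points[:, 1], points[:, 2]\n    elev = np.arctan2(z, np.sqrt(x ** 2 + y ** 2))\n    lo, hi = elev.min(), elev.max()\n    rows = ((elev - lo) \/ (hi - lo + 1e-9) * n_lines).astype(int)\n    rows = np.clip(rows, 0, n_lines - 1)\n    return points[rows % keep_every == 0]\n\\end{verbatim}\n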
From these results we can infer that, as the number of LiDAR scan lines is reduced, the network has to rely more on RGB image features and behaves more like a pure depth prediction network.\n\n\\subsubsection{Indoor dataset}\n\\begin{table}[t]\n \\caption{\\textbf{RMSE Metric Performance on NYUv2 Dataset. (m)}}\n \\label{table:nyuv2}\n \\begin{center}\n \\scalebox{0.8}{\n \\begin{tabular}{c|c|c|c|c|c|c|c} \n \\toprule\n Sampled Points & 100 & 300 & 500 & 1000 & 3000 & 5000 & 10000\\\\ \n \\midrule\n DeepLiDAR\\cite{qiu2019deeplidar} & 0.381 & 0.339 & 0.320 & 0.289 & 0.255 & 0.189 & 0.149\\\\\n Ours & 0.298 & 0.270 & 0.258 & 0.231 & 0.199 & 0.147 & 0.101\\\\\n \\bottomrule\n \\end{tabular}}\n \\end{center}\n\\end{table}\n\nWe conduct experiments on the NYUv2 dataset\\cite{Silberman:ECCV12}, which consists of dense RGBD images collected from 464 indoor scenes and has very different depth distribution patterns from LiDAR data. We use a uniform sparsifier with a varying sampling number to produce sparse depth images from the dense ones. For convenience, we only use the labelled data (about 2K images) in the dataset, with a split ratio of 2:1 for training and testing, respectively. The results are shown in Table \\ref{table:nyuv2}: at all levels of sparsity our method obtains better results than DeepLiDAR\\cite{qiu2019deeplidar}.\n\n\\section{Applications}\n\nTo evaluate the depth quality and explore the possibility of utilizing the obtained high-quality dense depth maps, we also apply our completed depth in two different robotic tasks: 3D object detection and RGB-D SLAM. To further improve the quality of the resulting point cloud, post-processing can be carried out before the predicted dense depth is taken as input. \n\n\\subsection{Post-processing} We utilize the pseudo depth map once again in this step. We compute the pixel-wise difference between the predicted depth and its corresponding pseudo depth map; pixels whose depth difference with the pseudo map is larger than a threshold are regarded as outliers and hence removed from the point cloud. The selection of a reasonable depth threshold is therefore essential. We conduct experiments with different thresholds, and the results are listed in Table \\ref{table:postprocessing}. Overall, the quality of the depth image improves as a smaller depth threshold is used, but at the cost of a decrease in density. In particular, RMSE can be halved from 802mm to 426mm when a global threshold of 1 meter is used, at the cost of removing less than 4\\% of the points with ground truth. Taking both accuracy and density into account, we instead set a dynamic threshold based on piecewise depth ranges to remove outliers: 0.1m for pixels closer than 10m, 0.3m for pixels between 10m and 40m, and 0.5m for those beyond 40m. The final RMSE can then be further decreased to 324mm, with about 89\\% of the points with ground truth retained. An example of the obtained 3D point cloud before and after post-processing is illustrated in Fig \\ref{fig7:pp}.\n\n\n\\begin{table}[t]\n \\caption{\\textbf{Influence of Thresholds Used in Post-processing. 
}}\n \\label{table:postprocessing}\n \\begin{center}\n \\scalebox{0.7}{\n \\begin{tabular}{c|c|c|c|c|c|c}\n \\toprule\n Threshold & None & 10m & 5m & 3m & 1m & Dynamic \\\\\n \\midrule\n Retained GT Points & 100.00\\% & 99.67\\% & 99.28\\% & 98.81\\% & 96.63\\% & 89.70\\% \\\\\n \\midrule\n RMSE(\/mm) & 802.16 & 672.12 & 605.38 & 551.84 & 426.19 & 324.50 \\\\\n \\bottomrule\n \\end{tabular}}\n \\end{center}\n\\end{table}\n\n\n\\subsection{3D Object Detection} \n\nWe choose F-PointNet\\cite{qi2018frustum} as our object detection method to verify the quality of the produced dense depth. F-PointNet\\cite{qi2018frustum} projects the depth pixels inside a 2D bounding box into a 3D point cloud and performs object detection within this point frustum, where the number of sampled points in a single frustum is fixed at 1,024. For distant or small objects, the number of measured points can fall to 50--200 or even fewer, which means many points have to be sampled repeatedly to meet the 1,024-point constraint. More informative dense points can therefore contribute to better detection results. In our experiments, we first sample points from the original sparse data; then, if the number of points within the frustum is less than 1,024, we add predicted dense points from the depth completion. We follow the same training and validation splits as \\cite{chen20153d} on the KITTI 3D object detection dataset\\cite{Geiger2012CVPR}, which contain 3712 samples for training and 3769 for testing, respectively.\n\n\\begin{table}[h]\n \n \\caption{\\textbf{$AP_{3D}$(in \\%) of 3D Object Detection on KITTI Dataset.} PP Stands for Post-processing. }\n \\label{table:3ddetection}\n \\begin{center}\n \\scalebox{0.7}{\n \\begin{tabular}{c|c|c|c|c|c|c|c|c|c}\n \\toprule\n \\multirow{2}{*}{Method} & \\multicolumn{3}{|c|}{$Car$} &\\multicolumn{3}{|c|}{$Pedestrian$} & \\multicolumn{3}{|c}{$Cyclist$}\\\\\n \\cmidrule{2-10}\n & Eas. & Mod. & Har. & Eas. & Mod. & Har. & Eas. & Mod. & Har.\\\\\n \\midrule\n Sparse LiDAR & 83.83 & 71.17 & 63.28 & 64.93 & 55.61 & 49.14 & 76.20 & 56.91 & 53.07 \\\\\n DeepLiDAR & 78.99 & 62.13 & 53.93 & 63.98 & 54.05 & 46.73 & 62.42 & 44.72 & 41.53\\\\\n DCU & 78.43 & 61.43 & 51.39 & 59.95 & 50.74 & 43.15 & 62.84 & 43.39 & 41.18\\\\\n \\midrule\n Ours(w\/o PP) & \\bf 84.47 & 68.92 & 61.21 & \\bf 67.22 & \\bf 56.50 & 48.92 & \\bf 79.95 & 55.75 & 51.43\\\\\n Ours(w PP) & \\bf 85.52 & \\bf 72.33 & \\bf 64.39 & \\bf 69.88 & \\bf 60.88 & \\bf 52.36 & \\bf 83.70 & \\bf 60.16 & \\bf 56.02\\\\\n \\bottomrule\n \\end{tabular}}\n \n \\end{center}\n\\end{table}\n\n\\begin{table*}[!t]\n \\caption{\\textbf{Quantitative Results of KITTI Odometry Sequences.} '-' Denotes Tracking Failure. 
'MP' Denotes Average Matched ORB Points.}\n \\label{table:rgbdslam}\n \\begin{center}\n \\scalebox{0.8}{\n \\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c}\n \\toprule\n $t_{rel}(\\%)$ \/ $r_{rel}(deg\/100m)$ & 00 & 01 & 02 & 03 & 04 & 05 & 06 & 07 & 08 & 09 & 10 & MP \\\\\n \\midrule\n Monocular & 0.80\/1.32 & - & - & \\textbf{0.49}\/0.37 & 0.70\/0.25 & 0.88\/0.90 & 1.10\/0.42 & 0.89\/0.23 & 3.06\/1.41 & - & 1.01\/9.16 & 162\\\\\n Stereo & 0.71\/0.25 & \\textbf{1.48}\/\\textbf{0.21} & 0.80\/0.26 & 0.80\/\\textbf{0.20} & \\textbf{0.47}\/\\textbf{0.15} & \\textbf{0.39}\/\\textbf{0.16} & 0.47\/\\textbf{0.15} & 0.49\/0.28 & \\textbf{1.03}\/\\textbf{0.30} & 0.89\/0.26 & \\textbf{0.66}\/0.30 & 318\\\\\n +SparseD & - & - & - & - & - & - & - & - & - & - & - & 0 \\\\\n +DeepLiDARD & 0.83\/0.39 & 7.41\/2.43 & 1.00\/0.37 & 1.25\/0.39 & 3.83\/3.18 & 1.30\/0.36 & 3.52\/1.08 & 1.08\/0.45 & 2.23\/0.60 & 2.44\/0.47 & 2.83\/1.01 & 359 \\\\\n \\midrule\n +OursD(w\/o PP) & 0.81\/0.38 & 45.70\/9.24 & 1.03\/0.39 & 1.24\/0.37 & 1.17\/1.29 & 0.51\/0.32 & 0.57\/0.39 & 0.46\/0.38 & 1.33\/0.48 & 0.94\/0.35 & 0.89\/0.53 & 417\\\\\n +OursD(w PP) & \\textbf{0.70}\/\\textbf{0.24} & 19.71\/6.21 & \\textbf{0.77}\/\\textbf{0.25} & 1.13\/0.23 & 0.71\/0.47 & 0.53\/0.22 & \\textbf{0.43}\/0.19 & \\textbf{0.44}\/\\textbf{0.25} & 1.04\/0.33 & \\textbf{0.85}\/\\textbf{0.25} & 0.94\/\\textbf{0.29} & 402 \\\\\n \\bottomrule\n \\end{tabular}}\n \\end{center}\n\\end{table*}\n\nWe report $AP_{3D}$ (in \\%), which corresponds to the average precision with 40 recall positions of the 3D bounding boxes, with a rotated IoU threshold of 0.7 for Cars and 0.5 for Pedestrians and Cyclists, respectively. The evaluation results are listed in Table \\ref{table:3ddetection}. Instead of achieving better results, directly using the dense depth from DeepLiDAR\\cite{qiu2019deeplidar} or the baseline (a single DCU backbone) produces much worse performance than using the raw sparse data. This is due to the distorted object shapes caused by the erroneous mixed-depth points around object boundaries, as shown in Fig \\ref{fig1:deeplidar2}; these points have a more severe negative impact on the regression and classification of objects. On the other hand, using the high-quality dense depth from our DenseLiDAR leads to much better detection performance than DeepLiDAR and the baseline. Using the point cloud after post-processing, the highest AP values are achieved at all three difficulty levels. Compared with the raw sparse data in Table \\ref{table:3ddetection}, using our dense point cloud after post-processing achieves about a 1.10\\% AP increase for $Cars$, 5.20\\% for $Pedestrian$ and 3.25\\% for $Cyclist$ at the Moderate difficulty level, respectively. Thanks to the dense and accurate 3D point clouds from our DenseLiDAR, the geometric features of the objects are highly enhanced, resulting in a notable performance improvement. The results also show that small or distant objects benefit more from denser point clouds than large objects. Some qualitative 3D object detection results using our dense point cloud are illustrated in Fig \\ref{fig6:3ddetection}. \n\n\\subsection{RGB-D SLAM}\n\nWe choose the open-source ORB-SLAM2 \\cite{murORB2}, a popular real-time SLAM library for Monocular \\cite{murTRO2015}, Stereo or RGB-D camera configurations, as our evaluation baseline. With the dense depth maps, we are able to run the RGB-D mode of ORB-SLAM2 on the KITTI odometry dataset; the outlier removal sketched below is optionally applied first.
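\n\nA minimal sketch of the dynamic-threshold outlier removal described in the post-processing step above (thresholds as given there; the helper name is ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef remove_outliers(pred_depth, pseudo_depth):\n    # Piecewise thresholds: 0.1 m below 10 m, 0.3 m between\n    # 10 m and 40 m, 0.5 m beyond 40 m. Pixels deviating from\n    # the pseudo depth map by more than the threshold are\n    # dropped (set to 0) before back-projection to 3D.\n    th = np.where(pred_depth < 10.0, 0.1,\n         np.where(pred_depth < 40.0, 0.3, 0.5))\n    keep = np.abs(pred_depth - pseudo_depth) <= th\n    return np.where(keep, pred_depth, 0.0)\n\\end{verbatim}\n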
For all of the 11 sequences, we run each of them 25 times and record the average relative translation error $t_{rel}$ and rotation error $r_{rel}$ in Table \\ref{table:rgbdslam}.\n\nWe have some interesting findings in Table \\ref{table:rgbdslam}. Firstly, an RGB image with a sparse depth map is not applicable to RGB-D mode, which requires a depth value for every ORB feature point. Using the dense completed depth, good positioning results can be obtained in RGB-D mode. Secondly, except for sequence 01, our performance is robust across different sequences and much better than using depths from other depth completion methods or pure monocular images. \nSequence 01 is a challenging highway scene with a large area of sky and few structured objects alongside. Few training samples for depth completion come from this kind of scene, which results in dense depth images with large errors as input to RGB-D SLAM and leads to tracking failures. Thirdly, our method is able to produce more matched feature points on average than the other modes because of more reliable depth pixels, as shown in the last column of Table \\ref{table:rgbdslam}; this also contributes to better positioning accuracy. Lastly, the post-processing for removing outliers in the dense depth is also helpful for RGB-D SLAM. \n\n\\begin{figure}[thpb]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{image\/9\/trajectory.pdf} \n \\captionsetup{justification=centering}\n \\caption{\\textbf{Estimated Trajectory of KITTI 00.}}\n \\label{fig:trajectory}\n\\end{figure}\n\nAs a typical example, the positioning results estimated for sequence 00 are illustrated in Fig \\ref{fig:trajectory}, where we can observe that the trajectory of our method is closer to the ground truth than those of the other methods.\n\n\n\n\\section{Conclusions}\n\nWe present DenseLiDAR, a novel real-time pseudo depth guided depth completion network. We exploit the pseudo depth map in constructing a residual-based prediction, rectifying the sparse input and supervising the network with a structural loss. We point out that the popular RMSE metric is not sensitive enough for evaluating the real quality difference of the completed depth because of the sparsity of the ground truth. We propose the RMSE\\_GT+ and RMSE\\_Edge metrics to better represent the true quality of the prediction. Experimental results on the KITTI depth completion dataset demonstrate that our model achieves performance comparable to the state-of-the-art methods on both RMSE and the new metrics, with the highest running speed. Extensive experiments applying the dense depth in 3D object detection and RGB-D SLAM also verify the quality improvement of our prediction and further demonstrate the potential benefits of combining our depth completion with downstream perception or localization tasks in robotics. \n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\n\\section{Introduction}\n\nIntuition about the behavior of equations defining a secant variety embedded in projective space arises from consideration of both algebra and geometry, and our main goal is to bring together some of these ideas. In this paper we will be concerned with syzygies of secant varieties of smooth curves.
We will begin with the algebraic point of view with the aim of giving the reader tools for computing secant varieties of curves using Macaulay 2 \\cite{mac2}.\nWe then turn to the geometric point of view, based on work of Aaron Bertram \\cite{bertram}, which led to the second author's original conjectures on cubic generation of secant ideals and linear syzygies \\cite{vermeiresecreg}. These conjectures were refined and strengthened in \\cite{sidver} using Macaulay 2 \\cite{mac2}. Bertram's setup is also used by Ginensky \\cite{ginensky} to study determinantal equations for curves and their secant varieties. Our hope is to make some of the geometric intuition accessible to readers familiar with \\cite{geomSyz} and \\cite{hartshorne}, and that the examples we discuss will be of help in reading the existing literature.\n\nWe want to study the minimal free resolution of the homogeneous coordinate ring of a secant variety. If $X \\subset \\ensuremath{\\mathbb{P}}^n$ is a variety, by which we mean a reduced, but not necessarily irreducible, scheme, then we define its $k$th \\emph{secant variety}, denoted $\\Sigma_k,$ to be the Zariski closure of the union of the $k$-planes in $\\ensuremath{\\mathbb{P}}^n$ meeting $X$ in at least $k+1$ points. We often write $\\Sigma$ for $\\Sigma_1.$ We are primarily interested in the situation in which the restriction maps $\\Gamma(\\ensuremath{\\mathbb{P}}^n, \\ensuremath{\\mathcal{O}}_{\\ensuremath{\\mathbb{P}}^n}(k)) \\to \\Gamma(X, \\ensuremath{\\mathcal{O}}_X(k))$ are surjective, or equivalently that the homogeneous coordinate ring $S_X$ is \\emph{normally generated}, as then $S_X \\cong \\oplus \\Gamma(X, \\ensuremath{\\mathcal{O}}_X(k))$ and geometric techniques can be used to study $S_X.$ We will assume throughout that if $X \\subset \\ensuremath{\\mathbb{P}}^n$ is a curve, then it is embedded via a complete linear system.\n\nTwo examples of notions that have geometric and algebraic counterparts are \\emph{Castelnuovo-Mumford regularity}, or \\emph{regularity}, and the Cohen-Macaulay property, both of which can be defined algebraically in terms of minimal free resolutions. We begin algebraically, and let $M$ be a finitely generated graded module over the standard graded ring $S= k[x_0, \\ldots, x_n].$ The module $M$ has a minimal free resolution\n\\[\n0 \\to \\underset{j}{\\oplus} S(-j)^{\\beta_{n,j} }\\to \\cdots \\to \\underset{j}{\\oplus} S(-j)^{\\beta_{1,j} }\\to \\underset{j}{\\oplus} S(-j)^{\\beta_{0,j}} \\to M \\to 0.\\]\nThe computer algebra package Macaulay 2 \\cite{mac2} computes minimal free resolutions and displays the \\emph{graded Betti numbers} $\\beta_{i,j}$ in a Betti table arranged as below\n\\[\n\\begin{array}{c|ccccc}\n\t& 0 & 1 & 2 & \\cdots & j\\\\\n\t\\hline\\\\\n0\t& \\beta_{0,0} & \\beta_{1,1}&\\beta_{2,2} & \\cdots & \\beta_{j,j}\\\\\n1& \\beta_{0,1} & \\beta_{1,2}& \\beta_{2,3} & \\cdots & \\beta_{j, 1+j}\\\\\n\n\\vdots\\\\\ni& \\beta_{0,i} & \\beta_{1,i+1}&\\beta_{2,i+2} & \\cdots & \\beta_{j, i+j}\\\\\n\n\\end{array}\n\\]\n\n\\begin{defin}\nThe \\emph{regularity} of a finitely generated graded module $M$ is the maximum $d$ such that some $\\beta_{j,d+j}$ is nonzero.\\end{defin}\n\n\\begin{ex}[The graded Betti diagram of a curve, Example 1.4 in \\cite{sidver}]\\label{ex:g2d9I}\nFor example, we can compute the graded Betti diagram of a curve of genus 2 embedded in $\\ensuremath{\\mathbb{P}}^7$ using Macaulay 2.\n\\begin{verbatim}\n \t \t 0 1 2 3 4 5 6\n total: 1 19 58 75 44 11 2\n 0: 1 . . . . . .\n 1: . 19 58 75 44 5 .\n 2: . . . . . 
6 2\n\\end{verbatim}\nAs the diagram shows that $\\beta_{5,7}$ and $\\beta_{6,8}$ are nonzero, we see that the regularity of the homogeneous coordinate ring is 2 and the homogeneous ideal of the curve has regularity 3.\nWe will discuss this computation in greater depth in Example \\ref{ex:g2d9}. \\end{ex}\n\nThe regularity of the geometric counterpart of a finitely generated graded module, a coherent sheaf on projective space, has a geometric definition which originally appeared on pg. 99 of \\cite{mumford}.\n\\begin{defin}\n The \\emph{regularity} of a coherent sheaf $\\ensuremath{\\mathcal{F}}$ on $\\ensuremath{\\mathbb{P}}^n$ is defined to be the infimum of all $d$ such that $H^i(\\ensuremath{\\mathbb{P}}^n, \\ensuremath{\\mathcal{F}}(d-i))=0$ for all $i >0.$ \n \\end{defin}\nIf $M= \\oplus_{j \\geq 0} \\Gamma(\\ensuremath{\\mathbb{P}}^n, \\ensuremath{\\mathcal{F}}(j))$, then the regularity of $M$ is the maximum of the regularity of $\\ensuremath{\\mathcal{F}}$ and zero. The reader will find a good discussion in Chapter 4 of \\cite{geomSyz}. It is well-known that if $X \\subseteq \\ensuremath{\\mathbb{P}}^n$ is a smooth curve of genus $g$ and degree $d \\geq 2g+1$ then the regularity of $\\ensuremath{\\mathcal{I}}_X$ is 2 if $X$ is a rational normal curve and 3 otherwise, and the reader may find a nice exposition in \\cite{geomSyz}. A similar result is true for the first secant variety of a smooth curve.\n\n\\begin{rmk}\nAs this article was going to press, we learned from Adam Ginensky and Mohan Kumar that the image of a normal variety over an algebraically closed field under a proper morphism with reduced, connected fibers may fail to be normal. This invalidates the proof of the normality of $\\Sigma$ in Lemma 3.2 in \\cite{vermeireidealreg}. The arguments in \\cite{vermeireidealreg} and the subsequent papers \\cite{vermeiresecreg, sidver} go through under the additional hypothesis that $\\Sigma$ is normal, which we add below in Theorems \\ref{thm:reg} and \\ref{thm:sidver}. \n\\end{rmk}\n\n\\begin{thm}[\\cite{vermeiresecreg, sidver}]\\label{thm:reg}\nLet $X \\subseteq \\ensuremath{\\mathbb{P}}^n$ be a smooth curve of genus $g$ and degree $d \\geq 2g+3.$ If $\\Sigma$ is normal, the regularity of $\\ensuremath{\\mathcal{I}}_{\\Sigma}$ is 3 if $X$ is a rational normal curve and 5 otherwise.\n\\end{thm}\n\nMoreover, it is natural to conjecture:\n\n\\begin{conj}[\\cite{vermeiresecreg, sidver}]\\label{conj:reg}\nLet $X \\subseteq \\ensuremath{\\mathbb{P}}^n$ be a smooth curve of genus $g$ and degree $d \\geq 2g+2k+1.$ The regularity of $\\ensuremath{\\mathcal{I}}_{\\Sigma_k}$ is $2k+1$ if $X$ is a rational normal curve and $2k+3$ otherwise.\n\\end{conj}\nThis conjecture holds for genus 0 curves, as the $k$th secant variety of a rational normal curve has ideal generated by the maximal minors of a matrix of linear forms, and thus the ideal is resolved by an Eagon-Northcott complex. 
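\n\nFor concreteness in the genus $0$ case, recall the classical determinantal picture (we display a small instance; the matrix below is the standard catalecticant, written out by us rather than taken from the references above): if $X \\subset \\ensuremath{\\mathbb{P}}^5$ is the rational normal quintic, then $I(\\Sigma_1)$ is generated by the $3 \\times 3$ minors of the Hankel matrix\n\\[\n\\begin{pmatrix}\nx_0 & x_1 & x_2 & x_3\\\\\nx_1 & x_2 & x_3 & x_4\\\\\nx_2 & x_3 & x_4 & x_5\n\\end{pmatrix},\n\\]\nand in general $I(\\Sigma_k)$ of a rational normal curve is generated by the $(k+2) \\times (k+2)$ minors of the analogous matrix, so that its resolution is the Eagon-Northcott complex of that matrix.\n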
The result for genus 1 is proved in \\cite{fisher, vbH}.\n\n\n\\begin{defin}\nWe say that a variety $X \\subset \\ensuremath{\\mathbb{P}}^n$ is \\emph{arithmetically Cohen-Macaulay} if the depth of the irrelevant maximal ideal of $S = k[x_0, \\ldots, x_n]$ on $S_X$ is equal to the Krull dimension of $S_X.$ Via the Auslander-Buchsbaum theorem, this is equivalent to saying that the length of a minimal free resolution of $S_X$ is equal to $\\codim X.$\n\\end{defin}\nUsing the correspondence between local and global cohomology, one can see that this is the same as requiring\n $H^i(\\ensuremath{\\mathbb{P}}^n, \\ensuremath{\\mathcal{I}}_X(k))=0$ for all $0< i \\leq \\dim X.$ If $X \\subset \\ensuremath{\\mathbb{P}}^n$ is a normally generated smooth curve of degree $d$ and genus $g,$ then it is arithmetically Cohen-Macaulay as normal generation implies $H^1(\\ensuremath{\\mathbb{P}}^n, \\ensuremath{\\mathcal{I}}_X(j))=0$ for $j \\geq 1$. (The cohomology groups vanish automatically for $j \\leq 0.$) \n \n The main result of \\cite{sidver} is\n \\begin{thm}[\\cite{sidver}]\\label{thm:sidver}\n If $X \\subset \\ensuremath{\\mathbb{P}}^n$ is a smooth curve of genus $g$ and degree $d \\geq 2g+3$, and $\\Sigma$ is normal, then $\\Sigma$ is arithmetically Cohen-Macaulay.\n \\end{thm}\n \n\\begin{rmk}\n As we know that the singular locus of $\\Sigma$ is the curve $X$, its normality is equivalent to the arithmetically Cohen-Macaulay condition via Serre's condition. Indeed, Theorem \\ref{thm:sidver} holds if we assume normality of $\\Sigma.$ In fact, we know that the ideal of a secant variety of a rational normal curve has a resolution given by an Eagon-Northcott complex, and we also know the graded Betti diagram of the secant varieties of elliptic normal curves via \\cite{vbH}, so in these two cases, we do know normality.\n\\end{rmk}\n\n We conjecture that $\\Sigma_k$ is arithmetically Cohen-Macaulay if $d \\geq 2g+2k+1$ and hope that we can use cohomology to limit both the number of rows and the number of columns in the graded Betti diagram of $\\ensuremath{\\mathcal{I}}_{\\Sigma_k}$ in general. \nThe main difficulty in the cohomological program is that our hypotheses are solely in terms of the positivity of a line bundle on a smooth curve $X,$ and we need to prove vanishings in the cohomology of sheaves on its secant varieties, which will necessarily have singularities. \n\nWe begin \\S 2 by giving the definition of the ideal of a secant variety and discussing how it may be computed via elimination and prolongation. We then give several examples of ideals of smooth curves and their secant varieties. In \\S 3 we discuss the geometry of the desingularization of the secant varieties of a curve and how Terracini recursion may be used to study them. We have not made an attempt to survey the vast literature on secant varieties of higher dimensional varieties here, choosing instead to limit our attention to secant varieties of smooth curves.\n\\bigskip\n\n\\noindent {\\bf Acknowledgements}\nCode for the computation of prolongations used in our examples was written in conjunction with the paper \\cite{sidsu} with the help of Mike Stillman. We are grateful to Seth Sullivant and Mike Stillman for allowing us to include this code here. The first author is partially supported by NSF grant DMS-0600471 and the Clare Boothe Luce Program and also thanks the organizers of the conference on Hilbert functions and syzygies in commutative algebra held at Cortona in 2007 as well as the organizers of the Abel Symposium.
We thank Mohan Kumar and Adam Ginensky for their communications.\n\n\\section{Computing secant varieties}\nIn this section we will describe how secant ideals may be computed via elimination and via prolongation. In \\S 2.1 we will see that the ideal of $\\Sigma_k(X)$ can be defined as the intersection of an ideal in $k+1$ sets of variables with a subring corresponding to the original ambient space. Thus, it is theoretically possible to compute the secant ideal of any variety whose homogeneous ideal can be written down. However, elimination orders are computationally expensive, so this method will be unwieldy for large examples. If $X$ is defined by homogeneous forms of the same degree, then the method of prolongation can be used to compute the graded piece of $I(\\Sigma_k(X))$ of minimum possible degree. This computation is fast, and in many cases yields a set of generators of $I(\\Sigma_k(X)).$ We will discuss prolongation in \\S 2.2 and give a \\emph{Macaulay 2} implementation in Appendix A.\n\n\\subsection{Secant varieties via elimination}\n \n\nLet $X \\subset \\ensuremath{\\mathbb{P}}^n$ be a variety with homogeneous ideal $I \\subset k[\\ensuremath{{\\bf x}}]$. We can define the $k$th secant ideal of $I$ so that it can be computed via elimination. We work in a ring with $k+1$ sets of indeterminates $\\ensuremath{{\\bf y}}_i = (y_{i,0}, \\ldots, y_{i,n})$ and let $I(\\ensuremath{{\\bf y}}_i)$ denote the image of the ideal $I$ under the ring isomorphism $x_j \\mapsto y_{i,j}.$ \n\n\n\nWe define the ideal of the \\emph{ruled join} of $X$ with itself $k$ times as in Remark 1.3.3 in \\cite{FOV}:\n\\[\nJ = I(\\ensuremath{{\\bf y}}_1) + \\cdots + I(\\ensuremath{{\\bf y}}_{k+1}) .\\] \nGeometrically, we embed $X$ into $k+1$ disjoint copies of $\\ensuremath{\\mathbb{P}}^n$ in a projective space of dimension $(k+1)(n+1)-1.$ If a point is in the variety defined by $J$, we will see a point of $X$ in each set of $y$-variables. If we project to the linear space $[y_{1,0}+ \\cdots +y_{1,n}: \\cdots :y_{k+1,0}+ \\cdots +y_{k+1,n}]$ then the points in the image are points whose coordinates are sums of $k+1$ points of $X.$ \n\nIn practice, we make the change of coordinates which is the identity on the first $k$ sets of variables and is defined by $y_{k+1,j} \\mapsto y_{k+1,j}-y_{1,j}- \\cdots - y_{k,j}$ on the last set of variables. 
This has the effect of sending $y_{1,j}+\\cdots + y_{k+1,j}$ to $y_{k+1,j},$ so that the ideal of the $k$th secant variety is the intersection of $I(\\ensuremath{{\\bf y}}_1) + \\cdots +I(\\ensuremath{{\\bf y}}_k) +I(\\ensuremath{{\\bf y}}_{k+1}-\\ensuremath{{\\bf y}}_1- \\cdots - \\ensuremath{{\\bf y}}_k)$ with $k[y_{k+1,0}, \\ldots, y_{k+1,n}].$\n\n\\begin{ex}[The secant variety of two points in $\\ensuremath{\\mathbb{P}}^2$.]\nConsider $X = \\{ [1:0:0], [0:1:0]\\}$ with defining ideal $I = \\langle x_0x_1, x_2 \\rangle.$ Using the definition above, the ideal of the join is\n\\[\nJ = \\langle y_{1,0}y_{1,1}, y_{1,2} \\rangle + \\langle y_{2,0}y_{2,1}, y_{2,2} \\rangle \\]\nUnder the change of coordinates $y_{2,j} \\mapsto y_{2,j} - y_{1,j}$ we have\n\\[\n\\tilde{J}= \\langle y_{1,0}y_{1,1}, y_{1,2} \\rangle + \\langle (y_{2,0}-y_{1,0})(y_{2,1}-y_{1,1}), y_{2,2} - y_{1,2} \\rangle\n\\]\nThe variety $V(\\tilde{J})$ consists of points of the form\n\\[\n\\begin{split}\n[a:0:0:a:b:0], [c:0:0:d:0:0],\\\\\n [0:e:0:0:f:0], [0:g:0:h:g:0],\n \\end{split}\\]\nwhere $[a:b], [c:d], [e:f], [g:h] \\in \\ensuremath{\\mathbb{P}}^1.$ Eliminating the first three variables projects $V(\\tilde{J})$ into $\\ensuremath{\\mathbb{P}}^2,$ and we see that the image of $V(\\tilde{J})$ under this projection is the line joining the two points of $X.$\n\n\\end{ex}\n\nSturmfels and Sullivant use a modification of this definition of the secant ideal in which they first define an ideal in the ring $k[\\ensuremath{{\\bf x}}, \\ensuremath{{\\bf y}}_1, \\ldots, \\ensuremath{{\\bf y}}_{k+1}]$, which has $k+2$ sets of variables, and then eliminate. Using the notation from before they work with secant ideals by first defining $J' = I(\\ensuremath{{\\bf y}}_1) + \\cdots +I(\\ensuremath{{\\bf y}}_{k+1}) + \\langle \\ensuremath{{\\bf y}}_1+ \\cdots + \\ensuremath{{\\bf y}}_{k+1}-\\ensuremath{{\\bf x}} \\rangle$ and then computing $I(\\Sigma_k(X)) = J' \\cap k[\\ensuremath{{\\bf x}}].$ Eliminating the $y$-variables produces an ideal in $k[\\ensuremath{{\\bf x}}]$ that vanishes on all points in $\\ensuremath{\\mathbb{P}}^n$ that can be written as the sum of $k+1$ points of $X,$ and hence defines the secant ideal of $\\Sigma_k(X).$\n\n\\begin{ex}[The secant variety of two points in $\\ensuremath{\\mathbb{P}}^2$ revisited.]\nConsider $X = \\{ [1:0:0], [0:1:0]\\}$ with defining ideal $I = \\langle x_0x_1, x_2 \\rangle.$ Using the definition of Sturmfels and Sullivant, the secant variety of $X$ is\n\\[\n\\begin{split}\nJ' = \\langle y_{1,0}y_{1,1}, y_{1,2} \\rangle + \\langle y_{2,0}y_{2,1}, y_{2,2} \\rangle \\\\\n+ \\langle y_{1,0}+y_{2,0}-x_0, y_{1,1}+y_{2,1}-x_1, y_{1,2}+y_{2,2}-x_2\\rangle.\n\\end{split}\n\\]\nThe variety $V(J')$ consists of points of the form\n\\[\n\\begin{split}\n[a+b:0:0:a:0:0:b:0:0], [c:d:0:c:0:0:0:d:0],\\\\ [0:e+f:0:0:e:0:0:f:0],[h:g:0:0:g:0:h:0:0],\n\\end{split}\n\\]\nwhere $[a:b], [c:d], [e:f], [g:h] \\in \\ensuremath{\\mathbb{P}}^1.$ Eliminating the $y$-variables projects $V(J')$ into $\\ensuremath{\\mathbb{P}}^2,$ and we see that the image of $V(J')$ under this projection is the line joining the two points of $X.$\n\\end{ex}\n\nFrom the point of view of computation, using the first definition of a secant ideal is probably better as it involves computing an elimination ideal in an ambient ring with fewer variables. The advantage of the second definition reveals itself in proofs, especially involving monomial ideals. 
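\n\nAs a cross-check, this second formulation is small enough to reproduce with a general-purpose computer algebra system. A minimal SymPy sketch for the two-point example (variable names ours; we eliminate by computing a lex Groebner basis with the $y$- and $z$-variables ordered first):\n\\begin{verbatim}\nfrom sympy import symbols, groebner\n\ny0, y1, y2 = symbols('y0 y1 y2')  # first copy of the ideal\nz0, z1, z2 = symbols('z0 z1 z2')  # second copy\nx0, x1, x2 = symbols('x0 x1 x2')  # ambient coordinates\n\ngens = [y0*y1, y2,                # I in the y-variables\n        z0*z1, z2,                # I in the z-variables\n        y0 + z0 - x0, y1 + z1 - x1, y2 + z2 - x2]\n\nG = groebner(gens, y0, y1, y2, z0, z1, z2, x0, x1, x2,\n             order='lex')\nsecant = [g for g in G.exprs\n          if g.free_symbols <= {x0, x1, x2}]\nprint(secant)  # [x2]: the line joining the two points\n\\end{verbatim}\n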
\n\nIndeed, as the linear form $y_{1,j}+ \\cdots+y_{k+1, j}-x_j$ is in $J',$ we see that $(y_{1,j}+ \\cdots +y_{k+1, j})^m$ is equivalent to $x_j^m$ modulo $J'.$ Therefore, a monomial $f(\\ensuremath{{\\bf x}})$ is in $J' \\cap k[\\ensuremath{{\\bf x}}]$ if and only if $f(\\ensuremath{{\\bf y}}_1+\\cdots+\\ensuremath{{\\bf y}}_{k+1}) \\in J'.$ This is a key observation in Lemma 2.3 of \\cite{StS}.\n\n\\subsection{Secant varieties via prolongation}\n\nA sufficiently positive embedding of a variety $X$ has an ideal generated by quadrics. We expect the ideal of $\\Sigma_k(X)$ to be generated by forms of degree $k+2.$ There are many ways of seeing that $I(\\Sigma_k(X))$ cannot contain any forms of degree less than $k+2.$ This fact was made explicit algebraically by Catalano-Johnson \\cite{catalano}, who showed that $I(\\Sigma_k(X))$ is contained in the $(k+1)$st symbolic power of $I(X)$ and cites an independent proof due to Catalisano. The stronger statement, that $I(\\Sigma_k(X))_{k+2} = \\left(I(X)^{(k+1)}\\right)_{k+2},$ appears in \\cite{LM1, LM2} and a proof of a generalization for ideals generated by forms of degree $d$ is in \\cite{sidsu}.\n\nThe connection between symbolic powers and the ideals of secant varieties, at least in the case of smooth curves, is implicit in work of Thaddeus \\cite{thaddeus}. Specifically, in \\S 5.3 he constructs a sequence of flips whose exceptional loci are the transforms of secant varieties and then identifies the ample cone at each stage. The identification of sections of line bundles on the transformed spaces with those on the original in \\S 5.2 then provides the connection. Though not made algebraically explicit, this connection is used in \\S 2.12 of \\cite{wahl}, is present throughout \\cite{vermeireidealreg} and is discussed on pg. 80 of \\cite{vermeireSecBir}.\n\nThe observation that $I(\\Sigma_k(X))$ and the $(k+1)$st symbolic power agree in degree $k+2$ leads to an algorithm for quickly computing all forms of degree $k+2$ in $I(\\Sigma_k(X)).$ First, we define the \\emph{prolongation} of a vector space $V$ of homogeneous forms of degree $d$ to be the space of all forms of degree $d+1$ whose first partial derivatives are all in $V.$ An easy way to compute the prolongation of $V$ is to compute the vector space $V_i$ of forms of degree $d+1$ formed by integrating the elements of $V$ with respect to $x_i.$ Then $V_1\\cap \\cdots \\cap V_n$ is the prolongation of $V.$ We provide \\emph{Macaulay2} code for the computation of prolongations in Appendix A.\n\nIf $V = I(X)_2,$ then the prolongation of $V$ is $I(\\Sigma_1(X))_3.$ The prolongation of $I(\\Sigma_k(X))_{k+2}$ is $I(\\Sigma_{k+1}(X))_{k+3}.$ As each of these spaces is just the intersection of a set of finite dimensional linear spaces, the vector spaces $I(\\Sigma_k(X))_{k+2}$ can be computed effectively in many variables. \n\n\\subsection{Computing the ideal of a smooth curve}\n\nIn this section we discuss the examples that we have been able to compute so far, most of which also appear in \\cite{sidver}. Essentially we need a mechanism for writing down the generators of $I(X).$ \nA curve of degree $\\geq 4g+3$ has a determinantal presentation by \\cite{EKS}, where matrices whose 2-minors generate $I(X)$ for elliptic and hyperelliptic curves are given. We can also re-embed plane curves into higher dimensional spaces. We give explicit examples of each class below. We computed the ideals of the secant varieties using the idea of prolongation described in the previous section; a small worked example follows.
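\n\nAs a hedged illustration (a classical computation, with the matrices written out by us): let $X \\subset \\ensuremath{\\mathbb{P}}^4$ be the rational normal quartic, so that $V = I(X)_2$ is the $6$-dimensional span of the $2 \\times 2$ minors of\n\\[\n\\begin{pmatrix}\nx_0 & x_1 & x_2 & x_3\\\\\nx_1 & x_2 & x_3 & x_4\n\\end{pmatrix}.\n\\]\nThe prolongation of $V$ is the $1$-dimensional space spanned by\n\\[\n\\det \\begin{pmatrix}\nx_0 & x_1 & x_2\\\\\nx_1 & x_2 & x_3\\\\\nx_2 & x_3 & x_4\n\\end{pmatrix}\n\\]\n(each partial derivative of this determinant is a quadric vanishing on $X$, hence lies in $V$), and this is exactly $I(\\Sigma_1(X))_3$: here $\\Sigma_1(X)$ is the cubic hypersurface cut out by this determinant.\n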
In each case, the projective dimension of the ideal generated by prolongation is equal to the codimension of the secant variety. Hence, we can deduce that the ideal of the secant variety is generated by the prolongation.\n\n\\begin{ex}[Re-embedding a plane curve with nodes: $g=2, d=5$, Example 1.5 in \\cite{sidver}]\\label{ex:g2d9}\nSuppose we have a plane quintic with 4 nodes. If we blow up the nodes we have a smooth curve of genus 2. We provide Macaulay 2 \\cite{mac2} code below that shows how to compute the ideal of the embedding of such a curve in $\\ensuremath{\\mathbb{P}}^7.$ This method of finding the equations of a smooth curve is due to F. Schreyer and was suggested to the first author by D. Eisenbud.\n\n\\begin{verbatim}\n--The homogeneous coordinate ring of P^2.\nS= ZZ\/32003[x_0..x_2]\n\n--These are the ideals of the 4 nodes\nI1 = ideal(x_0, x_1)\nI2 = ideal(x_0, x_2)\nI3 = ideal(x_1, x_2)\nI4 = ideal(x_0-x_1, x_1-x_2)\n\n--Forms in I vanishing twice at each of our chosen nodes\nI = intersect(I1^2, I2^2, I3^2, I4^2); \n\n--The degree 5 piece of I.\nM = flatten entries gens truncate(5, I)\n\n--The target of the rational map given by M.\nR = ZZ\/32003[y_0..y_8]\n\n--The rational map given by M and the ideal of its image.\nf = map(S, R, M)\nK = ker f\n\n--A random linear change of coordinates on P^8.\ng = map(R, R, {random(1, R), random(1, R),random(1, R),\nrandom(1, R),random(1, R),random(1, R),random(1, R),\nrandom(1, R),random(1, R)})\n\n--Add in the element y_8 and then eliminate it. \n--J = ideal of the cone over our plane quintic in P^7.\nJ =eliminate(y_8, g(K)+ideal(y_8));\n\\end{verbatim}\nThe graded Betti diagram of the ideal of the curve is Example \\ref{ex:g2d9I} and also Example 1.5 in \\cite{sidver}. Its secant ideal has graded Betti diagram given below.\n\\begin{verbatim}\n 0 1 2 3 4\n total: 1 12 16 8 3\n 0: 1 . . . . \n 1: . . . . . \n 2: . 12 16 . . \n 3: . . . 4 . \n 4: . . . 4 3\n\\end{verbatim}\n\n\\end{ex}\n\n\n\\begin{ex}[A determinantal curve: $g = 2, d=12$, Example 4.8 in \\cite{sidver}]\\label{ex:det}\nFollowing \\cite{EKS}, we can write down matrices whose $2 \\times 2$ minors generate the ideal of a hyperelliptic curve. We give such a matrix below for a curve with $g=2$ and $d=12.$\n\n\\[\\begin{pmatrix}{x}_{0}& {x}_{1}& {x}_{2}& {x}_{3}& {y}_{0}\\\\ {x}_{1}& {x}_{2}& {x}_{3}& {x}_{4}& {y}_{1}\\\\ {x}_{2}& {x}_{3}& {x}_{4}& {x}_{5}& {y}_{2}\\\\ {x}_{3}& {x}_{4}& {x}_{5}& {x}_{6}& {y}_{3}\\\\ {y}_{0}& {y}_{1}& {y}_{2}& {y}_{3}& {x}_{1}+{x}_{2}+{x}_{3}+{x}_{4}+{x}_{5}\\\\ \\end{pmatrix}\\]\n\\bigskip\n\nThe varieties $\\Sigma_k$, $k=0,1,2$, have the graded Betti diagrams below. Notice that the index of the final row of each diagram is $2k,$ indicating that the homogenous coordinate ring of $\\Sigma_k$ has regularity $2k$ and that the ideal of $\\Sigma_k$ has regularity $2k+1$ as predicted by Conjecture~\\ref{conj:reg}. Moreover, our curve sits in $\\ensuremath{\\mathbb{P}}^{10},$ and $\\dim \\Sigma_k = 2k+1,$ so the index of the last column is the codimension of $\\Sigma_k,$ indicating that each variety is arithmetically Cohen-Macaulay. We can also see that $I(\\Sigma_k)$ is generated in degree $k+2$ and has linear syzygies up to stage $p$ where $12 = 2g+2k+1+p$ and that $\\beta_{9-2k, 11} = \\binom{g+k}{k+1}$ as in Conjecture 1.4 in \\cite{sidver}.\n\n\\begin{verbatim}\n\n 0 1 2 3 4 5 6 7 8 9\n total: 1 43 222 558 840 798 468 147 17 2 \n 0: 1 . . . . . . . . . \n 1: . 43 222 558 840 798 468 147 8 . \n 2: . . . . . . . . 
9 2\n \\end{verbatim}\n\n\\pagebreak\n\\begin{verbatim}\n 0 1 2 3 4 5 6 7\n total: 1 70 283 483 413 155 14 3 \n 0: 1 . . . . . . . \n 1: . . . . . . . . \n 2: . 70 283 483 413 155 . . \n 3: . . . . . . 7 . \n 4: . . . . . . 7 3\n\n 0 1 2 3 4 5\n total: 1 41 94 61 11 4 \n 0: 1 . . . . . \n 1: . . . . . . \n 2: . . . . . . \n 3: . 41 94 61 . . \n 4: . . . . . . \n 5: . . . . 6 . \n 6: . . . . 5 4\n\n\\end{verbatim}\n\\end{ex}\n\n\n\\begin{ex}[A Veronese re-embedding of a plane curve: $g=3, d=12$]\\label{ex:ver}\nLet $X$ be a smooth plane curve of degree 4 and genus 3. Re-embedding this curve via the degree 3 Veronese map we have a curve of degree 12 in $\\ensuremath{\\mathbb{P}}^9.$ Below we give the graded Betti diagram of the curve and its first two secant varieties.\n\\begin{verbatim}\n\n 0 1 2 3 4 5 6 7 8\n total: 1 33 144 294 336 210 69 16 3\n 0: 1 . . . . . . . .\n 1: . 33 144 294 336 210 48 . .\n 2: . . . . . . 21 16 3\n\n 0 1 2 3 4 5 6\n total: 1 38 108 102 43 18 6\n 0: 1 . . . . . .\n 1: . . . . . . .\n 2: . 38 108 102 10 . .\n 3: . . . . 30 . .\n 4: . . . . 3 18 6\n\t \\end{verbatim}\n\t \\pagebreak\n\t \\begin{verbatim}\n 0 1 2 3 4\n total: 1 8 23 26 10\n 0: 1 . . . .\n 1: . . . . .\n 2: . . . . .\n 3: . 8 . . .\n 4: . . 6 . .\n 5: . . 16 10 .\n 6: . . 1 16 10\n\\end{verbatim}\n\n\\end{ex}\n\nExamples \\ref{ex:g2d9}, \\ref{ex:det} and \\ref{ex:ver} together suggest an additional conjecture, that row $2k$ of the Betti diagram of $\\Sigma_k$ has precisely $g$ nonzero elements.\n\\section{Secant varieties as vector bundles: Terracini recursion}\nThe definition of a secant variety as the Zariski closure of the union of secant lines of $X$ does not lend itself to thinking about a secant variety geometrically. A more elegant point of view is to realize that a secant line to a projective variety $X$ is just the span of a length two subscheme of $X$. Thus we should think of $\\Sigma(X)$ as the image of a $\\ensuremath{\\mathbb{P}}^1$-bundle over the space of length two subschemes of X, $\\operatorname{Hilb}^2X$. One nice consequence of this point of view is that $\\operatorname{Hilb}^2X=\\operatorname{Bl}_{\\Delta}(X\\times X)\/S_2$ is smooth as long as $X$ is, and we obtain a geometric model for the secant variety on which we can apply standard cohomological techniques. Perhaps more importantly, this $\\ensuremath{\\mathbb{P}}^1$-bundle can be constructed explicitly via blowing up. \n\nUnder mild hypotheses on the positivity of the embedding of $X,$ the blowup of $\\ensuremath{\\mathbb{P}}^n$ at $X$ produces a desingularization of $\\Sigma(X)$ as a $\\ensuremath{\\mathbb{P}}^1$ bundle over $\\operatorname{Hilb}^2X$. Thinking of this bundle embedded inside the blowup of $\\ensuremath{\\mathbb{P}}^n$ at $X$ we can also examine how it meets the exceptional divisor. What we see above a point $p \\in X$ is a $\\ensuremath{\\mathbb{P}}^{n-2}$ that meets the proper transform of $\\Sigma(X)$ in the projection of $X$ into $\\ensuremath{\\mathbb{P}}^{n-2}$ from the tangent space at $p.$\n\nIn this section we provide explicit examples illustrating this point of view. In \\S 3.1 we illustrate the geometry of the blowups desingularizing secant varieties when $X$ is a set of 5 points in $\\ensuremath{\\mathbb{P}}^3.$ Although the secant varieties of rational normal curves are well-understood algebraically, we discuss them here as we may make explicit computations and give proofs which highlight the main ideas used in the more general case in \\cite{vermeiresecreg} but are much simpler. 
We make computations with rational normal curves of degrees 3 and 4 in \\S 3.2 and \\S 3.3. In \\S 3.4 we discuss how to think about the cohomology along the fibers of these blowups and how cohomology may be used to show that the secant variety is projectively normal.\n\n\\subsection{Secant varieties of points}\n\nLet $X\\subset \\ensuremath{\\mathbb{P}}^n$ be a finite set of points in linearly general position. We will analyze the successive blowups of $X$ and the proper transforms of its secant varieties, moving up one dimension at each stage. It is instructive to consider the geometry of the blowups of a finite set of points and its secant varieties as we can easily restrict our attention to the picture above a single point. We think the general picture of the geometry of the blowups will become transparent if we illustrate the construction in a concrete example. Although this computation may seem quite special, if $X$ consists of $n+2$ points in linearly general position, then a result of Kapranov \\cite{kapranov} tells us that the sequence of blowups actually gives a realization of $\\overline{M}_{0,n}.$\n\n\\subsubsection{Points in $\\ensuremath{\\mathbb{P}}^3$}\n\nLet $B_0 = \\ensuremath{\\mathbb{P}}^3$ and let $\\Sigma_0=X$ be a set of 5 linearly general points denoted $p_1, p_2, p_3, p_4, p_5$. Let $B_1$ be the blowup of $B_0$ at $\\Sigma_0.$ We let $E_i$ denote the exceptional divisor above $p_i$ and $\\widetilde{\\Sigma}_j$ be the proper transform of $\\Sigma_j$ for $j = 1,2.$ \n\n\\subsubsection{The first recursion}\nThe exceptional divisor $E_i$ is a $\\ensuremath{\\mathbb{P}}^2$ in which $\\widetilde{\\Sigma}_1 \\cap E_i$ is a set of 4 points in $\\ensuremath{\\mathbb{P}}^2$ and $\\widetilde{\\Sigma}_2 \\cap E_i$ is the union of lines joining these points in $\\widetilde{\\Sigma}_1$. Below we give a diagram depicting the exceptional divisor above a point $p_1$ together with the strict transforms of the span of $p_1$ with two other points $p_2$ and $p_3.$\n\\[\n\\includegraphics[width=6cm]{blowupPts.pdf}\n\\]\n The picture in $E_1$ can be found by projecting $\\Sigma_1$ and $\\Sigma_2$ into $\\ensuremath{\\mathbb{P}}^2$ away from $p_1.$\n\nTaking a more global picture, we see that $\\widetilde{\\Sigma}_1$ is a smooth variety consisting of the disjoint unions of proper transforms of lines. 
However, $\\widetilde{\\Sigma}_2$ is not smooth as the components of $\\widetilde{\\Sigma}_2$ intersect the $E_i$ in lines which meet at points.\n\n\n\n\\subsubsection{The second recursion}\nLet us now define $B_2$ to be the blowup of $B_1$ at $\\widetilde{\\Sigma}_1.$ We will abuse notation and let $E_i$ denote its own proper transform in $B_2.$ \nTo analyze $B_2$, it may be helpful to restrict our attention locally to a single point $p = [0:0:0:1]$ and examine the fiber over $p$ after blowing up $p$ and then blowing up the proper transform of a line containing $p.$ Using bihomogeneous coordinates $\\ensuremath{{\\bf x}}$ and $\\ensuremath{{\\bf y}}$ on $\\ensuremath{\\mathbb{P}}^3 \\times \\ensuremath{\\mathbb{P}}^2,$ the blowup of $\\ensuremath{\\mathbb{P}}^3$ is defined by $I(B_p) = \\langle x_iy_j-x_jy_i \\mid i \\neq j \\in \\{0,1,2\\} \\rangle.$ Blowing up the proper transform of the line $L$ defined by $\\langle x_0, x_1 \\rangle$ yields a subvariety of $\\ensuremath{\\mathbb{P}}^3 \\times \\ensuremath{\\mathbb{P}}^2 \\times \\ensuremath{\\mathbb{P}}^1$ which can be given in tri-homogeneous coordinates $\\ensuremath{{\\bf x}}, \\ensuremath{{\\bf y}}, \\ensuremath{{\\bf z}}$ by\n\\[\nI(B_{p,L}) = \\langle x_iy_j-x_jy_i \\mid i \\neq j \\in \\{0,1,2\\} \\rangle + \\langle x_0z_1-x_1z_0, y_1z_0-y_0z_1\\rangle.\n\\]\nTo understand the fiber above $p,$ we add the ideal of the point to $I(B_{p,L})$ to get $\\langle x_0, x_1, x_2, y_1z_0-y_0z_1 \\rangle.$ The $\\ensuremath{{\\bf x}}$ coordinates alone cut out $p \\times \\ensuremath{\\mathbb{P}}^2 \\times \\ensuremath{\\mathbb{P}}^1,$ and the equation in the $\\ensuremath{{\\bf y}}$ and $\\ensuremath{{\\bf z}}$-variables cuts out the blowup of the point $p \\times [0:0:1]$ in $p \\times \\ensuremath{\\mathbb{P}}^2.$ Thus, we can see that if we blow up a point $p$ and a line containing it, above $p$ we get a $\\ensuremath{\\mathbb{P}}^2$ in which we have blown up one point. \n\nTurning back to the global picture, we see that when we have blown up $\\widetilde{\\Sigma}_1,$ the proper transform of $\\widetilde{\\Sigma}_2$ in $B_2$ is smooth. Moreover, above each $p_i$ we have a copy of our global picture projected into $\\ensuremath{\\mathbb{P}}^2$ away from a point of $X.$ Indeed, each $E_i$ is a $\\ensuremath{\\mathbb{P}}^2$ in which we have blown up 4 points. These 4 points correspond to the intersection of $\\widetilde{\\Sigma}_1$ with $E_i.$ The intersection of the proper transform of $\\Sigma_2$ with $E_i$ consists of the union of the exceptional divisors of the 4 points that are blown up in $E_i.$\n\n\\subsection{Blowing up a rational normal curve of degree 3}\nWe begin with the twisted cubic $X\\subset\\ensuremath{\\mathbb{P}}^3$. In this case the equations of the blowup are easy to write down and we can realize the blowup of $\\ensuremath{\\mathbb{P}}^3$ along $X$ as a $\\ensuremath{\\mathbb{P}}^1$ bundle over $\\operatorname{Hilb}^2 \\ensuremath{\\mathbb{P}}^1$ explicitly.
The secant variety $\\Sigma(X)=\\ensuremath{\\mathbb{P}}^3$ is smooth, but the embedding is positive enough for $I(X)$ to have linear first syzygies, which is the condition required for the setup in \\cite{flip1}.\n\nThe three quadrics $x_0x_2-x_1^2, x_0x_3-x_1x_2, x_1x_3-x_2^2,$ which generate $I(X)$ give a rational map $\\ensuremath{\\mathbb{P}}^3 \\dashrightarrow \\ensuremath{\\mathbb{P}}^2.$ The blowup of $\\ensuremath{\\mathbb{P}}^3$ at $X$ is the graph of this map in $\\ensuremath{\\mathbb{P}}^3 \\times \\ensuremath{\\mathbb{P}}^2$ with bihomogeneous coordinates $\\ensuremath{{\\bf x}}$ and $\\ensuremath{{\\bf y}}.$ This graph is defined set-theoretically by equations coming from Koszul relations, for example $y_0(x_0x_3-x_1x_2)-y_1(x_0x_2-x_1^2).$ But these relations are generated by the two relations coming from linear syzygies:\n\\[\nx_0y_2-x_1y_1+x_2y_0, x_1y_2-x_2y_1+x_3y_0,\n\\]\nwhich generate the ideal of the blowup.\n\nLet $\\widetilde{\\ensuremath{\\mathbb{P}}}^3$ denote the blowup of $\\ensuremath{\\mathbb{P}}^3$ along $X$ and $E$ denote the exceptional divisor. Further, let $q_1$ and $q_2$ denote the restrictions of the projections from $\\ensuremath{\\mathbb{P}}^3 \\times \\ensuremath{\\mathbb{P}}^2$ to the first and second factors to $\\widetilde{\\ensuremath{\\mathbb{P}}}^3.$ Then $q_1:\\widetilde{\\ensuremath{\\mathbb{P}}}^3 \\to \\ensuremath{\\mathbb{P}}^3$ is the blowup map and $q_2:\\widetilde{\\ensuremath{\\mathbb{P}}}^3 \\to \\ensuremath{\\mathbb{P}}^2$ is the morphism induced by $|2H-E|.$ We analyze the fibers of both maps explicitly below.\n\nAbove any point $p \\in X,$ the fiber of $q_1$ is a $\\ensuremath{\\mathbb{P}}^1$. For example, if $p = [0:0:0:1],$ then $q_1^{-1}(p)$ is defined by adding $\\langle x_0, x_1, x_2 \\rangle$ to the ideal of the blowup to get \\[\\langle x_0, x_1, x_2, x_3y_0\\rangle = \\langle x_0, x_1, x_2, x_3 \\rangle \\cap \\langle x_0, x_1, x_2, y_0 \\rangle.\\]\nThe first primary component is irrelevant, and the second defines the $\\ensuremath{\\mathbb{P}}^1$ with points $([0:0:0:1],[0:y_1:y_2].)$ \n\nMoreover, we can see from these equations that the blowup of $\\ensuremath{\\mathbb{P}}^3$ along the twisted cubic is a $\\ensuremath{\\mathbb{P}}^1$-bundle over $\\ensuremath{\\mathbb{P}}^2$. To see $q_2^{-1}([1:0:0]),$ add the ideal $\\langle y_1, y_2 \\rangle$ to the ideal of the blowup to get the ideal $y_0 \\langle x_2, x_3 \\rangle.$ This shows that the fiber above $[1:0:0]$ consists of points of the form $([a:b:0:0] , [1:0:0]).$ As a length $n$ subscheme of $\\ensuremath{\\mathbb{P}}^1$ has an ideal generated by a single form of degree $n,$ $\\operatorname{Hilb}^n\\ensuremath{\\mathbb{P}}^1=\\ensuremath{\\mathbb{P}}^n$, and so this matches exactly what we expect from the description above. \n\n\n\nChoosing a less trivial example, in the next section we will begin to see a recursive geometric picture analogous to what we saw when we blew up a finite set of points and its secant varieties. Again, we will see the projection of our global picture in the fibers above points of our original variety. Note that we are projecting to the projectivized normal bundle and hence away from the \\emph{tangent} space to a point on our variety. 
(In our earlier example, we projected from a single point because a zero-dimensional variety has a zero-dimensional tangent space.)\n\n\\subsection{Blowing up a rational normal curve of degree 4}\nLet $X \\subset \\ensuremath{\\mathbb{P}}^4$ be a rational normal curve with defining ideal minimally generated by the six $2\\times 2$ minors of the matrix\n\\[\n\\begin{pmatrix}\nx_0 & x_1 & x_2 & x_3\\\\\nx_1& x_2& x_3& x_4\n\\end{pmatrix}\n\\]\nThese six quadrics are a linear system on $\\ensuremath{\\mathbb{P}}^4$ with base locus $X,$ so they give a rational map $\\ensuremath{\\mathbb{P}}^4 \\dashrightarrow \\ensuremath{\\mathbb{P}}^5.$ The closure of the graph of this map in $\\ensuremath{\\mathbb{P}}^4 \\times \\ensuremath{\\mathbb{P}}^5$ is $B = B_X(\\ensuremath{\\mathbb{P}}^4),$ $\\ensuremath{\\mathbb{P}}^4$ blown up at $X.$\n\n\\subsubsection{The ideal of the blowup}\n Let $R = k[\\ensuremath{{\\bf x}}, \\ensuremath{{\\bf y}}].$ Since $I(B)$ defines a subscheme of $\\ensuremath{\\mathbb{P}}^4 \\times \\ensuremath{\\mathbb{P}}^5$ it is a bihomogeneous ideal. It must contain homogeneous forms in the $y$-variables that define the image of $\\ensuremath{\\mathbb{P}}^4$ in $\\ensuremath{\\mathbb{P}}^5.$ Since each $y_i$ is the image of a quadric vanishing on $X$, the ideal $I(B)$ will also contain bihomogeneous forms that are linear in the $y_i$ corresponding to syzygies.\n\nIn our example, we get an ideal with 9 generators by running the Macaulay 2 \\cite{mac2} code\n\\begin{verbatim}\n--The coordinate ring of P^4 x P^5.\nS = ZZ\/32003[x_0..x_4, y_0..y_5]\n\n--The coordinate ring of P^4 with an extra parameter.\n--The parameter t ensures that the kernel is bi-homogeneous.\nB = ZZ\/32003[x_0..x_4,t]\n\n--The map S -> B.\nf = map(B,S,{x_0, x_1, x_2, x_3, x_4, \nt*(-x_1^2+x_0*x_2), t*(-x_1*x_2+x_0*x_3),\n t*(-x_2^2+x_1*x_3), t*(-x_1*x_3+x_0*x_4),\n t*(-x_2*x_3+x_1*x_4), t*(-x_3^2+x_2*x_4)})\n\n--The ideal defining the blowup in S\nK = ideal mingens ker f\n\\end{verbatim}\nOne generator, ${y}_{2}^{2}-{y}_{2} {y}_{3}+{y}_{1}{y}_{4}-{y}_{0} {y}_{5},$ in the $\\ensuremath{{\\bf y}}$-variables alone cuts out the image of $\\ensuremath{\\mathbb{P}}^4$ in $\\ensuremath{\\mathbb{P}}^5.$ The other 8 generators are constructed from linear syzygies on generators of $I(X)$ as in the previous example. We can use the ideal to analyze what happens when we take the pre-image of a point $p \\in \\ensuremath{\\mathbb{P}}^4$ under the blowup map. There are two cases, depending on whether $p$ is contained in $X.$\n\nCase 1: Suppose that $p=[0:1:0:0:0] \\notin X.$ When we compute $I(p)+I(B)$ and then use local coordinates where $x_1=1,$ we get the ideal $\\langle x_0, x_2, x_3, x_4, y_0, y_1, y_2, y_3, y_4\\rangle.$ Thus, the pre-image of $p$ is a single point as expected.\n\nCase 2: Suppose that $p=[1:0:0:0:0] \\in X.$ Again, we compute $I(p)+I(B)$ and then use local coordinates where $x_0=1.$ We get the ideal $\\langle x_1, x_2, x_3, x_4, y_2, y_4, y_5 \\rangle$\nwhich defines a $\\ensuremath{\\mathbb{P}}^2.$\n\n\\subsubsection{The intersection of $\\widetilde{\\Sigma}_1$ with the exceptional divisor}\n To examine what happens to the secant variety of $X$ algebraically, we add the equation of $\\Sigma_1(X)$ to $I(B)$. Above the point $[1:0:0:0:0] $ we have the intersection of the $\\ensuremath{\\mathbb{P}}^2$ with coordinates $[y_0:y_1:y_3]$ with the hypersurface defined by $y_1^2-y_0y_3$.
The conic $y_1^2-y_0y_3$ is the projection of our original curve away from the line with points of the form $[a:b:0:0:0].$ We have $[y_0:y_1:y_3] = [t^2:t^3:t^4],$ which is defined by the given equation. Schematically, we have\n \\[\n \\includegraphics[width=4cm]{rnc3.pdf}\n \\]\n\n\n\n\\subsection{Cohomology along the fibers}\nIt was observed by the second author in \\cite{vermeireidealreg} that one could use Bertram's \\textit{Terracini Recursiveness} to obtain cohomological relationships between different embeddings of the same curve. For example, let $X$ be a smooth curve embedded in $\\ensuremath{\\mathbb{P}}^n$ by a line bundle $L$ of degree at least $2g+3.$ As discussed above, blowing up $\\ensuremath{\\mathbb{P}}^n$ along $X$ desingularizes $\\Sigma_1$, and thus blowing up again along the proper transform of $\\Sigma_1$ yields a smooth variety $B_2$\n$$B_2 \\stackrel{\\pi_2}{\\rightarrow} B_1 \\stackrel{\\pi_1}{\\rightarrow} \\ensuremath{\\mathbb{P}}^n=\\ensuremath{\\mathbb{P}}\\Gamma(X,L).$$\nLet $\\pi=\\pi_1\\circ\\pi_2$. \n\nIf $x\\in X$, then $\\pi_1^{-1}(x)\\cong\\ensuremath{\\mathbb{P}}^{n-2}=\\ensuremath{\\mathbb{P}}\\Gamma(X,L(-2x))$. By Terracini Recursiveness, $\\pi^{-1}(x)$ is the blow up of $\\ensuremath{\\mathbb{P}}\\Gamma(X,L(-2x))$ along a copy of $X$ embedded by $L(-2x)$; equivalently $\\pi^{-1}(x)$ is precisely what is obtained by the projection $\\ensuremath{\\mathbb{P}}^n\\dashrightarrow\\ensuremath{\\mathbb{P}}^{n-2}$ from the line tangent to $X$ at $x$. In fact, it can be shown that the exceptional divisor of the desingularization $\\pi:\\widetilde{\\Sigma}\\rightarrow\\Sigma$ is precisely $X\\times X$, where the restriction $\\pi:X\\times X\\rightarrow X$ is projection \\cite[Lemma 3.7]{flip1}.\n\nBecause $B_2$ is obtained from $\\ensuremath{\\mathbb{P}}^n$ by blowing up twice along smooth subvarieties, we know that $\\operatorname{Pic}(B_2)=\\ensuremath{\\mathbb{Z}} H+\\ensuremath{\\mathbb{Z}} E_1+\\ensuremath{\\mathbb{Z}} E_2$, where $H$ is the proper transform of a hyperplane section, $E_1$ is the proper transform of the exceptional divisor of the first blow-up $\\pi_1$, and $E_2$ is the exceptional divisor of the second blow-up. Note that we similarly have $\\operatorname{Pic}(\\pi^{-1}(x))=\\ensuremath{\\mathbb{Z}} \\widetilde{H}+\\ensuremath{\\mathbb{Z}} \\widetilde{E}_1$. We examine the relationships among line bundles on $\\ensuremath{\\mathbb{P}}^n, B_1,$ and $B_2.$ \n\nBecause the generic hyperplane in $\\ensuremath{\\mathbb{P}}^n$ misses $x\\in X$, the restriction of $H$ to $\\pi^{-1}(x)$ is trivial. Because $E_1$ is the proper transform of the exceptional divisor of the first blow-up, the restriction of $E_1$ to $\\pi^{-1}(x)$ is $-\\widetilde{H}$. Finally, by Terracini Recursiveness, the restriction of $E_2$ to $\\pi^{-1}(x)$ is $\\widetilde{E}_1$. Thus a typical effective line bundle on $B_2$ of the form $\\mathcal{O}_{B_2}(aH-bE_1-cE_2)$ restricts to $\\mathcal{O}_{\\pi^{-1}(x)}(b\\widetilde{H}-c\\widetilde{E}_1)$.\n\n\n\n\n\n\n\\subsubsection{Regularity and projective normality of the secant variety to a rational normal curve}\nIn this section we will illustrate how to apply the first stage of Bertram's Terracini Recursiveness in the special case where $X$ is a rational normal curve. Suppose that $X\\subset\\ensuremath{\\mathbb{P}}^n$ is a rational normal curve with $L=\\mathcal{O}_X(1)=\\mathcal{O}_{\\ensuremath{\\mathbb{P}}^1}(n)$ of degree at least $4$; we will show directly that $\\mathcal{I}_{\\Sigma}$ is $3$-regular and that $\\Sigma$ is projectively normal.
\n\nIt follows \cite[Proposition 9]{vermeiresecreg} from the description of the exceptional divisor of the desingularization $\pi:\widetilde{\Sigma}\rightarrow\Sigma$ above that $\Sigma$ has rational singularities; i.e. $R^i\pi_*\mathcal{O}_{\widetilde{\Sigma}}=0$ for $i>0$. Thus by Leray-Serre we immediately have $H^i(\widetilde{\Sigma},\mathcal{O}_{\widetilde{\Sigma}}(kH))=H^i(\Sigma,\mathcal{O}_{\Sigma}(k))$ for all $i,k$. \n\n\n\begin{prop}\nLet $X\subset\ensuremath{\mathbb{P}}^n$ be a rational normal curve. Then $\mathcal{I}_{\Sigma}$ is $3$-regular.\n\end{prop}\n\n\begin{proof}\nWe show directly that $H^i(\ensuremath{\mathbb{P}}^n,\mathcal{I}_{\Sigma}(3-i))=0$ for $i\geq1$. As $\Sigma$ is $3$-dimensional, we have only to show the four vanishings $1\leq i\leq4$. \n\nFor each $i \geq 2$ and all $k$ we have $$H^i(\ensuremath{\mathbb{P}}^n,\mathcal{I}_{\Sigma}(k))=H^{i-1}(\Sigma,\mathcal{O}_{\Sigma}(k))=H^{i-1}(\widetilde{\Sigma},\mathcal{O}_{\widetilde{\Sigma}}(kH)).$$ \n\nThe restriction of $\mathcal{O}_{\widetilde{\Sigma}}(kH)$ to a fiber of the map $p:\widetilde{\Sigma} \to \ensuremath{\mathbb{P}}^2$ is $\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^1}(k).$ When $i=4$ we have $k=-1$; all cohomology along the fibers then vanishes, and so in particular $H^{3}(\widetilde{\Sigma},\mathcal{O}_{\widetilde{\Sigma}}(-H))=0$. In general, if $k \geq -1,$ all of the higher cohomology along the fibers vanishes. This implies that the higher direct image sheaves vanish and that $H^i(\widetilde{\Sigma}, \ensuremath{\mathcal{O}}_{\widetilde{\Sigma}}(kH)) = H^i(\ensuremath{\mathbb{P}}^2, p_*\ensuremath{\mathcal{O}}_{\widetilde{\Sigma}}(kH)).$ When $i=3$ we have $k=0,$ and $H^2(\widetilde{\Sigma},\mathcal{O}_{\widetilde{\Sigma}})=H^2(\ensuremath{\mathbb{P}}^2, p_*\ensuremath{\mathcal{O}}_{\widetilde{\Sigma}}) = H^2(\ensuremath{\mathbb{P}}^2,\mathcal{O}_{\ensuremath{\mathbb{P}}^2})=0$.\n\n\nFor $i=2$, we blow up $B_1$ along $\widetilde{\Sigma}$ and use Terracini recursiveness. Consider the sequence\n$$0\rightarrow\mathcal{O}_{B_2}(H-E_1-E_2)\rightarrow\mathcal{O}_{B_2}(H-E_2)\rightarrow\mathcal{O}_{E_1}(H-E_2)\rightarrow0.$$\nAs discussed earlier, the restriction of $\mathcal{O}_{E_1}(H-E_2)$ to a fiber of the flat morphism $E_1\rightarrow X$ is $\mathcal{O}_{\pi^{-1}(x)}(-\widetilde{E}_1)$, but $H^i(\pi^{-1}(x),\mathcal{O}_{\pi^{-1}(x)}(-\widetilde{E}_1))=H^i(\ensuremath{\mathbb{P}}^{n-2},\mathcal{I}_X)=0$ for $i\geq0$. Thus it follows that \n\begin{eqnarray*}\nH^i(B_2,\mathcal{O}_{B_2}(H-E_1-E_2))&=&H^i(B_2,\mathcal{O}_{B_2}(H-E_2))\\\n&=&H^i(B_1,\mathcal{I}_{\widetilde{\Sigma}}(H))\\\n&=&H^i(\ensuremath{\mathbb{P}}^n,\mathcal{I}_{\Sigma}(1)),\n\end{eqnarray*}\nwhere the last equality is a consequence of $\Sigma$ having rational singularities.\nNow consider the sequence on $B_2$\n$$0\rightarrow\mathcal{O}_{B_2}(H-E_1-E_2)\rightarrow\mathcal{O}_{B_2}(H-E_1)\rightarrow\mathcal{O}_{E_2}(H-E_1)\rightarrow0,$$\nin which the cohomology of $\mathcal{O}_{E_2}(H-E_1)$ may be computed on $\widetilde{\Sigma}$, since this sheaf is pulled back from $\widetilde{\Sigma}$. Once again considering $\widetilde{\Sigma}$ as a $\ensuremath{\mathbb{P}}^1$-bundle over $\ensuremath{\mathbb{P}}^2$, we see that $\mathcal{O}_{\widetilde{\Sigma}}(H-E_1)$ is $\mathcal{O}_{\ensuremath{\mathbb{P}}^1}(-1)$ along the fibers, thus $H^i(\widetilde{\Sigma},\mathcal{O}_{\widetilde{\Sigma}}(H-E_1))=0$ for $i\geq0$. 
Putting this together, we have\n\begin{eqnarray*}\nH^i(\ensuremath{\mathbb{P}}^n,\mathcal{I}_{\Sigma}(1))&=&H^i(B_2,\mathcal{O}_{B_2}(H-E_1-E_2))\\\n&=&H^i(B_1,\mathcal{O}_{B_1}(H-E_1))\\\n&=&H^i(\ensuremath{\mathbb{P}}^n,\mathcal{I}_{X}(1))\\\n&=&0.\n\end{eqnarray*}\n\nFor $i=1$, in a similar fashion it is enough to show $H^1(B_2,\mathcal{O}_{B_2}(2H-E_1-E_2))=0$. From the sequence\n$$0\rightarrow\mathcal{O}_{B_2}(2H-E_1-E_2)\rightarrow\mathcal{O}_{B_2}(2H-E_1)\rightarrow\mathcal{O}_{E_2}(2H-E_1)\rightarrow0$$\nand the fact that $H^1(B_2,\mathcal{O}_{B_2}(2H-E_1))=H^1(\ensuremath{\mathbb{P}}^n,\mathcal{I}_X(2))=0$, it suffices to show that $H^0(B_2,\mathcal{O}_{B_2}(2H-E_1))\rightarrow H^0(E_2,\mathcal{O}_{E_2}(2H-E_1))$ is surjective. However, we know that $H^0(B_2,\mathcal{O}_{B_2}(2H-E_1-E_2))=H^0(B_1,\mathcal{I}_{\widetilde{\Sigma}}(2))=0$, which shows that the map is injective. Thus, if we show that the two spaces have the same dimension, we are done. We have the well-known identification $H^0(B_2,\mathcal{O}_{B_2}(2H-E_1))=H^0(\ensuremath{\mathbb{P}}^n,\mathcal{I}_X(2))$. Further, let $\varphi:B_1\rightarrow\ensuremath{\mathbb{P}}^s$ be the map given by the quadrics vanishing on $X$ (i.e. the morphism induced by the linear system $|2H-E_1|$); we then have the restriction $\overline{\varphi}:\widetilde{\Sigma}\rightarrow\ensuremath{\mathbb{P}}^2$. \n\nNote that $\widetilde{\Sigma} \subset \ensuremath{\mathbb{P}}^n \times \ensuremath{\mathbb{P}}^s$ is a $\ensuremath{\mathbb{P}}^1$-bundle over $\ensuremath{\mathbb{P}}^2$. It is, further, a nice exercise to show that the double cover $\ensuremath{\mathbb{P}}^1\times\ensuremath{\mathbb{P}}^1=\widetilde{\Sigma}\cap E_1\rightarrow\ensuremath{\mathbb{P}}^2$ is, in situ, the natural double cover re-embedded by the Veronese $v_{n-2}$. Therefore, $\mathcal{O}_{\ensuremath{\mathbb{P}}^s}(1)|_{\ensuremath{\mathbb{P}}^2}=\mathcal{O}_{\ensuremath{\mathbb{P}}^2}(n-2)$, and this implies that $\overline{\varphi}^*\mathcal{O}_{\ensuremath{\mathbb{P}}^2}(n-2)=\mathcal{O}_{\widetilde{\Sigma}}(2H-E_1)$. Hence, $H^0(\widetilde{\Sigma},\mathcal{O}_{\widetilde{\Sigma}}(2H-E_1))=H^0(\ensuremath{\mathbb{P}}^2,\mathcal{O}_{\ensuremath{\mathbb{P}}^2}(n-2))$. A quick computation gives $h^0(\ensuremath{\mathbb{P}}^n,\mathcal{I}_X(2))=h^0(\ensuremath{\mathbb{P}}^2,\mathcal{O}_{\ensuremath{\mathbb{P}}^2}(n-2))=\binom{n}{2}$; indeed, since $X$ is projectively normal of degree $n$, we have $h^0(\ensuremath{\mathbb{P}}^n,\mathcal{I}_X(2))=\binom{n+2}{2}-(2n+1)=\binom{n}{2}$, while $h^0(\ensuremath{\mathbb{P}}^2,\mathcal{O}_{\ensuremath{\mathbb{P}}^2}(n-2))=\binom{n}{2}$ directly.\n\end{proof}\n\n\n\nIn fact, we see from the proof that $H^1(\ensuremath{\mathbb{P}}^n,\mathcal{I}_{\Sigma}(1))=0$. As 3-regularity implies that $H^1(\ensuremath{\mathbb{P}}^n,\mathcal{I}_{\Sigma}(k))=0$ for all $k \geq 2$, we also have:\n\begin{cor}\n$\Sigma$ is projectively normal.\n\qed\n\end{cor}\n\n\n\n\section{Introduction}\nIdioms are an important yet complex language phenomenon. In certain corpora, 3 out of 10 sentences are estimated to contain idioms \citep{moon1998fixed}. The pervasive usage of idioms has motivated a wide range of linguistic studies \citep{cacciari2014idioms}. In NLP, idioms are involved in various tasks, including idiom recognition \citep{peng-feldman-2016-experiments, liu-hwa-2018-heuristically}, embedding learning \citep{tan-jiang-2021-learning}, and idiom comprehension \citep{zheng-etal-2019-chid}.\n\nHowever, these NLP systems assume a monolingual setting, and the models perform poorly due to the non-compositional nature of idioms. 
Adopting a multilingual setting may be beneficial, but existing multilingual idiom datasets are rather small \citep{moussallem-etal-2018-lidioms}. Machine translation systems also cannot help, because idiom translation is challenging \citep{salton-etal-2014-evaluation, cap-etal-2015-account, fadaee-etal-2018-examining}. Idiom translation is difficult even for humans, as it requires knowing both the source and target cultures and mastering diverse translation strategies \citep{wang2013study}. \n\nDespite these difficulties, there are not enough datasets for model improvement. Translating idioms from scratch is very laborious. Nevertheless, existing idiom dictionaries are readily available. Moreover, they usually provide more than one translation for each idiom, which naturally results in a paraphrase dataset. The paraphrase strategies used for idiom translation are usually more aggressive than in other paraphrase datasets such as PPDB \citep{ganitkevitch-etal-2013-ppdb}. Therefore, neural network models based on these dictionaries may benefit idiom translation more than naive retrieval.\n\nTo this end, we present \textbf{PETCI}, a \textbf{P}arallel \textbf{E}nglish \textbf{T}ranslation dataset of \textbf{C}hinese \textbf{I}dioms. PETCI is collected from an idiom dictionary, together with machine translation results from Google and DeepL. Instead of directly using models to translate idioms, we design tasks that require a model to distinguish gold translations from unsatisfactory ones and to rewrite translations to improve their quality. We show that models perform better as the dataset size increases, while such increases require no linguistic expertise. PETCI is publicly available\footnote{\url{https:\/\/github.com\/kt2k01\/petci}}.\n\nThe main contribution of this paper is two-fold. First, we collect a dataset that aims to improve Chinese idiom translation by both machine translation systems and language learners. Secondly, we test several baseline models on the tasks we define.\n\nIn the following sections, we begin with a detailed description of our data collection process (Section \ref{section:data-collection}), followed by case studies and statistics (Section \ref{section:statistics}). Based on the dataset, we define tasks and report the performance of the baseline models we choose (Sections \ref{section:model}, \ref{section:experiment}, and \ref{section:results}). Then, we discuss our observations, review related work and future directions (Section \ref{section:related-work}), and conclude (Section \ref{section:conclusion}). 
\n\n\\begin{table*}[t]\n\\centering\n\\begin{tabular}{c|p{4cm}|p{4cm}|c}\n\\hline\n\\textbf{Chinese} & \\multicolumn{1}{c|}{\\textbf{Dictionary Translation}} & \\multicolumn{1}{c|}{\\textbf{Machine Translation}} & \\textbf{Issues Identified}\\\\\n\\hline\n\\cjk{\u4e3a\u864e\u5085\u7ffc}\n&\n\\begin{tabular}{@{\\textbullet~}p{3.6cm}@{}}\n assist an evil-doer \\\\\n give wings to a tiger \\\\\n increase the power of a tyrant to do evil \\\\\n \\textcolor{red}{\\ul{lend support to an evil-doer like adding wings to a tiger}}\n\\end{tabular}\n&\n\\begin{tabular}{@{\\textbullet~}p{3.6cm}@{}}\n \\textcolor{blue}{\\ul{Fu Yi}} for the tiger \\\\\n for the tiger's \\textcolor{magenta}{\\ul{wings}} \\\\\n for the tiger \\\\\n for the tiger's \\textcolor{magenta}{\\ul{wing}} \\\\\n for the tiger's \\textcolor{magenta}{\\ul{sake}}\n\\end{tabular}\n&\n\\begin{tabular}{c}\n \\textcolor{blue}{Name hallucination} \\\\\n \\textcolor{magenta}{Single-word edit} \\\\\n \\textcolor{red}{Very long alternative}\n\\end{tabular}\n\\\\ \\hline\n\\cjk{\u716e\u8c46\u71c3\u8401}\n&\n\\begin{tabular}{@{\\textbullet~}p{3.6cm}@{}}\n fratricidal strife \\\\\n burn beanstalks to cook beans\\textcolor{magenta}{\\ul{ - }}one member of a family injuring another \\\\\n boil beans with bean-stalks\\textcolor{magenta}{\\ul{ - }}reference to a fight among brothers \\\\\n\\end{tabular}\n&\n\\begin{tabular}{@{\\textbullet~}p{3.6cm}@{}}\n \\textcolor{blue}{\\ul{boiled beans}} \\\\\n burning beanstalks cook the beans (idiom)\\textcolor{red}{\\ul{; }}to cause internecine strife \\\\\n\\end{tabular}\n&\n\\begin{tabular}{c}\n \\textcolor{blue}{Partial translation}\\\\\n \\textcolor{magenta}{Hyphenation}\\\\\n \\textcolor{red}{Semicolon}\n\\end{tabular}\n\\\\\n\\hline\n\\cjk{\u98ce\u6d41\u4e91\u6563}\n&\n\\begin{tabular}{@{\\textbullet~}p{3.6cm}@{}}\n \\textcolor{blue}{\\ul{(of old companions)}} separated \\textcolor{red}{\\ul{and}} scattered\\\\\n blown apart by the wind \\textcolor{red}{\\ul{and}} scattered like clouds\\\\\n \\textcolor{blue}{\\ul{(of relatives or friends, separated from each other)}} as the wind blowing \\textcolor{red}{\\ul{and}} the clouds scattering\\\\\n\\end{tabular}\n&\n\\begin{tabular}{@{\\textbullet~}p{3.6cm}@{}}\n \\textcolor{magenta}{\\ul{wind and clouds}}\\\\\n \\textcolor{magenta}{\\ul{Wind flow and clouds scatter}}\\\\\n \\textcolor{magenta}{\\ul{Wind flow and clouds scattered}}\\\\\n\\end{tabular}\n&\n\\begin{tabular}{c}\n \\textcolor{blue}{Parenthesis} \\\\\n \\textcolor{magenta}{Literal translation} \\\\\n \\textcolor{red}{Explicit parallelism}\n\\end{tabular}\n\\\\ \\hline\n\\cjk{\u8427\u89c4\u66f9\u968f}\n&\n\\begin{tabular}{@{\\textbullet~}p{3.6cm}@{}}\n follow established rules\\\\\n \\textcolor{blue}{\\ul{Tsao}} (a Han Dynasty prime minister) followed the rules set by \\textcolor{blue}{\\ul{Hsiao}} (his predecessor)\\\\\n\\end{tabular}\n&\n\\begin{tabular}{@{\\textbullet~}p{3.6cm}@{}}\n \\textcolor{blue}{\\ul{Xiao}} Gui \\textcolor{blue}{\\ul{Cao}} Sui\\\\\n \\textcolor{magenta}{\\ul{lit. Xiao Gui Cao}} follows the rules \\textcolor{magenta}{\\ul{(idiom); fig.}} follow the rules of a certain place\\\\\n\\end{tabular}\n&\n\\begin{tabular}{c}\n \\textcolor{blue}{Different romanization} \\\\\n \\textcolor{magenta}{Unstable reference}\n\\end{tabular}\n\\\\ \\hline\n\\end{tabular}\n\\caption{Examples of idioms and translations in PETCI. Parts of the translations are colored and underlined, if they relate to the identified issues (details in Section \\ref{section:case-study}). 
The first Machine translation of each idiom is from Google.}\n\label{tab:case-study}\n\end{table*}\n\n\section{Data Collection}\n\label{section:data-collection}\n\subsection{Human Translation}\nWe collect human translations from an English translation dictionary of Chinese idioms \citep{liu_1993}. This dictionary is preferred because it provides multiple translations for a single idiom, with one of them labelled as the most frequently used. Though the dictionary was published three decades ago, this interval is short compared to the long time it is observed to take for the meaning of idioms to change \citep{nunberg1994idioms}. \n\nWe use the OCR tool \texttt{tesseract}\footnote{\url{https:\/\/tesseract-ocr.github.io\/}} to extract the English translations from a scanned version of the book. The Chinese idioms are then manually added. We performed both automatic and manual checks to ensure the quality of the final results (details in Appendix \ref{section:cleaning}).\n\n\subsection{Machine Translation}\nWe collect machine translations from the publicly available models of Google and DeepL\footnote{We do not use the Baidu translation system, which has better performance on Chinese, because it directly returns a dictionary translation most of the time.}. Though both provide free APIs for developers, richer information is returned from direct queries on the website interface (details in Appendix \ref{section:interface}). For example, while Google always returns a single translation, DeepL sometimes returns alternative translations.\n\nWe decide to use the alternative translations. Therefore, instead of using the developer APIs, we have to scrape translations from the web interface. Because the shadow DOM prevents the content from being scraped by \texttt{selenium}, we use a script that automatically pastes each source idiom into the source box and takes a screenshot of the result box. The \texttt{tesseract} OCR tool is then used on the screenshots to retrieve the text. Though OCR performs much better on screenshots than on scans, as expected, we manually check the results to ensure quality.\n\n\subsection{Case Study}\n\label{section:case-study}\n\nHere, we summarize our observations during data collection. Examples are listed in Table \ref{tab:case-study}. To avoid confusion, for the rest of this paper, we use \textbf{Gold} translations to refer to the set of the first translations of all idioms in the \textbf{Dictionary} translations. The rest of the Dictionary translations are denoted as \textbf{Human} translations. We use \textbf{Machine} translations to refer to the combination of \textbf{Google} and \textbf{DeepL} translations. In other words, we have:\n\begin{itemize}\n \item $\textrm{Dictionary} = \textrm{Gold} \cup \textrm{Human}$\n \item $\textrm{Machine} = \textrm{Google} \cup \textrm{DeepL}$\n\end{itemize}\n\n\begin{description}[style=unboxed,leftmargin=0cm, labelsep=1em, itemsep=0em]\n \item[Non-uniform Numbers of Alternatives] Alternative translations are provided by both the Dictionary and DeepL. The number of alternatives from the Dictionary varies from 0 to 13. DeepL provides only up to 3 alternatives, but the first translation sometimes contains two translations separated by a semicolon.\n \item[Low Edit Distance] The multiple alternatives from DeepL usually result from single-word edits, even when the translation itself is much longer than one word. This almost never happens in the Dictionary. 
\n \item[Dictionary Artifacts] The Dictionary translations aim at human readers, so part of a translation may be purely explanatory. Indicators of explanation include parentheses, dashes, and abbreviations such as \emph{sb.} (somebody) and \emph{sth.} (something). We expand the abbreviations but keep the other explanations. Also, due to the time it was published, the Dictionary uses the Wade-Giles romanization system, which is different from the Hanyu Pinyin used by Machine. \n \item[Machine Artifacts] Some Machine translations can be the same as Human or even Gold ones. However, mistakes do exist. We notice meaningless repetition and spurious uppercase letters. In some cases, the uppercase letters seem to result from the machine recognizing part of an idiom as a name. In very rare cases, Machine translations contain untranslated Chinese characters. We substitute these with their Hanyu Pinyin. All these artifacts are unique to Machine translations.\n \item[Unstable References] DeepL sometimes refers to other dictionaries. However, DeepL may replace part of a correct dictionary translation by machine artifacts. Also, DeepL uses abbreviations inconsistently, indicating that it refers to multiple dictionaries. Despite the lack of citation information, we identify one of the dictionaries to be \emph{4327 Chinese idioms} \citep{akenos_2012}, which uses the two abbreviations \emph{fig.} and \emph{lit.}.\n \item[Literal Machine Translation] The literal Machine translations are even less satisfactory than the literal Human translations, for two reasons. First, Machine sometimes returns a partial translation of only 2 of the 4 Chinese characters. Secondly, Machine may translate the characters one by one, ignoring the non-compositionality of idioms.\n \item[Explicit Structure] Parallel structures are pervasive in Chinese idioms \citep{wu1995cultural} but are usually not explicitly indicated by a conjunction character. Parallelism can be revealed in the English translation by conjunctions. However, parallel structures are sometimes considered redundant, and abridging is used to remove parallelism in translations of Chinese idioms without loss of meaning \citep{wang2013study}. Structures other than parallelism are discussed in \citet{wang-yu-2010-construction}, but they cannot be naively detected from translations.\n \item[Optimality of Gold Translations] Many Human translations are more verbose than the Gold translation. This is consistent with the tendency of language learners to err on the verbose and literal side. However, many Human alternatives are also much more succinct than the Gold translation, which can arguably be better. Without further linguistic analysis, we accept the Gold translations as labelled by the dictionary.\n\end{description}\n\n\section{Statistics}\n\label{section:statistics}\n\nIn this section, we compute statistics of PETCI and compare them with those of existing idiom datasets. We then use our dataset to probe the appearance of idioms in other translation datasets. We also provide a quantitative analysis of some of our observations from Section \ref{section:case-study}.\n\subsection{Comparison with Existing Datasets}\nWe summarize our comparison of dataset sizes in Table \ref{tab:data-size}. Monolingual idiom dictionaries in both English and Chinese are usually much larger than bilingual idiom dictionaries. 
For comparison, we choose the \emph{Oxford Dictionary of Idioms} \citep{ayto2020oxford}, the \emph{Xinhua Idiom Dictionary} \citep{xinhua}, and \emph{4327 Chinese idioms} \citep{akenos_2012}. The precise number of translations in \emph{4327 Chinese idioms} is not available, but the dictionary usually provides both a literal and a figurative translation. Therefore, we estimate the number of translations to be twice the number of Chinese idioms.\n\nThere have also been attempts to build datasets of English-translated Chinese idioms. For example, the CIBB dataset contains English translations of Chinese idioms, together with a blacklist of words that indicate unwanted literal translation \citep{shao-etal-2018-evaluating}. This dataset has been used to score translation models \citep{huang-etal-2021-comparison}, but it has an extremely small size of only 50 distinct Chinese idioms and a pair of translations for each idiom. Another dataset, CIKB, contains 38,117 idioms with multiple properties \citep{wang-yu-2010-construction}. Among all properties, each idiom has at most 3 English translations: the literal translation, the free translation, and the English equivalent. However, of the 38,117 idioms, only 11,000 have complete properties (among which 3,000 are labelled as most commonly used). Though the CIKB dataset is not public, we reasonably assume that the missing properties are mainly English translations, because the other properties are in Chinese and thus easier to obtain. We estimate the number of English translations in CIKB to be between 3 times the number of complete entries and 3 times the number of all entries.\n\n\begin{table}\n\centering\n\begin{tabular}{ccc}\n\hline\n\textbf{Dataset} & \textbf{Chinese} & \textbf{English}\\\n\hline\n\emph{Oxford} & --- & 10,000 \\\n\emph{Xinhua} & 10,481 & ---\\\n\emph{4327} & 4,327 & 8,654 \\\nCIBB & 50 & 100 \\\nCIKB & 38,117 & 33K - 114K \\\nPETCI & 4,310 & 29,936 \\\n\hline\n\end{tabular}\n\caption{The comparison between the sizes of dictionaries and datasets. Some sizes are estimated.}\n\label{tab:data-size}\n\end{table}\n\nFrom our comparison, we see that PETCI covers a large percentage of frequently used Chinese idioms, while uniquely providing many more parallel translations. To further grow the size of PETCI, we propose 3 orthogonal directions: (1) increase the number of idioms, (2) add more translations to each idiom, and (3) add multiple gold translations to each idiom. The first two directions can be pursued by language learners without linguistic expertise. The third direction, though requiring expertise, is revealed to be useful by the model performance.\n\n\subsection{Percentage of Chinese Idioms}\nWe would like to know how many sentences in widely used Zh-En datasets contain at least one idiom from our dataset. We consider the datasets from the news translation task of WMT21, which intuitively have the highest percentage of idiom usage among all provided domains. The results in Table \ref{tab:proportion} show that all the datasets have a low Percentage of sentences using Chinese Idioms (PoI), with the PoI of the two Wikipedia-related datasets lower than 1\%. 
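\n\nThe PoI statistic itself is simple to compute. A minimal Python sketch, where \texttt{corpus} and \texttt{idioms} are hypothetical stand-ins for the Chinese side of a WMT21 dataset and the PETCI idiom list:\n\begin{verbatim}\ndef poi(corpus, idioms):\n    """Percentage of sentences that contain at\n    least one Chinese idiom."""\n    idioms = set(idioms)\n    hits = sum(1 for sentence in corpus\n               if any(i in sentence for i in idioms))\n    return 100.0 * hits \/ len(corpus)\n\end{verbatim}\nBecause Chinese idioms are contiguous character strings, a substring test suffices and no word segmentation is needed; for the largest corpora, a multi-pattern matcher such as Aho-Corasick would be faster.\n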
\n\n\n\begin{table}\n\centering\n\begin{tabular}{lcc}\n\hline\n\textbf{Dataset} & \textbf{Size (M)} & \textbf{PoI (\%)}\\\n\hline\nParaCrawl v7.1 & 14.17 & 1.56 \\\nNews Commentary v16 & 0.32 & 7.70\\\nWiki Titles v3 & 0.92 & 0.02 \\\nWikiMatrix & 2.60 & 0.90\\\nBack-translated news & 19.76 & 1.75\\\n\hline\n\end{tabular}\n\caption{The Percentage of sentences using Chinese Idioms (PoI) for different datasets from the WMT21 news translation task.}\n\label{tab:proportion}\n\end{table}\n\n\nThe low PoI is not only observed in the Zh-En pair \citep{fadaee-etal-2018-examining}. Also, the PoI might differ between the En-Zh and Zh-En directions, though the available Zh-En datasets from WMT are all undirectional. \n\n\subsection{Quantitative Case Study}\n\label{section:quant-case-study}\n\nIn this subsection, we quantitatively analyze our observations from the case study (Section \ref{section:case-study}). The results are summarized in Table \ref{tab:qual}. \n\n\begin{table*}[t]\n\centering\n\begin{tabular}{l|cccccc}\n\hline\n\textbf{Statistics} & \textbf{Dictionary} & \textbf{Gold} & \textbf{Human} & \textbf{Machine} & \textbf{Google} & \textbf{DeepL}\\\n\hline\nTotal number & 14997 & 4310 & 10687 & 14939 & 4310 & 10629\\\nAvg. length (token) & 5.03 & 4.97 & 5.06 & 3.72 & 2.81 & 4.09\\\nAvg. length (char) & 27.14 & 27.15 & 27.14 & 19.99 & 15.72 & 21.72\\\n\hline\n\% longer (token) & --- & --- & 47.33 & 25.44 & 11.39 & 34.14\\\n\% shorter (token) & --- & --- & 33.16 & 54.75 & 71.07 & 48.13\\\n\% longer (char) & --- & --- & 56.04 & 30.05 & 14.80 & 36.23\\\n\% shorter (char) & --- & --- & 40.57 & 63.82 & 78.52 & 57.86\\\n\hline\n\% with NNP & 3.67 & 0.77 & 2.99 & 4.43 & 2.99 & 2.09\\\n\% with CC & 39.72 & 18.26 & 31.00 & 32.44 & 9.84 & 30.32\\\n\hline\nAvg. edit distance & 5.24 & --- & --- & --- & --- & 2.45 \\\n\% of single edit & 0.83 & --- & --- & --- & --- & 27.20 \\\n\hline\n\end{tabular}\n\caption{The statistics of different subsets in PETCI. \emph{Longer} and \emph{shorter} are based on comparison with the Gold translations, and are therefore not applicable to Dictionary translations. \emph{Edit distances} are only available for subsets that contain multiple translations. Human is a subset of Dictionary, so its average edit distance is not calculated.}\n\label{tab:qual}\n\end{table*}\n\n\begin{description}[style=unboxed,leftmargin=0cm, labelsep=1em, itemsep=0em]\n \item[Length] The average length of the Chinese idioms in PETCI is 4.30 characters, with 91.42\% of all idioms having 4 characters. We tokenize translations using the \texttt{stanza} package \citep{qi-etal-2020-stanza}, and report the average length of translations in tokens and characters. Surprisingly, the average lengths of the Human and Gold translations are extremely close. By calculating the percentage of translations that are longer or shorter than the gold translation, we see that long sub-optimal translations are still unique to Human translations. The Machine translations tend to be shorter due to partial translation.\n \item[Name Hallucination] We would like to know how many hallucinated names have been introduced in the Machine translations. We use \texttt{stanza} to perform part-of-speech (POS) tagging on the translations, and calculate the percentage of idioms that have at least one NNP token in their translations. The Machine translations have an NNP percentage much higher than the Gold translations, but close to the Human translations. 
This may be because Human translations include explanations that involve names from the story in which an idiom originates. \n \item[Parallel Structure] The popular Chinese word segmentation package \texttt{jieba} considers 93.64\% of the Chinese idioms in PETCI to be atomic units. However, the idioms obviously have internal structures. As a coarse estimate, we consider a translation containing a CC token to have parallel structure. We calculate the percentage of idioms that have at least one CC token in their translations. We see that parallel structure is pervasive in PETCI, but less favored by the Gold translations. Also, the DeepL translations catch parallel structures as well as the Human translations do; Google performs less well.\n \item[Edit Distance] To quantitatively analyze the single-word edit strategy used by DeepL, we calculate the average token-level Levenshtein distance over all pairs of translations of the same idiom. We then calculate the percentage of single-word edit pairs, restricted to translations of length at least 2 tokens. The same calculation is also applied to the Dictionary translations. We notice a lower edit distance and a higher percentage of single-word edits from DeepL, which is consistent with our qualitative observation. This suggests the possibility of further augmenting the data by single-word edit methods, such as replacing synonyms based on WordNet \citep{miller-1992-wordnet}.\n \item[Split and Filter] We observe that DeepL may join multiple translations into one, and that all translations may sometimes include dictionary artifacts. Therefore, all statistics are computed after the removal of 2,134 parenthesized segments and 341 abbreviations. To further prepare the dataset for training, we split 250 translations that include a semicolon. After the split, we remove translations from the Machine set that also appear in the Dictionary. The processed Human\/Machine translation sets have sizes of 10,690\/13,539. In the following sections, we have:\n \begin{itemize}\n \item $\textrm{Machine} \leftarrow Split(\textrm{Machine}) \setminus \textrm{Dictionary}$\n \end{itemize}\n \n\end{description}\n\nExplicitly incorporating the above translation features into models may improve their performance, but this strategy is outside the scope of this paper. For all models, we only use the text as input.\n\n\section{Models}\n\label{section:model}\nA human expert is able to tell whether a translation is satisfactory, and then rewrite it if it is not. We expect our models to behave similarly. Therefore, we design a binary classification task and a rewriting task. The binary classifier should determine whether a given translation is Gold or not. The rewriting model should rewrite other translations to the standard of Gold ones. The two tasks could be combined in a pipeline, but we leave that for future work, and use the following models to perform either of the two tasks. No major changes are made to the model architectures, so we omit detailed descriptions and refer readers to the original papers.\n\n\subsection{LSTM Classification}\n\nThe recurrent neural network (RNN) with LSTM cells \citep{hochreiter1997long} is able to encode a sentence of variable length into a hidden state vector. The vector can then be fed into a linear layer for classification, as sketched below.\n\nSome variants of the LSTM, such as the Bidirectional LSTM and the Multilayer LSTM \citep{6707742}, can perform better on longer sequences, but most translations in PETCI are short. 
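\n\nA minimal PyTorch sketch of this classifier (the dimensions and names are illustrative; the single-layer configuration matches the choice explained next):\n\begin{verbatim}\nimport torch.nn as nn\n\nclass LSTMClassifier(nn.Module):\n    def __init__(self, vocab_size, emb_dim=300, hid_dim=150):\n        super().__init__()\n        # GloVe-initialized embeddings, updated during training\n        self.embed = nn.Embedding(vocab_size, emb_dim)\n        # one-directional, single-layer LSTM\n        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)\n        self.out = nn.Linear(hid_dim, 2)  # Gold vs. non-Gold\n\n    def forward(self, tokens):\n        _, (h_n, _) = self.lstm(self.embed(tokens))\n        # classify from the final hidden state\n        return self.out(h_n[-1])\n\end{verbatim}\n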
Moreover, we would like to introduce fewer variables when comparing models, so we only use the one-directional single-layer LSTM.\n\n\subsection{Tree-LSTM Classification}\nWe have shown that the translations of Chinese idioms exhibit special structures that may be used to improve model performance. A variant of the LSTM that explicitly accounts for tree structures is the Tree-LSTM \citep{tai-etal-2015-improved}.\n\nThe attention mechanism can be further incorporated into the Tree-LSTM \citep{8665673}, but we do not use it, for the sake of comparison. \n\n\subsection{BERT Classification}\nThe BERT model has been reported to have superior performance on sentence classification tasks \citep{devlin-etal-2019-bert}. It can also capture the structural information of the input text \citep{jawahar-etal-2019-bert}.\n\nWe do not use better variants of BERT, for the following reasons. First, some variants are pre-trained on datasets other than the BookCorpus \citep{bookcorpus} and English Wikipedia used by BERT. RoBERTa, for example, takes part of its pre-training data from Reddit and CommonCrawl \citep{liu2019roberta}. Intuitively, idioms are used more in the colloquial language of Reddit. Though we are not able to estimate the English PoI in these datasets, we do see that WikiMatrix has a much lower PoI than ParaCrawl. Similarly, we expect the Wikipedia dataset to have a lower PoI than CommonCrawl. The choice of pre-training datasets by RoBERTa may benefit our downstream task, but it is not ideal for a fair comparison.\n\nSecondly, other variants of BERT may improve the semantic representation of phrases and sentences, such as PhraseBERT \citep{wang-etal-2021-phrase} and SentenceBERT \citep{reimers-gurevych-2019-sentence}. However, they are also not ideal, because idiom translations cover a large range of text lengths. A user of the model may prefer to give either over-simplified (like Machine) or over-extended (like Human) translations, and the model is expected to perform well in both cases. Therefore, we take the vanilla BERT as a middle ground.\n\n\subsection{OpenNMT Rewriting}\n\nRewriting is a sequence-to-sequence task. We use the OpenNMT framework, which provides an encoder-decoder model based on LSTMs with the attention mechanism \citep{klein-etal-2017-opennmt}. A part of the rewriting task can be viewed as simplification, on which the OpenNMT framework is reported to perform well \citep{nisioi-etal-2017-exploring}.\n\nWe do not consider SOTA sequence-to-sequence models, such as BART \citep{lewis-etal-2020-bart} or T5 \citep{raffel2019exploring}, because they are much larger and prohibitively difficult for language learners to use. \n\n\section{Experiments}\n\label{section:experiment}\n\subsection{Classification}\nFor the binary classification, we use train\/dev\/test splits of 3448\/431\/431 idioms. To study the effect of blending Human and Machine translations, we construct 3 types of training sets that contain (1) both Gold and Human (H), (2) both Gold and Machine (M), and (3) all Gold, Human, and Machine translations (HM). The development sets are similarly constructed and used when training on the corresponding set. The test set is the combination of Gold, Human, and Machine translations, meaning that models trained on the H (or M) set are evaluated on Machine (or Human) translations for zero-shot accuracy. To balance the training data, whenever we add a Human or Machine translation to the training set, we always add its corresponding Gold translation once, as sketched below. 
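\n\nA sketch of this balancing step (\texttt{pairs} and \texttt{gold} are hypothetical containers for the non-Gold examples and the Gold lookup):\n\begin{verbatim}\ndef build_balanced_set(pairs, gold):\n    """pairs: (idiom, translation) tuples on the\n    non-Gold side; gold: idiom -> gold translation."""\n    examples = []\n    for idiom, translation in pairs:\n        examples.append((translation, 0))  # non-Gold label\n        examples.append((gold[idiom], 1))  # matching Gold\n    return examples\n\end{verbatim}\n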
The development and test sets are not balanced.\n\nWe also want to examine whether increasing the size of the training set improves model performance. We construct partial training sets from the full set. In the case of training set H, the smallest partial set is constructed by taking exactly one Human translation for each idiom in the training set, together with the Gold translations. The sizes of 3 other partial sets are then evenly spaced between the sizes of the smallest and the full training set, resulting in 5 H sets in total. For the 3 partial sets of intermediate size, we randomly sample from the remaining Human translations to reach the target size. The sizes of the 5 H training sets are 3,050\/4,441\/5,833\/7,224\/8,616, expressed in the number of non-Gold translations (the total size is strictly twice as large). We similarly construct 5 M sets (sizes 3,419\/5,265\/7,112\/8,958\/10,805), and merge the H and M sets to create 5 HM sets. Here, distributional shift is inevitably introduced, but this process mimics the real-world scenario in which a model user cannot add extra Gold translations, but can only add their own non-Gold translations of an unknown distribution relative to the existing H and M sets.\n\nFor both the LSTM and the Tree-LSTM, we use the cased 300-dimensional GloVe vectors pre-trained on CommonCrawl \citep{pennington-etal-2014-glove} to initialize word representations. We allow the word representations to be updated during training to improve performance \citep{tai-etal-2015-improved}.\n\nWe use the binary constituency Tree-LSTM because it is reported to out-perform the dependency Tree-LSTM. We use CoreNLP to obtain the constituency parse trees for sentences \citep{manning-etal-2014-stanford}. The binary parse option (not available in \texttt{stanza}) is set to true to return the internal binary parse trees. Half-nodes with only 1 child are removed.\n\nHowever, fine-grained annotation on each node is not available, so only full trees are used during training. All nodes other than the root have a dummy label 0. We make this modification on a PyTorch re-implementation of the Tree-LSTM\footnote{\url{https:\/\/github.com\/dmlc\/dgl\/tree\/master\/examples\/pytorch\/tree_lstm}}.\n\nWe implement BERT classification using HuggingFace \citep{wolf2019huggingface}. The pre-trained model we use is \texttt{bert-base-uncased}.\n\subsection{Rewriting}\nWe use all Human and Machine translations for the idioms in the train split as the source training set. The corresponding Gold translations are the target. The development and test sets are similarly constructed from the dev\/test splits. We do not change the training set size for rewriting; this choice will be justified when we discuss the results.\n\nOur model architecture is the same as that of the NTS model based on the OpenNMT framework \citep{nisioi-etal-2017-exploring}. Though originally designed to perform text simplification, the NTS model has no component that explicitly performs simplification. We only change the pre-trained embeddings from word2vec \citep{mikolov2013efficient} to GloVe.\n\n\subsection{Hyperparameters and Training Details}\n\nTo ensure fair comparison, the hyperparameters for each model are fixed across the different training sets. For the LSTM and Tree-LSTM, we follow the choices of \citet{tai-etal-2015-improved}. For BERT, \citet{devlin-etal-2019-bert} suggest that a range of parameters works well for this kind of task, so we choose a batch size of 16, a learning rate of 5e-5, and an epoch number of 3; a sketch of this setup follows. 
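\n\nA minimal sketch using the HuggingFace \texttt{Trainer} API, assuming tokenized datasets \texttt{train\_ds} and \texttt{dev\_ds} with binary labels (both names are placeholders):\n\begin{verbatim}\nfrom transformers import (\n    AutoModelForSequenceClassification,\n    AutoTokenizer, Trainer, TrainingArguments)\n\n# the tokenizer is used upstream to build train_ds \/ dev_ds\ntok = AutoTokenizer.from_pretrained("bert-base-uncased")\nmodel = AutoModelForSequenceClassification.from_pretrained(\n    "bert-base-uncased", num_labels=2)  # Gold vs. non-Gold\n\nargs = TrainingArguments(\n    output_dir="petci-bert",\n    per_device_train_batch_size=16,\n    learning_rate=5e-5,\n    num_train_epochs=3)\n\nTrainer(model=model, args=args,\n        train_dataset=train_ds,\n        eval_dataset=dev_ds).train()\n\end{verbatim}\n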
For OpenNMT, we follow the configuration provided by \citet{nisioi-etal-2017-exploring}. More details can be found in Appendix \ref{section:training-details}.\n\n\section{Results and Discussion}\n\label{section:results}\n\n\subsection{Classification}\n\nThe results are plotted in Figure \ref{fig:classify}, with numeric results in Appendix \ref{section:numeric}. The suffix of a model indicates the training set that it is trained on. \n\n\begin{description}[style=unboxed,leftmargin=0cm, labelsep=1em, itemsep=0em]\n\n\item[Gold Sparsity] Due to the sparsity of gold translations, a model may blindly assign the gold label with low probability, and the accuracy will still be high. Therefore, we compare all results to a random baseline that assigns each label with probability equal to its frequency in PETCI. The accuracy of the random baseline is 75.35\% overall, 14.40\% on Gold, and 85.60\% on either Human or Machine. Most models underperform the random baseline. Also, the Gold accuracy of all models deteriorates with the increase in training set size, but stays above the random baseline. Our data balancing strategy does not seem to overcome the sparsity problem.\n\n\item[Training Set Size] With the increase in training set size, the overall accuracy mostly increases. The increase in performance has not saturated, indicating the need to further increase the dataset size. The BERT-HM model trained on the most data performs best, closely followed by Tree-LSTM-HM. When trained on the most data, the LSTM does not out-perform the random baseline. In a few cases, the increase in dataset size hurts performance, such as for LSTM-M. This may be an artifact of the high standard deviation.\n\n\item[Human and Machine] The models trained on either the H or the M sets have a low zero-shot accuracy on the opposite side, while training on the merged HM sets benefits both. The LSTM-H model seems to have a high accuracy comparable to the random baseline, even outperforming LSTM-M on Machine without ever seeing a Machine translation. However, this seems to be an artifact of over-assigning the non-Gold label. The good performance of the -HM models indicates that it is reasonable to combine Machine and Human translations in the training set, even when the model is responsible for classifying translations that come only from MT systems or only from language learners. 
\n\n\\begin{figure*}[t]\n \\centering\n \\begin{subfigure}{}\n \\begin{tikzpicture}\n \\begin{axis}[\n title={\\textbf{Overall}},\n xlabel={Training Set Size},\n ylabel={Accuracy (\\%)},\n xmin=2000, xmax=21000,\n ymin=0, ymax=100,\n xtick={5000, 10000, 15000, 20000},\n xticklabels={5K, 10K, 15K, 20K},\n \n legend pos=north west,\n ymajorgrids=true,\n xmajorgrids=true,\n ]\n \\addplot [color=orange, no marks]\n table [x index=0, y index=1]\n {figs\/accs\/rnd-all.dat};\n \n \\addplot [color=blue, mark=triangle*, dashed, mark options={solid}]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/lstm-test-all-gh.dat};\n \\addplot [color=blue, mark=triangle*, dotted, mark options={solid}]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/lstm-test-all-gm.dat};\n \\addplot [color=blue, mark=triangle*]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/lstm-test-all-ghm.dat};\n \\addplot [color=magenta, mark=diamond*, dashed, mark options={solid}]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/tree_lstm-test-all-gh.dat};\n \\addplot [color=magenta, mark=diamond*, dotted, mark options={solid}]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/tree_lstm-test-all-gm.dat};\n \\addplot [color=magenta, mark=diamond*]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/tree_lstm-test-all-ghm.dat};\n \\addplot [color=red, dashed, mark options={solid}, mark size=1pt]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/bert-test-all-gh.dat};\n \\addplot [color=red, dotted, mark options={solid}, mark size=1pt]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/bert-test-all-gm.dat};\n \\addplot [color=red, mark size=1pt]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/bert-test-all-ghm.dat};\n \n \\end{axis}\n \\end{tikzpicture}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}{}\n \\begin{tikzpicture}\n \\begin{axis}[\n title={\\textbf{Gold}},\n xlabel={Training Set Size},\n ylabel={Accuracy (\\%)},\n xmin=2000, xmax=21000,\n ymin=0, ymax=100,\n xtick={5000, 10000, 15000, 20000},\n xticklabels={5K, 10K, 15K, 20K},\n \n legend pos=north west,\n ymajorgrids=true,\n xmajorgrids=true,\n ]\n \n \\addplot [color=orange, no marks]\n table [x index=0, y index=1]\n {figs\/accs\/rnd-g.dat};\n \n \\addplot [color=blue, mark=triangle*, dashed, mark options={solid}]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/lstm-test-g-gh.dat};\n \\addplot [color=blue, mark=triangle*, dotted, mark options={solid}]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/lstm-test-g-gm.dat};\n \\addplot [color=blue, mark=triangle*]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/lstm-test-g-ghm.dat};\n \\addplot [color=magenta, mark=diamond*, dashed, mark options={solid}]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/tree_lstm-test-g-gh.dat};\n \\addplot [color=magenta, mark=diamond*, dotted, mark options={solid}]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/tree_lstm-test-g-gm.dat};\n \\addplot [color=magenta, mark=diamond*]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/tree_lstm-test-g-ghm.dat};\n \\addplot [color=red, dashed, mark options={solid}, mark size=1pt]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/bert-test-g-gh.dat};\n \\addplot [color=red, dotted, mark options={solid}, mark size=1pt]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/bert-test-g-gm.dat};\n \\addplot [color=red, mark size=1pt]\n table [x index=0, y index=1, 
y error index=2] {figs\/accs\/bert-test-g-ghm.dat};\n \n \\end{axis}\n \\end{tikzpicture}\n \\end{subfigure}\n \n \\begin{subfigure}{}\n \\begin{tikzpicture}\n \\begin{axis}[\n title={\\textbf{Human}},\n xlabel={Training Set Size},\n ylabel={Accuracy (\\%)},\n xmin=2000, xmax=21000,\n ymin=0, ymax=100,\n xtick={5000, 10000, 15000, 20000},\n xticklabels={5K, 10K, 15K, 20K},\n \n legend pos=north west,\n ymajorgrids=true,\n xmajorgrids=true,\n ]\n \\addplot [color=orange, no marks]\n table [x index=0, y index=1]\n {figs\/accs\/rnd-hm.dat};\n \n \\addplot [color=blue, mark=triangle*, dashed, mark options={solid}]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/lstm-test-h-gh.dat};\n \\addplot [color=blue, mark=triangle*, dotted, mark options={solid}]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/lstm-test-h-gm.dat};\n \\addplot [color=blue, mark=triangle*]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/lstm-test-h-ghm.dat};\n \\addplot [color=magenta, mark=diamond*, dashed, mark options={solid}]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/tree_lstm-test-h-gh.dat};\n \\addplot [color=magenta, mark=diamond*, dotted, mark options={solid}]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/tree_lstm-test-h-gm.dat};\n \\addplot [color=magenta, mark=diamond*]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/tree_lstm-test-h-ghm.dat};\n \\addplot [color=red, dashed, mark options={solid}, mark size=1pt]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/bert-test-h-gh.dat};\n \\addplot [color=red, dotted, mark options={solid}, mark size=1pt]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/bert-test-h-gm.dat};\n \\addplot [color=red, mark size=1pt]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/bert-test-h-ghm.dat};\n \n \\end{axis}\n \\end{tikzpicture}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}{}\n \\begin{tikzpicture}\n \\begin{axis}[\n title={\\textbf{Machine}},\n xlabel={Training Set Size},\n ylabel={Accuracy (\\%)},\n xmin=2000, xmax=21000,\n ymin=0, ymax=100,\n xtick={5000, 10000, 15000, 20000},\n xticklabels={5K, 10K, 15K, 20K},\n \n legend pos=north west,\n ymajorgrids=true,\n xmajorgrids=true,\n ]\n \n \\addplot [color=orange, no marks]\n table [x index=0, y index=1]\n {figs\/accs\/rnd-hm.dat};\n \n \\addplot [color=blue, mark=triangle*, dashed, mark options={solid}]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/lstm-test-m-gh.dat};\n \\addplot [color=blue, mark=triangle*, dotted, mark options={solid}]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/lstm-test-m-gm.dat};\n \\addplot [color=blue, mark=triangle*]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/lstm-test-m-ghm.dat};\n \\addplot [color=magenta, mark=diamond*, dashed, mark options={solid}]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/tree_lstm-test-m-gh.dat};\n \\addplot [color=magenta, mark=diamond*, dotted, mark options={solid}]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/tree_lstm-test-m-gm.dat};\n \\addplot [color=magenta, mark=diamond*]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/tree_lstm-test-m-ghm.dat};\n \\addplot [color=red, dashed, mark options={solid}, mark size=1pt]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/bert-test-m-gh.dat};\n \\addplot [color=red, dotted, mark options={solid}, mark size=1pt]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/bert-test-m-gm.dat};\n \\addplot 
[color=red, mark size=1pt]\n table [x index=0, y index=1, y error index=2] {figs\/accs\/bert-test-m-ghm.dat};\n \n \end{axis}\n \end{tikzpicture}\n \end{subfigure}\n \n \begin{subfigure}{}\n \begin{tikzpicture} \n \begin{axis}[%\n hide axis,\n xmin=0, xmax=1,\n ymin=0, ymax=1,\n legend style={\n draw=white!15!black,\n legend cell align=left,\n legend columns=3,\n column sep=1ex,\n }\n ]\n \addlegendimage{color=blue, mark=triangle*, dashed, mark options={solid}}\n \addlegendentry{LSTM-H};\n \addlegendimage{color=blue, mark=triangle*, dotted, mark options={solid}}\n \addlegendentry{LSTM-M};\n \addlegendimage{color=blue, mark=triangle*}\n \addlegendentry{LSTM-HM};\n \addlegendimage{color=magenta, mark=diamond*, dashed, mark options={solid}}\n \addlegendentry{Tree-LSTM-H};\n \addlegendimage{color=magenta, mark=diamond*, dotted, mark options={solid}}\n \addlegendentry{Tree-LSTM-M};\n \addlegendimage{color=magenta, mark=diamond*}\n \addlegendentry{Tree-LSTM-HM};\n \addlegendimage{color=red, dashed, mark options={solid}, mark size=1pt}\n \addlegendentry{BERT-H};\n \addlegendimage{color=red, dotted, mark options={solid}, mark size=1pt}\n \addlegendentry{BERT-M};\n \addlegendimage{color=red, mark size=1pt}\n \addlegendentry{BERT-HM};\n \addlegendimage{color=orange, no marks};\n \addlegendentry{Random Baseline};\n \end{axis}\n \end{tikzpicture}\n \end{subfigure}\n \caption{Test accuracies of models on different subsets. The suffix of a model indicates the training set it is trained on. We report mean accuracies over 5 runs.}\n \label{fig:classify}\n\end{figure*}\n\n\item[Detecting Structures] The subset accuracy reflects the structural differences between translations and a model's ability to detect them. The highest Gold accuracy comes from the -M models, probably because it is easier for the model to distinguish Gold from Machine translations due to the broken structures of machine translations, indicated by the high NNP percentage and low CC percentage. LSTM-M has an unexpectedly low performance on the Machine set, maybe due to its inability to detect structure. In contrast, the Human translations resemble Gold translations in structure more than Machine translations do, which may confuse the -H and -HM models. \n\n\item[Confusion from Structures] We also notice the interesting fact that the Gold accuracy of BERT deteriorates more rapidly than that of Tree-LSTM. This may be because the Tree-LSTM model is better at explicitly identifying the good structures that add credibility to Gold translations. On the Human set, however, the identification of structure confuses both Tree-LSTM and BERT, preventing them from out-performing the sequential LSTM, which is unaware of structure. The Machine set creates no such confusion. \n\n\end{description}\n\nA detailed analysis of the failure cases would require linguistic knowledge, which is beyond the scope of this paper. Coarse-grained metrics such as length or POS percentages are not sufficient. \n\n\subsection{Rewriting}\n\nThe good performance of the -HM models when trained on the full set justifies our choice of training set for the rewriting task. The decreasing accuracy on Gold translations has less effect on the rewriting task, since we do not rewrite Gold translations.\n\nDespite the success of the classification models, the training of the rewriting model failed. During training, though the perplexity on the training set is reduced, the perplexity on the development set remains high, indicating unavoidable over-fitting. 
The performance on the test set is visibly poor, obviating the need to calculate automatic metrics.\n\nWe believe that OpenNMT fails because the rewriting task is too aggressive. The average token-level Levenshtein distance from Human\/Machine to Gold is 5.39\/4.93, close to the average Gold token length of 4.97. Our situation is very different from traditional sentence simplification tasks, where strategies are much more conservative and the overlap rate is high \citep{zhang-lapata-2017-sentence}. To rewrite, the model even has to guess from partial translations. The parametric knowledge in GloVe vectors does not suffice, and non-parametric knowledge should be incorporated by methods such as retrieval-augmented generation \citep{lewis2020retrieval}.\n\n\section{Related Work}\n\label{section:related-work}\n\n\begin{description}[style=unboxed,leftmargin=0cm, labelsep=1em, itemsep=0em]\n \item[Leveraging Human Translation] There are works that leverage existing human translations, either from experts \citep{shimizu-etal-2014-collection} or from language learners \citep{dmitrieva-tiedemann-2021-creating}. The collected datasets are innately more varied than parallel corpora collected by back-translation or other automated methods, such as in \citet{kim-etal-2021-bisect}. In our work, we use both human and machine translations to increase the size of PETCI, and they cooperate well.\n \item[Faster BERT] Ideally, after expanding the dataset, the model needs to be re-trained. The sheer size of Transformer-based models may prohibit language learners from using them. Smaller versions of BERT have been proposed, such as DistilBERT \citep{sanh2020distilbert}. However, the 40\% decrease in model size comes at the cost of a 3\% decrease in performance, which may cause BERT to lose its slight advantage over smaller models such as the Tree-LSTM. Similarly, quantization and pruning compress the BERT model by a factor of no more than 10, but at the cost of a 2\% decrease in performance \citep{cheong2019transformers}.\n \item[Evaluation Metrics] Our rewriting task is close to sentence simplification. Metrics such as BLEU \citep{papineni-etal-2002-bleu} can be used to evaluate sequence generation quality, but they are shown to be less correlated with human judgements of simplification quality than the SARI metric \citep{xu-etal-2016-optimizing}. However, to establish the correlation between any of these metrics and PETCI rewriting quality, we would need fine-grained labels. The Flesch-Kincaid readability score can also be used to judge the readability of a translation, but it is not guaranteed to correlate with the ease of understanding idiom translations. The cosine similarity between translations in PETCI is not calculated, but a low similarity is expected due to the pervasive metaphoric language in idioms. \n\end{description}\n\n\section{Conclusion}\n\label{section:conclusion}\nIn this paper, we build PETCI, a Parallel English Translation dataset of Chinese Idioms, and show its advantages over existing Chinese idiom datasets. Based on PETCI, we propose a binary classification task and a rewriting task; the baseline models we choose succeed on the former, showing great potential as the dataset size increases. Also, we build the dataset and choose the models to make them accessible to language learners, which allows further improvement without the involvement of translation experts. 
\n\n\section*{Acknowledgements}\n\nThe author thanks Natsume Shiki for insightful comments and kind support.\n\n\n\section{Introduction}\n\label{Introduction}\n\nThe shifting boundary between quantum and classical regimes \cite{peres,zurek03} is a long-standing subject of scrutiny, both for its foundational and technical aspects. Indeed, the emergence of classical behaviour from the underlying quantum structure remains a controversial subject, with several attempts to address and resolve it, such as \cite{LusannaPauri}. To this end, semiclassical methods play an important role both in quantum mechanics and in quantum field theory.\n\nFrom a purely pragmatic viewpoint, it often happens that some degrees of freedom can be much more easily described classically, commonly to an excellent degree of approximation. However, either for consistency (e.g., in semiclassical quantum gravity \cite{KieferQG}) or for practical purposes (e.g., the study of chemical reactions), it becomes necessary to follow their interaction with other degrees of freedom that must be described by quantum mechanics. Indeed,\nthere have been several attempts to formulate a consistent hybrid classical-quantum (CQ) theory \cite{boucher88,anderson95,sudarshan79,elze12,chua-hall12,pt01,k-pt}, each with varying results \cite{k-pt,Garay12}.\n\nBoth fundamental and practical aspects were explored in recent efforts investigating the equivalence of Gaussian Quantum Mechanics\n(GQM) and classical statistical mechanics (more precisely, epistemically-restricted Liouville mechanics (ERL)) \cite{brs:15,jl:15}. GQM \cite{olivares12,Weedbrook12} restricts the allowed states to the so-called Gaussian states, which have Gaussian Wigner quasiprobability distributions \cite{wigrev}, and the allowed transformations and measurements to those that preserve this property. A positive Wigner distribution can be interpreted as a probability density on the phase space of a corresponding classical system. By imposing epistemic restrictions on classical Liouville mechanics --- postulating that conjugate quantities cannot be known with precision better than the fundamental quantum uncertainty --- one can assign classical statistical interpretations (probability distributions) to those Gaussian procedures, allowing a phenomenon to be described equivalently in both languages \cite{brs:15,jl:15}. Remarkably, ERL captures many phenomena that are usually considered explicitly quantum, including entanglement (though not the ability to violate Bell-type inequalities), while being describable by a local hidden variable theory.\n\nThese results indicate that a Gaussian quantum system behaves classically in some important respects. An interesting complementary question is then to what extent GQM can be regarded as fully classical, or, alternatively, whether GQM inevitably displays tell-tale signs of quantum physics. For an isolated Gaussian system a specific question along these lines concerns the behaviour of the expectation values of reasonable classical observables (to be defined precisely in Sec.~\ref{Moyal}). Another is that of the dynamics of interacting Gaussian and classical systems, and the pre-requisites for a consistent description of such dynamics. Such mixed dynamics is used to treat a variety of phenomena that range from gas kinetics and the dynamics of chemical reactions to one-loop quantum gravity. 
Indeed, this latter question is of particular importance in the ongoing discussion as to whether or not gravity should be quantized.

Motivated by the above, our goal in this paper is to investigate the consistency of combined classical and Gaussian quantum systems, or CGQ. If a Gaussian quantum system is indeed equivalent (under certain criteria) to a classical system, then its coupling to another classical system should be consistent with this equivalence whilst retaining the intrinsic quantum characteristics of the former. In particular, can CGQ ensure that the quantum sector of the system respects the uncertainty principle?

A Gaussian Hamiltonian is at most quadratic in the canonical variables and, as a result, perfectly satisfies the correspondence principle: the equations of motion for quantum dynamical variables are the same as their classical counterparts. Thus it is natural to investigate whether the different mathematical structures used to describe classical and quantum systems can be made fully compatible.

We first investigate this question from the perspective of the Koopmanian formalism of mechanics in Sec.~\ref{Koopman}. In this approach, both quantum and classical systems are described by wave functions on their respective Hilbert spaces. It is known that the Hilbert space description of a classical system is fully consistent and sometimes advantageous. We consider one quantum and one classical harmonic oscillator and the most general Gaussian interaction coupling the two. We find that various inconsistencies appear for any non-trivial bilinear interaction.

The phase-space description of a combined quantum-classical system that we use in Sec.~\ref{Moyal} is based on the opposite approach. It is possible to describe the evolution of a quantum system on its classical phase space if Moyal brackets replace Poisson brackets. The two coincide for a harmonic oscillator, giving an additional interpretation to the results of \cite{brs:15}. If again the classical and quantum oscillators are linearly coupled, preservation of the Heisenberg uncertainty relation for the quantum oscillator requires the introduction of a minimal uncertainty in the classical one. This is again consistent with a view that effectively replaces classical mechanics (which allows, in principle, infinite precision) with a statistical description that evolves according to the classical dynamical laws. In Sec.~\ref{Moyal} we show how prior correlations between the classical and quantum systems and/or different classical potentials lead to a violation of the uncertainty relation for the initially Gaussian quantum system.

We discuss the implications of these results, and their connection to the logical necessity of quantizing gravity, in the concluding section.

\section{Hilbert space picture}\label{Koopman}

We start with a brief discussion of the Koopmanian formalism, followed by applying it to the most general interacting Gaussian system with two degrees of freedom, one treated classically and the other quantum-mechanically. A more detailed presentation of the mathematical aspects of this approach can be found in \cite{reedmethods}, while applications to measurement theory, entanglement, and mixed states were discussed in \cite{peres,k-pt}. For simplicity we consider a single degree of freedom and denote the canonical variables as $x$ and $k$ (we reserve the symbols $p$ and $q$ for the momentum and position operators of a quantum system, to be introduced later).
Consider the Liouville equation for a system with the phase space variables $(x,k)$, the Hamiltonian $H(x,k)$, and the probability density $f(x,k)$,
\begin{equation}
\mathrm{i}\,\partial f/\partial t=Lf, \label{Leq}
\end{equation}
where $L$ is the Liouville operator, or Liouvillian,
\begin{equation}
L=\left({\partial H\over\partial k}\right)
 \left(-\mathrm{i}{\partial\over\partial x}\right)
 -\left({\partial H\over\partial x}\right)
 \left(-\mathrm{i}{\partial\over\partial k}\right). \label{L}
\end{equation}
Since the Liouville density $f$ is never negative, it is possible to introduce a function
\begin{equation}
\psi_c\equiv\sqrt{f},\label{classwave}
\end{equation}
which in this case satisfies the same equation of motion as $f$,
\begin{equation}
\mathrm{i}\,\partial\psi_c/\partial t=L\psi_c.
\end{equation}
It has the structure of the Schr\"{o}dinger equation, with the Liouvillian taking the role of the generator of time translations; its self-adjointness can be established under mild conditions on the potential \cite{reedmethods}. Hence we can interpret $\psi_c$ as a ``classical wave function.''

We shall now consider $\psi_c$ as the basic object. However, for our classical system only $f=|\psi_c|^2$ has a direct physical meaning. It can be proven that, under reasonable assumptions about the Hamiltonian, the Liouvillian is an essentially self-adjoint operator and generates a unitary evolution~\cite{reedmethods}:
\begin{equation}\label{inner-koop}
{\langle}\psi_c|\phi_c{\rangle}:=\int\psi_c(x,k,t)^*\,\phi_c(x,k,t)\,dxdk={\rm const.}
\end{equation}
Note that while the classical wave function of Eq.~\eqref{classwave} is real, complex-valued functions naturally appear in this space, which can be extended to a Hilbert space with the inner product given by (\ref{inner-koop}) above \cite{peres}.
It is possible to further mimic quantum theory by introducing {\it commuting} position and momentum operators $\hat{x}$ and $\hat{k}$, defined by
\begin{equation}
\hat{x}\,\psi_c=x\,\psi_c(x,k,t)\qquad{\rm and}
 \qquad\hat{k}\,\psi_c=k\,\psi_c(x,k,t),
\end{equation}
respectively.
Note that the momentum $\hat{k}$ is not the shift operator (the latter is $\hat{p}_x=-\mathrm{i} \partial/\partial x$). Likewise the boost operator is $\hat{p}_k=-\mathrm{i} \partial/\partial k$. These two operators are not observable. We shall henceforth omit the hats over the classical operators when there is no danger of confusion.

What we have above is a ``Schr\"odinger picture'': operators are constant, while wave functions evolve in time as $\psi(t)=U(t)\psi(0)$, where the unitary operator $U(t)=e^{-\mathrm{i} Lt}$ if the Hamiltonian is time-independent.
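The unitary Liouville evolution of the classical wave function can be illustrated numerically. The following is a minimal sketch (our own illustration in Python; the grid size, time step, and initial Gaussian are arbitrary choices), for a single classical harmonic oscillator in dimensionless units with $\omega_c=1$: it integrates Eq.~\eqref{Leq} for $\psi_c=\sqrt{f}$ using spectral derivatives, and checks that the norm \eqref{inner-koop} is conserved while the mean position rotates as Hamilton's equations dictate.

\begin{verbatim}
import numpy as np

# Phase-space grid for the classical oscillator H = (x^2 + k^2)/2.
n = 128
x = np.linspace(-6.0, 6.0, n, endpoint=False)
dx = x[1] - x[0]
X, K = np.meshgrid(x, x, indexing="ij")
freq = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
FX, FK = np.meshgrid(freq, freq, indexing="ij")

def d_dx(psi):  # spectral derivative along x
    return np.fft.ifft2(1j * FX * np.fft.fft2(psi))

def d_dk(psi):  # spectral derivative along k
    return np.fft.ifft2(1j * FK * np.fft.fft2(psi))

def rhs(psi):   # d(psi)/dt = -i L psi = -(k d_x - x d_k) psi
    return -(K * d_dx(psi) - X * d_dk(psi))

# psi_c = sqrt(f) for a normalized Gaussian displaced to (x0, k0) = (2, 0).
psi = np.sqrt(np.exp(-((X - 2.0) ** 2 + K ** 2)) / np.pi)

dt = 0.005
for _ in range(int(np.pi / dt)):   # evolve for t = pi (half a period)
    k1 = rhs(psi)
    k2 = rhs(psi + 0.5 * dt * k1)
    k3 = rhs(psi + 0.5 * dt * k2)
    k4 = rhs(psi + dt * k3)
    psi = psi + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

f = np.abs(psi) ** 2
norm = f.sum() * dx * dx           # stays ~1 (unitarity)
mean_x = (X * f).sum() * dx * dx   # rotates from +2 to ~-2 at t = pi
print(norm, mean_x)
\end{verbatim}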
We can also define a ``Heisenberg picture'' \cite{k-pt} where wavefunctions are fixed and operators evolve:
\begin{equation}
X_H(t)=U^\dagger XU.
\end{equation}
The Heisenberg equation of motion
\begin{equation}
\mathrm{i}\,dX_H/dt=[X_H,L_H]=U^\dagger[X,L]\,U,
\end{equation}
together with the Liouvillian (\ref{L}), readily gives Hamilton's equations
\begin{equation}
{dx\over dt}={\partial H\over\partial k},\qquad\qquad
 {dk\over dt}=-{\partial H\over\partial x}.
\end{equation}

This formalism allows us to describe the states of classical and quantum systems in a single mathematical framework, namely in the joint Hilbert space ${\mathcal{H}}={\mathcal{H}}_q\otimes{\mathcal{H}}_c$. Since we are dealing with Hilbert spaces, the concepts of partial trace and entanglement (including entanglement between classical and quantum states) are naturally defined.

In the following we discuss coupled classical and quantum harmonic oscillators with frequencies $\omega_c$ and $\omega_q$, respectively. To simplify the analysis we use dimensionless canonical variables. For the quantum oscillator we set the position and momentum scales as $l$ and $l_p=\hbar/l$, by defining $\bar{q}:=q/l$ and $\bar{p}:=p/l_p$, respectively. For the classical oscillator the scales are set by $\lambda$ and $\lambda_k=\kappa/\lambda$, where $\kappa$ is a parameter with the units of action. The scales are set as
\begin{align}
&l=\sqrt{\frac{\hbar}{m \omega_q}}, \qquad l_p=\frac{\hbar}{l}=\sqrt{\hbar m \omega_q},\\
&\lambda=\sqrt{\frac{\kappa}{m \omega_c}}, \qquad\lambda_k=\frac{\kappa}{\lambda}=\sqrt{\kappa m \omega_c},
\end{align}
so
\begin{equation}
[\bar{q},\bar{p}]=[\bar{x},\bar{p}_x]=[\bar{k},\bar{p}_k]=\mathrm{i}
\end{equation}
and the Hamiltonians can be expressed as
\begin{equation}
H_q={\tfrac{1}{2}}\hbar\omega_q\big(\bar{q}^2+\bar{p}^2\big), \qquad H_c={\tfrac{1}{2}}\kappa\omega_c\big(\bar{x}^2+\bar{k}^2\big).
\end{equation}
In terms of creation and annihilation operators, the most general bilinear Hermitian term coupling the quantum and classical systems is
\begin{equation}
K_{i}=\mathrm{i} \left(\beta^*_{0x}ab_x-\beta_{0x}b_x^{\dagger}a^{\dagger}+\beta^*_{0k}ab_k-\beta_{0k}b_k^{\dagger}a^{\dagger}\right)+
\alpha_{0x}a^{\dagger}b_x+\alpha_{0x}^*b_x^{\dagger}a+\alpha_{0k}a^{\dagger}b_k+\alpha_{0k}^*b_k^{\dagger}a.
\end{equation}
Using the decompositions $\alpha_{0x}=\alpha^{(1)}_{0x}+\mathrm{i} \alpha^{(2)}_{0x}$ and $\beta_{0x}=\beta^{(1)}_{0x}+\mathrm{i} \beta^{(2)}_{0x}$, and similar ones for $\alpha_{0k}$ and $\beta_{0k}$, and demanding that no unobservable operators are coupled to the quantum sector, we obtain the following form for the equations of motion:
\begin{align} \label{eqmotion}
&\dot{\bar{q}}=\omega_q\bar{p}+2\alpha^{(2)}_{0x}\bar{x}+2\alpha^{(2)}_{0k}\bar{k}, \quad \qquad \dot{\bar{p}}=-\omega_q\bar{q}-2\alpha^{(1)}_{0x}\bar{x}-2\alpha^{(1)}_{0k}\bar{k},\\\nonumber
&\dot{\bar{x}}=\omega_c\bar{k}, \qquad \qquad \quad \qquad \qquad \qquad \dot{\bar{k}}=-\omega_c\bar{x},\\\nonumber
&\dot{\bar{p}}_x=\omega_c\bar{p}_k-2\beta^{(2)}_{0x}\bar{q}+2\beta^{(1)}_{0x}\bar{p}, \qquad \dot{\bar{p}}_k=-\omega_c\bar{p}_x-2\beta^{(2)}_{0k}\bar{q}+2\beta^{(1)}_{0k}\bar{p}.
\end{align}
See Appendix A for a detailed derivation of these results.

We observe that the classical position and momentum act on their quantum counterparts as external forces, without experiencing any backreaction. This bizarre state of affairs also brings the system to resonance when $\omega_c=\omega_q$, with an unbounded increase of the energy of the quantum oscillator, similar to \cite{pt01,k-pt}.

\section{Phase space picture}\label{Moyal}

The phase-space formulation of quantum mechanics provides us with an alternative way of analyzing hybrid quantum-classical systems. In this formulation, which is based on the Wigner function, quantum-mechanical operators are associated with c-number functions on the phase space using Weyl's ordering rule \cite{wigrev}. The quantum-mechanical features of operators in Hilbert space, such as their noncommutativity, manifest themselves in the noncommutative multiplication of c-number functions through the Moyal $\star$-product on the phase space, which corresponds to the Hilbert-space operator product.

In classical mechanics the evolution of a dynamical variable, represented by an arbitrary function of the form $f(x,k,t)$ on a phase space whose conjugate variables are $(x,k)$, is described by Hamilton's equations of motion. These equations are
\begin{equation}
\frac{d}{dt}f(x,k,t)=\{f,H\}+\frac{\partial}{\partial t}f(x,k,t), \label{liouveq}
\end{equation}
where $\{\cdot\,,\cdot\}$ is the Poisson bracket and $H$ is a classical Hamiltonian. In quantum mechanics one can obtain an analogous phase space description by replacing the Poisson bracket with the Moyal bracket, and the Liouville function with the Wigner function $W(x,k,t)$. The Moyal evolution equation is given by \cite{Moyal,zfc05}
\begin{equation}
\frac{\partial}{\partial t}W(x,k,t)=\frac{H \star W-W\star H}{\mathrm{i} \hbar}\equiv \{ \!\!\{W,H\}\!\! \},
\end{equation}
where $\{ \!\!\{\cdot \, ,\cdot \}\!\! \}$ represents the Moyal bracket and the $\star$-product is defined as
\begin{equation}
\star \equiv \mathrm{e}^{\frac{\mathrm{i} \hbar}{2}\big(\overset{\leftarrow}{\partial_x}\overset{\rightarrow}{\partial_k}-\overset{\leftarrow}{\partial_k}\overset{\rightarrow}{\partial_x}\big)}.
\end{equation}
One can represent the Moyal bracket as a Poisson bracket plus correction terms,
\begin{equation}
\{ \!\!\{W,H\}\!\! \}=\{W,H\}+\mathcal{O}(\hbar).
\end{equation}
It is also important to note that for quadratic Hamiltonians the Moyal bracket reduces to the Poisson bracket.

The question of equivalence of quantum and classical descriptions makes sense in the following context. A positive initial Wigner function $W(x,k,t=0)$ that corresponds to the quantum state $\hat\rho(t=0)$ can be identified with the Liouville function, $f(t=0)\leftarrow W(t=0)$. This function is evolved classically by Eq.~\eqref{liouveq}, and then the reverse identification is made: $W(t)\leftarrow f(t)$. If this represents a valid quantum state $\hat\rho_f$, the procedure is consistent. If, furthermore, the phase space expectation values calculated with $f(t)$ or, equivalently, the quantum expectations calculated with $\hat{\rho}_f(t)$, are the same as the expectations obtained with the quantum-evolved state $\hat\rho(t)$, the two descriptions are equivalent.

This is the context of the statement of equivalence of GQM and classical statistical mechanics.
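The reduction of the Moyal bracket to the Poisson bracket for quadratic Hamiltonians, and the appearance of $\hbar$-dependent corrections otherwise, can be verified symbolically. The following is a minimal sketch (our own illustration in Python with SymPy; the function names and the truncation order are ours): it expands the Moyal bracket to a finite order in $\hbar$ and compares it with the Poisson bracket for a harmonic and a quartic Hamiltonian.

\begin{verbatim}
import sympy as sp
from math import comb, factorial

x, k, hbar = sp.symbols('x k hbar', real=True)
W = sp.Function('W')(x, k)

def poisson(f, g):
    return sp.diff(f, x) * sp.diff(g, k) - sp.diff(f, k) * sp.diff(g, x)

def moyal(f, g, nmax=2):
    # {{f,g}} = sum_n (-1)^n hbar^(2n)/(4^n (2n+1)!) * (bidirectional term),
    # truncated at n = nmax; the n = 0 term is the Poisson bracket.
    total = sp.S(0)
    for n in range(nmax + 1):
        N = 2 * n + 1
        term = sum((-1) ** m * comb(N, m)
                   * sp.diff(f, x, N - m, k, m)
                   * sp.diff(g, k, N - m, x, m)
                   for m in range(N + 1))
        total += sp.Rational((-1) ** n, 4 ** n * factorial(N)) \
                 * hbar ** (2 * n) * term
    return total

H_harm = (x ** 2 + k ** 2) / 2      # quadratic: corrections vanish identically
H_quart = k ** 2 / 2 + x ** 4 / 4   # quartic: an O(hbar^2) term survives

print(sp.simplify(moyal(W, H_harm) - poisson(W, H_harm)))    # -> 0
print(sp.simplify(moyal(W, H_quart) - poisson(W, H_quart)))  # -> hbar^2 term
\end{verbatim}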
A minor issue, however, arises already at this stage; it follows directly from the properties of the Wigner function \cite{wigrev}. The phase space expectation with $W_\rho$ is equivalent to the Weyl-ordered expectation with the state $\rho$. If the expectation of a different combination of operators needs to be evaluated, it cannot be done directly in the phase space; rather, the Liouville/Wigner function first needs to be converted to the corresponding quantum state.

Let us consider a system with two degrees of freedom with the Hamiltonian
\begin{equation}
H=\frac{1}{2}\left(p^2+k^2\right)+V(q,x),
\end{equation}
where $(q,p)$ and $(x,k)$ are the canonical pairs for the first and the second subsystems, respectively. As before, we use dimensionless canonical variables and set $\hbar=1$.

We consider the most general form of the potential, given by
\begin{equation} \label{generalp}
V(q,x)=U_1(q)+U_2(x)+U(q,x).
\end{equation}
Mixed quantum-classical dynamics, with Poisson brackets substituted for Moyal brackets in the quantum subsystem, may either be a good approximation or produce unphysical results. A clear signature of the latter would be a violation of the Heisenberg uncertainty relation for the presumably quantum subsystem.

The subsequent analysis can be thought of as an investigation of the consistency of the phase-space based mixed quantum-classical dynamics, where the first pair $(q,p)$ describes a quantum system, which unless specified otherwise is a harmonic oscillator ($U_1(q)=\alpha q^2/2$), whilst the classical potential $U_2(x)$ and the interaction term $U(q,x)$ are general. Alternatively, it can be viewed as an investigation of how the phase-space description of the quantum dynamics breaks down. From either perspective, since Gaussian states are particularly well-behaved, we assume that the initial Wigner functions and/or Liouville distributions are of Gaussian form.

To observe the violation of uncertainty relations we must trace the evolution of statistical moments in time. Here we briefly review their basic properties and their role in the characterization of Gaussian states.

The quantum moments are defined as
\begin{equation}
M^{a,b}\equiv \left\langle \delta \hat{q}^a \delta \hat{p}^b \right\rangle_{\text{ord}},
\end{equation}
where the subscript `ord' refers to a particular ordering, e.g. symmetric or Weyl, and the expectation value of an operator $\hat A$ is given by the trace formula
\begin{equation} \label{qexpect}
\big\langle\hat{A}\big\rangle={\rm tr}(\hat{\rho} \hat{A}).
\end{equation}
The quantities $\delta \hat{q}=\hat{q}-\left\langle \hat{q}\right\rangle$ and $\delta \hat{p}=\hat{p}-\left\langle \hat{p}\right\rangle$ are the operators for deviations from the mean (expectation) values, and the sum of the indices $(a+b)$ is the order of the moment $M^{a,b}$.

Analogously, we define the classical moments as
\begin{equation}
M^{a,b}_C\equiv \left\langle \delta {x}^a \delta {k}^b \right\rangle,
\end{equation}
where $\delta x=x-\left\langle x \right\rangle$ and $\delta k=k-\left\langle k \right\rangle$ are deviations from the mean values of position and momentum, respectively, in the classical system.
The mean (average) value of a function $A(x,k)$ is obtained by using the Liouville density $f(x,k,t)$,
\begin{equation}\label{cmean}
\left\langle A(t) \right\rangle=\int_{-\infty}^{\infty}{\int_{-\infty}^{\infty}{A(x,k)f(x,k,t) dx\, dk}}.
\end{equation}
We shall use angle brackets for both classical means and quantum expectation values, employing \eqref{qexpect} and \eqref{cmean} as appropriate.

A Gaussian state $\hat{\rho}$ has a Gaussian characteristic function, whose Fourier transform gives a (Gaussian) Wigner function \cite{olivares12,Weedbrook12},
\begin{equation}
W(\bm{X})=\frac{\exp \left[-1/2(\bm{X}-\bm{\mu})^T\bm{\sigma}^{-1}(\bm{X}-\bm{\mu})\right]}{(2\pi)^N \sqrt{\text{det}\bm{\sigma}}},
\end{equation}
where $\bm{\mu}\equiv\left\langle \bm{X} \right\rangle$ and $\bm{\sigma}$ is the covariance matrix, namely, the matrix of second moments of the state $\hat{\rho}$.
By definition, a Gaussian probability distribution is completely described by its first and second moments; all higher moments can be derived from the first two via
\begin{align}
&\left\langle (\bm{X}-\bm{\mu})^k\right\rangle=0 &&\text{for odd}~ k,\\
&\left\langle (\bm{X}-\bm{\mu})^k\right\rangle=\sum{(c_{ij}\cdots c_{xz})} &&\text{for even}~ k,
\end{align}
also known as Wick's theorem \cite{Ahrendt05}. The sum is taken over all distinct pairings of the $k$ indices; there are therefore $(k-1)!/(2^{k/2-1}(k/2-1)!)$ terms, each consisting of a product of $k/2$ covariances $c_{ij} \equiv \left\langle (X_i-\mu_i)(X_j-\mu_j)\right\rangle$.

Epistemically-restricted Liouville mechanics (ERL) \cite{brs:15} is obtained by adding a restriction on the classical phase-space distributions, which are the allowed epistemic states of Liouville mechanics. These restrictions are {\it the classical uncertainty principle (CUP)} and {\it the maximum entropy principle (MEP)}. CUP requires that the covariance matrix $\bm{\chi}$ of the probability distribution satisfy the inequality
\begin{equation}\label{CUP}
\bm{\chi}+\mathrm{i} \epsilon \bm{\Omega}/2\geq 0,
\end{equation}
where $\epsilon$ is a free parameter of the ERL theory and $\bm{\Omega}$ is known as the symplectic form \cite{olivares12,Weedbrook12}. To reproduce GQM we must set $\epsilon=\hbar$. The MEP condition requires that the phase-space distribution have the maximum entropy among all distributions with the same covariance matrix $\bm{\chi}$. Any distribution that satisfies these two conditions is a valid epistemic state and can be equivalently described by a Gaussian state.

Now consider a system of two interacting degrees of freedom. Its state (quantum, classical or mixed) is Gaussian, i.e. fully described by the first two statistical moments. If the system is in a valid quantum or ERL state, its covariance matrix $\bm{\sigma}$ is non-negative, namely,
\begin{equation} \label{positivity}
\bm{\sigma}+\mathrm{i} \bm{\Omega}/2 \geq 0.
\end{equation}
This condition requires that all the symplectic eigenvalues of the covariance matrix be non-negative or, equivalently, that its leading principal minors all be non-negative.
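The positivity condition \eqref{positivity} is straightforward to test numerically for the two-mode covariance matrices used below. The following is a minimal sketch (our own illustration in Python; the parameter values are arbitrary), with modes ordered as $(q,p,x,k)$ and $\hbar=1$: it checks the Hermitian matrix $\bm{\sigma}+\mathrm{i}\bm{\Omega}/2$ for negative eigenvalues and computes the symplectic eigenvalues.

\begin{verbatim}
import numpy as np

# Symplectic form for two modes ordered as (q, p, x, k).
O1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
Omega = np.block([[O1, np.zeros((2, 2))],
                  [np.zeros((2, 2)), O1]])

def is_valid_state(sigma):
    # sigma + i Omega/2 >= 0 iff this Hermitian matrix has no negative eigenvalues.
    return np.linalg.eigvalsh(sigma + 0.5j * Omega).min() >= -1e-12

def symplectic_eigenvalues(sigma):
    # Eigenvalues of i Omega sigma come in pairs (+nu, -nu); keep one of each.
    return np.sort(np.abs(np.linalg.eigvals(1j * Omega @ sigma)))[::2]

# Saturated quantum block (z1 = z2 = 0), classical block with y1 = y2 = 0.1,
# and a QC cross-correlation <dq dx> = c.
for c in (0.0, 0.2):
    sigma = np.diag([0.5, 0.5, 0.6, 0.6])
    sigma[0, 2] = sigma[2, 0] = c
    print(c, symplectic_eigenvalues(sigma), is_valid_state(sigma))
# A saturated (pure) quantum block admits no cross-correlations:
# the c = 0.2 matrix fails the test.
\end{verbatim}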
Having the symplectic matrix organized in pairs of coordinates for each oscillator as $(q,p,x,k)$, the covariance matrix $\bm{\sigma}$ describing the state of the entire system can be decomposed as
\begin{equation}
\bm{\sigma}=\begin{pmatrix}\bm{\sigma}_Q & \bm{\gamma}_{QC}\\
\bm{\gamma}^T_{QC} & \bm{\sigma}_C\end{pmatrix},
\end{equation}
where $\bm{\sigma}_Q$ and $\bm{\sigma}_C$ are $2\times2$ covariance matrices that describe the reduced states of the respective subsystems $Q$ and $C$. The $2\times2$ matrix $\bm{\gamma}_{QC}$ encodes the correlations between the two subsystems.

As discussed above, we take the initial state of the entire system to be Gaussian. The first moments at time $t=0$ are
\begin{equation}
{\langle}\hat{q}(0){\rangle}=q_0, \qquad {\langle}\hat{p}(0){\rangle}=p_0, \qquad {\langle} x(0){\rangle}=x_0, \qquad {\langle} k(0){\rangle}=k_0,
\end{equation}
and the reduced covariance matrices are
\begin{equation}
\bm{\sigma}_Q=\begin{pmatrix} 1/2+z_1& \left\langle \delta p\delta q \right\rangle\\
\left\langle \delta p\delta q \right\rangle & 1/2+z_2\end{pmatrix}, \quad
\bm{\sigma}_C=\begin{pmatrix} 1/2+y_1 & 0\\
0 & 1/2+y_2 \end{pmatrix},
\end{equation}
where to simplify the exposition we assume a diagonal covariance matrix for the system $C$.

Up to now these are simply two distinct systems. Anticipating the uncertainty relation, we consider the first system $Q$ to be quantum-mechanical and the second system $C$ to be classical, and parametrize
\begin{equation}
\left\langle \delta q^2 \right\rangle\equiv{\tfrac{1}{2}}+z_1, \qquad \left\langle \delta p^2 \right\rangle\equiv {\tfrac{1}{2}}+z_2,
\end{equation}
with analogous meanings for $y_1$ and $y_2$ for the system $C$.
By definition, $z_1, z_2, y_1$, and $y_2$ can take any value in $(-1/2,\infty)$. The classical-classical (CC) correlations are assumed to be zero for simplicity. Depending on how squeezed the state is and how the two systems are correlated, one can determine the range of these free parameters that satisfies the positivity condition \eqref{positivity} for the covariance matrix of the whole ensemble.

It is straightforward to show that for the Gaussian quantum subsystem alone the Heisenberg uncertainty relation (UR) is
\begin{equation} \label{hup}
f(t)= \left\langle \delta p^2 \right\rangle \left\langle \delta q^2 \right\rangle-\left\langle \delta q \delta p\right\rangle^2-\frac{1}{4} \geq 0.
\end{equation}
The same requirement holds for the classical subsystem only if it is in a valid ERL state.

Instead of evolving the quantum state or the Liouville density, it is possible to follow the (generally infinite) hierarchy of statistical moments \cite{bm98,Brizuela}. To find the moment equations we use the general formula for the time derivatives of the classical moments \cite{bm98}, as detailed in Appendix B. As we are not looking for numerical solutions to these equations, but rather wish only to probe for (lack of) consistency, we study their short-term temporal behaviour via series expansions.
We therefore write
\begin{equation} \label{moments}
\left\langle \delta p^2 \right\rangle\equiv\sum_{n=0}^N{\frac{\left\langle \delta p^2 \right\rangle^{(n)}_0}{n!}t^n},\qquad
\left\langle \delta q^2 \right\rangle\equiv\sum_{n=0}^N{\frac{\left\langle \delta q^2 \right\rangle^{(n)}_0}{n!}t^n},\qquad
\left\langle \delta q \delta p\right\rangle\equiv\sum_{n=0}^N{\frac{\left\langle \delta q \delta p \right\rangle^{(n)}_0}{n!}t^n},
\end{equation}
truncating the series at $N=3$, which is sufficient for our purposes.

Our goal is to study the behaviour of $f(t)$ in CGQ. In particular, we investigate under what circumstances (if any) $f(t)<0$, signifying a violation of the uncertainty relation.
For non-Gaussian states it is easy to see the violation even in the first-order term, since not all the odd moments are zero. We can observe this by considering an arbitrary potential with a single degree of freedom, $V(q)$, as in the following example. For such a potential, without any initial QQ or QC correlations, we have
\begin{eqnarray}
f(t)&=&\frac{1}{2}\Big(z_1+z_2+2z_1z_2\Big)\\\nonumber
&-&\frac{1}{120}\bigg[(1+2z_1)\Big(60 \left\langle \delta p \,\delta q^2\right\rangle V^{(3)}(q)+20 \left\langle \delta p\, \delta q^3\right\rangle V^{(4)}(q)+5\left\langle \delta p \,\delta q^4\right\rangle V^{(5)}(q)+\left\langle \delta p\, \delta q^5\right\rangle V^{(6)}(q)\Big)\bigg]t+\mathcal{O}(t^2),
\end{eqnarray}
where the first term (the uncertainty at $t=0$) can be zero, while the overall sign of the first-order term is negative. Hence for a generic state that initially saturates the uncertainty relation, $f(0)=0$, time evolution with any potential immediately violates it. However, Gaussian states are quite robust against the violation of the UR: if $f(t=0)=0$, then any potential of the form $V(q)$ will lead to a violation only in the third-order term.

Next we consider the most general form of the potential \eqref{generalp}. By including both initial QQ $\left\langle \delta q \delta p\right\rangle_0$ and, e.g., QC $\left\langle \delta q \delta x\right\rangle_0$ correlations, while setting other correlations to zero, the general form of \eqref{hup} becomes
\begin{eqnarray} \label{ghup}
f(t)&=& \frac{1}{2}\Big(z_1+z_2+2z_1z_2-2\left\langle \delta q \delta p\right\rangle^2_0\Big)\\\nonumber
&+&\frac{1}{16}\left\langle \delta q \delta p\right\rangle_0 \left\langle \delta q \delta x\right\rangle_0\Big[32 U^{(1,1)}(q,x)+8(1+2y_2)U^{(1,3)}(q,x)\!+\!\big(1+4y_2+4y_2^2\big)U^{(1,5)}(q,x)+32\left\langle \delta q \delta x\right\rangle_0 U^{(2,2)}(q,x)\\\nonumber
&+&8\left\langle \delta q \delta x\right\rangle_0 U^{(2,4)}(q,x) (1+2y_2)+(8+16z_1)U^{(3,1)}(q,x)
+\Big(2+16\left\langle \delta q \delta x\right\rangle_0^2+4y_2+4z_1+8y_2 z_1\Big)U^{(3,3)}(q,x)\\\nonumber
&+&8\left\langle \delta q \delta x\right\rangle_0 U^{(4,2)}(q,x)(1+2z_1)+(1+4z_1+4z_1^2)U^{(5,1)}(q,x)\Big]t+\mathcal{O}(t^2),
\end{eqnarray}
up to the leading order in time. The first term of this relation, which describes the UR at $t=0$, cannot be initially saturated (namely $f(0)\neq 0$), since the inclusion of the QC correlation implies that the reduced state of the quantum subsystem will no longer be pure (the only case where the UR saturates).
In this case the quantum system has some positive initial value $f(0)$ that can be minimized whilst satisfying the positivity condition \eqref{positivity} of the covariance matrix of the whole system.

We can establish the inconsistency if the linear term is negative and the second-order term is either negative or sufficiently small as to enable $f(t_*)<0$ for some time $t_*$.
We therefore observe that a necessary condition for a violation of the UR in the linear term is that neither $\left\langle \delta q \delta p\right\rangle_0$ nor $\left\langle \delta q \delta x\right\rangle_0$ vanishes, and that at least one of the $U^{(i,j)}(q,x)$ is nonzero. Otherwise, terms of higher order in $t$ must be included in \eqref{ghup} for any possibility of observing a violation of the UR. For example, if we consider no QC or QQ correlations, the first term in \eqref{ghup} can saturate at $t=0$ and the first-order term disappears. If the second-order term can be made negative, then a violation of the UR follows immediately. Similar considerations hold for higher-order terms if the second-order term is positive. In the following examples we will analyze the behaviour of each term.

Consider a specific form of an interaction potential given by
\begin{equation}
U(q,x)=\beta_1 q g(x)+\beta_2 q^2 g(x),
\end{equation}
where
\begin{equation}
g(x)=\gamma_1 x+\gamma_2x^2.
\end{equation}
For the case with no QQ or QC correlations, equation \eqref{hup} takes the following form up to the third order in time:
\begin{eqnarray}
f(t)&=&\frac{1}{2}\Big(z_1+z_2+2z_1z_2\Big)\\\nonumber
&+&\frac{1}{4}(1+2y_2)(1+2z_1)\Big(\beta_1^2+4 q_0 \beta_1 \beta_2+2\beta_2^2\big(1+2q_0^2+2z_1\big)\!\Big)\Big(\!\gamma_1^2+2x_0 \gamma_1 \gamma_2+\!\big(1+4x_0^2+2y_2\big)\gamma_2^2\Big)t^2+2k_0\!\left(\!\frac{1}{2}+z_1\!\right)\\\nonumber
&\times&\beta_2(\gamma_1+2x_0 \gamma_2)\left[\frac{1}{2}+z_2-\left(\frac{1}{2}+z_1\right)\alpha-2\left(\frac{1}{2}+y_2\right)\!\left(\frac{1}{2}+z_1\right)\beta_2 \gamma_2-x_0(1+2z_1)\beta_2(\gamma_1+x_0\gamma_2)\right]t^3+\mathcal{O}(t^4).
\end{eqnarray}

In this example, the quadratic term is always positive; we can minimize its effect by choosing $y_2\rightarrow -1/2$. This is the case of extreme squeezing: the Gaussian distribution in phase space is squeezed in one dimension and elongated in the other. Violations of the UR will occur if the coefficient of the $t^3$ term is negative, which can be arranged by setting $\beta_2, \alpha>0$, with all other variables also being positive. Three out of five terms in the square bracket are negative, and so the entire coefficient can be made negative by choosing large positive values for $\alpha$ and $\beta_2$. The fourth-order term includes both negative and positive contributions; one can strengthen the negative contributions by choosing the initial values arbitrarily large, while diminishing the positive ones by choosing $y_2\rightarrow -1/2$.
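For a purely bilinear coupling, the Moyal and Poisson brackets coincide and the second moments obey a closed linear system, so the short-time expansions can be complemented by direct numerical integration; the quadratic case is analyzed analytically below. The following is a minimal sketch (our own illustrative probe in Python; the Hamiltonian parameters and initial moments are arbitrary choices): it integrates $\dot{\bm{\sigma}}=A\bm{\sigma}+\bm{\sigma}A^{T}$ for $H=(p^2+\alpha q^2)/2+(k^2+\omega_c^2x^2)/2+g\,qx$ and reports the first time at which $f(t)$ of Eq.~\eqref{hup} turns negative, if any.

\begin{verbatim}
import numpy as np

alpha, wc2, g = 1.0, 1.0, 5.0
# Drift matrix of Hamilton's equations for the ordering (q, p, x, k).
A = np.array([[0.0,    1.0,  0.0, 0.0],
              [-alpha, 0.0,  -g,  0.0],
              [0.0,    0.0,  0.0, 1.0],
              [-g,     0.0, -wc2, 0.0]])

# Initial second moments: nearly saturated quantum block, squeezed classical
# block (allowed classically, but below the epistemic bound), plus small
# QQ and QC correlations.
sigma = np.diag([0.55, 0.55, 0.1, 0.1])
sigma[0, 1] = sigma[1, 0] = 0.05   # <dq dp>
sigma[0, 2] = sigma[2, 0] = 0.20   # <dq dx>

def f_ur(s):
    # f(t) = <dq^2><dp^2> - <dq dp>^2 - 1/4 for the quantum subsystem.
    return s[0, 0] * s[1, 1] - s[0, 1] ** 2 - 0.25

def deriv(s):
    return A @ s + s @ A.T         # closed flow of the second moments

dt, t, t_star = 1e-4, 0.0, None
while t < 2.0 and t_star is None:
    k1 = deriv(sigma)              # classical RK4 step
    k2 = deriv(sigma + 0.5 * dt * k1)
    k3 = deriv(sigma + 0.5 * dt * k2)
    k4 = deriv(sigma + dt * k3)
    sigma = sigma + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
    if f_ur(sigma) < 0.0:
        t_star = t
print(t_star)  # first violation time, or None if f stays non-negative here
\end{verbatim}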
If we include non-vanishing QQ and QC correlations, we find even for a quadratic potential ($\beta_2= \gamma_2=0$) that a violation of the UR at second order in $t$ can occur for an appropriate choice of parameters; the upper limit for the time $t_*$ at which $f(t)$ becomes negative is
\begin{equation} \label{time-quad}
t_*\leq\left|\frac{2}{\sqrt{4 (\left\langle \delta q \delta x\right\rangle_0 \beta_1 \gamma_1)^2 - 2(1+2z_2)(\left\langle \delta q \delta x\right\rangle_0 \beta_1 \gamma_1)}}\right|,
\end{equation}
provided the argument of the square root is positive.

As our second example we consider an interaction potential of the form
\begin{equation} \label{example3}
U(q,x)=\beta_1 q x^2+\beta_2 q^2 x,
\end{equation}
for which equation \eqref{hup} takes the form
\begin{eqnarray}
f(t)&=&\frac{1}{2}\Big(z_1+z_2+2z_1z_2\Big)+\frac{1}{4}(1+2y_2)(1+2z_1)\Big(\big(1+4x_0^2+2y_2\big)\beta_1^2+8q_0 x_0\beta_1 \beta_2+2\beta_2^2\big(1+2q_0^2+2z_1\big)\Big)t^2\\\nonumber
&-&\frac{1}{2}\Big(k_0(1+2z_1)\beta_2(-1-2z_2+\alpha+2z_1 \alpha+2x_0 \beta_2+4x_0 z_1 \beta_2)\Big)t^3+\mathcal{O}(t^4).
\end{eqnarray}
As in the previous example, the second-order term can be minimized by choosing $y_2 \rightarrow -1/2$. Violation of the UR will be manifest in the third- and fourth-order terms, provided the free parameters $x_0$, $k_0$, $\alpha$, and $\beta_2$ are chosen large enough to make the quantity in the brackets positive.

For a potential of the form \eqref{example3}, by introducing non-zero cross correlations (QC) between the classical and quantum subsystems (for example, by considering $\left\langle \delta q \delta x\right\rangle_0$ in the correlation matrix $\bm{\gamma}_{QC}$) and also taking $\left\langle \delta q \delta p\right\rangle_0 \neq 0$, we have
\begin{eqnarray}
f(t)&=&\frac{1}{2}\Big(z_1+z_2+2z_1z_2-2\left\langle \delta q \delta p\right\rangle^2_0\Big)+2\left\langle \delta q \delta p\right\rangle_0 \left\langle \delta q \delta x\right\rangle_0 (\beta_1 x_0+\beta_2 q_0)t\\\nonumber
&+&\frac{1}{4}\bigg[\beta_1\Big(8 k_0\left\langle \delta q \delta x\right\rangle_0\left\langle \delta q \delta p\right\rangle_0+4x_0(\left\langle \delta q \delta x\right\rangle_0+2\left\langle \delta q \delta x\right\rangle_0z_2)+(1+2y_2)^2(1+2z_1)\beta_1+4x_0^2\beta_1\big(1-4\left\langle \delta q \delta x\right\rangle_0^2\\\nonumber
&+&2y_2+2z_1+4y_2z_1\big)\Big)+4 \beta_2\Big(2p_0\left\langle \delta q \delta p\right\rangle_0\left\langle \delta q \delta x\right\rangle_0+2\beta_1\left\langle \delta q \delta x\right\rangle_0(1+2y_2)(1+2z_1)+q_0\big(\left\langle \delta q \delta x\right\rangle_0+2z_2\left\langle \delta q \delta x\right\rangle_0\\\nonumber
&-&8x_0\beta_1\left\langle \delta q \delta x\right\rangle_0^2+2x_0\beta_1(1+2y_2)(1+2z_1)\big)\Big)+2\beta_2^2\Big(q_0^2\big(2-8\left\langle \delta q \delta x\right\rangle_0^2+4y_2+4z_1+8y_2 z_1\big)+(1+2z_1)\big(4\left\langle \delta q \delta x\right\rangle_0^2\\\nonumber
&+&(1+2y_2)(1+2z_1)\big)\Big)\bigg]t^2+\mathcal{O}(t^3).
\end{eqnarray}
In this case we cannot saturate the first term: once the QC correlation terms are included, the reduced state of the quantum subsystem is no longer pure, and purity is the only case in which the UR saturates.
We therefore begin with some positive initial value $f(0)$, which we minimize while satisfying the positivity condition \eqref{positivity} for the covariance matrix of the whole system.
We can nevertheless make the first- and second-order coefficients negative by choosing $q_0<0$, $p_0<0$, $k_0<0$, and $\beta_1<0$, with $|p_0|$ as large as is needed to make the whole second-order term negative, while the remaining parameters are positive.

Applying these conditions keeps both the linear and the quadratic terms negative, and the upper limit for the time at which $f(t)$ crosses zero can be obtained from
\begin{equation} \label{time}
t_*\leq\left|-\frac{z_1+z_2+2z_1z_2-2\left\langle \delta q \delta p\right\rangle^2_0}{4 \left\langle \delta q \delta p\right\rangle_0 \left\langle \delta q \delta x\right\rangle_0 (\beta_1x_0+\beta_2q_0)}\right|.
\end{equation}

\section{Conclusions}

Our investigation has produced a result that is complementary to that of Bartlett, Rudolph and Spekkens \cite{brs:15}: while a stand-alone Gaussian quantum system can be treated classically, it exhibits telltale quantum features that are revealed when it is coupled to a classical system.

The results of Sec.~\ref{Koopman} show that the Koopmanian formalism distinguishes between quantum and classical descriptions even if the interaction between the two systems is Gaussian. The correspondence principle cannot be enforced, and the exclusion of the non-observable operators from the equations of motion eliminates the very possibility for the quantum subsystem to influence the classical one. In addition, since the classical Liouvillian operator is unbounded from below, a resonance leading to an infinite flow of energy from the classical to the quantum system is possible.

The phase space quantum-classical picture is, as expected, consistent if the statistical moments satisfy the ERL restrictions and the Hamiltonian is Gaussian. However, if the interaction term $U(q,x)$ is not bilinear, the mixed evolution quickly becomes inconsistent (after a time given in \eqref{time}), even if the initial state is Gaussian. Furthermore, if we start with a squeezed Gaussian state and let it evolve under a quadratic interaction term, we still observe a violation of the UR after a specific amount of time, Eq.~\eqref{time-quad}. Beyond this time the correct quantum state is definitely no longer Gaussian.

Our results have implications for the no-cloning theorem, quantum teleportation, and the EPR thought experiment, insofar as the lack of consistency of the hybrid models we describe renders them unable to properly account for these phenomena. In addition, this latter property has a bearing on the question of the logical necessity of quantizing linearized gravity \cite{Hug, Unruh:1984uq,Eppley,PG}. Consider a scalar field minimally coupled to a linearized gravitational field. Expanding both systems into normal modes, we have two families of non-linearly coupled oscillators. In a consistent mixed description, a family of quantum oscillators (the scalar field) non-linearly interacts with classical oscillators (gravity). Assuming that the results of \cite{Garay12} can be extended to a setting with infinitely many degrees of freedom, it is necessary to introduce uncertainty into the state of the classical oscillators, indicating that a consistent mixed dynamics should involve at least a stochastic gravity.
Moreover, the presence of a nonlinear interaction, as in the examples above, should eventually lead to a violation of the uncertainty relations for the quantum oscillators, making the entire scheme untenable. We will present a rigorous analysis along these lines in future work.

\acknowledgments

AA is supported by a Cotutelle International Macquarie University Research Excellence Scholarship. She also thanks Nicolas C. Menicucci for helpful discussions.
This work was supported in part by the Natural Sciences and Engineering Research Council of Canada. DRT thanks Aharon Brodutch, Viqar Hussain, Rob Spekkens and Sandu Popescu for discussions and critical comments, and Perimeter Institute and Technion --- Israel Institute of Technology for hospitality.