diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzfiwe" "b/data_all_eng_slimpj/shuffled/split2/finalzzfiwe" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzfiwe" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nIn 2015, Laser Interferometer Gravitational-Wave Observatory (LIGO)\ndetected gravitational waves from a binary black hole (BBH) merger \\cite{GW150914}.\nIn Observation run 1 and 2, ten BBH merger events were confirmed \\cite{GWTC}.\nCurrently, advanced LIGO and advanced Virgo are operating and KAGRA will join this detector network in 2020 \\cite{KAGRA}.\nBesides the improvement of detectors,\nthe improvement of data analysis methods can contribute to accelerate the gravitational wave physics and astronomy.\n\nRecently, the use of deep learning methods is proposed for various purposes,\ne.g., the detection of gravitational waves \\cite{GeorgeHuerta, SNe},\nthe parameter estimation \\cite{Hongyu, Gabbard, Alvin}.\nthe noise subtraction \\cite{Wei, denoiseHongyu},\nand the classification of glitch noises \\cite{HuertaGravitySpy}.\nOur work is devoted to investigating the accuracy of the parameter estimation.\nOur question is {\\it how accurately deep learning methods can estimate physical parameters},\nor {\\it whether deep learning methods can estimate parameters more accurately than the standard method}.\n\nIn this paper, we focus on the analysis of ringdown gravitational waves.\nThe ringdown is the last stage of a BBH merger.\nThe remnant black hole is largely perturbed just after the merger and the perturbation decays as gravitational waves are emitted.\nLate time perturbations of the black hole is dominated by the black hole quasi-normal modes (QNMs).\nThe ringdown gravitational waves can be modeled by the damped sinusoidal waveforms having the complex-valued QNM frequencies predicted by the black hole perturbation theory \\cite{ReggeWheeler, Zerilli, Teukolsky}.\nIn general relativeity (GR), the QNM frequencies are determined by the black hole mass and spin.\nBecause of this property, the ringdown gravitational waves are useful for the test of GR \\cite{Berti2005, LIGOtestGR}.\nOne way to estimate the QNM frequencies is the matched filtering using the inspiral-merger-ringdown gravitational waves \\cite{GhoshA, GhoshB}. \nThe posterior distribution of the binary masses and the spins is estimated and it can be converted into the mass and the spin of the remnant black hole by the fitting formula obtained from numerical relativity simulations \\cite{Healy}.\nThis method relies on GR and the inference of these parameters is mainly governed by the inspiral part.\nIf the effects caused by exotic theories (e.g. 
modified theories of gravity, black hole mimickers)\nmodify the merger-ringdown part without changing the inspiral part,\nbias would be introduced in the posterior in this method.\nThus, we need a method to estimate QNM frequencies using only the merger-ringdown part.\n\nThere are two possible directions of investigation:\nimproving the matched filtering and implementing alternative methods.\nIn Ref.~\\cite{MDC}, comparison of various methods for the analysis of ringdown was done using test mock data.\nThe result shows that the deep learning method is competitive with the matched filtering.\nThe deep learning method used in this challenge was the one constructed for the point estimation, that is, the neural network returns only a single estimated value for each parameter that we want to estimate.\nDespite of this shortcoming, deep learning methods are still expected to be a useful method complementary to the matched filtering.\n\nRecently, the authors of Ref.~\\cite{Gabbard} proposed the use of the conditional variational auto encoder (CVAE) for gravitational data analysis.\nIn addition to that the computational speed of the CVAE is much faster than that of the matched filtering, the CVAE can estimate the posterior probability distributions of parameters.\nAlthough the purpose of Ref.~\\cite{Gabbard} was the rapid inference,\nwe apply the CVAE for the off-line analysis and assess the accuracy of the inference of the CVAE.\n\nThis paper is organized as follows.\nIn Sec.~\\ref{sec:dataset}, we present the construction of the waveforms.\nIn Sec.~\\ref{sec:MF}, we briefly review the matched filtering.\nIn Sec.~\\ref{sec:cvae}, the idea and the implementation of CVAE are explained.\nIn Sec.~\\ref{sec:cnn}, we introduce convolutional neural networks (CNNs) as another competitors to the CVAE.\nIn Sec.~\\ref{sec:results}, the results obtained by the CVAE are compared with the matched filtering and the CNN.\nWe focus on the accuracy of the maximum posterior estimations and the area of the confidence regions.\nWe also confirm that the confidence regions obtained by the CVAE have the frequentist meaning by making the P-P plot,\nwith evaluation of the magnitude of the error.\nWe summarize our results and future works in Sec.~\\ref{sec:conclusion}.\nThroughout this paper, we set $G=c=1$.\n\n\n\n\n\n\n\n\\section{Preparing mock templates}\n\\label{sec:dataset}\n\n\n\n\nAs explained in Introduction, the situation we consider is that only the merger-ringdown part is modified from that of GR,\nand we compare deep learning methods and the matched filtering in such a situation.\nFor this purpose, we need to generate a test dataset by modifying only the merger-ringdown part of the waveform.\nIn some modified theories of gravity, gravitational waves from inspiraling BBHs can be calculated in the post-Newtonian approximation.\nBut consistent simulations throughout the inspiral-merger-ringdown phases have not been done so far.\nIn addition, it is a highly speculative assumption that only the merger-ringdown part might be modified.\nTherefore, what we can do for generating modified templates is to modify the merger-ringdown phase of GR templates in a phenomenological manner.\nUsing the modified templates, we prepare a mock test data for comparison of the deep learning methods and the matched filtering.\nThese templates are used not only for preparing a test dataset, but also for training neural networks and for constructing the template bank of the matched filtering.\n\nThe precise modeling of the transition from 
the inspiral phase to the post-merger phase is difficult,\nbut we would be able to roughly assume that the gravitational waves of the merger-ringdown phase have the following properties,\n\\begin{itemize}\n\\item The amplitude after the peak monotonically decreases.\nAt a later time, the amplitude decays exponentially.\n\\item The frequency monotonically increases and converges to a certain QNM frequency at a later time.\n\\end{itemize}\nWe focus on the case where the waveforms are modified only after the time $t_p^\\mathrm{GR}$, at which the amplitude of GR template reaches its peak.\nTherefore, the inspiral part of the modified waveform coincides with GR one.\nIn this work, we focus only on $l=m=2$ mode and ignore overtones\nas they are much weaker especially for nearly equal-mass binaries.\nThe importance of the multi-modes and overtones has been studied in Refs.~\\cite{multimodeBerti, multimodeJulian, overtone}.\n\n\nWe denote the QNM frequencies for GR templates and for modified templates by $\\omega_\\mathrm{R, I}^\\mathrm{GR}$ and $\\omega_\\mathrm{R, I}$, respectively.\nThe modified templates are constructed by modifying the complex-velued templates in GR, $h^\\text{GR}(t)$.\nFirst, we decompose the strain $h^\\text{GR}(t)$ into the amplitude $A^\\text{GR}(t)$ and the frequency $\\omega^\\text{GR}(t)$ as\n\\begin{equation}\n\th^\\text{GR}(t) = A^\\text{GR}(t) e^{i\\phi^\\text{GR}(t)},\\ \\phi^\\text{GR}(t) = \\int^t dt' \\omega^\\text{GR}(t').\n\\end{equation}\nFrom $A^\\text{GR}(t)$ and $\\omega^\\text{GR}(t)$, the modified amplitude and frequency, $A(t)$ and $\\omega(t)$, are generated.\nOur modified templates are characterized by two parameters, $\\delta\\omega_\\text{R}$ and $\\delta\\omega_\\text{I}$.\nThe real and imaginary parts of the QNM frequency, $\\omega_\\mathrm{R}$ and $\\omega_\\mathrm{I}$, are specified by the fractional deviation from the GR values as\n\\begin{equation}\n\t\\omega_\\mathrm{R, I} = \\omega_\\mathrm{R, I}^\\mathrm{GR} ( 1 + \\delta_\\mathrm{R, I}).\n\\end{equation}\nIn our work, the modifications of the frequencies are assumed to be small.\nThe deviations of the real part and the imaginary part of QNM frequencies are assumed to be less than 30\\% and 50\\%, respectively (i.e. 
$|\\delta_\\mathrm{R}| < 0.30, |\\delta_\\mathrm{I}| < 0.50$).\n\n\n\n\nModified amplitudes are constructed from two parts, before and after the peak.\nAfter the peak, the amplitudes are modified from GR as\n\\begin{equation}\n\tA'(t) = \\frac{A^\\mathrm{GR}(t)}{1+e^{4 M\\omega_\\mathrm{I}^\\mathrm{GR}x}} +\\frac{A^\\mathrm{RD}(t)}{1+e^{-4 M\\omega_\\mathrm{I}^\\mathrm{GR}x}},\n\\end{equation}\nwith \n\\begin{equation}\n\tA^{\\mathrm{RD}}(t)=\\frac{1.18}{1+e^{- M \\omega_\\mathrm{I}^{\\mathrm{GR}} x}+e^{ M \\omega_{\\mathrm{I}} x}}.\n\\end{equation}\nwhere $M$ is the total mass of the binary, $x$ is the normalized time defined as $x := (t - t^\\text{GR}_p) \/ M$, \nand $t^\\text{GR}_p$ is the time when the GR amplitude $A^\\text{GR}(t)$ reaches its peak.\nThe time when the modified amplitude $A'(t)$ reaches its maximum is denoted by $t'_p$ and can differ from $t_p^\\text{GR}$.\nWe connect the GR amplitude before $t_p^\\mathrm{GR}$ and the modified amplitude after $t'_p$ with an appropriate normalization.\nNamely, the modified amplitude $A(t)$ is obtained as\n\\begin{eqnarray}\n\tA(t) = \n\t\\begin{cases}\n\t\tA^\\mathrm{GR} (t) & (t\\leq t_p^\\mathrm{GR}), \\\\\n\t\t\\alpha A'(t+t'_p-t_p^\\mathrm{GR}) & (t>t_p^\\mathrm{GR}),\n\t\\end{cases}\n\\end{eqnarray}\nwith $\\alpha := A^\\mathrm{GR}(t_p^\\mathrm{GR}) \/ A'(t'_p) $.\n\n\n\n\nThe GW frequency $\\omega(t)$ of the modified waveform is specified as\n\\begin{equation}\n\t\\omega(t)=\\frac{\\omega^{\\mathrm{GR}}(t)}{1+e^{4 M \\omega_{\\mathrm{I}}^{\\mathrm{GR}} x}}+\\frac{\\omega^{\\mathrm{RD}}(t)}{1+e^{- 4 M \\omega_{\\mathrm{I}}^{\\mathrm{GR}} x}},\n\\end{equation}\nwith\n\\begin{equation}\n\t\\omega^{\\mathrm{RD}}(t) = \\omega^\\mathrm{GR}_p + (\\omega_\\text{R} - \\omega^\\text{GR}_p)\\tanh (0.85 M\\omega_\\text{I}^\\text{GR} x),\n\\end{equation}\nand $\\omega^\\text{GR}_p := \\omega^\\text{GR}(t^\\text{GR}_p)$.\n\nFinally, we generate the gravitational wave strain, $h(t)$, by\n\\begin{equation}\n\th(t) = A(t)e^{i\\phi(t)},\\ \\phi(t) = \\int^t dt' \\omega(t').\n\\end{equation}\nThe waveform of the modified model having $\\delta_\\text{R} = \\delta_\\text{I}=0$ coincide with that of GR.\n\nAs a seed for modified templates, we use the waveform SXS:0305~\\cite{SXS} and the total mass is fixed to $M=72.158M_\\odot$.\nThe GR values of QNM frequency is calculated from the fitting formula in Ref.~\\cite{Berti2005}.\nExamples of the modified templates are shown in Fig.~\\ref{fig:templates}.\n\\begin{figure}[t]\n\\centering\n\\begin{minipage}{1.0\\hsize}\n\\includegraphics[width=8cm]{figure\/Amplitude_A0.pdf}\n\\end{minipage}\n\\centering\n\\begin{minipage}{1.0\\hsize}\n\\includegraphics[width=8cm]{figure\/Frequency_A0.pdf}\n\\end{minipage}\n\\caption{The amplitudes ({\\it top}) and the frequencies ({\\it bottom}) of the modified templates having various QNM frequencies.\nThe frequency $f_\\text{R}$ is defined as $f_\\text{R} = \\omega_\\text{R} \/ 2\\pi$.\nWhen $\\delta_\\text{R} = \\delta_\\text{I}=0$, they coincide with those of GR.\nThe black vertical line indicates the time at which the amplitude reaches its peak.}\n\\label{fig:templates}\n\\end{figure}\n\nIn the following analysis, the frequency $f$ is used rather than $\\omega$.\nThey are related with each other by $\\omega_\\text{R,I} = 2\\pi f_\\text{R,I}$.\nThe sampling rate is 4096Hz.\n\n\n\n\n\\section{Matched filtering}\n\\label{sec:MF}\nWhen the waveforms can be theoretically modeled and generated rapidly,\nthe matched filtering is a powerful method for the parameter estimation 
(see \\cite{Creighton} as a standard textbook).\nThe detection statistic is the signal-to-noise ratio (SNR) and it can be calculated by the noise-weighted inner product between the observational data $s(t)$ and a template $h(t)$,\n\\begin{equation}\n\\text{SNR} = 4\\text{Re} \\int_{f_\\text{min}}^{f_\\text{max}} df\\ \\frac{\\tilde{s}(f) \\tilde{h}^\\ast(f)}{S_n(f)},\n\\label{eq:snr}\n\\end{equation}\nwhere $S_n(f)$ is the noise power spectral density,\n$\\tilde{s}(f)$ and $\\tilde{h}(f)$ are the Fourier transforms of $s(t)$ and $h(t)$, respectively.\nWe use the LIGO O1 noise power spectral density,\n\\begin{align}\n\tS_n(f) = &10^{-44} \\times \\left( \\frac{18.0}{0.1+f} \\right)^4 + 0.49\\times 10^{-46} \\notag \\\\\n\t&+ \\left( \\frac{f}{2000.0} \\right)^2 \\times 16.0 \\times 10^{-46} [\\text{strain}^2\/\\text{Hz}],\n\t\\label{eq:LIGOO1}\n\\end{align}\ngiven in Ref.~\\cite{LIGOnoisecurve}.\n\nWe do not optimize the coalescence time in the present matched filtering analysis.\nInstead, we fix it to the value of the injected templates, assuming that it can be easily guessed from the inspiral part of the gravitational wave data.\nTherefore, our templates are parameterized by the deviation of the QNM frequency, $\\{\\delta_\\text{R}, \\delta_\\text{I}\\}$, and the initial phase, $\\phi_0$.\nSince the initial phase can be marginalized analytically,\nthe parameter search is done on the parameter space of $\\{\\delta_\\text{R}, \\delta_\\text{I}\\}$.\nWith the uniform prior, the posterior distribution of the real and imaginary parts of the QNM frequency $\\{f_\\text{R}, f_\\text{I}\\}$ can be obtained by\n\\begin{equation}\np(f_\\text{R}, f_\\text{I}|s) \\propto \\exp\\left[ \\frac{\\text{SNR}^2(\\delta_\\text{R}, \\delta_\\text{I})}{2} \\right].\n\\end{equation}\n\nFor the post-merger analysis, we set the boundaries of the integration range of frequency to $f_\\text{min} = 160$Hz and $f_\\text{max} = 512$Hz.\nThe lower cutoff frequency, $f_\\text{min}$, is the frequency at which the amplitude of the template reaches the maximum.\n\nIn our work, the template bank is constructed to form a uniform grid in the $(\\delta_\\text{R}, \\delta_\\text{I})$ plane.\nThe parameter $\\delta_\\mathrm{R}$ is varied in the range $[0.7, 1.3]$ with the step of $0.006$,\nwhile $\\delta_\\mathrm{I}$ in the range $[0.5, 1.5]$ with the step of $0.01$.\nThe template bank consists of 10,201 templates.\n\n\n\n\n\n\n\\section{Conditional variational auto encoder}\n\\label{sec:cvae}\n\n\\subsection{Idea of CVAE}\n\\label{subsec:ideaCVAE}\n\nIn this subsection, we explain the idea of CVAE \\cite{Gabbard}.\nIn Bayesian inference, the existence of the true posterior $\\hat{p}(y|x)$, the distribution of the physical parameters $y$ under the assumption that a signal $x$ is given, is assumed.\nHere, the parameterized distributions $p_\\theta(y|x)$ are used as an approximation of $\\hat{p}(y|x)$.\nThe parameter $\\theta$ depends on the input signal $x$.\nThe neural network is trained to estimate the relation between $x$ and $\\theta$ using a training dataset,\nthat is, a lot of pairs of input data and the true values of the physical parameters, $\\{ (x_i, y_i) \\}_{i=1\\dots N}$.\nThe Kullback-Leibler (KL) divergence,\n\\begin{equation}\n\tKL[\\hat{p}(y|x) | p_\\theta(y|x)] := \\int dy\\ \\hat{p}(y|x) \\log \\frac{\\hat{p}(y|x)}{p_\\theta(y|x)},\n\\end{equation}\nis one of the natural choices for quantifying the mismatch between two probability distributions.\nHere, we consider the minimization of the expected value of the KL 
divergence,\n\\begin{equation}\n\t\\mathbb{E}_{\\hat{p}(x)}\\left[ KL[\\hat{p}(y|x) || p_\\theta(y|x)] \\right].\n\t\\label{eq:avgKL}\n\\end{equation}\nBecause only the terms including $p_\\theta(y|x)$ are essential for optimization,\nthe minimization of \\eqref{eq:avgKL} is equivalent to the maximization of the average of the cross entropy:\n\\begin{align}\n\t&\\mathbb{E}_{\\hat{p}(x)}\\left[ H[\\hat{p}(y|x)||p_\\theta(y|x)] \\right] \\notag \\\\\n\t&:= \\int dxdy\\ \\hat{p}(x) \\hat{p}(y|x) \\log p_\\theta(y|x) \\notag \\\\\n\t&= \\int dxdy\\ \\hat{p}(x, y) \\log p_\\theta(y|x).\n\t\\label{eq: avgLogP}\n\\end{align}\nThis can be approximated by the sample mean,\n\\begin{equation}\n\t\\mathbb{E}_{\\hat{p}(x)} \\left[ H[\\hat{p}(y|x)||p_\\theta(y|x)] \\right] \\simeq \\frac{1}{N}\\sum_{i=1}^N \\log p_\\theta(y_i|x_i).\n\t\\label{eq:crossentropy}\n\\end{equation}\n\nFor example, Gaussian distribution can be used as $p_\\theta(y|x)$.\nHowever, it would be too simple to approximate the posterior.\nIn order to enhance the flexibility of the approximant, the hidden variable model is often employed.\nThe approximated distributions are given as a superposition of simple distributions,\n\\begin{equation}\n\tp_\\theta(y|x) = \\int dz\\ p_{\\theta_\\text{D}}(y|x,z) p_{\\theta_\\text{E}}(z|x).\n\t\\label{eq:hvm}\n\\end{equation}\nThe additional variables $z$, so-called \\textit{hidden variables}, inherit compressed information of the data $x$.\nWith the hidden variable model, $\\log p_\\theta(y|x)$ appeared in R.H.S of Eq.~\\eqref{eq:crossentropy} is bounded by the evidence lower bound (ELBO),\n\\begin{eqnarray}\n\t\\log p_\\theta(y|x) &\\geq& \\mathcal{L}_\\text{ELBO} \\notag \\\\\n\t&:=&\\mathbb{E}_{q_\\phi(z|x,y)} \\left[ \\log p_{\\theta_\\text{D}}(y|x,z) \\right] \\notag \\\\\n\t&&- \\text{KL} \\left[ q_\\phi(z|x,y) | p_{\\theta_\\text{E}}(z|x) \\right]\n\t\\label{eq:ELBO}\n\\end{eqnarray}\nfor an arbitrary distribution $q_\\phi(z|x,y)$.\nThe negative ELBO, $-\\mathcal{L}_\\text{ELBO}$, is employed as the loss function to be minimized.\n\nA CVAE estimates the relation between the parameters of distributions and the conditioning variables.\nAs an example, the distribution $p_{\\theta_\\text{E}}(z|x)$ presents the probability of $z$ conditioned by $x$.\nThe neural network corresponding to $p_{\\theta_\\text{E}}(z|x)$ takes $x$ as an input and predicts the plausible value of $\\theta_\\text{E}$.\nIn Eq.~\\eqref{eq:ELBO}, three distributions, $p_{\\theta_\\text{D}}$, $p_{\\theta_\\text{E}}$ and $q_\\phi$, appear.\nTherefore, we need three networks for emulating these distributions.\n\n\n\nFurther simplification of Eq.~\\eqref{eq:ELBO} can be done as follows.\nFirst, the first term of the R.H.S of Eq.~\\eqref{eq:ELBO} can be approximated by the sample average,\n\\begin{equation}\n\t\\mathbb{E}_{q_\\phi(z|x,y)} \\log p_{\\theta_\\text{D}}(y|x,z)\n\t\\simeq \\frac{1}{N_\\text{z}}\\sum_{j=1}^{N_\\text{z}} \\log p_{\\theta_\\text{D}}(y|x, z_j),\n\\end{equation}\nwhere $z_j$ is the $j$-th sample of $z$ following $q_\\phi(z|y,x)$.\nIn this work, we set $N_\\text{z} = 1$.\nSecond, we adopt multivariate Gaussian distributions with diagonal covariance matrices as $p_{\\theta_\\text{D}}$, $p_{\\theta_\\text{E}}$ and $q_\\phi$.\nWe denote the mean and covariance matrix of $p_{\\theta_\\text{E}}(z|x)$ by\n\\begin{subequations}\n\\begin{eqnarray}\n\t\\vec{\\mu}_\\text{E} &=& (\\mu_{\\text{E}, 1}, \\mu_{\\text{E}, 2}, \\dots, \\mu_{\\text{E}, D_\\text{z}}), \\\\\n\t\\Sigma_\\text{E} &=& 
\\text{diag}(\\sigma^2_{\\text{E}, 1}, \\sigma^2_{\\text{E}, 2}, \\dots, \\sigma^2_{\\text{E}, D_\\text{z}}),\n\\end{eqnarray}\nthose of $p_{\\theta_\\text{D}}(y|x,z)$ by\n\\begin{eqnarray}\n\t\\vec{\\mu}_\\text{D} &=& (\\mu_{\\text{D}, 1}, \\mu_{\\text{D}, 2}, \\dots, \\mu_{\\text{D}, D_\\text{y}}), \\\\\n\t\\Sigma_\\text{D} &=& \\text{diag}(\\sigma^2_{\\text{D}, 1}, \\sigma^2_{\\text{D}, 2}, \\dots, \\sigma^2_{\\text{D}, D_\\text{y}}),\n\\end{eqnarray}\nand those of $q_\\phi(z|x,y)$ by\n\\begin{eqnarray}\n\t\\vec{\\mu} &=& (\\mu_1, \\mu_2, \\dots, \\mu_{D_\\text{z}}), \\\\\n\t\\Sigma &=& \\text{diag}(\\sigma^2_1, \\sigma^2_2, \\dots, \\sigma^2_{D_\\text{z}}),\n\\end{eqnarray}\n\\end{subequations}\nwhere $D_\\text{z}$ and $D_\\text{y}$ are the dimensions of the hidden variable $z$ and the physical parameters $y$, respectively.\nThus, the parameters $\\theta_\\text{E}$, $\\theta_\\text{D}$ and $\\phi$ denoted abstractly so far are $\\theta_\\text{E} = \\{\\vec{\\mu}_\\text{E}, \\Sigma_\\text{E} \\}$, $\\theta_\\text{D} = \\{\\vec{\\mu}_\\text{D}, \\Sigma_\\text{D} \\}$ and $\\phi = \\{ \\vec{\\mu}, \\Sigma \\}$.\nThen, the loss function for one training sample is obtained as \n\\begin{widetext}\n\\begin{equation}\n\t\\text{Loss} = \\frac{D_\\text{y}}{2} \\log 2\\pi + \\sum_{l=1}^{D_\\text{y}} \\log \\sigma_{\\text{D},l} + \\frac{1}{2} \\sum_{l=1}^{D_\\text{y}} \\frac{(y_l - \\mu_{\\text{D},l})^2}{\\sigma^2_{\\text{D},l}}\n\t-\\frac{D_\\text{z}}{2} + \\frac{1}{2} \\sum_{k=1}^{D_\\text{z}} \\left\\{ \\log \\frac{\\sigma^2_{\\text{E},k}}{\\sigma^2_{k}} + \\frac{(\\mu_{k} - \\mu_{\\text{E},k})^2}{\\sigma^2_{\\text{E},k}} + \\frac{\\sigma^2_{k}}{\\sigma^2_{\\text{E},k}} \\right\\}.\n\t\\label{eq:loss}\n\\end{equation}\n\\end{widetext}\n\n\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[scale=0.35]{figure\/CVAE.pdf}\n\\caption{\\label{fig:cvae}\nThe schematic picture of the CVAE.\nEncoder1, Encoder2 and Decoder represent neural networks corresponding to the probability distributions $p_{\\theta_\\text{E}}(z|x)$, $q_\\phi(z|x,y)$ and $p_{\\theta_\\text{D}}(y|x,z)$, respectively.\nHere, we adopt multivariate Gaussian distributions for all three distributions.\nThe parameters characterizing these distributions are $\\theta_\\text{D} = \\{\\mu_\\text{D}, \\Sigma_\\text{D}\\}$, $\\theta_\\text{E} = \\{\\mu_\\text{E}, \\Sigma_\\text{E}\\}$ and $\\phi = \\{\\mu, \\Sigma\\}$.\nAt the training stage ({\\it left}), the three networks are optimized so that the loss function is minimized.\nThe Kullback-Leibler divergence is calculated from the outputs of Encoder1 and Encoder2.\nThe output of the Decoder is used for assessing the negative log posterior term.\nFor test events ({\\it right}), Encoder1 and the Decoder are employed for sampling predicted values.}\n\\end{figure*}\n\n\nFigure~\\ref{fig:cvae} shows the schematic picture of the CVAE we use in this work.\nThe neural networks corresponding to $p_{\\theta_\\text{E}}(z|x)$, $q_\\phi(z|x,y)$ and $p_{\\theta_\\text{D}}(y|x,z)$ are referred to as Encoder1, Encoder2 and Decoder, respectively.\nEach neural network returns the mean and the diagonal elements of the covariance matrix of the corresponding distribution.\nAt the training stage (the left panel of Fig.~\\ref{fig:cvae}), all networks are simultaneously trained with the loss function \\eqref{eq:loss}.\nWhen the trained CVAE is applied to test data (the right panel of Fig.~\\ref{fig:cvae}),\nwe use the networks corresponding to $p_{\\theta_\\text{D}}$ and $p_{\\theta_\\text{E}}$ for estimating a posterior.
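For concreteness, the loss of Eq.~\\eqref{eq:loss} and the reparameterized sampling of $z$ can be written compactly in \\verb|PyTorch|.
The snippet below is only a minimal sketch under the diagonal-Gaussian assumptions above; the tensor names (\\verb|mu_d|, \\verb|logvar_d|, etc.) are placeholder conventions for the network outputs, not the code actually used in this work.
\\begin{verbatim}
import math
import torch

def cvae_loss(y, mu_d, logvar_d, mu_e, logvar_e, mu_q, logvar_q):
    """Negative ELBO of Eq. (loss) for a batch of diagonal Gaussians.
    y has shape (batch, D_y); (mu_d, logvar_d) come from the Decoder,
    (mu_e, logvar_e) from Encoder1, (mu_q, logvar_q) from Encoder2."""
    # Gaussian negative log-likelihood, -log p_D(y|x,z)
    nll = 0.5 * (math.log(2.0 * math.pi) + logvar_d
                 + (y - mu_d) ** 2 / logvar_d.exp()).sum(dim=1)
    # KL[ q_phi(z|x,y) || p_E(z|x) ] between two diagonal Gaussians
    kl = 0.5 * (logvar_e - logvar_q - 1.0
                + (logvar_q.exp() + (mu_q - mu_e) ** 2)
                / logvar_e.exp()).sum(dim=1)
    return (nll + kl).mean()

def reparameterize(mu_q, logvar_q):
    # Draw z ~ q_phi(z|x,y) so that gradients flow through the sampling.
    return mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
\\end{verbatim}
Here \\verb|logvar| denotes the logarithm of the diagonal variance returned by the corresponding network, and the reparameterized draw keeps the sampling step differentiable during training.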
Estimating the posterior for a test event is based on the following sampling method.\nFirst, we sample one value of $z$ from the distribution $p_{\\theta_\\text{E}}(z|x)$.\nNext, with the sampled $z$, a sample of the parameter $y$ is obtained from $p_{\\theta_\\text{D}}(y|z, x)$.\nRepeating these sampling processes, we finally obtain many samples of predicted values $y$ that follow the estimated posterior $p_\\theta(y|x)$. \n\n\n\n\n\\subsection{Implementation}\n\nIn this subsection, the implementation of the CVAE that we use is described.\nWe use \\verb|PyTorch| \\cite{PyTorch} for the implementation.\n\n\n\\subsubsection{Structure}\n\n\nAs explained in subsection \\ref{subsec:ideaCVAE}, the CVAE consists of three neural networks, that is, two encoders and one decoder.\nEach of them has six layers and each internal layer has 512 units.\nWe put a ReLU layer after each fully-connected layer except for the last layer of each neural network.\nEncoder1 and Encoder2 output the mean and the diagonal elements of the covariance matrix of the hidden variables.\nWe set the dimension of the hidden variables to $D_\\text{z} = 16$.\nThe input of the Decoder is the variables sampled from the multivariate Gaussian distribution having the mean and covariance matrix estimated by the encoder (Encoder2 during training and Encoder1 at inference).\nThe Decoder returns the mean and the covariance matrix of the distribution $p_{\\theta_D}(y|z,x)$.\nThe entire structure of the CVAE we use in this work is shown in Table \\ref{tab:structureCVAE}.\n\n\\begin{table}[b]\n\\centering\n\\caption{\\label{tab:structureCVAE}\nThe structure of the CVAE that we use in this work.\nAll layers of Encoder1, Encoder2 and Decoder are fully connected layers.\nEach network consists of six fully connected layers.\nThe input of Encoder1 is the segment of the signal.\nThe inputs of Encoder2 are a segment of the signal and the injected values of $\\delta_\\text{R}$ and $\\delta_\\text{I}$.\nDecoder takes the signal and the hidden variables as input.}\n\\begin{ruledtabular}\n\\begin{tabular}{cc}\nNetwork & \\# of units of respective layers\\\\ \\hline\nEncoder1 & [128, 512, 512, 512, 512, 512, 32] \\\\\nEncoder2 & [130, 512, 512, 512, 512, 512, 32] \\\\\nDecoder & [144, 512, 512, 512, 512, 512, 4]\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\n\n\\subsubsection{Dataset for training}\n\nFor the training, we use the same templates as those contained in the template bank for the matched filtering.\nEach template is labeled by $\\{ \\delta_\\text{R}, \\delta_\\text{I}\\}$.\nThe input signals used as training data are generated as\n\\begin{equation}\n\tx(t) = A h_\\text{whitened}(t) + n(t),\n\\end{equation}\nwhere $h_\\text{whitened}$ is a template whitened with Eq.~\\eqref{eq:LIGOO1},\nthe noise $n(t)$ is generated from the standard normal distribution,\nand the amplitude $A$ is chosen to realize a specified SNR.\nTo prevent overfitting to a specific noise pattern, new noise realizations are generated and the whitened templates are injected into them at each iteration.\nFrom these simulated signals, we pick up 128 points starting from the amplitude peak,\nwhich are used as the input data of the CVAE.\n\n\n\\subsubsection{Training and inference scheme}\n\nThe Adam algorithm \\cite{Adam} is used for the optimization.\nThe learning rate is set to $10^{-5}$ initially and decreased to $10^{-6}$ at the later stage of the training.\nScheduled training is employed, i.e., \nthe amplitude of the signal is gradually decreased from a large initial amplitude.\nThe training schedule is 
shown in Table \\ref{tab:scheduleCVAE}.\nThe batch size is 256.\n\nWhen the trained CVAE is applied to a test data, the sampling process to estimate the distribution is repeated until $4\\times10^6$ samples are collected.\n\n\n\\begin{table}[t]\n\\caption{\\label{tab:scheduleCVAE}\nThe training schedule for the CVAE.\nIn the last stage of training, input signals have SNR varying from 8 to 30.\nAfter 45000 epochs, the training is terminated when decreasing of training loss saturates.}\n\\begin{ruledtabular}\n\\begin{tabular}{ccc}\nepoch & the range of $A$ & learning rate\\\\ \\hline\n1 - 10000 & [8.0, 10.0] & $1.0\\times10^{-5}$ \\\\\n10001 - 15000 & [6.0, 10.0] & $1.0\\times10^{-5}$ \\\\\n15001 - 20000 & [4.0, 10.0] & $1.0\\times10^{-5}$ \\\\\n20001 - 25000 & [3.0, 10.0] & $1.0\\times10^{-5}$ \\\\\n25001 - 45000 & [2.0, 10.0] & $1.0\\times10^{-5}$ \\\\\n45001 - & [2.0, 10.0] & $1.0\\times10^{-6}$\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\n\n\n\\section{Convolutional neural network}\n\\label{sec:cnn}\n\nIn this work, an ordinary neural network, which returns a single value for each parameter that we want to estimate,\nis also implemented as one of competitors to the CVAE.\nConvolutional neural networks (CNNs) are used for various research of the gravitational wave data analysis (e.g. \\cite{GeorgeHuerta}).\nOur CNN has three convolutional and four fully-connected layers.\nEach of them, except for the last layer, is followed by a ReLU layer.\nThe output of the last layer is the estimated values of $\\{ \\delta_\\text{R}, \\delta_\\text{I}\\}$.\nFor respective convolutional layers, the numbers of filters are 128, 256 and 512, \nand the sizes of filters are 32, 8 and 8.\nAll of fully connected layers have 512 units.\nWe use mean square loss for the loss function.\nAlso for the training of the CNN, scheduled training is employed.\nThe training schedule is shown in Table \\ref{tab:scheduleCNN}.\nThe CNN is also implemented by \\verb|PyTorch|.\nThe training dataset is the same as the CVAE.\n\n\\begin{table}[t]\n\\caption{\\label{tab:scheduleCNN}\nThe training schedule for the CNN.\nWe set the learning rate as $10^{-4}$ for the whole epoch of training.\nAfter the 4001st epoch, the training is terminated once the decrease of training loss saturates.}\n\\begin{ruledtabular}\n\\begin{tabular}{cc}\nepoch & the range of $A$ \\\\ \\hline\n1 - 1000 & [8.0, 10.0] \\\\\n1001 - 2000 & [6.0, 10.0] \\\\\n2001 - 3000 & [4.0, 10.0] \\\\\n3001 - 4000 & [3.0, 10.0] \\\\\n4001 - & [2.0, 10.0] \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\n\n\n\\section{Results}\n\\label{sec:results}\n\n\\subsection{Dataset for comparison}\nWe prepare the mock test data in the same way as the training data.\nThe real-valued template $h_\\text{inj}$ is generated from a complex-valued modified template $h = h_+ + ih_\\times$ with the randomly sampled phase $\\phi_0$, i.e.,\n\\begin{equation}\n\th_\\text{inj} = h_+ \\cos\\phi_0 + h_\\times \\sin\\phi_0.\n\\end{equation}\nWe use the noise curve of LIGO O1 for generating the Gaussian noise (Eq.~\\eqref{eq:LIGOO1}).\nThree datasets with SNR of the merger-ringdown part 30.0, 15.0 and 8.0 are prepared (the definition of the merger-ringdown SNR is Eq.~\\eqref{eq:snr}).\nEach dataset consists of 500 simulated data whose $\\delta_\\text{R}$ and $\\delta_\\text{I}$ are randomly sampled from the region satisfying our assumptions, i.e., $|\\delta_\\text{R}|<0.3$ and $|\\delta_\\text{I}|<0.5$.\n\n\n\\subsection{Comparison of the point estimation}\nTo quantify the accuracy 
of the estimates, we define the following two quantities,\n\\begin{eqnarray}\n&\\overline{\\Delta Q} := \\dfrac{1}{N_\\text{data}} {\\displaystyle \\sum_{i=1}^{N_\\text{data}} } \\left( Q_i^\\text{est} - Q_i^\\text{true} \\right), \\label{eq:DeltaQ} \\\\\n&\\sigma(Q) := \\dfrac{1}{N_\\text{data}} \\left[ {\\displaystyle \\sum_{i=1}^{N_\\text{data}} } \\left( Q_i^\\text{est} - Q_i^\\text{true} \\right)^2 \\right]^{1\/2}. \\label{eq:SigmaQ}\n\\end{eqnarray}\nHere, $Q^\\text{est}$ is given by the estimated value that maximizes the posterior distribution for the matched filtering and the CVAE,\nwhile it is given by the output value for the CNN.\nThe comparison of the errors is shown in Table \\ref{tab:comp}.\nFrom this table, we can conclude that\n\\begin{itemize}\n\\item For both $f_\\text{R}$ and $f_\\text{I}$, the means of the errors $\\overline{\\Delta Q}$ are much smaller than the standard deviations $\\sigma(Q)$.\nTherefore, the estimates of both $f_\\text{R}$ and $f_\\text{I}$ are not significantly biased in all methods.\n\\item Because the standard deviations of the CVAE are smaller than those of the matched filtering and the CNN, we can say that the CVAE estimates the QNM frequencies more accurately than the other two methods.\n\\end{itemize}\n\n\n\n\\begin{table}[t]\n\\centering\n\\caption{\\label{tab:comp}\nThe comparison of the estimation errors.\nThe quantities $\\overline{\\Delta Q}$ and $\\sigma(Q)$ are defined in Eqs.~\\eqref{eq:DeltaQ} and \\eqref{eq:SigmaQ}.\nThe estimation by the CVAE has no significant bias for both of $f_\\text{R}$ and $f_\\text{I}$ and for any values of SNR.\nThe matched filtering and the CNN also estimate QNM frequency with small bias for most cases.\nComparing the values of $\\sigma(f_\\text{R,I})$, we find that the CVAE takes the smallest values for all cases,\nexcept for imaginary part of the dataset having SNR=8.\nFor this case, the CNN has a smaller value of $\\sigma(f_\\text{I})$ than the CVAE.\nHowever, the CNN derives a slightly larger value of $\\overline{\\Delta f_\\text{I}}$ than the CVAE.\nThis means that the estimation by the CNN is more biased.}\n\\begin{ruledtabular}\n\\begin{tabular}{llllll}\n$\\text{SNR}_\\text{RD}$ &method & $\\overline{\\Delta f_\\text{R}}$ [Hz] & $\\sigma(f_\\text{R})$ [Hz] & $\\overline{\\Delta f_\\text{I}}$ [Hz] & $\\sigma(f_\\text{I})$ [Hz] \\\\ \\hline\n& MF & -0.1607 & 3.5243 & -0.1865 & 2.7237 \\\\\n30.0 & CNN & 0.9732 & 8.2192 & -1.1812 & 3.0875 \\\\\n& CVAE & 0.0267 & 3.1180 & -0.2528 & 2.4311 \\\\ \\hline\n& MF & -0.4015 & 7.4448 & -0.5448 & 5.4256 \\\\\n15.0 & CNN & -0.0432 & 9.5206 & -0.6411 & 4.9630 \\\\\n& CVAE & -0.4253 & 6.2759 & -0.2109 & 4.8657 \\\\ \\hline\n& MF & -0.1755 & 15.2181 & -1.7824 & 9.6581 \\\\\n8.0 & CNN & 0.9783 & 14.2067 & 1.7371 & 7.7085 \\\\\n& CVAE & -0.2350 & 12.4485 & 0.4289 & 8.9368\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\n\n\n\\subsection{Reliability of the confidence regions}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=8cm]{figure\/contour_SNR8.0_00000.pdf}\n\\caption{An example of posterior estimations for a test data whose SNR is 8.0.\nBlue and orange contours are confidence regions estimated by the CVAE and the matched filtering, respectively.\nThe contours show (50, 90, 99)\\% confidence regions.\nBlue circle and orange square are the predicted values of the QNM frequency obtained by the CVAE and the matched filtering, respectively.\nBlack cross shows the injected value of the QNM 
frequency.}\n\\label{fig:posterior}\n\\end{figure}\n\n\n\\begin{figure*}[t]\n\\begin{tabular}{cc}\n\\begin{minipage}{0.5\\hsize}\n\\begin{center}\n\\includegraphics[width=8cm]{figure\/PPplot_MF_8.0.pdf}\n\\end{center}\n\\end{minipage}\n\\begin{minipage}{0.5\\hsize}\n\\begin{center}\n\\includegraphics[width=8cm]{figure\/PPplot_CVAE_8.0.pdf}\n\\end{center}\n\\end{minipage}\n\\end{tabular}\n\\caption{\\label{fig:ppplot}\nThe P-P plots of the matched filtering ({\\it left}) and of the CVAE ({\\it right}). \nThe SNR of the test dataset is 8.0.\nThe horizontal axis shows percentages of the confidence regions.\nThe vertical axis shows the fraction of events whose true values are located within the confidence regions.\nIf estimated confidence region has the frequentist meaning, the plot (blue line) is consistent with the diagonal line (black dotted line).\nThe orange region is 1-$\\sigma$ error of the binomial distribution.\nThe error estimation by the CVAE seems to be slightly biased.\nA similar feature can be seen for the datasets having SNR 15.0 and 30.0.}\n\\end{figure*}\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=7cm]{figure\/ConvergencePPplot_8.0.pdf}\n\\caption{The deviation of the P-P plot from the diagonal line.\nThe SNR of the dataset is 8.0.\nBlue circles and orange squares are obtained with 500 and 10,000 test events, respectively.\nThe CVAE estimates the posterior distributions with $<2$\\% systematic error.}\n\\label{fig:devpp}\n\\end{figure}\n\n\nAn example of the predictions of posterior distributions by the CVAE and the matched filtering is shown in Fig.~\\ref{fig:posterior}.\nBefore comparing the posterior estimations by the CVAE and the matched filtering,\nwe assess the reliability of the posterior distributions estimated by the CVAE.\nIf the estimation of posterior distribution is reliable,\nthe fraction of events whose true values are located within the $x$-\\% confidence region should be $x$-\\%.\nFor visualization, a P-P plot is useful.\nIn a P-P plot, we take the confidence level as horizontal axis and the fraction of events as vertical axis.\nIf the posterior distribution is reliable, the P-P plot reduces to the diagonal line.\nWe show the P-P plots obtained by the CVAE and the matched filtering in Fig.~\\ref{fig:ppplot}.\nIt is found that the error estimation by the matched filtering includes no significant bias.\nOn the other hand, the P-P plot for the CVAE seems to deviate from the 45$^\\circ$ line only slightly.\nIn order to quantify the systematic error,\nwe generate additional 9,500 test events for each SNR.\nFigure~\\ref{fig:devpp} shows the deviation from 45$^\\circ$ line for SNR=8.0 events.\nIt is found that the estimation by the CVAE contains the systematic error less than 2\\%.\nA similar feature can be seen for the events having SNR 15.0 and 30.0.\n\n\n\n\\subsection{Comparison of areas of confidence regions}\nTaking into account the existence of bias at a few percent level,\nwe compare the confidence regions obtained by the CVAE and the matched filtering.\nTo compare them quantitatively, we define \n\\begin{eqnarray}\n\\Delta S_i(x) = S^\\text{CVAE}_i(x) - S^\\text{MF}_i(x), \\\\\n\\overline{\\Delta S(x)} = \\frac{1}{N_\\text{data}} \\sum_{i=1}^{N_\\text{data}} \\Delta S_i(x),\\label{eq:S(x)}\n\\end{eqnarray}\nwhere $S^\\text{CVAE\/MF}_i(x)$ is the area of the $x$-\\% confidence region estimated by the CVAE\/the matched filtering for the $i$-th test event.\nWhen $\\Delta S_i(x)$ is negative, the constraint of the CVAE is tighter than that of 
the matched filtering.\nThe comparison of the area of the confidence region is shown in Table~\\ref{tab:area}.\nFor all datasets, the CVAE leads to more stringent constraint than the matched filtering.\n\n\\begin{table}[t]\n\\centering\n\\caption{\\label{tab:area}\nThe comparison of the areas of confidence regions.\nThe quantity $\\overline{\\Delta S(x)}$ is defined in Eq.~\\eqref{eq:S(x)}.\nFor all datasets having different SNRs, the CVAE gives tighter constraint than the matched filtering.}\n\\begin{ruledtabular}\n\\begin{tabular}{llll}\n$\\text{SNR}_\\text{RD}$ & $\\overline{\\Delta S(99)} [\\text{Hz}^2]$ & $\\overline{\\Delta S(90)} [\\text{Hz}^2]$ & $\\overline{\\Delta S(50)} [\\text{Hz}^2]$ \\\\ \\hline\n30.0 & -10.8893 & -6.6020 & -2.3531 \\\\\n15.0 & -119.521 & -64.5984 & -20.1443 \\\\\n8.0 & -415.235 & -185.065 & -46.8837 \n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nIn this paper, we investigated how accurately a CVAE can estimate the QNM frequencies using only merger-ringdown waveforms.\nTo do this, we generated modified waveforms by changing the merger-ringdown part of the GR template\nand constructed a test dataset by injecting the waveforms into simulated Gaussian noise data.\nWe compared the accuracies of the CVAE and the matched filtering,\nand showed the CVAE can predict the QNM frequencies with a higher accuracy than the matched filtering.\nNext, we evaluated the reliability of the confidence regions estimated by the CVAE, making a P-P plot.\nThe estimated confidence levels have the systematic error less than 2\\%.\nThe areas of 50\\%, 90\\% and 99 \\% confidence regions obtained by the CVAE and the matched filtering were compared\nand it was found that the CVAE can give more stringent constraint to the QNM frequencies than the matched filtering.\n\nIn this work, we only focused on the case of the Gaussian noise.\nTo make the deep learning method applicable to the real event analysis,\nthe case with the noise having non-Gaussianity need to be investigated.\nThe higher modes of the ringdown signal were also neglected.\nThe importance of the multi-mode analysis is indicated by several authors \\cite{multimodeBerti, multimodeJulian}.\nApplication to the black hole spectroscopy is remaining for future work.\n\nCVAE is not the only method for estimating posteriors (e.g. 
Bayesian neural network \\cite{Hongyu}, NN with reduced order modeling \\cite{Alvin}).\nComparison (or integration) with these methods would be insightful.\n\nIn this work, the merger-ringdown waveforms modified from those of GR were employed for training the CVAE.\nIn this sense, our method is model-dependent.\nAlthough the post-merger templates based on the specific theory of modified gravity are not obtained so far,\nthe result of our work is insightful when they can be constructed.\nOn the other hand, exploring model independent methods is a possible direction of future work.\nEven in non-GR theories, the ringdown gravitational waves would be expected to have the properties that the frequency is constant and the amplitude decays exponentially.\nNeural networks would be useful to detect these features from noisy signals and estimate the QNM frequencies independently of the way of modification.\n\n\n\n\\begin{acknowledgments}\nThis work was supported by JSPS KAKENHI Grant Number JP17H06358 (and also JP17H06357),\n{\\it A01: Testing gravity theories using gravitational waves,} as a part of the innovative research area,\n``Gravitational wave physics and astronomy: Genesis\".\nWe thank the members of the A01 group for useful discussions.\nSome part of calculation has been performed by using GeForce 2080Ti GPU at Nagaoka University of Technology.\n\\end{acknowledgments}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nEmergent need to achieve better, more precise and sensitive drug detection in medicine and health care recently has been\naddressed by developing biosensors based on two-dimensional materials (2DM)\\cite{Kostarelos2014,Bolotsky2019,Zhu2019,Daus2021,Lee2014,Campuzano2017,Oh2021,Pang2020}. Not only 2D materials offer new response and\/or transduction mechanisms and better performance, they can be used for label-free biosensing. Importantly, 2DMs could be designed and\/or integrated to generate several signals in response to a single analyte, as it will be illustrated below, or to respond by several channels to a group of substances in parallel, thus achieving a multimodal detection. \n\nThe multimodal operation exceeds single-mode biosensing through its higher throughput as well as ability to differentiate the analyte from background signals in a complex media, and potentially allows the multiplexing of biosensing\\cite{Yen2015,Wang2015,Lee2016,Lei2019,Zhang2017}, {\\em i.e.}, determining multiple analytes through a single test. While significant attention has been paid to exploring new 2D materials and demonstrating their biosensing capabilities at the level of single devices \\cite{Zhang2015,Wang2014,ArjmandiTash2016,Sekhon2021}, overall knowledge on what allows successful multimodal detection and what limits biosensing capabilities of 2DM heterostructures is scarce. Atomically thin 2D materials, having an ultimate surface-to-volume ratio, may possess surface non-uniformities at the nanometer scale (atomic impurities\/adsorbates\/defects, wrinkles\/ruptures) that modulate their optical properties, although their importance and explicit role in producing material's variability yet to be studied. To a large extent, the difficulty to determine physical mechanisms that control performance of 2DM devices is due to disparate scales for atomic non-uniformities compared to a micrometer, or larger, size of active elements of a biosensor. 
Structural characterization with a high spatial resolution, such as electron microscopy, often does not detect materials optical properties, while optical microscopy lacks the required resolution. Thus, in order to reveal such mechanisms, multiple characterization tools should be combined and correlated\\cite{Kolesnichenko2021}. In this work, correlated multidimensional imaging, including Raman and near-field microscopies, scanning probe and electron microscopies, was applied to unveil physical processes behind label-free multimodal detection of doxorubicin (DOX), an anthracycline cancer drug, by 2DM vertical heterostructures.\n\nDoxorubicin is one of the most common drugs against different types of cancer (haematological, thyroid, breast, ovarian, lung and liver cancer)\\cite{Norouzi2020,Chen2021a,Zhong2017,He2020,Yuan2021}. Since DOX is known for certain drug resistance and side effects\\cite{Carvalho2014,HOFMAN2015168,MITRY201617,Umsumarng2015}, an efficient and sensitive detection of the amount of DOX in various types of biological samples, potentially at the point-of-care, has significant value. Recently, DOX has been loaded on graphene oxide and other nanocomposites\\cite{Sun2008,Chekin2019,Hasanzadeh2016,Yang2020,Pei2020}. Regular Raman microscopy, as well as surface enhanced Raman spectroscopy (SERS) were used to detect DOX in various cell lines and real samples\\cite{Zong2018,Gautier2013,Huang2013,Farhane2015,Farhane2017,Litti2016}. Here, optical signaling of the presence of DOX (deposited from solution) is demonstrated via three independent channels: (1) graphene enhanced Raman spectra (GERS) of DOX, (2) Raman shift of monolayer graphene (MLG) and (3) photoluminescence (PL) shift of single layer MoS$_2$ (Fig.\\ref{fig:fig1}).\n\n\n\n\\begin{figure*}[b]\n\t\\centering \n\t\\includegraphics[width=\n\t\\textwidth]{fig01.jpg\n\n\t\\caption{Multiplexed detection of doxorubicin drug. (a) Schematics of multimode detection by the combination of MoS$_2$ photoluminescence, DOX GERS, and Raman shift of monolayer graphene. (b) GERS signal of DOX\/MLG (red), vs. reference Raman spectra of DOX\/DMSO solution (cyan) and MLG (gray); red (cyan) arrows mark DOX (DMSO) lines. (c) Modulation of MoS$_2$ PL spectrum: with DOX (red) and w\/o DOX (cyan); inset shows DOX molecular structure. (d) Fitting of measured PL spectra from (c): A\/B-exciton and trion (X$^-$) lines are shown; modulation of peak position ($\\Delta\\omega$) and intensity ($\\Delta P$) are indicated using A-exciton fit; inset shows the schematics of optical subbands of MoS$_2$. (e-f) Typical Raman spectra of MLG: with DOX (red) and before incubation (blue); G- and 2D-line intensities were normalized to unity. (g-h) Correlation plots and (i-l) partial distribution functions for peak position and width for G- and 2D-lines, measured locally, at diffraction limited spots across the sample; same color code as in (e-f); clear line red-shift and broadening are detected with DOX.}\n\t\\label{fig:fig1}\n\\end{figure*}\n\n\n\nCurrently, two major approaches are implemented in biosensor technology: label-free and label-based sensing. While the latter shows high selectivity limited only by our ability to find a high-optical-contrast receptor with best binding to known analyte, the former is much more versatile, especially in terms of sensing a wide range of analytes, enabling agnostic biosensing, and being capable to detect yet unknown biothreats for which the receptors have not been developed. 
Though very promising, label-free biosensors require additional calibration due to lower specificity. To solve the problem sensing multiplexing, combined with machine learning, has been applied\\cite{Zhang,Misun2016,Cui2020}.\n\n\n\n\\begin{table*}\n\t\\caption{The PL fit parameters for Fig.\\ref{fig:fig1}(d): upper\/lower row corresponds to PL with\/without DOX}\n\t\\label{table:tablePL}\n\t\\centering\n\\resizebox{\\textwidth}{!}{\t\n\t\\begin{tabular}{|*{9}{c|}}\n\t\t\\hline\n\t\t\\multicolumn{3}{|c}{Trion} & \\multicolumn{3}{|c}{A-exciton} & \\multicolumn{3}{|c|}{B-exciton} \\\\ \\hline \n\t\t$\\omega_c$, eV & $\\gamma$, meV & P, cts. &$\\omega_c$, eV & $\\gamma$, meV & P, cts. &$\\omega_c$, eV & $\\gamma$, meV & P, cts. \\\\ \\hline\n1.739 $\\pm$ 0.002 & 60. $\\pm$ 3. & 32 $\\pm$ 4 & 1.815 $\\pm$ 0.0002 & 82.0 $\\pm$ 0.3 & 793 $\\pm$ 4 & 1.953 $\\pm$ 0.001 & 135.8 $\\pm$ 2. & 203 $\\pm$ 1 \n\t\t\\\\ \\hline\n1.719 $\\pm$ 0.003 & 60. $\\pm$ 9. & 15 $\\pm$ 3 & 1.806 $\\pm$ 0.002 & 90.7 $\\pm$ 0.3 & 586 $\\pm$ 3 & 1.955 $\\pm$ 0.002 & 135.0 $\\pm$ 2. & 197 $\\pm$ 2 \n\\\\ \\hline\n\\end{tabular}\n}\n\\end{table*}\n\n\n\n\nIn order to achieve multiplexed detection, arrays of different sensors could be integrated in one device\\cite{Kalmykov2019}. To avoid unnecessary complexity of integration, mutimodal sensing materials and heterostructures are developed\\cite{Novoselov2016,Jeong2015,Ma2020,AlaguVibisha2020}. Here we demonstrate multiplexed detection of doxorubicin by vertical heterostructure of monolayer graphene\/transition metal dichalcogenide (TMDC) by measuring response of 2D materials in 3 optical channels: MoS$_2$ photoluminescence, graphene Raman shift and graphene enhanced Raman scattering of molecular fingerprint modes of the molecule itself.\n\n\n\n\n\\section{Results and discussion}\n\\subsection*{Label-free detection of Doxorubicin}\n\nMolybdenum disulfide, a typical TMDC 2D material, is known to show strong PL signal\\cite{Mak2013} which can be modulated by adsorbtion of molecular species\\cite{Mouri2013,CatalanGomez2020,Aryeetey2021,Barja2019,Mitterreiter2021,Schuler2020,Thiruraman2018}. Fig.\\ref{fig:fig1}(c) shows a profound change in PL spectrum of MoS$_2$ photoluminescence (PL) after incubation to 172 nM solution of DOX for 15 minutes (the large area integrated PL is presented here; to not be confused with local micro-PL discussed below). In order to understand physical mechanisms resulting in the DOX recognition, the PL band is fitted with individual excitation lines: as shown in the inset of Fig.\\ref{fig:fig1}(d), the MoS$_2$ optical transitions include typical B- and A-exciton subbands, trion (X$^-$) and, often, additional localized modes. Here the shift in mode peak position ($\\Delta\\omega$), peak intensity ($\\Delta P$) and width ($\\Delta\\gamma$) are indicative for analyte absorption, resulted in subsequent charge transfer\/doping and strain imposed in the 2D material. These shifts are specific for an analyte: panel (d) and data in Table \\ref{table:tablePL} provide the values for DOX analyte. While upper B-exciton is barely influenced by the drug molecules (a small intensity difference is detected, see red arrow in panel (d)), both A-exciton and trion are red-shifted, have lower intensity and larger peak width, that all together lead to the spectral differences in panel (c). 
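The decomposition behind Table~\\ref{table:tablePL} amounts to a standard multi-peak least-squares fit of the PL band.
A minimal \\verb|Python| sketch of such a fit is shown below; it assumes Lorentzian line shapes and placeholder variable names, and is meant only to illustrate how the shifts $\\Delta\\omega$, $\\Delta\\gamma$ and $\\Delta P$ are obtained by comparing the fits with and without DOX.
\\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(e, e0, gamma, amp):
    # One emission line: center e0 (eV), FWHM gamma (eV), peak amplitude amp.
    return amp * (gamma / 2) ** 2 / ((e - e0) ** 2 + (gamma / 2) ** 2)

def pl_model(e, *p):
    # Trion (X-), A-exciton and B-exciton components, 3 parameters each.
    return sum(lorentzian(e, *p[3 * i:3 * i + 3]) for i in range(3))

def fit_pl(energy_eV, counts, p0):
    # p0: initial guesses, e.g.
    # [1.72, 0.06, 20,  1.81, 0.09, 600,  1.95, 0.14, 200]
    popt, pcov = curve_fit(pl_model, energy_eV, counts, p0=p0)
    return popt.reshape(3, 3), np.sqrt(np.diag(pcov)).reshape(3, 3)

# The per-line shifts are the column-wise differences of the fitted
# parameters, e.g. params_DOX - params_reference.
\\end{verbatim}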
The ability to detect DOX at a low (sub-nM) concentration (and differentiate it from other components of a complex solution) would depend on amount of signal over the noise for the biosensor. Importantly, the variation of the signal in the pristine biosensing material adds to the total uncertainty and reduces the device performance as we discuss below.\n\n\\begin{figure}[!]\n\t\\centering\n\t\\includegraphics[width=0.\n\t\\textwidth]{fig12.jpg\n\t\\caption{Stability test of MoS$_2$\/graphene vertical heterostructure. SEM (a,e) and sSNOM (b-d,f-h) images of two MoS$_2$ islands, randomly selected, coated with monolayer graphene. The island (a) shows nearly zero degradation after 242 days in ambient -- from (b) to (c), neither after 705 days -- from (b) to (d); the island (e) was selected near a tear in MLG and shows (g) partial oxidation near the central micro-crystallite of molybdenum after 242 days, followed by (h) almost complete oxidation of MoS$_2$ surface after 705 days. All scale bars are 1 $\\mu$m.\n\t\n\t\n\t\n\t}\n\t\\label{fig:fig12}\n\\end{figure}\n\n\\begin{table*}\n\t\\caption{Measured GERS enhancement factors for major fingerprint Raman lines of DOX}\n\t\\label{table:tableGERS}\n\t\\centering\n\t\\begin{tabular}{|*{8}{c|}}\n\t\t\\hline\n\t\tRaman line position,\tcm$^{-1}$ & 1236 &\t1244 &\t1260 &\t1268 &\t1326 &\t1434 &\t1613 \\\\\n\t\t\\hline\n\t\tGERS enhancement factor & 6.4 & 7.0 &\t23.3 &\t23.3 &\t1.8 &\t2.9 &\t2.1 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{table*}\n\n\n \n\nAgnostic detection of a chemical or biothreat requires multiplexing the receptor signal with additional channels, as there is no calibrated negative control for unknown analyte. In order to differentiate the signal from DOX against any other molecule potentially causing PL modulation, we measure the characteristic fingerprint Raman spectrum of DOX. Fig.\\ref{fig:fig1}(b) shows the Raman spectrum of DOX\/DMSO solution (cyan curve). However, DOX Raman lines (highlighted by red arrows) are mixed, superimposed and even obscured with DMSO (background) response (cyan arrows). Furthermore, the line intensity of analyte would be comparable to background even at a relatively high DOX concentration. On contrary, when deposited on graphene surface, most of DOX lines become clearly visible, due to a significant GERS enhancement of the Raman signal of DOX (compare red and cyan curves). Table \\ref{table:tableGERS} summarizes the amount of signal enhancement for particular lines. In our sample with only two substances, the intensity of fingerprint lines of DOX already allows to confirm the analyte structure and determine the presence of analyte (which cannot be found from a PL data channel alone). While in general, for an agnostic biosensor, the whole Raman spectrum should be analyzed by machine learning correlation analysis of the data. Here, GERS, the second data channel, complements the PL detection which can provide information of the concentration of the drug (while the intensity of the GERS signal depends on enhancement factors and cannot be used to measure the amount of analyte). \n\n\n\n\n\n\n\n\n\\begin{figure*}[!]\n\t\\centering\n\t\\includegraphics[width=\n\t\\textwidth]{fig02_v.jpg\n\t\\caption{Local PL characterization of MLG\/MoS$_2$ heterostructure. \n\t\n\t\t(a) Single-point PL spectra of the MoS$_2$ island in (e). 
(inset) Total PL intensity map; stars show locations for the point spectra of the same color in main panel.\n\t\t(b) Correlation plots and (c-d) partial distribution functions for peak position and width for A-exciton (red) and trion (orange) lines, measured locally; several clusters are visible in trion data, highlighted by ovals in correlation plot and Gaussian envelope curves in distributions.\n\t\t(f-i) Confocal maps of MoS$_2$ PL: (top row) fitted intensity and (bottom row) peak position for (left) trion and (right) A-exciton; arrows show regions of higher PL intensity for trion (lower for A-exciton). \n\t\tAll scale bars are 1 $\\mu$m.}\n\t\\label{fig:fig2}\n\\end{figure*}\n\n\nAs Fig.\\ref{fig:fig1}(b) shows, several DOX lines are superimposed with the Raman spectrum of graphene (gray curve corresponds to MLG reference), specifically with D- and G-lines near 1350 and 1600 cm$^{-1}$. While obscuring some of the DOX modes, Raman spectra of graphene should be analyzed separately, yielding yet another channel, to be multiplexed with the PL and GERS data. Fig.\\ref{fig:fig1}(e-f) shows pronounced red-shift and the width increase for two major lines of graphene, G- and 2D-band, upon interaction with the DOX analyte (red). Panels (g-l) show detailed statistical information on modulation of both line position and width for both modes; in contrast with previous optical data, each data point in this figure corresponds to a small local region on the sample, less than 0.1 $\\mu$m$^2$, diffraction limited. Clearly, the data points aggregate in two separate clusters, though, point-to-point variability due to non-uniformity of the signal is non-negligible for 2D-mode (compare $\\Delta\\gamma\/\\Delta\\omega$ correlation plot in panel (h) and partial distribution functions in panels (k-l)). Statistical distribution of the data from (g-l) contains important information about the material\/sample, which will be elaborated in detail next.\n\n\n\\subsection*{Stability of 2D van der Waals heterostructure materials}\n\nElectron microscopy of MoS$_2$\/graphene vertical heterostructure, fabricated as described in Methods, reveals structural non-uniformities. A few typical images of several randomly selected single layer MoS$_2$ islands, coated with MLG, are shown in Fig.\\ref{fig:fig12}(a,e) and Fig.\\ref{fig:fig2}(e). White nanocrystallites, likely made of insulating molybdenum oxide, charged under e-beam, are seen either in the center of the island (metal nucleation site) or at the edge (metal precipitation site); in some cases those grow to microcrystals of Mo$_2$O$_3$ (see Fig.\\ref{fig:fig12}(e)) of characteristic triangular (or rectangular, not shown here) shape and size up to 1\/2 micrometer. Graphene seems to be conformal to the substrate, making short wrinkles between nanoscale posts (10-20 nm tall). \n\n\n\nWhile the surface of MoS$_2$ islands appears mostly uniform in scanning electron microscopy (SEM) image, optical properties of 2DM demonstrate substantial variation in agreement with Raman and PL statistics from Fig.\\ref{fig:fig1} and Fig.\\ref{fig:fig2}. The variability of PL in pristine material could produce uncertainty in detection of the analyte. In order to find the origin for such a variation, scattering scanning near-field optical microscopy (sSNOM) has been applied. 
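As described next, correlating channels measured with different instruments requires the corresponding maps to be co-registered on a common pixel grid.
A generic sketch of one possible alignment step, phase correlation of two overlapping scans, is given below; the array names are placeholders and this is not the specific registration procedure used for the figures.
\\begin{verbatim}
import numpy as np

def phase_correlation_shift(map_a, map_b):
    """Estimate the integer (row, col) translation aligning map_b to map_a.
    map_a, map_b: 2-D arrays, e.g. an SEM image and an sSNOM amplitude map
    interpolated onto the same pixel grid."""
    A = np.fft.fft2(map_a)
    B = np.fft.fft2(map_b)
    cross_power = A * np.conj(B)
    cross_power /= np.abs(cross_power) + 1e-12  # normalized cross-power spectrum
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# np.roll(map_b, shift, axis=(0, 1)) then overlays the two channels so that
# features can be compared pixel by pixel.
\\end{verbatim}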
Careful alignment of large area scans of the same heterostructure allows us to correlate different characterization channels (including SEM, scanning probe imaging, as well as PL and Raman microscopy, having a lower resolution though). In Fig.\\ref{fig:fig12}(b-d) the \nsSNOM image (2nd harmonic optical amplitude, see Methods for details) reveals variation of surface impedance of MLG\/MoS$_2$ heterostructure at the sub-micrometer scale, not captured by SEM (or AFM). We argue that a series of bright regions (on the darker background of MoS$_2$) correspond to the local defects of the TMDC material. Indeed, we regularly observe such a contrast at the edge of the island which is known to be prone to partial oxidation. Similar regions in the bulk of the island should correspond to concentrated sulfur vacancies, reactive to oxygen, and formation of oxy-sulfate regions, often appearing as nanoscale posts that ruckle graphene around (a near-field phase image in Fig.\\ref{fig:fig4}(h) has the best contrast which allows to resolve nano-posts). The series of maps in Fig.\\ref{fig:fig12}(b-d) and (f-h) show evolution of such regions protected (or non-protected) by graphene coating: the larger island (a), covered with intact MLG, preserves the same number of partially oxidized regions after nearly 2 years in ambient, except for a small oxide crystal grown in the bottom right corner, where a trench in graphene (dark line) opens an access to the air. On contrary, the small island (e) has the MLG coating cracked; as a result, the surface is slowly oxidized over the course of retention period, almost entirely on the map in panel (h). sSNOM mapping also shows that the large graphene wrinkles (bright diagonal lines in panel (d)) do not lead to alteration of optical properties. On the opposite, the oxy-sulfate regions will be shown to generate non-uniform doping of the MoS$_2$ and (graphene), leading to the PL variability over the sample.\n\n\n\n\n\n\nLocal fluctuations of PL in the pristine material were analyzed in another island of the same 2DM vertical heterostructure mapped by SEM in Fig.\\ref{fig:fig2}(e) and in Fig.\\ref{fig:fig3}(b) and Fig.\\ref{fig:fig4}(h) by sSNOM. Several features are clearly resolved: graphene ruptures (not reaching the island), an oxide crystallite at the edge of the island, a few oxy-sulfate nano-posts and graphene wrinkles around the posts, and several regions of darker SEM contrast (likely, more conductive than bare MLG), potentially indicating doping\/Fermi level variation. Confocal PL image of the same area is presented in Fig.\\ref{fig:fig2}(a), inset. The large non-uniformity of PL intensity is followed by substantial variability of PL line shape (cf. the curves in main panel taken at three locations shown in the inset). Similar to the large area PL data in Fig.\\ref{fig:fig1}(c), the main variability of micro-PL results from the A and X$^-$ states, to be analyzed separately. Panel (b) presents the correlation plot for fitted PL peak position and width for A-exciton (red) and trion (orange) states by the local optical probe on the surface of MLG\/MoS$_2$ heterostructure shown in Fig.\\ref{fig:fig2}(e). MicroPL reveals large non-uniformity in optical signal. Trion partial distribution functions for both $\\Delta\\gamma$ and $\\Delta\\omega$ show 3 major clusters (highlighted by ovals in panel (b) and green curves in (c-d)), that correspond to the regions of heterostructure where materials properties are locally modulated. 
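The grouping of the per-spot fit results into clusters can be made quantitative by a Gaussian-mixture decomposition of the (peak position, width) pairs plotted in Fig.~\\ref{fig:fig2}(b-d).
The sketch below uses \\verb|scikit-learn|; the variable names and the fixed choice of three components are illustrative assumptions rather than the exact analysis performed here.
\\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_peak_statistics(peak_pos, peak_width, n_components=3):
    """Group per-spot (peak position, width) pairs into Gaussian clusters.
    Returns the fitted mixture and a cluster label for every mapped spot."""
    X = np.column_stack([peak_pos, peak_width])
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=0).fit(X)
    return gmm, gmm.predict(X)

# gmm.means_ gives the centroid (position, width) of each cluster; mapping
# the labels back onto the scan coordinates highlights the locally
# modulated regions of the island.
\\end{verbatim}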
\n\n\nMaps in panels (f-i) show the actual distributions of the peak position, $\\Delta\\omega$, and the peak intensity, $\\Delta P$, with diffraction-limited resolution. Importantly, the intensity maps show an anti-correlation between the PL strength of the A-exciton and that of the trion (as indicated by the red and orange arrows): the trion PL is highest where the A-exciton PL is depressed; compare the locations of the trion-dominated (blue\/black) and exciton-dominated (purple) PL curves in panel (a). Such an anti-correlation may result from non-uniform doping of the MoS$_2$ island. Indeed, in a highly-doped area the neutral excitons are bound to free charges and, thus, converted into trions\\cite{Mouri2013}. \n\n\n\n\n\n\\subsection*{Multidimensional characterization of heterostructure materials}\n\n\nAlthough useful for shedding light on the PL variability, confocal PL characterization neither has sufficient spatial resolution nor enables assessing the MoS$_2$ doping level, and therefore cannot by itself uncover the mechanisms of the non-uniform optical signal. Instead, we developed a multidimensional imaging approach combining sSNOM and Kelvin probe force microscopy (KPFM), correlated with PL (and Raman) microscopy. In Fig.\\ref{fig:fig3}(a-b) two maps of the same island -- using the KPFM (work function) channel and the sSNOM (optical surface impedance) channel -- show identical contrast, further detailed in panel (c), where the cross-section profiles allow us to quantify the variation of the Fermi level of graphene above the MoS$_2$ layer. The work function profile is shown schematically in Fig.\\ref{fig:fig3}(h). Charge transfer in the vertical heterojunction decreases the carrier density in both graphene and the MoS$_2$ underneath, thus decreasing the magnitude of the graphene work function and doping level. The KPFM probe is in contact with the outermost layer of the heterostructure, graphene; thus it measures the work function of the MLG. Graphene above the island appears to be negatively (electron) doped by the MoS$_2$. The MLG Fermi level, taken with respect to the graphene Dirac point, nevertheless remains negative, corresponding to overall p-doping. The statistical distribution of the Fermi level values of graphene on\/off the island is shown in panel (f) by the red\/green histograms. Knowing $E_F$ in bare graphene and in the vertical heterostructure allows us to calculate the MoS$_2$ doping level, Fig.\\ref{fig:fig3}(d). Using the median values of $E_F$, the doping level is estimated to lie in the range $1-25\\times10^{12}$~cm$^{-2}$, which is further corroborated by the independent Raman data below.\n\n\n\n\n\\begin{figure}[!]\n\t\\centering\n\t\\includegraphics[width=0.4\\textwidth]{fig13_v.jpg}\n\t\\caption{Correlation of MLG work function data with sSNOM optical surface impedance. Aligned maps for (a) KPFM and (b) sSNOM (4th harmonic) amplitude. (c) Cross-section profiles across the MoS$_2$ area (KPFM, red and sSNOM, pink) vs. the MLG reference (KPFM, gray), taken along the lines of the same color in (a-b). (d) Calculated electron density in the MoS$_2$ heterostructure, log-scale, vs. Fermi levels in bare\/doped graphene off\/on the island. (f-g) Partial distribution functions for measured $E_F$ in bare graphene (off island, green) and graphene doped by the MoS$_2$ (on island, red) from the KPFM map in (a). (e) Partial distribution functions for the sSNOM signal from (b), used to calibrate the near-field signal by $E_F$. Note the common abscissa axis for panels (d,f), but not (e). Inset (h) shows schematics of the charge transfer in the vertical heterojunction on the SiO$_2$ substrate with negative charge traps. 
Pink curve outlines the variation of MLG work function.\n\t\tAll scale bars are 1 $\\mu$m.\n\t\n\t\t}\n\t\\label{fig:fig3}\n\\end{figure}\n\n\nComparison of KPFM and sSNOM profiles in Fig.\\ref{fig:fig3}(c), as well as the distribution functions in Fig.\\ref{fig:fig3}(f-e), allows to calibrate the near-field signal in terms of the Fermi level of the heterostructure. Then, one could interpolate the charge transfer\/doping data to the nanometer features, only resolved by sSNOM (such as wrinkles, oxy-sulfate regions, etc.), and thus, determine the origin for PL non-uniformity. \n\nEnhanced resolution of sSNOM allows us to determine 5 sources of non-uniform doping in the vertical van der Waals heterostructures as shown schematically in Fig.\\ref{fig:fig3}(h). (i) The primary doping is defined by conditions of the MoS$_2$ synthesis: it is known that often the stoichiometry of TMDC is slightly off the equilibrium values. Deficiency in sulfur leads to creation of surface vacancies, typically resulting in n-doping\\cite{Komsa2015}. (ii) Filling of the S-vacancy with oxygen or CH-group yields weaker n- or p-doping\\cite{Mouri2013,Zheng2020}, which was shown to be localized near the defect site\\cite{Nan2014}. (iii) In Mo-abundant synthesis, small micro-crystallites of metal molybdenum form, later oxidized to MoO$_x$, or forming MoO$_x$S$_y$ domains. (iv) In the heterostructure, work function and\/or Fermi level difference between the layers results in charge transfer between the layers. Typically p-doped MLG would become an acceptor for electrons transferred from n-doped MoS$_2$. Finally, (v) the Si\/SiO$_2$ substrate supports the heterostructure, which is known to have a high density of traps at the interface. Such traps, if charged, produce a substantial field and shift of the Fermi level in all 2DM layers above it, generating a random Coulomb potential for charge carriers both in MoS$_2$ and graphene. \n\nAdditional evidence for the existence of defects\/vacancies in TMDC lattice is provided by high-angle annular dark-field (HAADF) Scanning Transmission Electron Microscopy (STEM) imaging. Fig.\\ref{fig:fig4}(e) shows atomic resolution map of a typical MoS$_2$ island. The boundary between dark area and the lower contrast area likely reflects the grain boundary which separates regions of different lattice orientation. Such a twin boundary produces strain and may result in localization of electronic states. Furthermore, several (3-fold) individual defects are seen in the STEM image (approximately half a dozen per 200 nm$^2$ which corresponds to ca. $3\\times10^{12}$~cm$^{-2}$).\n\n\n\n\n\n\n\n\\begin{figure*}[!]\n\t\\centering\n\t\\includegraphics[width=\n\t\\textwidth]{fig04-t.jpg\n\t\\caption{Raman mapping of doping and strain non-uniformity in the heterostructure. (a) Typical MoS$_2$ Raman spectrum, fitted with E$^1$ and A-lines. (b) A-line intensity map. (c) Typical Raman spectra for MLG off\/on MoS$_2$ island, fitted by G (orange), D' (pink) and 2D (green) lines; splitting of G- and 2D-lines is shown in the fit. (d) Raman map of 2D-amplitude showing the island location, cf. map in (b).\t\n\t\t(e) HAADF-STEM image of MoS$_2$ lattice: notice grain boundaries and individual defects; scale bar is 2 nm. 
\n(f,g) Calculated doping and strain for MoS$_2$ layer overlaid with SEM map; (h) sSNOM phase image of the same area; (i-k) MLG doping, hydrostatic and shear strain maps.\n\t\tAll scale bars, except in (e), are 1 $\\mu$m.\n\t\n\t}\n\t\\label{fig:fig4} \n\\end{figure*}\n\n\n\nMultiple sources of optical non-uniformity, stemming from the variation of the doping level, have been further studied with micro-Raman imaging: typical Raman spectra of MLG\/MoS$_2$ heterostructure are shown in Fig.\\ref{fig:fig4}(a,c). \nPanel (a) presents A- and E$^1$-modes of MoS$_2$ layer, A-intensity map is shown in inset (b). Mode frequencies, fitted as in (a), allow to determine the strain and doping\\cite{Rao2019} of the island underneath the graphene, see Methods, generating the maps presented in panels (f,g). Consistent with the KPFM data, Fig.\\ref{fig:fig3}(a), MoS$_2$ doping is lower along the vertical axis of the island, thus, both the amount of charge transfer and graphene $E_F$ should be lower. \nCharge doping and strain in graphene have been calculated using a similar procedure\\cite{Neumann2015,Mueller2017}. \nUpper\/lower curves in panel (c) correspond to MLG Raman lines off\/on TMDC, where the location of the island is clearly seen, {\\em e.g.}, in the map of 2D-amplitude (d). Fig.\\ref{fig:fig4}(i,j) show graphene doping and isotropic\/hydrostatic strain. Furthermore, the splitting of the G- and 2D-doublet modes (see the fitted curves in panel (c)) yields\\cite{Narula2012} the shear (non-isotropic) component of the strain, panel (k). \n\n\nHigh-resolution map reveals that the hole carrier density in graphene increases next to the location of a large MoO$_x$ crystallite, which should indicate additional chemical doping. Besides doping, all nanoscale features of heterostructure morphology make contributions to the uniform and non-uniform components of graphene strain, thus making Raman line width larger than the natural width\\cite{Neumann2015}, due to the statistical broadening.\n\n\n\n\n\n\\subsection{Outline}\n\\label{sec:conclusions} \n\nCumulatively, multidimensional characterization data above revealed existence of non-uniformities in 2D materials at the nanoscale and allowed to identify doping and\/or strain variations as the origin of statistical distribution of the optical signals used in all three recognition channels (PL shift, Raman spectroscopy and GERS). When integrated over the device area, such a variability in local response would translate in a broadening of the biosensing spectral signal, thus, raising device-to-device variability and, ultimately, lowering the sensitivity and the limit-of-detection by increasing background and\/or systematic error. While the variability of individual device response often could be addressed by careful calibration against known analytes, such a fluctuation and spread of the integrated response would affect biosensing accuracy and, certainly, reduce the ability to perform precise biosensing in the agnostic detection mode. Presented study suggests that in order to improve the performance of biosensors based on 2DM heterostructures, non-uniformity of doping and strain -- two major mechanisms for optical signal variation -- must be addressed. Currently, most of 2DM heterostructures are fabricated by transfer methods, that are known to produce both strain and doping\\cite{Leong2019,Bousige2017,Banszerus2017} (especially for wet transfer). New methods of strain-free and doping-free transfer need to be developed\\cite{Leong2019,Seo2021}. 
Alternatively, such heterostructure materials should be fabricated in-situ, in synthetic facility, to preserve the layer epitaxy and exclude contamination between the layers.\n\n\n\\section*{Methods}\n\n\\subsection{Sample fabrication} \nThe monolayer MoS$_2$ was grown on a Si substrate with 300 nm thick SiO$_2$ by Chemical Vapor Deposition (CVD) method as described in\\cite{Aryeetey2021}. Optimization of synthesis parameters and stoichiometric ratio of molybdenum to sulfur resulted in producing triangular MoS$_2$ islands with low defect density (cf. STEM image in Fig.\\ref{fig:fig4}(e)), predominantly single layers, with low surface coverage. Monolayer graphene was grown by CVD on Cu foil. MLG was transferred onto MoS$_2$ using the conventional PMMA assisted transfer technique\\cite{Gao2012}. The SEM image of resulted heterostructure is shown on Fig.\\ref{fig:fig2}(e).\n\n\n\\subsection{Sample Characterization.}\nSEM sample imaging was performed in a field emission scanning electron microscope Zeiss Auriga FIB\/FESEM. Atomic resolution images of monolayer MoS$_2$ samples transferred onto Quantaifoil TEM grids were recorded using Nion Ultra HAADF-STEM operating at 60 kV with 3rd-generation C3\/C5 aberration corrector and 0.5 nA current in atomic-size probe $\\sim 1.0 - 1.1$\\AA~ (NCATSU).\nConfocal PL and Raman characterization were performed using a Horiba Jobin Yvon LabRAM HR-Evolution Raman system, 488 nm (for Raman) and 532 nm (for PL) laser excitation wavelengths were used; Horiba XploRA Raman system was used for taking Raman spectra at 532 nm of excitation. Analysis of PL and Raman characterization was performed using home-written codes. \n \nsSNOM maps were collected using scattering type scanning near-field optical microscope (custom-built Neaspec system) in pseudo-heterodyne mode (tapping amplitude $\\sim$70 nm, ARROW-NCPt probes by Nanoworld $<$25 nm radius), excitation by CW Quantum Cascade Laser (MIRCat by Daylight) at power $<$ 2 mW in focal aperture at 1577-1579 cm$^{-1}$ (6.333-6.341 $\\mu$m). Amplitude and phase of high order harmonics ($\\ge 2$) are proportional to the local impedance of the sample under the tip.\n\nThe AFM\/KPFM was performed using Dimension Icon AFM in PeakForce Kelvin Probe Force Microscopy in frequency modulated mode (PFKPFM-FM, Bruker Nano Inc., Santa Barbara, CA) utilizing a PFQNE-AL probe (Bruker SPM Probes, Camarillo, CA). Prior to measuring the samples, the KPFM response of the probe was checked against an Au-Si-Al standard and the work function of the Al reference metal layer was calibrated against a freshly cleaved highly oriented pyrolytic graphite (HOPG) reference sample (PFKPFM-SMPL, HOPG-12M, Bruker SPM Probes, Camarillo, CA); 4.6~eV was used for the work function reference value for HOPG.\n\n\n\\subsection{Supporting Information} \\par \nSupporting Information is available from the Wiley Online Library or from the author.\n\n\\section*{Acknowledgments:} \n\t\nAuthors are personally thankful to Drs. T. Tighe, T. Williams and M. Wetherington (MCL, PSU). S.V.R. acknowledges NSF support (CHE-2032582). T.I. and K.S. acknowledges NSF support (CHE-2032601). T.I. acknowledges Sample Grant from The Pennsylvania State University 2DCC-MI\n, which is supported by NSF cooperative agreement (DMR-1539916). Work at PSU sSNOM facility has been partially supported by NSF MRSEC (DMR-2011839). 
Part of this work was performed at the Joint School of Nanoscience and Nanoengineering (JSNN), a member of the Southeastern Nanotechnology Infrastructure Corridor (SENIC) and National Nanotechnology Coordinated Infrastructure (NNCI), which is supported by the NSF grant (ECCS-1542174). Scanning Transmission Electron Microscope imaging was conducted at the Center for Nanophase Materials Sciences at ORNL, which is a DOE Office of Science User Facility. \n\n\n\n\n\n\n\n\n\n\n\\section*{Supplementary Information}\n\n\n\n\\section{Strain and Doping analysis}\nThe background signal of the Raman spectra for monolayer graphene (MLG) was fit and subtracted using airPLS \\cite{Zhang2010}. Peaks were then fit for both the MLG and TMDC spectra using the non-linear least-squares minimization and curve-fitting library (LMFIT) for Python. Peaks were fit using Lorentzian line shapes around the D, G$^+$, G$^-$, 2D$^+$ and 2D$^-$ peaks, as well as around nearby shoulder peaks if they were distinguishable. \n\n\\begin{figure*}[b]\n\t\\centering \n\t\\includegraphics[width=.5\\columnwidth]{SI-01.jpg}\n\t\\caption{G vs 2D plot showing the split of graphene on the bare SiO$_2$ substrate (light blue) and over the MoS$_2$ island (green). The red\/yellow line indicates the characteristic slope of the G-2D data correlation caused by pure doping\/strain (isotropic biaxial).}\n\t\\label{fig:SI-01}\n\\end{figure*}\n\nThe initial separation of strain and doping is accomplished by examining the central peak positions of the 2D and G line fits. These Raman frequencies are sensitive to both strain and doping because of the changes in lattice constants and force fields that affect the phonon frequencies. Lee et al. created a procedure for extracting the strain and doping of graphene through statistical analysis of the changes in the Raman frequency position \\cite{Lee2012}. By plotting the 2D and G peaks against each other we are able to see trends in the spectra which represent modulation either by strain or by doping, or both. Strain is seen in the MLG Raman data as a cluster in the 2D\/G correlation plot (Figure \\ref{fig:SI-01}) with a linear slope of approximately 2.2. The linear slope for p-doped MLG is approximately 0.75, which is also seen in Figure \\ref{fig:SI-01}. At very low values of p-doping and n-doping the dependence should be nonlinear, though, due to Fermi velocity (density of states) renormalization. \n\nIt is known that graphene on MoS$_2$ and on the SiO$_2$ substrate is typically p-doped. Assuming a linear correlation with the Raman frequencies, we can extract the relative change in strain and doping by solving the linear equation system:\n\\begin{equation}\n\t\\left(\\begin{array}{c}\n\t\t\\omega_G \\\\\n\t\t\\omega_{2D}\n\t\\end{array}\\right)\n\t=\n\t\\left(\\begin{array}{cc}\n\t\ta_{G,\\varepsilon} & a_{G,\\rho} \\\\\n\t\ta_{2D,\\varepsilon} & a_{2D,\\rho}\n\t\\end{array}\\right)\n\t\\left(\\begin{array}{c}\n\t\t\\varepsilon \\\\\n\t\t\\rho\n\t\\end{array}\\right)\n\t\\label{SI-01}\n\\end{equation}\nwhere the vector ($\\omega_{G}$,$\\omega_{2D}$) should be calibrated against an unstrained and undoped graphene reference sample. \n\nGraphene has two different polarizations of optical modes that are degenerate at zero strain. Depending on the axial direction of (uniaxial) strain, the position of one of the modes shifts with respect to the other one. This generates a Raman doublet for general strain.
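Before turning to the doublet analysis, a minimal numerical sketch of the inversion implied by Eq.~(\\ref{SI-01}) is given here, in the spirit of the LMFIT-based fitting described above. The sensitivity coefficients and the synthetic shifts are placeholders, not the calibrated values used in this work; only their slope ratios follow the $\\sim$2.2 (strain) and $\\sim$0.75 (doping) correlations quoted above.
\\begin{verbatim}
import numpy as np

# Sketch of Eq. (SI-01): recover (strain, doping) from the G and 2D shifts.
# Coefficients are placeholders; only their ratios follow the slopes quoted
# in the text (2D/G ~ 2.2 for pure strain, ~ 0.75 for pure p-doping).
a_G_eps,  a_G_rho  = -23.5, 1.00e-12   # cm^-1 per % strain, cm^-1 per cm^-2
a_2D_eps, a_2D_rho = -51.7, 0.75e-12
A = np.array([[a_G_eps,  a_G_rho],
              [a_2D_eps, a_2D_rho]])

# Construct synthetic shifts for 0.1 % strain and 2e12 cm^-2 doping, then invert:
true_strain, true_doping = 0.10, 2.0e12
d_omega = A @ np.array([true_strain, true_doping])   # (d_omega_G, d_omega_2D)
strain, doping = np.linalg.solve(A, d_omega)
print(f"strain = {strain:.2f} %, doping = {doping:.2e} cm^-2")
\\end{verbatim}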
Knowing a particular strain configuration is only possible with Raman mapping in polarized light, which resolves the polarization of a phonon mode. However, even in the case of non-polarized Raman data, the positions of the individual components of the doublet allow one to separate the isotropic and anisotropic components of the strain. The latter corresponds to the shear strain, although in order to determine the specific shear direction, polarized spectroscopy would be required, together with a calibration sample of known lattice orientation.\n\n\n\n\\begin{figure*}[!]\n\t\\centering \n\t\\includegraphics[width=1\\columnwidth]{SI-02.jpg}\n\t\\caption{The main figure reproduces the data from Fig. 5c: the dots and filled line represent the experimental data and the total fitted curve. The individual components of the doublet are shown with the thin lines. An additional D'-component is needed for fitting the spectrum in the vicinity of the G-doublet for bare graphene. (inset) Schematics of the G-line splitting with shear strain. Hydrostatic strain, on the contrary, shifts the whole doublet but does not influence the splitting.}\n\t\\label{fig:SI-02}\n\\end{figure*}\n\nMueller et al. developed a formalism to separate the doping, hydrostatic strain and shear strain components\\cite{Mueller2017}. Critically, the shear strain component does not change the strain\/doping correlation, that is, the slope of the curves in Figure \\ref{fig:SI-01}, while the hydrostatic component does not affect the splitting of the 2D or G peaks into a doublet, as shown in Figure \\ref{fig:SI-02}. The amount of the splitting allows us to determine the magnitude of the shear strain, while the magnitude of the hydrostatic strain can be determined by examining the averaged peak position (after splitting). We can then determine the magnitude of the strain components and the doping by examining the shift of the peaks in a \"zero strain\" case or a \"zero doping\" case. The parametrization follows the paper by Das et al.: for strain-free graphene the 2D peak shifts at a rate of 1.04 cm$^{-1}$ per 10$^{12}$ cm$^{-2}$ hole density \\cite{Das2009}. We use the 2D splitting data and the Grueneisen parameter and the shear deformation potential from \\cite{Mueller2017} to determine the strain components from:\n\\begin{equation}\n\t\\omega_{2D}^{\\pm}= \\langle\\omega_{2D}\\rangle\\, \\left(-\\alpha\\,\\varepsilon_h\\pm \\beta\\,\\varepsilon_s\\right)\n\t\\label{SI-02}\n\\end{equation}\nwhere $\\alpha= 1.8$ is the Grueneisen parameter for MLG, and $\\beta= 0.99$ is the shear deformation potential. \n\n\n\n\n\\begin{figure*}[!]\n\t\\centering \n\t\\includegraphics[width=1\\columnwidth]{SI-03A.jpg}\n\t\\caption{The maps showing the fitted parameters for the splitting of the 2D peaks.}\n\t\\label{fig:SI-03A}\n\\end{figure*}\n\n\\begin{figure*}[!]\n\t\\centering \n\t\\includegraphics[width=1\\columnwidth]{SI-03B.jpg}\n\t\\caption{The maps showing the fitted parameters for the splitting of the G peaks.}\n\t\\label{fig:SI-03B}\n\\end{figure*}\n\nThe strain and doping of MoS$_2$ can also be determined from Raman correlation data \\cite{Rao2019}. Peaks for MoS$_2$ were fit in the same way as the graphene peaks (Figure \\ref{fig:SI-04}). For the case of MoS$_2$ we compare the E and A peaks, which are near 382 and 404 cm$^{-1}$, respectively. The E peak position is more sensitive to strain, similar to the 2D peak of MLG, while the A peak position is more sensitive to doping, like the G peak of MLG. The slope for the strain correlation is $\\sim$4; the slope for doping is $\\sim$0.12.
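Returning to Eq.~(\\ref{SI-02}) for graphene, a minimal numerical sketch of the inversion for the hydrostatic and shear strain components is given here; it reads $\\omega_{2D}^{\\pm}$ as the shifts of the two doublet components relative to the unstrained position, and all numbers other than $\\alpha$ and $\\beta$ are illustrative placeholders rather than fitted values.
\\begin{verbatim}
# Sketch inverting the 2D-doublet relation for hydrostatic and shear strain.
# alpha and beta are the values quoted in the text; the peak positions are
# placeholders, and w_plus/w_minus are read as shifts from the unstrained line.
alpha, beta = 1.8, 0.99        # Grueneisen parameter, shear deformation potential
w2D_ref = 2690.0               # cm^-1, unstrained 2D position (placeholder)
w_plus, w_minus = -4.0, -8.0   # cm^-1, fitted shifts of the doublet components

eps_s = (w_plus - w_minus) / (2.0 * beta * w2D_ref)    # shear strain
eps_h = -(w_plus + w_minus) / (2.0 * alpha * w2D_ref)  # hydrostatic strain
print(f"shear ~ {100*eps_s:.3f} %, hydrostatic ~ {100*eps_h:.3f} %")
\\end{verbatim}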
For MoS$_2$, we can then use the undoped E peak position, and a Grueneisen parameter for MoS$_2$ of $\\sim$0.86, to obtain the average strain. Then we examine the unstrained A peak position, which shifts at a rate of 4 cm$^{-1}$ per $1.8\\times10^{13}$ cm$^{-2}$ \\cite{Chakraborty2012}, and determine the doping. Unlike in graphene, peak splitting in MoS$_2$ is absent. \n\n\\begin{figure*}[!]\n\t\\centering \n\t\\includegraphics[width=1\\columnwidth]{SI-04.jpg}\n\t\\caption{The maps showing the fitted parameters for the MoS$_2$ peaks.}\n\t\\label{fig:SI-04}\n\\end{figure*}\n\n\n\n\n\\section{Characterization by Peak-Force Kelvin probe force microscopy (KPFM)}\nIn general, the work function value $\\Phi_{sample}$ and, consequently, the Fermi level variation can be calculated from KPFM measurements using the equation:\n\\begin{equation}\n\t\\Phi_{sample} = e \\, V_{CPD} - \\Phi_{probe}\t\n\t\\label{SI-03}\n\\end{equation}\nwhere $V_{CPD}$ is the contact potential difference between the sample and the AFM probe, $e$ is the elementary charge, and $\\Phi_{probe}$ is the work function of the KPFM probe. Prior to measuring the MoS$_2$\/graphene samples, we checked the KPFM probe response against an Au-Si-Al standard. Topography and $V_{CPD}$ maps of the standard are shown in Figure \\ref{fig:SI-05}a,b. Next, the work function of aluminum was calibrated against a freshly cleaved highly oriented pyrolytic graphite (HOPG) reference (Figure \\ref{fig:SI-05}c,d) using a value of $\\Phi_{HOPG} = 4.6$~eV (PFKPFM-SMPL, HOPG-12M, Bruker SPM Probes, Camarillo, CA). To eliminate the influence of water condensation, the AFM chamber was purged with nitrogen gas.\n\n\n\n\\begin{figure*}[!]\n\t\\centering \n\t\\includegraphics[width=1\\columnwidth]{SI-05.jpg}\n\t\\caption{(a) AFM topography and (b) KPFM signal for the Au-Si-Al standard. (c) AFM topography and (d) KPFM channel for the HOPG reference sample.}\n\t\\label{fig:SI-05}\n\\end{figure*}\n\nThe topography of the MoS$_2$\/graphene sample (Figure \\ref{fig:SI-06}a) is dominated by the roughness of the Si\/SiO$_2$ substrate, and since the thicknesses of the MLG and the monolayer MoS$_2$ are below the RMS roughness, topographic details of the heterostructure cannot be resolved by routine AFM imaging. The spatial distribution of $V_{CPD}$ and the calculated work function of the same area are presented in Figure \\ref{fig:SI-06}b,c. It must be noted that the KPFM probe is in contact with the outermost layer of the heterostructure, graphene; thus it measures the work function of the MLG, $\\Phi_{MLG}$, either on or off the MoS$_2$ island. The work function value \"off\" the island reveals p-doping of the graphene, likely due to the transfer procedure. The Fermi level value, taken with respect to the graphene Dirac point, is negative. The \"on\" value is shifted towards the Dirac point, showing a non-uniform n-doping effect originating from the charge transfer in the heterostructure.\n\n\\begin{figure*}[!]\n\t\\centering \n\t\\includegraphics[width=1\\columnwidth]{SI-06.jpg}\n\t\\caption{The maps of a MoS$_2$\/graphene sample: (a) AFM topography, (b) KPFM, and (c) calculated work function distribution.}\n\t\\label{fig:SI-06}\n\\end{figure*}\n\n\n\n\n\n\\section{Calculation of charge density in MLG and in MoS$_2$ monolayer}\n\\label{app:dos-eqs}\n\nFor 2D materials with a parabolic dispersion relation (massive fermions), like MoS$_2$, the energy is given by $E=E_c+\\hbar^2k^2\/(2m^*)$. The density of states (DOS) is then constant for each band, $2m^*\/(\\pi\\hbar^2)$. 
Then, the following integral gives the carrier density dependence on the Fermi level (spin and valley degeneracy included):\n\\begin{equation}\n\tn(F)=\\frac{2m^*}{\\pi\\hbar^2}\\int_{E_c}^\\infty \\frac{dE}{1+\\exp\\left[\\frac{E-F}{kT}\\right]}=\\frac{2m^* kT}{\\pi\\hbar^2}\\log\\left(1+\\exp\\left[\\frac{F-E_c}{kT}\\right]\\right)=N_c \\, \\log\\left(1+\\exp\\left[\\frac{|E_c|-|F|}{kT}\\right]\\right)\n\t\\label{2d-DOS-MoS2}\n\\end{equation}\nwhere we assume that both the Fermi level $F$ and $E_c=-4.21$~eV\\cite{Larentis2014} are taken with respect to the vacuum level and, thus, are negative (this definition is consistent with the definitions of the Dirac point $E_D$, the conduction band edge $E_c$, and the Fermi level $F$). The conduction band DOS is given by:\n\\begin{equation}\n\tN_c=\\frac{2m^* kT}{\\pi\\hbar^2}=\\frac{2m^*}{m_o}\\frac{kT}{\\pi a_B^2 \\; E_B}\\simeq 7.6\\times10^{12}\\,\\mathrm{cm}^{-2}\n\t\\label{Nc-MoS2}\n\\end{equation}\nwith $m_o$ being the free electron mass, $a_B=0.53$~\\AA, $E_B=27$~eV, and the effective mass in MoS$_2$ taken to be $0.35\\,m_o$\\cite{Peelaers2012}.\n\n\n\n\nThere are two limits to be noted: for non-degenerate doping ($|F|>|E_c|$, the Fermi level lies below the bottom of the CB), one can use $\\log(1+x)\\sim~x$ and write:\n\\begin{equation}\n\tn\\simeq N_c \\, \\exp\\left[-\\frac{|F|-|E_c|}{kT}\\right]\n\t\\label{2d-DOS-MoS2-nondeg}\n\\end{equation}\nwhile in the degenerate doping limit ($|E_c|-|F|\\gg kT>0$, the Fermi level is within the CB), unity is neglected compared to the large exponential, and we obtain a linear dependence of the charge density on the Fermi level:\n\\begin{equation}\n\tn\\simeq N_c \\frac{|E_c|-|F|}{kT} \n\t\\label{2d-DOS-MoS2-ndeg}\n\\end{equation}\n\nCorrespondingly, for monolayer graphene, which is gapless with a linear dispersion relation $E=\\hbar v_F k$, we derive:\n\\begin{equation}\n\tn_g(F)=\\frac{(E_D-F)^2}{\\pi\\hbar^2 v_F^2}=N_g\\,(E_D-F)^2\n\t\\label{2d-DOS-MLG}\n\\end{equation}\nwhere the Dirac point is $E_D=\\chi_{MLG}\\simeq -4.57$~eV\\cite{Yu2009}, and the Fermi velocity is $v_F\\simeq 1.16\\times10^6$~m\/s\\cite{Knox2008}. We emphasize that $N_g$ is neither a density of carriers nor a 2D DOS in the classical sense: $N_g\\simeq 5.46\\times10^{13}$~cm$^{-2}$~eV$^{-2}$.\n\n\n\n\n\n\\begin{figure*}[!]\n\t\\centering \n\t\\includegraphics[width=0.5\\columnwidth]{SI-07.jpg}\n\t\\caption{Matching band structure offsets in the MoS$_2$\/graphene van der Waals heterojunction: in order to align the Fermi level, charge transfer between the 2D materials must happen, resulting in a drop of the vacuum level between the layers.}\n\t\\label{fig:SI-07}\n\\end{figure*}\n\n\n\n\n\nSince the 2D materials are electrically isolated from the Si substrate by the oxide layer, they are at a floating potential and the charge transfer produces 2D charge densities $\\pm en_1$, equal (by magnitude and opposite by sign) in the two layers, and generates $2\\delta V$, a potential difference between the TMDC and the MLG ($\\phi(z\\pm d\/2)=\\pm\\delta V$). 
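For reference, a minimal numerical sketch of the two carrier-density expressions above is given here; the constants are those quoted in the text, while the Fermi-level values fed to the functions are placeholders rather than measured values.
\\begin{verbatim}
import numpy as np

# Sketch of the carrier-density expressions above (energies in eV, n in cm^-2).
kT, m0, hbar, eV = 0.0259, 9.109e-31, 1.055e-34, 1.602e-19

# MoS2: parabolic band, Ec = -4.21 eV below vacuum, m* = 0.35 m0
Ec = -4.21
Nc = 2 * 0.35 * m0 * (kT * eV) / (np.pi * hbar**2) * 1e-4      # ~7.6e12 cm^-2
def n_mos2(F):
    return Nc * np.log1p(np.exp((abs(Ec) - abs(F)) / kT))

# MLG: linear band, Dirac point E_D = -4.57 eV, v_F = 1.16e6 m/s
ED, vF = -4.57, 1.16e6
Ng = 1.0 / (np.pi * (hbar * vF / eV)**2) * 1e-4                # ~5.5e13 cm^-2 eV^-2
def n_mlg(F):
    return Ng * (ED - F)**2

print(f"Nc = {Nc:.2e} cm^-2,  Ng = {Ng:.2e} cm^-2 eV^-2")
print(f"n_MoS2(F = -4.20 eV) = {n_mos2(-4.20):.2e} cm^-2")     # F just above the band edge
print(f"n_MLG (F = -4.40 eV) = {n_mlg(-4.40):.2e} cm^-2")      # 0.17 eV above the Dirac point
\\end{verbatim}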
The potential difference $2\\delta V$ is linearly proportional to the surface charge formed in each of the materials as a result of the charge transfer.\n\nThen, the positions of the Fermi levels, both defined with respect to the higher vacuum level in the MLG, are:\n\\begin{equation}\n\t|F_g| =|F_g^{(o)}| -\\Delta_F \\qquad\\qquad\n\t|F_{MoS_2}|=|F_{MoS_2}^{(o)}|+2\\delta V +\\Delta_F{}_{MoS_2}\n\t\\label{levels-03}\n\\end{equation}\nwhere $\\Delta_F=F^{(o)}_g-F>0$ is the Fermi level (up)shift in graphene, which can be measured as the work function difference taken on and off the TMDC island, and $\\Delta_F{}_{MoS_2}$ is the Fermi level (down)shift in MoS$_2$.\n\nKnowing the expressions for the TMDC and MLG DOS, one can then calculate the charge transfer and, in turn, the potential difference between the layers in the vertical heterostructure. Thus, the relation between the measured MLG work function and the doping level of the TMDC can be established, as shown in Fig.~4d of the main text.\n\n\n\n\n\\section{Animation File}\n\\label{app:movie}\n\nThe SI includes an animation file representing some of the channels of the multidimensional characterization of a particular van der Waals vertical heterostructure (a single MoS$_2$ layer packaged by a graphene monolayer). The same area of the sample has been imaged using different instrumentation. Theoretical results on the charge transfer (doping) and the various components of strain are also shown.\n\n\n~\\newpage~\n\\section*{References}\n\\label{app:ref}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nEver since the discovery of a class of objects with extremely bright infrared emission called Ultra\/Luminous Infrared Galaxies (LIRG: $L_{\\rm{FIR}}$\\ = 10$^{11-12}$$L_{\\sun}$; ULIRG: $L_{\\rm{FIR}}$\\ $>$10$^{12}$$L_{\\sun}$) with the \\textit{Infrared Astronomical Satellite} \\citep[$IRAS$; e.g.][]{Houck1984,Houck1985,Soifer1984b,Soifer1984}, these objects have been studied at every wavelength. High-resolution optical imaging shows that most of these systems resemble merging\/interacting systems \\citep[e.g.][]{Murphy1996}. \\cite{Larson2016} showed that major mergers play a significant role for all sources with $L_{\\rm{IR}}$\\ $\\geq$ 10$^{11.5}$$L_{\\sun}$. In addition, optical observations show that these systems are highly dust obscured. Various star formation tracers have shown extreme star formation rates \\citep[e.g.][]{U2012}, suggesting that local U\/LIRGs are great laboratories to study extreme modes of star formation. Mid-infrared observations are used to reveal the contribution of an active galactic nucleus (AGN), if present, to the far infrared luminosity \\citep[e.g.][]{Genzel1998,Veilleux2009}. Molecular gas observations via carbon monoxide ($^{12}$CO) have revealed rich concentrations of fuel for current and future star formation \\citep[e.g.][]{Downes1998,Bryant1999,Wilson2008} and also massive, energetic molecular outflows on kiloparsec scales in several sources \\citep[e.g.][and references therein]{Cicone2014}. \n \n\\citet{Gao2004b} used HCN~$J$~=~1$-$0\\ observations to show that there exists a tight correlation between infrared luminosity ($L_{\\rm{IR}}$; a proxy for the star formation rate) and HCN~$J$~=~1$-$0\\ luminosity (a proxy for the amount of dense molecular gas). This tight relation was extended to span over 7-8 orders of magnitude of $L_{\\rm{IR}}$\\ by \\citet{Wu2005}.
These results are interpreted to mean that HCN~$J$~=~1$-$0\\ is tracing the molecular gas component that is directly related to the star formation.\n\n\nSingle dish observations of the $^{12}$CO\\ isotopologue $^{13}$CO\\ \\citep[e.g.][]{Casoli1992,Garay1993,Aalto1991,Aalto1995,Greve2009,Papadopoulos2012a} in U\/LIRGs have shown a trend of extremely weak emission relative to $^{12}$CO. The integrated brightness temperature line ratios of $^{12}$CO\/$^{13}$CO\\ in U\/LIRGs were usually found to be high ($>$20) when compared to normal disk galaxies, suggesting different interstellar medium (ISM) environments in the two classes of galaxies. High-resolution observations \\citep[e.g.][]{Aalto1997,Casoli1999,Sliwa2012,Sliwa2013,Sliwa2014} show a similar trend, with values even exceeding 50 for some sources \\citep{Sliwa2017c}. Possible explanations for the unusually high ratios include photodissociation of $^{13}$CO, excitation and\/or optical depth effects, or abundance variations \\citep[e.g.][]{Casoli1992,Henkel1993,Taniguchi1999}. While photodissociation is likely ruled out as the dominant process due to the strong C$^{18}$O\\ emission \\citep[see][]{Casoli1992}, recent radiative transfer studies are showing that the [$^{12}$CO]\/[$^{13}$CO]\\footnote{Square brackets denote abundances while no brackets around ratios denote brightness temperature line ratios unless specifically stated} abundance ratio is higher ($\\geq$100) than what we perceive as normal \\citep{Sliwa2013,Sliwa2014,Sliwa2017a,Papadopoulos2014,Henkel2014,Tunnard2015b}. In this paper, we model the molecular gas of the well-studied ULIRG Arp 220 to determine whether it follows the [$^{12}$CO]\/[$^{13}$CO]\\ trend seen in the literature.\n\n\n\\textit{Arp 220:} \\object{Arp 220} (IRAS 15327+2340, UGC 9913, VV 540, IC 1127) is the nearest example of a ULIRG and thus well studied at every wavelength. In this advanced merger, the two nuclei are still distinguishable, separated by 1$^{\\prime\\prime}$\\ \\citep[$\\sim$390~pc;][]{Norris1988}, and the system is modelled to finish merging within 6~$\\times$~10$^{8}$~years \\citep{Konig2012}. With one of the most powerful star-forming environments in the local universe, it is a popular starburst template for dusty, star-forming high-redshift galaxies. Recently, \\citet{Barcos-Munoz2015} observed the 33~GHz continuum at very high resolution ($\\sim$30~pc), revealing that synchrotron radiation is dominant at this frequency. Combining the sizes measured from the 33~GHz continuum and infrared observations, \\citet{Barcos-Munoz2015} derived very high molecular gas surface densities ($>$2~$\\times$~10$^{5}$~$M_{\\sun}$~pc$^{-2}$) and infrared surface luminosities ($>$4~$\\times$~10$^{13}$~$L_{\\sun}$~kpc$^{-2}$). Although there is no clear evidence of an AGN in the 33~GHz continuum \\citep{Barcos-Munoz2015}, \\citet{Downes2007} showed that the dust in the western nucleus is hot ($\\sim$ 170~K) and the size of the dust source is so small that it implies a large surface luminosity that can only plausibly be produced by an AGN. \\citet{Wilson2014} used the 691~GHz continuum to also show that the western nucleus has a very high luminosity surface density that requires either the presence of an AGN or a \"hot starburst\". \\citet{Lockhart2016} were able to resolve previously observed H$\\alpha$+[NII] emission into a bubble-shaped feature that is aligned with the western nucleus. Either an AGN or extreme star formation within the inner $\\sim$100~pc of the nuclei is the likely origin of this bubble. 
\\cite{Zsch2016} confirm an outflow from the western nucleus by comparing their high-resolution Very Large Array (VLA) data of several molecular species to the ALMA data.\n\n\nOver the last two decades, Arp 220 has been observed in high-resolution CO many times \\citep{Scoville1997,Downes1998,Sakamoto1999,Downes2007,Sakamoto2008,Matsushita2009,Martin2011,Konig2012,Rangwala2015,Scoville2017}. The observations reveal a large concentration of molecular gas ($\\sim$~5~$\\times$~10$^{9}$~$M_{\\sun}$; e.g. Downes $\\&$ Solomon 1998) within two compact nuclei surrounded by a diffuse kiloparsec-scale disk. Observations of the rare CO isotopologues (i.e. $^{13}$CO\\ and C$^{18}$O) have mainly been obtained using single dish telescopes, resulting in global fluxes \\citep[e.g.][]{Greve2009,Papadopoulos2012a}; however, \\citet{Matsushita2009} published high-resolution Submillimeter Array (SMA) $^{13}$CO\\ and C$^{18}$O $J$=2$-$1\\ ($\\sim$ 3.5\\arcsec) observations showing that the two isotopologues have similar intensities; they suggest that either the lines are optically thick or there is an overabundance of C$^{18}$O\\ compared to $^{13}$CO. \n\nUsing $Herschel$ Fourier Transform Spectrometer (FTS) spectra, \\citet{Rangwala2011} modelled the global $^{12}$CO\\ emission from $J$ =1-0 to $J$ = 13-12 with a two-component molecular gas model: a cold, moderately dense component (\\tkin\\ = 50~K and $n_{\\rm{H_{2}}}$\\ = 10$^{2.8}$~cm$^{-3}$) and a warm, dense component (\\tkin\\ = 1350 K and $n_{\\rm{H_{2}}}$\\ = 10$^{3.2}$ cm$^{-3}$). The dust continuum from the FTS spectrum was also shown to be consistent with warm dust ($T_{\\rm{dust}}$ = 66~K) with a large optical depth ($\\tau_{\\rm{dust}}$ $\\sim$ 5 at 100~$\\mu$m; Rangwala et al. 2011). \n\nDense gas tracers such as HCN, HCO$^{+}$, HNC and CS have also been observed in Arp 220 \\citep[e.g.][Barcos-Mu\\~noz et al. in preparation]{Aalto2009,Aalto2015b,Greve2009,Sakamoto2009,Imanishi2010,Scoville2015,Martin2011,Martin2016,Tunnard2015a}. \\cite{Sakamoto2009} reported P-Cygni profiles in the HCO$^+$~$J$~=~3$-$2\\ and $J$~=~4$-$3 lines, suggesting a $\\sim$100~km s$^{-1}$\\ outflow originating from the inner regions of the nuclear disks. \\cite{Aalto2009} found that HNC $J$~=~3-2 is bright and, perhaps, amplified in the western nucleus and weak in the east, suggesting very different physical conditions. \\cite{Martin2016} find that HCN and HCO$^+$~$J$~=~4$-$3\\ and $J$~=~3-2 are optically thick and affected by different absorption systems that can hide up to 70$\\%$ of the total intrinsic emission from these lines. \n\n\\textit{NGC 6240:} \\object{NGC 6240} (IRAS 16504+0228, UGC 10592, VV 617, IC 4625) is unusual among the class of LIRGs in having exceptionally strong emission in the near- and mid-IR lines of molecular hydrogen. The consensus of the infrared observers is that the strong H$_{\\rm{2}}$ lines are due to shocked gas, not star formation \\citep{Rieke1985,Depoy1986,Lester1988,Herbst1990,Elston1990,vanderWerf1993,Mori2014}. Since the cooling time of this shocked gas is short ($\\sim$10$^{7}$~yr), we may be seeing NGC 6240 at a privileged moment during the merger process \\citep[e.g.][]{Sugai1997}. 
Recent work of \\citet{Meijerink2013} showed that the $^{12}$CO\\ spectral line energy distribution (SLED), obtained using the $Herschel$ FTS, is consistent with either an X-ray dominated region (XDR) or with shocked molecular gas; however, the lack of the ionic species OH$^{+}$ and H$_{2}$O$^{+}$, normally found in high abundance in gas clouds near elevated X-ray or cosmic ray fluxes, ruled out the XDR models and once again, shocks are the most likely scenario to explain the observations.\n\nThe near and mid-infrared H$_{\\rm{2}}$ emission peaks between the two nuclei (e.g., Lester et al. 1988; Herbst et al. 1990), suggesting several possible scenarios. Two of the more recent ideas are those of \\citet{Ohyama2003} and \\citet{Nakanishi2005}. The preferred model of Ohyama et al. (2003) is that the merger's tidal forces have channeled the molecular gas into the space between the two nuclei, where it is being shocked by a superwind from the southern nucleus, the stronger source in radio continuum, X-rays, and near- and mid-IR. Nakanishi et al. (2005) propose instead that the two nuclei have had a nearly head-on collision, after which the two supermassive black holes, the older bulge stars, and the recent starburst stars all moved onwards to the present sites of the two nuclei. Due to drag and stripping forces, however, the circumnuclear gas was left behind at the original collision site. \nThis resembles the well-known Bullet Cluster scenario, except that collision speeds are lower, and NGC 6240's gas densities are 10$^{7}$ times higher than those of the Bullet Cluster, so the gas left behind in the middle is cool molecular gas rather than hot X-ray-emitting plasma.\n\nNGC 6240 has also been observed in high-resolution $^{12}$CO\\ many times \\citep{Wang1991,Bryant1999,Tacconi1999,Nakanishi2005,Iono2007,Wilson2008,Feruglio2013a,Feruglio2013b}. The CO emission is peaked in between the two nuclei as is observed for the warm H$_{2}$ emission. \\cite{Feruglio2013b} detected blueshifted CO emission between -200~km s$^{-1}$\\ and -500~km s$^{-1}$\\ peaking near the southern AGN position at the same position where an H$_{2}$ outflow was found \\citep{Ohyama2000,Ohyama2003}. A redshifted CO component peaks in between the two nuclei, similar to the CO emission at the systemic velocity, with a large velocity dispersion ($\\sim$500~km s$^{-1}$\\ at the maximum) suggesting highly turbulent gas \\citep{Feruglio2013b}. \n\nIn addition to CO, dense gas tracers have also been observed, however, not to the extent as that for Arp 220. \\cite{Nakanishi2005} observed HCN and HCO$^+$~$J$~=~1$-$0\\ at 2-3\\arcsec, where both lines peak in between the two nuclei, similar to CO. \\cite{Wilson2008} observed part of the HCO$^+$~$J$~=~4$-$3\\ line, where it still peaks in between the two nuclei. \\cite{Tunnard2015b} observed the rarer $^{13}$C isotopologues of HCN and HCO$^+$\\ in NGC 6240 and found line ratios $>$ 30. 
\n\\begin{table*\n\\caption{Source Summary}\\label{tab:sourcesum}\n\\centering\n\\begin{tabular}{lccccccc}\n\\hline \\hline\n\t\t&\\multicolumn{2}{c}{\\underline{(0,0) Position}} \t\t&& \\multicolumn{2}{c}{\\underline{Center Velocity}} & & \\\\\nSource\t& RA \t& Dec\t&\t$L_{\\rm{FIR}}$\\ &\tcz$_{lsr}$\t& z$_{lsr}$\t&D$_{L}$\t & Linear Scale \\\\\n\t\t& (J2000)\t& (J2000)\t& ($L_{\\sun}$)\t&(km s$^{-1}$)\t\t&\t\t\t& (Mpc)\t& (pc arcsec$^{-1}$) \\\\\n\\hline\nArp 220\t\t&15 34 57.24\t&+23 30 11.2 \t &1.4 $\\times$ 10$^{12}$\t&5434\t&0.018126\t&81.3\t&\t390 \\\\\nNGC 6240\t&16 52 58.89\t&+02 24 03.7\t &5.4 $\\times$ 10$^{11}$\t&7340\t&0.02448\t&108\t\t&\t520 \\\\\n\\hline\n\\end{tabular}\n\\tablefoot{[1] 9-year WMAP parameters \\citep{Hinshaw2012}: $H_{o}$ = 69.3, $\\Omega_{\\rm{matter}}$ = 0.28, $\\Omega_{\\rm{vacuum}}$ = 0.72.[2] $L_{\\rm{FIR}}$\\ reference: \\cite{Sanders2003}. [3] The (0,0) position is the phase center of the observations.}\n\\end{table*}\n\nIn this paper, we present new Plateau de Bure Interferometer (PdBI) observations of HCN\\ and HCO$^+$~$J$~=~1$-$0\\ and C$_{\\rm{2}}$H~$N$~=~1$-$0\\ for both Arp 220 and NGC 6240 (Table \\ref{tab:sourcesum}). In addition, we present new PdBI observations of $^{13}$CO $J$=1$-$0\\ and $J$ = 2-1, CS~$J$~=~2$-$1\\ and $J$ = 5-4, HNCO~$J_{k,k'}$~=~5$_{0,4}$~-~4$_{0,4}$, CH$_{\\rm{3}}$CN(6-5), SiO $J$ = 2-1, and HN$^{13}$C $J$ = 1-0 and Atacama Large Millimeter\/submillimeter Array (ALMA) Science Verification (SV) observations of CS~$J$~=~4$-$3\\ and CH$_{\\rm{3}}$CN(10-9)\\ for Arp 220. These observations will be made available to the public. The paper is broken down into the following sections: In Section 2, we describe the observations and reduction. In Section 3, we present integrated brightness temperature line ratio maps to show the varying conditions of the ISM across these sources. In Section 4, we present a radiative transfer analysis of $^{12}$CO, $^{13}$CO\\ and C$^{18}$O\\ for Arp 220 at $\\sim$700~pc scales. In Section 5, we discuss the molecules detected in both sources, the results of the radiative transfer analysis of Arp 220, the [$^{12}$CO]\/[$^{13}$CO]\\ and [$^{12}$CO]\/[C$^{18}$O]\\ abundance ratios found in Arp 220 and a comparison of the HCN\/HCO$^{+}$ line ratios. In Section 6, we summarize the major results. We end the paper with an appendix describing the release of the data online and the continuum from the observations.\n\n\n\n\\section{Observations}\n\\subsection{PdBI\nFor the line-ratio investigations in this paper, we used\n$^{12}$CO $J$=1$-$0\\ and $J$~=~2--1\nPdBI data sets on Arp~220 and NGC~6240, that were \npartly data from earlier publications (for Arp 220: Downes \\& Solomon 1998;\nDownes \\& Eckart 2007; K\\\"onig et al.\\ 2012; and for NGC~6240: \nTaconni et al.\\ 1999). We also use the SMA observations of $^{12}$CO $J$=3$-$2\\ in Arp~220 published in \\cite{Sakamoto2008} and the PdBI observations of $^{12}$CO $J$=1$-$0\\ in NGC~6240 published in \\cite{Feruglio2013a}.\n\nOther, previously unpublished, data sets are, for Arp~220, \n$^{13}$CO $J$=1$-$0\\ at 3\\,mm and $^{13}$CO $J$=2$-$1\\ at 1.4\\,mm observed simultaneously, \nand also CS~$J$~=~2$-$1\\ at 3\\,mm and CS~$J$~=~5$-$4\\ at 1\\,mm observed simultaneously.\nAdditional data sets included\nHCN, HN$^{13}$C, HCO$^+$~$J$~=~1$-$0, C$_{\\rm{2}}$H~$N$~=~1$-$0,\nand SiO $J$~=~2--1 (v=0) all observed simultaneously.\n\nFor most of these observations, the six 15\\,m antennas\nwere arranged with spacings from 24\\,m to 400\\,m. 
The longer baselines,\nobserved in winter, had phase errors $\\leq 40^\\circ$ at 1.4\\,mm, and\n$\\leq 15^\\circ$ at 3\\,mm. Short spacings ($\\leq 80$\\,m), observed in\nsummer at 3\\,mm, had r.m.s.\\ phase errors $\\leq 20^\\circ$.\n\nThe SIS receiver noise plus spillover\nand sky noise gave typical equivalent system temperatures outside the\natmosphere of 150\\,K at 3\\,mm (86\\,GHz) in the lower sideband, and 250\nto 400\\,K at 1.3\\,mm in upper and lower sidebands separated by 3\\,GHz,\nwith the upper band typically at 215 to 225\\,GHz. The spectral\ncorrelators covered 1700\\,km s$^{-1}$ \\ at 3\\,mm and 800\\,km s$^{-1}$ \\ at 1.4\\,mm,\nwith instrumental resolutions of 8 and 4\\,km s$^{-1}$ , respectively.\n\nThese raw data were then smoothed to channels of 10, 20, and 40\\,km s$^{-1}$ . \nThe primary amplitude calibrators were 3C273 (a variable source, but\ntypically 18\\,Jy at 3\\,mm and 13\\,Jy at 1.4\\,mm, at the \ntime of the observations), and MWC349\n(a mostly non-varying source, \nwith 1.0 and 1.7\\,Jy at 3 and 1.4\\,mm during the years of\nthe observations). The uncertainties\nin the flux scales are typically $\\pm 5$\\% at 3\\,mm and $\\pm 10$\\% at 1.4\\,mm.\n\nIn some of the observing epochs, \nthe observing program monitored phases every 20\\,min, at 3 and 1.4\\,mm\nsimultaneously, with the same phase calibrators used in earlier\nobservations of these sources (see Table~1 of Downes \\& Solomon 1998). Prior to 2004, the\ndata processing program used the 1.4\\,mm total power to correct\namplitudes and phases at 3 and 1.4\\,mm for short-term changes in\natmospheric water vapor. After 2004, this was done with \nwater-vapour monitoring receivers at 22 GHz on each antenna.\nA post-observation calibration program took the 3\\,mm curve of phase\nversus time, scaled it to 1.4\\,mm, then subtracted it from the observed\n1.4\\,mm calibrator phases, and then fit the phase difference between\nthe two receivers. All visibilities are weighted by the integration\ntime and the inverse square of system temperature. Most maps were\nmade with this ``natural weighting'' of the $uv$ data.\n\nSome sources were observed with improved IRAM receivers in February\n2008, in a $2\\times 1$\\,GHz spectroscopic mode that simultaneously covered the lines of HCN, HCO$^+$~$J$~=~1$-$0\\ and C$_{\\rm{2}}$H~$N$~=~1$-$0, \nin the newer PdBI extended configuration with antenna\nspacings up to 760\\,m. For these later observations, typical receiver \ntemperatures were 50\\,K at 3\\,mm.\n\nAll data reductions were done with the MAPPING program in the standard\nIRAM GILDAS\\footnote{http:\/\/www.iram.fr\/IRAMFR\/GILDAS} package.\n\nThe datacubes were converted to FITS files and imported into \\verb=CASA= \\citep{McMullin2007} v4.7.1 for data analysis. We created integrated intensity maps using a 1$\\sigma$ cutoff for weak lines such as, for example, HN$^{13}$C and a 5$\\sigma$ cutoff for strong emission lines such as, for example, $^{12}$CO, HCN and HCO$^+$. These integrated intensity maps are presented in Figures \\ref{fig:arp220maps} and \\ref{fig:ngc6240maps}. Table \\ref{tab:observations} presents the various molecular lines detected and their observational properties. We also present spectra in Figures \\ref{fig:arp220spec} and \\ref{fig:n6240spec}. \n\n\\subsection{ALMA\nArp~220 was a target for Band 5 SV observations on 16 July 2016. The four spectral windows, of 1.875GHz bandwidth each, were placed on H$_{2}$O (3$_{13}$ - 2$_{20}$), HNC $J$=2-1, CS $J$=4-3 and CH$_{3}$OH (4$_{31}$ - 3$_{30}$). 
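To make the moment-map step concrete (the same procedure applies to both the PdBI and the ALMA cubes), a minimal \\verb=CASA= sketch is given here; the file names are placeholders, while the rms and the 5$\\sigma$ cutoff follow the HCN~$J$~=~1$-$0\\ values quoted in the text and in Table~\\ref{tab:observations}.
\\begin{verbatim}
# Minimal CASA (v4.x) sketch of the moment-0 step described above
# (to be run at the CASA prompt, where these tasks are built in).
# File names are placeholders; rms and cutoff follow the HCN J=1-0 values
# quoted in the text (0.7 mJy/beam in 20 km/s channels, 5-sigma cut).
importfits(fitsimage='arp220_hcn10_cube.fits', imagename='arp220_hcn10.im')
rms = 0.7e-3                                   # Jy/beam per channel
immoments(imagename='arp220_hcn10.im', moments=[0], axis='spectral',
          includepix=[5.0 * rms, 1e9],         # keep only pixels above 5 sigma
          outfile='arp220_hcn10.mom0')
exportfits(imagename='arp220_hcn10.mom0', fitsimage='arp220_hcn10_mom0.fits')
\\end{verbatim}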
After checking for any obvious calibration flaws, we use the once phase-only self-calibrated delivered visibility dataset. We image the CS~$J$~=~4$-$3\\ and what we have identified as CH$_{\\rm{3}}$CN(10-9)\\ lines. The H$_{2}$O line has been presented in \\cite{Koenig2016b}. Using \\texttt{CASA} v4.7.1, we create datacubes of 20 km s$^{-1}$\\ channel widths with a natural weighting for maximum sensitivity. We created integrated intensity maps using a 2$\\sigma$ cutoff (Figure \\ref{fig:arp220maps}.) \n\n\n\\subsection{Short-Spacings Flux for $^{12}$CO\n\\cite{Greve2009} present single dish fluxes for both Arp~220 and NGC~6240. Comparing the fluxes measured for Arp~220 in \\citet[][;$^{12}$CO $J$=1$-$0\\ and $J$=2-1]{Downes1998} to that of \\cite{Greve2009} shows that the PdBI maps have recovered nearly all flux ($^{12}$CO $J$=1$-$0: 410 Jy km s$^{-1}$\\ vs 420 Jy km s$^{-1}$; $^{12}$CO $J$=2$-$1: 1100 Jy km s$^{-1}$\\ vs 1130 Jy km s$^{-1}$). \\cite{Sakamoto2008} made comparisons to single dish observations for $^{12}$CO $J$=3$-$2\\ and concluded that $\\sim$10\\% of the total flux is missing. Since 10\\% of the total flux is likely spread over the entire source, each point in Arp~220 will have insignificant missing $^{12}$CO $J$=3$-$2\\ flux when compared to the calibration uncertainty ($\\pm$ 15\\%). \n\nFor NGC~6240, \\cite{Feruglio2013a} made a comparison with single dish observations for $^{12}$CO $J$=1$-$0\\ and found an agreement in fluxes. The $^{12}$CO $J$=2$-$1\\ observations of \\cite{Tacconi1999} are missing $\\sim$30\\% of the total flux when compared to the flux measured in \\cite[][1220 Jy km s$^{-1}$\\ vs 1740 Jy km s$^{-1}$]{Greve2009}.\n\n\\begin{table*\n\\caption{Line Observations Summary}\\label{tab:observations}\n\\centering\n\\begin{tabular}{llccccc}\n\\hline \\hline\nSource \t& Line \t& $\\nu_{\\rm{rest}}$\t&Resolution \t& rms \t\t\t&Channel Width\t\t& Flux \\\\\n\t\t&\t\t& (GHz)\t\t\t&(arcsec)\t\t&(mJy beam$^{-1}$)\t&(km s$^{-1}$) \t& (Jy km s$^{-1}$) \\\\\n\\hline \nArp 220\t&$^{13}$CO $J$=1$-$0\\\t &110.201\t&2.0 x 1.3\t & 1.5\t&40\t&8.0 \\\\\n\t\t&$^{13}$CO $J$=2$-$1\\\t &220.399\t&0.9 x 0.7\t & 1.9\t&20\t&47.0 \\\\\n\t\t&HCO$^+$~$J$~=~1$-$0\\\t &89.189\t&1.3 x 0.8\t &0.7\t \t&20\t& 17.8 \\\\\n\t\t&HCN~$J$~=~1$-$0\\\t &88.631\t&1.3 x 0.8\t &0.7\t \t&20\t& 45.0 \\\\\n\t\t&CS~$J$~=~2$-$1\\\t &97.981\t&1.6 x 1.1\t &1.3\t \t&20\t&17.1 \\\\\n\t\t&CS~$J$~=~4$-$3\\\t&195.954\t&0.9 x 0.7 & 1.4\t&20\t&61.0 \\\\\n\t\t&CS~$J$~=~5$-$4\\\t &244.936\t&0.9 x 0.7\t &2.5\t \t&40\t&40.5 \\\\\n\t\t&C$_{\\rm{2}}$H~$N$~=~1$-$0\\\t &87.3\/4\t&1.3 x 0.8\t &0.8\t \t&20 &8.8 \\\\\n\t\t&CH$_{\\rm{3}}$CN(6-5)\\ & 110.328 - 110.385\t&2.0 x 1.3\t& 1.5\t\t&40\t& $>$3.9 \\\\\n\t\t&CH$_{\\rm{3}}$CN(10-9)\\ & 183.674 - 183.964\t&0.9 x 0.8\t& 2.4\t\t&20\t& 19.7 \\\\\t\t\n\t\t&HNCO~$J_{k,k'}$~=~5$_{0,4}$~-~4$_{0,4}$\\ & 109.906\t&2.0 x 1.3\t\t&1.5\t&40\t& $>$2.0\\\\\n\t\t&HN$^{13}$C $J$ = 1-0 &\t87.091\t&1.3 x 0.8& 0.8\t&40\t& 0.7 \\\\\n\t\t&SiO $J$=2-1 & 86.847\t&1.3 x 0.8& 0.8\t&40\t& 5.5 \\\\\n\t\t&\t \t &\t\t\t &\t \t&\t& \\\\\nNGC 6240&HCO$^+$~$J$~=~1$-$0\\\t &89.189\t&1.1 x 1.1\t &1.0\t \t&20& \t9.9 \t\\\\\t\n\t\t&HCN~$J$~=~1$-$0\\\t &88.63\t&1.1 x 1.1\t &1.0\t \t&20& \t6.3\t\\\\\t\n\t\t&C$_{\\rm{2}}$H~$N$~=~1$-$0\\\t &87.3\/4\t&1.1 x 1.1\t &0.3\t \t&40& \t1.3\t\\\\\t\t\n\\hline\n\\end{tabular} \\\\\n\\textbf{NOTE}: [1] Splatalogue (http:\/\/www.splatalogue.net\/) was used to obtain frequencies. [2] CH$_{3}$CN is composed of several lines that span the stated frequency range. 
[3] Calibration uncertainties are $\\sim$5$\\%$ and $\\sim$10$\\%$ for 3mm and 1mm observations, respectively.\n\\end{table*}\n\\begin{figure*}[!htbp]\n\\centering\n$\\begin{array}{c@{\\hspace{0.1in}}c@{\\hspace{0.1in}}c}\n\\includegraphics[scale=0.2]{fig1-map-13CO10arp220.pdf} &\\includegraphics[scale=0.2]{fig1-map-13CO21arp220.pdf} & \\includegraphics[scale=0.2]{fig1-map-HN13C10arp220.pdf} \\\\\n\\includegraphics[scale=0.2]{fig1-map-HNCO54arp220.pdf} &\\includegraphics[scale=0.2]{fig1-map-SiO21arp220.pdf} &\\includegraphics[scale=0.2]{fig1-map-C2Harp220.pdf} \\\\\n \\includegraphics[scale=0.2]{fig1-map-CS21arp220.pdf} & \\includegraphics[scale=0.2]{fig1-map-CS43arp220.pdf} & \\includegraphics[scale=0.2]{fig1-map-CS54arp220.pdf} \\\\\n \\end{array}$\n $\\begin{array}{c@{\\hspace{0.1in}}c}\n \\includegraphics[ scale=0.2]{fig1-map-HCN10arp220.pdf} & \\includegraphics[scale=0.2]{fig1-map-HCOP10arp220.pdf} \\\\\n\\includegraphics[scale=0.2]{fig1-map-CH3CN65arp220.pdf} &\\includegraphics[scale=0.2]{fig1-map-CH3CN109arp220.pdf} \n\\end{array}$\n\\caption[]{Arp 220: The ellipse in the bottom left corner of each map represents the synthesized beam size. (TOP ROW) $^{12}$CO $J$=1$-$0, $J$=2-1 and HN$^{13}$C $J$=1-0 with contours corresponding to [4, 6, 8, 10] $\\times$ 0.45 Jy beam$^{-1}$ km s$^{-1}$, [4, 8, 12, 16, 20, 24] $\\times$ 0.5 Jy beam$^{-1}$ km s$^{-1}$\\ and [3, 4, 5, 6] $\\times$ 0.135 Jy beam$^{-1}$ km s$^{-1}$, respectively. (2$^{nd}$ ROW) HNCO~$J_{k,k'}$~=~5$_{0,4}$~-~4$_{0,4}$, SiO $J$=2-1 and C$_{\\rm{2}}$H~$N$~=~1$-$0\\ with contours corresponding to [4, 6, 8,10] $\\times$ 0.36Jy beam$^{-1}$ km s$^{-1}$, [4, 6, 8, 10, 15, 20, 25, 30] $\\times$ 0.135 Jy beam$^{-1}$ km s$^{-1}$\\ and [5, 10, 15, 20, 25] $\\times$ 0.135 Jy beam$^{-1}$ km s$^{-1}$, respectively. (3$^{rd}$ ROW) CS~$J$~=~2$-$1, $J$=4-3 (ALMA) and $J$=5-4 with contours corresponding to [4, 8, 12, 16, 20, 24] $\\times$ 0.5, 1.2 and 0.29 Jy beam$^{-1}$ km s$^{-1}$, respectively. (4$^{th}$ ROW) HCN~$J$~=~1$-$0\\ and HCO$^+$~$J$~=~1$-$0\\ with contours corresponding to [5, 10, 15, 20, 25] $\\times$ 0.52 and 0.15 Jy beam$^{-1}$ km s$^{-1}$, respectively. (BOTTOM ROW) CH$_{\\rm{3}}$CN(6-5)\\ and (10-9) with contours corresponding to [4, 6, 8, 10] and [6, 12, 18, 24, 30, 40, 50] $\\times$ 0.33 Jy beam$^{-1}$ km s$^{-1}$, respectively.\n}\n\\label{fig:arp220maps}\n\\end{figure*}\n\\begin{figure*}[htbp]\n\\centering\n$\\begin{array}{c@{\\hspace{0.1in}}c@{\\hspace{0.1in}}c}\n\\includegraphics[ scale=0.2]{fig2-map-HCN10-n6240.pdf} & \\includegraphics[scale=0.2]{fig2-map-HCOP10-n6240.pdf} & \\includegraphics[scale=0.2]{fig2-map-C2H10-n6240.pdf}\\\\ \n\\end{array}$\n\\caption[]{NGC 6240: The ellipse in the bottom left corner of each map represents the synthesized beam size. 
HCN $J$=1-0 and HCO$^+$~$J$~=~1$-$0\\ with contours corresponding to [3, 6, 9, 12, 15, 18] $\\times$ 0.184 Jy beam$^{-1}$ km s$^{-1}$\\ and C$_{\\rm{2}}$H~$N$~=~1$-$0\\ with contours corresponding to [1, 2, 3, 4, 5] $\\times$ 0.2 Jy beam$^{-1}$ km s$^{-1}$.}\n\\label{fig:ngc6240maps}\n\\end{figure*}\n\\begin{figure*}[!htbp]\n\\centering\n$\\begin{array}{c@{\\hspace{0.1in}}c}\n\\includegraphics[ scale=0.55]{f4a_13co10.pdf} & \\includegraphics[scale=0.55]{f4a_13co21.pdf} \\\\\n \\includegraphics[scale=0.55]{f4a_c2h.pdf} & \\includegraphics[scale=0.55]{f4a_hcn.pdf} \\\\\n\n \\includegraphics[scale=0.55]{f4a_ch3cn.pdf} &\\includegraphics[scale=0.55]{f4a_CS.pdf}\n \\end{array}$\n\\caption[]{\\textbf{($Top$ and $Middle$) Spectra averaged over a 1\\arcsec\\ diameter aperture centered at Arp 220W and Arp 220E (see Table \\ref{tab:lineratios} and Section \\ref{sec:rad}) and ($Bottom$) Normalized spectra of each of the CS lines averaged over a 3\\arcsec\\ diameter aperture. }}\n\\label{fig:arp220spec}\n\\end{figure*}\n\\begin{figure*}[!htbp]\n\\centering\n$\\begin{array}{c@{\\hspace{0.1in}}c}\n\\includegraphics[ scale=0.55]{f4b_c2h.pdf} & \\includegraphics[scale=0.55]{f4b_hcn.pdf} \\\\\n \\end{array}$\n\\caption[]{ \\textbf{Spectra averaged over a 1\\arcsec\\ diameter aperture for NGC 6240 centered on position (0,0).} }\n\\label{fig:n6240spec}\n\\end{figure*}\n\n\n\\section{Line Ratios\nFor Arp 220, we create the following integrated line brightness temperature (T$_{B}$) ratio \nmaps for $^{12}$CO\\ and $^{13}$CO:\n\nr$_{21}$ = T$_{\\rm{B}} ^{^{12}\\rm{CO}(2-1)}$ \/ T$_{\\rm{B}} ^{^{12}\\rm{CO}(1-0)}$, \n\nr$_{32}$ = T$_{\\rm{B}} ^{^{12}\\rm{CO}(3-2)}$ \/ T$_{\\rm{B}} ^{^{12}\\rm{CO}(2-1)}$, \n\n$^{13}$r$_{21}$ = T$_{\\rm{B}} ^{^{13}\\rm{CO}(2-1)}$ \/ T$_{\\rm{B}} ^{^{13}\\rm{CO}(1-0)}$,\n\nR$_{10}$ = T$_{\\rm{B}} ^{^{12}\\rm{CO}(1-0)}$ \/ T$_{\\rm{B}} ^{^{13}\\rm{CO}(1-0)}$,\n\nR$_{21}$ = T$_{\\rm{B}} ^{^{12}\\rm{CO}(2-1)}$ \/ T$_{\\rm{B}} ^{^{13}\\rm{CO}(2-1)}$.\n\nWe also create integrated T$_{B}$ line ratio maps for molecules other than CO as follows:\n\nH$_{10}$ = T$_{\\rm{B}} ^{\\rm{HCN}(1-0)}$ \/ T$_{\\rm{B}} ^{\\rm{HCO^{+}}(1-0)}$ and\n\nCS \/ HNCO = T$_{\\rm{B}}^{\\rm{CS(5-4)}}$ \/ T$_{\\rm{B}}^{\\rm{HNCO(5-4)}}$ (Figure \\ref{fig:arp220lineratios}).\n\nFor NGC 6240, we show maps of the T$_{B}$ line ratios of \nr$_{21}$ and H$_{10}$ (Figure \\ref{fig:ngc6240lineratios}; with both quantities defined as for Arp 220). Table \\ref{tab:lineratios} gives a summary of the observed line ratios. \n\nTo match the physical scales that we analyze, we smooth the data cubes using a Gaussian kernel to match angular resolution. For Arp 220, we smooth the data to the resolution of $^{13}$CO $J$=1$-$0\\ (Table \\ref{tab:observations}) and for NGC 6240, we use a compromised resolution of 1.2\\arcsec, limited by the resolution of the $^{12}$CO $J$=1$-$0\\ observations of \\citet{Feruglio2013a}. We applied a 5$\\sigma$ cut to each map used to produce the the ratio maps. \n\nSince $^{12}$CO\\ is believed to be optically thick, the R$_{10}$ and R$_{21}$ line ratios give a lower limit to the true [$^{12}$CO]\/[$^{13}$CO]\\ abundance ratio. 
To illustrate this point, we start with the most general equation for the R line ratios,\n\\begin{align}\\label{eqn:ratio}\n\\rm{R} &= \\frac{T_{\\rm{B}}^{^{12}\\rm{CO}}}{T_{\\rm{B}}^{^{13}\\rm{CO}}} \\\\\n\\rm{R} &= \\frac{T_{\\rm{EX}}^{^{12}\\rm{CO}}}{T_{\\rm{EX}}^{^{13}\\rm{CO}}} \\frac{(1-\\rm{e}^{-\\tau_{^{12}CO}})}{(1-\\rm{e}^{-\\tau_{^{13}CO}})}\n\\end{align}\nwhere T$_{\\rm{EX}}$ is the excitation temperature, and $\\tau_{\\rm{^{12}CO}}$ and $\\tau_{\\rm{^{13}CO}}$ are the optical depths of $^{12}$CO\\ and $^{13}$CO, respectively. If both $^{12}$CO\\ and $^{13}$CO\\ were in local thermal equilibrium (LTE), so that their excitation temperatures were equal, then Equation \\ref{eqn:ratio} would simplify to \n\\begin{equation}\\label{eqn:ratio2}\n\\rm{R} = \\frac{(1-\\rm{e}^{-\\tau_{^{12}CO}})}{(1-\\rm{e}^{-\\tau_{^{13}CO}})} \n\\end{equation}\nIn addition, if the $^{12}$CO\\ emission is sufficiently optically thick, so that $(1-\\rm{e}^{-\\tau_{^{12}CO}})$ $\\rightarrow$ 1, and if $^{13}$CO\\ is sufficiently optically thin, so that $(1-\\rm{e}^{-\\tau_{^{13}CO}})$ $\\rightarrow$ $\\tau_{^{13}CO}$, then Equation \\ref{eqn:ratio2} can be further simplified to \n\\begin{align}\\label{eqn:ratio3}\n\\rm{R} &= \\frac{1}{\\tau_{^{13}CO}}\\\\\n\\rm{R} &= \\frac{\\tau_{^{12}CO}}{\\tau_{^{13}CO}} \\frac{1}{\\tau_{^{12}CO}}\\\\\n\\rm{R} &= \\frac{[^{12}CO]}{[^{13}CO]}\\frac{1}{\\tau_{^{12}CO}}\n\\end{align}\nwhere the ratio of the optical depths of the isotopologues is equivalent to the abundance ratio ($\\tau$ $\\propto$ column density). In this case, the observed ratio R of line brightness temperatures will always be less than the true abundance ratio, [$^{12}$CO]\/[$^{13}$CO], because of the attenuating factor of the $^{12}$CO\\ optical depth. \n\nOnly if $^{12}$CO\\ and $^{13}$CO\\ are both optically thin will the observed ratio of line brightness temperatures directly trace the abundance ratio, provided that the $^{12}$CO\\ and $^{13}$CO\\ excitation temperatures are equal.\n\nAs is well known, however, because of resonant trapping in the CO lines, a more realistic approach is to use one of the standard non-LTE approximations, which allows for different excitation temperatures in the different CO lines, and this is what we actually do (Section 4).\n\n\\begin{figure*}[!htbp]\n\\centering\n$\\begin{array}{c@{\\hspace{0.5in}}c}\n\\includegraphics[ scale=0.3]{figA_ratio_co2110_a220.pdf} & \\includegraphics[scale=0.3]{figA_ratio_co3221_a220.pdf} \\\\\n\\includegraphics[ scale=0.3]{figA_ratio_co121310_a220.pdf} & \\includegraphics[ scale=0.3]{figA_ratio_co121321_a220.pdf}\\\\\n\\includegraphics[ scale=0.3]{figA_ratio_co13co2110_a220.pdf} & \\includegraphics[scale=0.3]{figA_ratio_hcnhco_a220.pdf} \\\\\n\\includegraphics[ scale=0.3]{figA_ratio_cshnco54_a220.pdf} \\\\\n\\end{array}$\n\\caption{Arp~220 line ratios: (TOP) r$_{21}$ and r$_{32}$, (2nd ROW) R$_{10}$ and R$_{21}$, (3rd ROW) $^{13}$r$_{21}$ and H$_{10}$, (BOTTOM) CS \/ HNCO. The ellipse in the bottom left corner of each map represents the synthesized beam size. The black contours represent the high-resolution $^{12}$CO $J$=2$-$1\\ emission published in \\cite{Downes2007} to guide the eye to the positions of the two nuclei of Arp~220. Note that the resolution of our observations does not spatially resolve the two nuclei. 
}\n\\label{fig:arp220lineratios}\n\\end{figure*}\n\\begin{figure*}[!htbp]\n\\centering\n$\\begin{array}{c@{\\hspace{0.5in}}c}\n\\includegraphics[ scale=0.3]{fig_ratio_ngc6240_hcnhco.pdf} & \\includegraphics[ scale=0.3]{fig_ratio_ngc6240_co2110.pdf} \\\\\n\\end{array}$\n\\caption[]{NGC 6240 Line ratios: (LEFT) H$_{10}$ and (RIGHT) r$_{21}$. The ellipse in the bottom left corner of each map represents the synthesized beam size. The black contours represent the 86~GHz continuum emission to guide the eye to the positions of the two nuclei of NGC~6240.}\n\\label{fig:ngc6240lineratios}\n\\end{figure*}\n\\begin{table*\n\\caption{Line Ratios}\\label{tab:lineratios}\n\\centering\n\\begin{tabular}{lcccccccc}\n\\hline\\hline\n\t\t\t& \\multicolumn{2}{c}{\\underline{Position}} & & & & & \\\\\n\t\t\t&RA (J2000)&Dec (J2000)&r$_{\\rm{21}}$\t& r$_{\\rm{32}}$\t & R$_{\\rm{10}}$\t&R$_{\\rm{21}}$\t&$^{13}$r$_{\\rm{21}}$ & H$_{10}$\t\t \\\\\n\\hline\nArp 220E\t\t&15 34 57.342 \t&+23 30 11.610\t &0.8 $\\pm$ 0.1\t&0.8 $\\pm$ 0.1\t\t&23 $\\pm$ 3\t&19 $\\pm$ 2\t&0.9 $\\pm$ 0.1 &2.4 $\\pm$ 0.2\t \\\\\nArp 220W\t\t&15 34 57.197\t&+23 30 11.595\t &1.1 $\\pm$ 0.1\t&0.60 $\\pm$ 0.08\t&38 $\\pm$ 4\t&19 $\\pm$ 2\t&2.3 $\\pm$ 0.3&\t2.7 $\\pm$ 0.2\\\\\nArp 220C\t\t&15 34 57.258\t&+23 30 11.592\t &1.2 $\\pm$ 0.1 &0.60 $\\pm$ 0.08\t&31$\\pm$ 3\t&26 $\\pm$ 3\t&1.3 $\\pm$ 0.1&2.5 $\\pm$ 0.2\t\\\\\nNGC 6240\t&16 52 58.900 & +02 24 03.810\t &1.0 $\\pm$ 0.1 &...\t&...\t&...\t&...\t&0.80 $\\pm$ 0.09\t \\\\\n\\hline \n\\end{tabular}\n\\newline\n\\textbf{NOTE}: The positions for Arp 220E\/W do not correspond to the positions of the two nuclei. Our resolution is too poor to distinguish the nuclei spatially, therefore, the Arp 220E\/W positions are $\\sim$ 2\\arcsec\\ apart (greater than the synthesized beam major axis) with Arp 220C as the central position. Position error = $\\pm$0.1\\arcsec\n\\end{table*}\n\n\\section{Radiative Transfer Analysis}\\label{sec:rad\nTo model the $^{12}$CO\\ emission (and that of the rarer CO isotopologues), we use the escape-probability radiative transfer program RADEX \\citep{vanderTak2007}. To find the most likely RADEX solution, given the observed line strengths, and other constraints, we use the Monte Carlo Markov Chain code\\footnote{https:\/\/github.com\/jrka\/pyradexnest} of \\citet{Kamenetzky2014}. This code implements the nested sampling algorithm MultiNest \\citep{Feroz2009} using its Python wrapper PyMultiNest \\citep{Buchner2014} to constrain parameters. As stated in \\citet{Kamenetzky2014}, ``The algorithm `nests inward' to subsequently smaller regions of high-likelihood parameter space. Unlike calculating the likelihood using a grid method, the algorithm can focus on high likelihood regions and thus estimate parameter constraints more efficiently.\" Model points are generated using RADEX with the following input parameters: kinetic temperature (\\tkin), column density of molecular species X per unit line width ($N_{\\rm{X}}$\/$\\Delta$V) and volume density of the collision partner, molecular hydrogen ($n_{\\rm{H_{2}}}$). In addition, the resulting flux is allowed to scale uniformly down by an area filling factor, $\\Phi_{\\rm{A}}$\\ $\\leq$1. We also implement three priors: \n\\begin{enumerate}\n\\item A column length to constrain the diameter of the molecular emission region. This prior assists in constraining the column and volume densities. We estimate the upper limit to the column length to be the diameter of the synthesized beam ($\\sim$~700~pc). 
\n\\item A dynamical mass ($M_{\\rm{dyn}}$) as an upper limit to the total mass that can be contained within the column density. We use the equation from \\cite{Wilson1990} assuming a diameter of $\\sim$700~pc and velocity FWHMs from literature presented below.\n\\item An optical depth range of 0 to 100. An optical depth below 0 implies maser emission and the upper limit of 100 is recommended by the RADEX documentation.\n\\end{enumerate}\n We refer the reader to Kamenetzky et al. (2014) for more details. \n\nWe model three molecular species simultaneously: $^{12}$CO, $^{13}$CO\\ and C$^{18}$O. The $^{12}$CO\\ lines modelled are the $J$=1-0, 2-1 \\citep{Downes1998} and 3-2 \\citep{Sakamoto2008}. We use a line ratio of $^{13}$CO\/C$^{18}$O\\ = 1 \\citep{Matsushita2009,Greve2009} to estimate the emission of C$^{18}$O\\ at our angular resolution ($\\sim$2\\arcsec). Due to the lack of resolution elements across the $^{13}$CO\\ emission of Arp 220, we model the molecular gas at the central peak position of $^{13}$CO $J$=1$-$0\\ (see Table \\ref{tab:observations}; Arp 220C) since the peak emission falls in between the two nuclei and 1\\arcsec\\ to each side of Arp 220C, which will place Arp 220E\/W more than one resolution beam apart. We stress that Arp 220E\/W are not the positions of the nuclei and are only modelled for completeness. For Arp 220W, we used only the $J$=1-0 and 2-1 lines because the addition of the $J$=3-2 line resulted in poor SLED fits, likely indicating the presence of a second molecular gas component that cannot be fit with our data. We adopt the following linewidths: 120 km s$^{-1}$\\ and 250 km s$^{-1}$\\ for Arp~220W and Arp~220E measured using ALMA $^{12}$CO $J$=1$-$0\\ observations at 0.09\\arcsec\\ (37~pc) \\citep{Scoville2017}, respectively, and 320 km s$^{-1}$\\ for Arp 220C \\citep{Downes1998} measured from PdBI $^{12}$CO $J$=1$-$0\\ observations. The mean, best fit and $\\pm$1$\\sigma$ results of \\tkin, $n_{\\rm{H_{2}}}$, $N_{\\rm{^{12}CO}}$, $\\Phi_{\\rm{A}}$, thermal pressure, molecular gas mass, CO-to-H$_{2}$ conversion factor ($\\alpha_{\\rm{CO}}$) and the relative abundances of $^{13}$CO\\ and C$^{18}$O\\ to $^{12}$CO\\ per model point are presented in Table \\ref{tab:results}. The best fit SLED for Arp 220C is presented in Figure \\ref{fig:sleds}. The (marginal) probability of a single parameter is computed by integrating over all other parameters. We present the marginal probability distributions of \\tkin, $n_{\\rm{H_{2}}}$, $N_{\\rm{^{12}CO}}$\\ and relative abundances of $^{13}$CO\\ and C$^{18}$O\\ as well as the 2D probability distribution of log$_{10}$(\\tkin) versus log$_{10}$($n_{\\rm{H_{2}}}$) in Figure \\ref{fig:arp220prob}. \n\nWe do not model NGC 6240 due to the lack of available high-resolution observations of $^{13}$CO\\ and the significant missing $^{12}$CO\\ flux in the existing high-resolution maps. We refer the reader to \\citet{Tunnard2015b} for an extensive radiative transfer modelling of the molecular gas in NGC 6240. 
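\n\nTo make the marginalization mentioned above concrete (the marginal probability of a single parameter is obtained by integrating over all other parameters), the short Python sketch below illustrates that step once the likelihood has been evaluated over the model parameters; the brute-force grid, flat priors and array shapes are illustrative assumptions only, and are not the RADEX plus MultiNest machinery actually used to produce Table \\ref{tab:results} and Figure \\ref{fig:arp220prob}.\n\\begin{verbatim}\n# Illustrative sketch only: marginalizing a gridded, unnormalized\n# posterior over (T_kin, n_H2, N_CO per dV, Phi_A). The real analysis\n# uses RADEX with MultiNest rather than a brute-force grid.\nimport numpy as np\n\nlogL = np.random.randn(30, 30, 30, 10)  # placeholder log-likelihoods\npost = np.exp(logL - logL.max())        # flat priors assumed\npost = post \/ post.sum()                 # normalize\n\np_tkin = post.sum(axis=(1, 2, 3))       # 1D marginal of T_kin\np_tkin_nh2 = post.sum(axis=(2, 3))      # 2D marginal of (T_kin, n_H2)\n\\end{verbatim}\n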
\n\n\\begin{table*\n\\scriptsize\n\\caption{Radiative Transfer Results}\\label{tab:results}\n\\centering\n\\begin{tabular}{clccccccccc}\n\\hline\\hline\n& \t&\\tkin\\\t&log$_{10}$($n_{\\rm{H_{2}}}$)\t&log$_{10}$(P) &log$_{10}$($\\Phi_{\\rm{A}}$)\t&log$_{10}$($<$$N_{\\rm{^{12}CO}}$\\ $>$) &log$_{10}$($M_{\\rm{H_{2}}}$) & $\\alpha_{\\rm{CO}}$\\ &\t[$^{12}$CO]\/[$^{13}$CO]\\\t\t& [$^{12}$CO]\/[C$^{18}$O]\\ \\\\\n&\t&(K)\t&(cm$^{-3}$)\t& (K cm$^{-3}$) &\t&(cm$^{-2}$) &($M_{\\sun}$) &($M_{\\odot}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$)\t& & \\\\ \\relax\n&[1]\t&[2]&[3]&[4]&[5]&[6]&[7]&[8]&[9]&[10] \\\\\n\\hline\nArp 220C&Mean \t& 130 & 2.99 & 5.2 &-0.30 \t& 19.33 \t&8.81\t&0.4 &125\t & 125\t\t\\\\\n\t&Best Fit \t\t& 38 \t& 3.35 &\t4.9& -0.036 \t&19.20 \t&8.69 \t&0.3 & 90\t &\t91\t\\\\\n\t&-1$\\sigma$ Value\t&37 \t& 2.55 &\t4.9 & -0.53 \t&18.97\t& 8.46 \t&0.2 &58\t &\t56\t\\\\\n\t&+1$\\sigma$ Value\t&690 & 3.40 &\t5.4 & -0.074\t&19.74 \t& 9.23 \t&1.0\t&309 & 323\\\\\n\t\t\\hline\n\t\t\n\nArp 220E &Mean \t\t&240 & 2.54 & 4.9 &-0.22 \t&19.04 \t&8.53\t&0.4 &93\t & 93\t\t\\\\\n\t\t&Best Fit \t\t&34 \t & 3.10 &\t4.6& -0.017 \t&19.24\t&8.73 \t& 0.6 & 159\t &\t159\t\\\\\n\t\t&-1$\\sigma$ Value\t&10 \t & 2.8 &\t4.3 & -0.17\t&19.03\t& 8.52 \t&0.4 &100\t &\t100\t\\\\\n\t\t&+1$\\sigma$ Value\t&110 & 3.4 &\t4.9 & 0.0\t&19.45 \t& 8.94 \t&1.0\t&255 & 255\\\\\n\t\t\\hline\n\nArp 220W&Mean \t& 105 & 3.78 & 5.8 &-0.32 \t& 19.36 \t&8.85\t&0.5 &151\t & 149\t\t\\\\\n\t&Best Fit \t\t& 300 \t& 3.2 &\t5.7& -0.7 \t&19.34 \t&8.83 \t&0.5 & 142& 143\t\\\\\n\t&-1$\\sigma$ Value\t&90 \t& 2.3 &\t5.0 & -0.91\t&18.96\t& 8.45 \t&0.2 &62\t &\t62\t\\\\\n\t&+1$\\sigma$ Value\t&1000 & 4.1 &\t6.3 & -0.48\t&19.71 \t& 9.20\t&1.1\t&330 & 330\\\\\n\t\t\\hline\t\n\t\t\n\\end{tabular}\n\\textbf{NOTES:} Col [1]: Statistic; Col[2] Kinetic Temperature; Col [3] Volume Density; Col[4]: Thermal Pressure Col[5]: Filling Factor; Col[6]: $^{12}$CO\\ Column Density; Col[7]: Molecular Gas (H$_{2}$) mass assuming [$^{12}$CO]\/[H$_{2}$] = 3 $\\times$ 10$^{-4}$; Col[8]: Conversion Factor $\\alpha_{\\rm{CO}}$\\ = 1.36$M_{\\rm{H_{2}}}$\/$L_{\\rm{CO}}$); Factor of 1.36 is to account for He; Col[9]: [$^{12}$CO]\/[$^{13}$CO]\\ abundance ratio; Col[10]: [$^{12}$CO]\/[C$^{18}$O]\\ abundance ratio; The $\\pm\\sigma$ values denote the range of values within 1$\\sigma$. \n\\end{table*}\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[ scale=0.6]{fig4-arp220-sled.pdf} \\\\\n\\caption[]{Arp 220C SLED for $^{12}$CO\\ (black x), $^{13}$CO\\ (green left triangle) and C$^{18}$O\\ (orange right triangle). Solid and dashed lines represent the most probable SLED solution for each molecule. }\n\\label{fig:sleds}\n\\end{figure}\n\\begin{figure*}[!htbp]\n\\centering\n$\\begin{array}{c@{\\hspace{0.5in}}c}\n\\includegraphics[ scale=0.4]{fig5a_Tempdist.pdf} & \\includegraphics[scale=0.4]{fig5b_Densdist.pdf} \\\\\n\\includegraphics[ scale=0.4]{fig5a_abundist.pdf} & \\includegraphics[ scale=0.4]{fig5a_bacdist.pdf} \\\\\n\\end{array}$\n\\includegraphics[scale=0.4]{fig5e_contour.pdf} \\\\\n\\caption[]{Probability distributions for Arp 220C. (\\textit{Top Left}) Temperature, (\\textit{Top Right}) Volume density, (\\textit{Middle Left}) Abundance of $^{13}$CO\\ and C$^{18}$O\\ relative to $^{12}$CO. (\\textit{Middle Right}) Average column density of $^{12}$CO. (\\textit{Bottom}) Temperature versus volume density. Contours are 40, 60, 80 and 95$\\%$ of the most probable solution. 
Dashed lines represent log(pressure).}\n\\label{fig:arp220prob}\n\\end{figure*}\n\n\n\\section{Discussion\n\\subsection{Molecules Detected\n\\textbf{$^{13}$CO\\ (\\textit{Second most common isotopologue of carbon monoxide, after $^{12}$CO.} )} -The $^{13}$CO $J$=2$-$1\\ peak falls near the western nucleus of Arp 220 while the $^{13}$CO $J$=1$-$0\\ peak falls in between the two nuclei. Both $^{13}$CO $J$=1$-$0\\ and $J$ = 2-1 have an east-west morphology instead of a north-east south-west morphology of the $^{12}$CO\\ maps which suggests that very little $^{13}$CO\\ emission originates from the diffuse kiloparsec disk surrounding the nuclei. The $^{13}$CO\\ lines (and other optically thin tracers) are important to truly trace the physical conditions of the molecular gas as they can probe deeper into the molecular gas ensemble than the optically thick $^{12}$CO\\ (i.e. brick wall effect). This is apparent in the line ratio maps (Figure \\ref{fig:arp220lineratios}) where the $^{12}$CO\\ line ratios do not vary greatly across the system while the $^{13}$CO\\ line ratios show an east-west gradient. \n\n\\textbf{HCN (\\textit{Hydrogen cyanide}) and HCO$^+$\\ (\\textit{Formyl ion})} - \\textit{Arp 220:} The HCN\\ and HCO$^+$~$J$~=~1$-$0\\ emission in Arp 220 peaks at the same position near the western nucleus which may suggest more dense gas in the west than found near the eastern nucleus. A 2D Gaussian fit of the emission shows that the size of HCN (1.4\\arcsec\\ $\\times$ 1.0\\arcsec; size deconvolved from the beam \\textbf{assuming a Gaussian structure}) is more compact than HCO$^+$ (1.9\\arcsec\\ $\\times$ 1.4\\arcsec). The spectra (Figure \\ref{fig:arp220spec} show absorption features in both HCN and HCO$^{+}$ similar to that seen in \\cite{Martin2016} and \\cite{Scoville2017}. The absorption appears to have the same depth in both lines, which seem to be absorbing nearly all of the continuum. The main difference between the two lines is that HCN has more emission. \n\n\\textit{NGC 6240:} The integrated line flux from the HCN\\ and HCO$^+$~$J$~=~1$-$0\\ is roughly half the single-dish flux measured by \\citet{Greve2009}. This is evidence of a larger scale HCN and HCO$^+$~$J$~=~1$-$0\\ emission that is filtered out by the interferometer. With our shortest baseline ($\\sim$ 80~m), 50\\% of the flux is at scales greater than 3\\arcsec\\ ($\\sim$1.5~kpc). Future observations need to include shorter spacings to recover all flux.\n\nAs also observed for Arp 220, the HCN emission is more compact than that for HCO$^+$\\ which is evident in the integrated intensity maps (Figure \\ref{fig:ngc6240maps}). A 2D Gaussian fit on the integrated intensity maps shows that HCN and HCO$^+$~$J$~=~1$-$0\\ are unresolved at our angular resolution (1.1\\arcsec) with an upper limit to the source size of 0.5\\arcsec\\ for HCN and 0.9\\arcsec\\ $\\times$ 0.5\\arcsec\\ for HCO$^+$. The upper limits still indicate that HCO$^+$~$J$~=~1$-$0\\ is coming from a more extended component which supports the analysis of \\citet{Greve2009} suggesting that HCN traces a denser gas phase than HCO$^+$.\n\n\n\\textbf{C$_{2}$H (\\textit{Ethynyl})} - The C$_{2}$H $N$ =1-0 transition consists of six hyperfine structure lines ($J$=3\/2 -1\/2 F=1-1, 2-1, 1-0 and $J$=1\/2-1\/2 F=1-1, 0-1, 1-0). 
Due to the low spectral resolution, the six lines are blended in our observations for both sources.\n\\textit{Arp 220:} The C$_{\\rm{2}}$H\\ emission is peaked near the western nucleus.\n\\textit{NGC 6240:} The C$_{\\rm{2}}$H\\ emission is peaked in between the two nuclei, similar to the HCN and HCO$^+$\\ emission.\n\n\\textbf{SiO (\\textit{Silicon monoxide})} - The SiO $J$=2-1 emission from Arp 220 has been previously observed by \\citet{Tunnard2015a} at a similar spatial resolution. The total flux we measure agrees with \\citet{Tunnard2015a}; however, we note that there may be contamination from H$^{13}$CO$^{+}$ $J$=1-0. \\citet{Tunnard2015a} explore this possible contamination. We also do not see an absorption feature with the SiO $J$=2-1 line profile in Arp 220W, however, for Arp 220E there is a hint of an absorption feature (Figure \\ref{fig:arp220spec}). Since the detection in the Arp 220E is poor, it is difficult to conclude whether this is a real feature. We note that \\cite{Tunnard2015a} do not see an absorption feature in SiO $J$=2-1.\n\n\\textbf{HNCO (\\textit{Isocyanic acid})} and \\textbf{CS (\\textit{Carbon monosulfide})}: Only part of the HNCO line is observed due to bandwidth constraints and may contain contamination from C$^{18}$O\\ due to the broad linewidths of Arp 220. \\citet{Martin2009} observed HNCO~$J_{k,k'}$~=~5$_{0,4}$~-~4$_{0,4}$\\ in Arp 220 using the IRAM 30m and measured a flux of $\\sim$5.5 Jy km s$^{-1}$, more than twice our measured flux. The emission of HNCO has a horizontal elongation in the vicinity of the two nuclei with very little emission in the surrounding disk. The peak of the emission of HNCO is near the position of the western nucleus (Figure \\ref{fig:arp220maps}); however, this may be influenced by the lack of the eastern portion of the line profile. \n\n\\cite{Greve2009} observed CS~$J$~=~2$-$1\\ and $J$=5-4 in Arp 220. Our high-resolution observations of CS~$J$~=~2$-$1\\ agree with the total flux within uncertainties while the CS~$J$~=~5$-$4\\ is missing 37$\\pm$12$\\%$ of the total flux in the single dish map. \\cite{Galametz2016} observed the CS~$J$~=~4$-$3\\ emission from Arp 220 using APEX. The ALMA SV CS~$J$~=~4$-$3\\ observations total flux agrees with the single dish observations. A comparison of the three CS lines (Figure \\ref{fig:arp220spec}), shows small differences in the line profiles. The CS~$J$~=~4$-$3\\ and $J$=5-4 have a double peak profile similar to what \\cite{Greve2009} noted, while the CS~$J$~=~2$-$1\\ line profile lacks the peak at higher velocities. The CS~$J$~=~5$-$4\\ line also has a third peak around an observed frequency of $\\sim$ 240.8GHz, but this may be due to line contamination from perhaps CH$_{3}$OH and\/or CH$_{2}$NH lines. \\cite{Martin2009} observed a similar third peak in the CS~$J$~=~5$-$4\\ line.\n\nCS is a known tracer of photo-dominated regions (PDRs), where the CS is enhanced through S$^{+}$ chemistry \\citep{Sternberg1995}. HNCO, however, is photodissociated easily by UV photons, but is enhanced in regions with shocks \\citep{Zinchenko2000}. \\citet{Martin2008} proposed that the relative abundance of HNCO to CS can be used as a good diagnostic tool to distinguish between the influence of shocks and molecular clouds illuminated by UV radiation. They determined that an offset region in IC 342 (denoted by IC 342* in their analysis) is dominated by shocks with a CS(5-4)\/HNCO(5-4) line ratio of $\\sim$0.3. M82 was determined to be dominated by PDRs with a CS(5-4)\/HNCO(5-4) line ratio $>$64. 
In the analysis of Martin et al., Arp 220 is a composite system with contributions from PDRs and shocks. Since we only observe part of the HNCO, our line ratios of CS(5-4)\/HNCO(5-4) (Figure \\ref{fig:arp220lineratios}) is strictly an upper limit to the true line ratios; however, despite this handicap, we see significant variations of the line ratio between the region of the two nuclei ($\\sim$7) and the kiloparsec disk ($\\sim$ 12-30). Assuming both lines are optically thin and in LTE, the line ratios trace the relative abundance and this strongly suggests that the ISM in the kiloparsec disk is not strongly influenced by shocks while there is very likely a more significant contribution of shocks, likely from stellar winds, possible outflows as suggested by \\citet{Sakamoto2009} and\/or supernova explosions, near the two nuclei. Higher spatial resolution imaging is required to properly separate these two regions. \n\n\\textbf{HN$^{13}$C (\\textit{Rare isotopologue of hydrogen isocyanide})} - The HN$^{13}$C emission is peaked oddly below the western nucleus by $\\sim$0.5 $\\pm$ 0.1\\arcsec. Nearly the entire emission is originating from the western side of Arp~220 which may be due to sensitivity since the line is weak and the eastern nucleus is the weaker emitter of the two. \\citet{Wang2016} observed HN$^{13}$C $J$=1-0 and 3-2 using the IRAM~30m and APEX single dish telescopes. They find that the emission is only detected in the blue component of Arp~220 (i.e. the western nucleus region) similar to what we see and our flux measurement (Table \\ref{tab:observations}) agrees with theirs. \n\\citet{Tunnard2015a} observed HN$^{13}$C $J$=3-2 in Arp~220 with the PdBI. The peaks of HN$^{13}$C $J$=3-2 are within the positions of the nuclei (both east and west; see Figure 7 of Tunnard et al. 2015a). The flux measured by \\citet{Tunnard2015a} is a factor of $\\sim$2.5 times higher than that of \\citet{Wang2016}; however, the line in the observations of \\citet{Tunnard2015a} is near the bandpass edge and as noted by \\citet{Wang2016}, the observations may have been difficult to continuum subtract properly. If the case is that the line was poorly continuum subtracted, the peak positions of HN$^{13}$C $J$=3-2 in the PdBI map may not be due to the molecular emission but of the 1.2mm continuum contamination. \nUsing the total flux of HNC~$J$~=~1-0 published in \\cite{Greve2009} and our flux measurement for HN$^{13}$C (Table \\ref{tab:observations}), the line ratio of HNC\/HN$^{13}$C is $\\sim$ 52. Since HNC is likely optically thick, this measured line ratio is a lower limit to the [$^{12}$C]\/[$^{13}$C] abundance. \n\n\\textbf{CH$_{3}$CN (\\textit{Methyl cyanide}}) - Due to bandwidth limitations, only part of the CH$_{3}$CN~(6-5) line is observed. The CH$_{3}$CN is partially blended with the $^{13}$CO $J$=1$-$0\\ line and is nearly as strong in emission as $^{13}$CO $J$=1$-$0. The CH$_{3}$CN~(12-11) line, which would be near the $^{13}$CO $J$=2$-$1, is not included in our bandpass; however a very small part of the CH$_{3}$CN~(12-11) line may be blended in the blue side of the $^{13}$CO $J$=2$-$1\\ line profile. The ALMA CH$_{3}$CN (10-9) observations observed the entire line profile (Figure \\ref{fig:arp220spec}). \n\nSince only the emission on the blue side of the CH$_{3}$CN~(6-5) line is observed, it is not surprising that the emission is peaked near the western nucleus; however, when compared to the CH$_{3}$CN (10-9), the peak positions agree. 
CH$_{3}$CN is believed to be a tracer of hot cores, where the line is relatively easy to detect \\citep[e.g.][]{Remijan2004,Purcell2006}, and is a good ISM thermometer \\citep{Guesten1985} if multiple transitions are observed. \\citet{Gratier2013} find that CH$_{3}$CN is more abundant in UV illuminated gas and thus may be a PDR tracer. Since U\/LIRGs do experience intense massive star formation, CH$_{3}$CN should be relatively bright (and is in Arp~220). Observations of other transitions of CH$_{3}$CN will be relatively easy in the ALMA\/NOEMA era, as demonstrated by the ALMA SV and early PdBI observations.\n\n\n\\subsection{Molecular Gas Conditions in Arp 220}\nThe radiative transfer modelling of the molecular gas in Arp 220 is consistent with a warm ($\\sim$~40~K), moderately dense (10$^{3.4}$~cm$^{-3}$) molecular gas component. The best fit molecular gas density is a factor of $\\sim$4 higher than what was found by \\citet{Rangwala2011} using $Herschel$ FTS observations; however, the best fit $n_{\\rm{H_{2}}}$\\ and \\tkin\\ of the two sets of models are consistent within the 1$\\sigma$ uncertainties. We note that this result should not be perceived as a lack of very dense molecular gas such as has been presented in \\citet{Scoville2015}. Our results are averaged over a $\\sim$700~pc scale, which may be dominated by a less dense medium, most likely from the diffuse kiloparsec disk. \n\nWithin the 1$\\sigma$ range, a warm, moderately dense gas component is favoured (Table \\ref{tab:results} and Figure \\ref{fig:arp220prob}). This result is very similar to that for other advanced mergers such as VV~114 \\citep{Sliwa2013} and NGC~2623 \\citep{Sliwa2017a}, where, on average over $\\sim$1~kpc scales, a warm, moderately dense molecular gas is the best fit solution. This can be explained by feedback from massive star formation, which these mergers will experience intensely over the course of the merger process, and possibly by outflows. The feedback in the form of stellar winds and supernovae from massive stars and the outflows that have been observed \\citep[e.g.][]{Cicone2014} are able to push back on the molecular gas, relieving the pressure while decreasing the molecular gas volume density. \\citet{Sakamoto2009} have observed a molecular P-Cygni profile suggesting $\\sim$100~km s$^{-1}$\\ outflows from the nuclei, which can explain our picture of a less dense medium traced by CO. Within early\/intermediate stage mergers we should see a denser molecular medium over roughly kiloparsec scales traced by CO and its isotopologues since the molecular gas is likely still inflowing to the central regions of the merging systems, thus increasing the pressure and volume density of the molecular gas \\citep{Sliwa2017a}. In Arp~220, the volume density and thus thermal pressure on $\\sim$kiloparsec scales will likely be on the rise as the two nuclei are on the verge of the final coalescence. \n\n\n\\subsection{[$^{12}$CO]\/[$^{13}$CO]\\ and [$^{12}$CO]\/[C$^{18}$O]\\ Abundance Ratios in Arp 220}\nThe R$_{10}$ and R$_{21}$ line ratios of Arp 220 (see Table \\ref{tab:lineratios}) on a position-by-position basis are typical for ULIRGs but abnormally high when compared with normal disk galaxies \\citep[e.g.][]{Davis2014}. The R$_{10}$ line ratio is higher (at some positions) than other LIRGs such as Arp~299 and Arp~55 \\citep{Casoli1999,Sliwa2017a}. 
However, these high R$_{10}$ line ratios are more common for advanced major mergers such as VV~114, IRAS~13120-5453 and NGC~2623 \\citep{Sliwa2013,Sliwa2017a,Sliwa2017c,Saito2015}. The possible explanations for the higher line ratios include optical depth effects, an increased [$^{12}$CO]\/[$^{13}$CO]\\ abundance ratio via some mechanism or excitation effects \\citep[e.g.][]{Casoli1992,Henkel1993,Taniguchi1999}.\n\nOur radiative transfer analysis allows the [$^{12}$CO]\/[$^{13}$CO]\\ and [$^{12}$CO]\/[C$^{18}$O]\\ abundances to vary as free parameters. We find that for Arp 220, the most probable abundance ratio is 90-125, roughly 3 times higher than the ISM value around the Galactic center \\citep[e.g.][]{Milam2005}. In our Galaxy, the abundance ratio increases with increasing radius. If Arp 220 had no enhancements in [$^{12}$CO]\/[$^{13}$CO], and had a\nsimilar [$^{12}$CO]\/[$^{13}$CO]\\ abundance gradient as observed in our Galaxy,\nthen we would expect values of [$^{12}$CO]\/[$^{13}$CO]~$\\leq$50 since the molecular gas in Arp~220 is concentrated within $\\sim$2-3~kpc. Even for NGC~6240, the most probable abundance ratio is 98 \\citep{Tunnard2015b}. \n\nOptical depth effects have been ruled out by our analysis as the best fit solutions have optical depths $>$ 1 for $^{12}$CO\\ and $\\ll$1 for $^{13}$CO.\n\nSelective photo-dissociation was thought to be a possible mechanism to drive the unusually high [$^{12}$CO]\/[$^{13}$CO]\\ abundance ratio we observe; however, this mechanism is ruled out because C$^{18}$O, a molecule that should be destroyed via photo-dissociation as fast as or faster than $^{13}$CO if it is less abundant, is relatively strong in emission. The likely dominant source of the increased [$^{12}$CO]\/[$^{13}$CO]\\ and [$^{12}$CO]\/[C$^{18}$O]\\ abundance ratios is stellar nucleosynthesis. Short-lived ($\\leq$~10~Myr) massive stars produce \\element[][12]{C}\\ and \\element[][18]{O}, but \\element[][13]{C}\\ is produced in the envelopes of long-lived low\/intermediate mass stars during the red giant phase. If extreme starbursts have a top-heavy initial mass function, then\nmany massive OB stars will be formed in the Arp 220 starbursts,\nwhose supernovae will enrich the ISM, and we would expect to\nsee increased abundances of \\element[][12]{C}\\ and \\element[][18]{O}. \\citet{Matsushita2009} and \\citet{Greve2009} show that the $^{13}$CO\/C$^{18}$O\\ line ratio is $\\sim$~1 for Arp 220. The $^{13}$CO\/C$^{18}$O\\ line ratio in spirals has been shown to be on average $\\sim$6 \\citep{JD2017}, 6 times higher than for Arp 220. This strongly suggests that stellar nucleosynthesis has enriched the ISM of Arp 220. This is similar to the results found in other advanced mergers such as NGC 2623 and IRAS 13120-5453 \\citep{Sliwa2017c}. \n\nFractionation of the \\element[][12]{C}\\ and \\element[][13]{C}\\ has also been considered as playing a role in the relative carbon isotope abundances that we find in U\/LIRGs. The most important reaction to exchange carbon isotopes is\n\\begin{equation}\\label{eqn:frac}\n^{13}\\rm{C^{+}} + \\rm{^{12}CO} \\rightleftharpoons \\rm{^{12}C^{+}} + \\rm{^{13}CO} + 35\\rm{K}\n\\end{equation}\npredicted by \\citet{Watson1976}. The forward reaction is exothermic and in cold molecular gas can dominate isotope fractionation, favouring the formation of more $^{13}$CO. At higher temperatures, both the forward and reverse reactions are about equally probable and would not affect the overall abundance ratio greatly. 
Since the molecular gas in Arp 220 is warm ($\\sim$ 40K), we can rule out fractionation as a possible contaminate in our [$^{12}$CO]\/[$^{13}$CO]\\ abundance. \\citet{Langer1984} showed that oxygen fractionation is very small and the oxygen isotope ratios reflect stellar nucleosynthesis processes further supporting the stellar nucleosynthesis enrichment scenario for the observed line ratios. \n\n\\subsection{HCN and HCO$^+$\\ Line Ratios: AGN or Starburst?\nAs stated in the introduction, a linear relation between HCN~$J$~=~1$-$0\\ and infrared luminosity was found and interpreted to be a direct correlation of dense gas and star formation \\citep{Gao2004a}. Within the last decade or so, the nature of HCN~$J$~=~1$-$0\\ as a true dense gas tracer has been questioned. Studies of samples of AGN and starburst systems have found enhancements of HCN~$J$~=~1$-$0\\ emission among the AGN systems. Recently, \\citet{Privon2015} have shown that composite (i.e. AGN and star forming) and star formation dominated systems can also show a similar enhancement in HCN~$J$~=~1$-$0\\ emission. \\cite{Privon2015} found that for AGN, starbursts and composite systems the H$_{10}$ ratios are 1.84$\\pm$0.43, 0.88$\\pm$0.28 and 1.14$\\pm$0.49, respectively. This overlap of values for the different types of systems has complicated the picture of HCN\\ and HCO$^+$\\ emission suggesting multiple processes contributing to the line ratio differences. \n\nNevertheless, we compare the line ratios of Arp~220 and NGC~6240. We find that the line ratios are significantly different between the two sources. Arp~220 is consistent with AGN-like line ratios while NGC~6240 is consistent with a starburst-like ratio, similar to NGC~4039 and the Antennae Overlap region \\citep{Schirm2016}. This result is quite surprising considering the fact that NGC~6240 has two well known AGNs \\citep[e.g.][]{Komossa2003} while the presence of an AGN in Arp 220 is still debatable. The low line ratio found for NGC~6240 may reflect the different area filling factors of the two species as our observations are slightly unresolved with our beam; however, this is not the case for Arp~220 as our observations do resolve the emission and therefore, line ratios within one beam element would be filled with emission from both species. We note that our line ratio does not suggest the presence of an AGN in Arp 220 as our angular resolution does not distinguish the individual nuclei and the other possible mechanism affecting the H$_{10}$ line ratios \\citep[e.g][]{Privon2015}. Radiative transfer analyses including just these dense gas tracers are required to understand what is driving the line ratios. We also note that other effects such as infrared pumping of HCN and recombination of HCO$^+$\\ may be factors that need to be taken into consideration. \n\n\n\\section{Conclusions\nWe present new PdBI observations of several molecular gas tracers for the two nearby major mergers Arp~220 and NGC~6240. The observations show a wealth of chemistry in Arp~220 and different conditions between the two sources. The main results are summarised as follows:\n \\begin{enumerate}\n \\item $^{13}$CO $J$=2$-$1\\ \/ $J$=1-0 line ratio reveal that molecular gas conditions vary across the disk of Arp~220 where the line ratio increases from east to west.\n \\item The best fit molecular gas conditions at the peak position of $^{13}$CO $J$=1$-$0\\ imply a warm (40~K), moderately dense (10$^{3.4}$~cm$^{-3}$) molecular gas component. 
This solution differs in volume density by a factor of $\\sim$4 from the analysis of \\cite{Rangwala2011}, most likely because of the spatial resolution difference.\n\t\\item The HCN\/HCO$^+$~$J$~=~1$-$0\\ line ratio for Arp~220 is greater than 2, while for NGC~6240 the line ratios are below 1. \n\t\\item The [$^{12}$CO]\/[$^{13}$CO]\\ abundance ratio for Arp~220 is a factor of $\\sim$3 or more higher than the central value in the Milky Way, while the [$^{12}$CO]\/[C$^{18}$O]\\ abundance ratio is quite low, indicating the enhancement of C$^{18}$O\\ in the ISM. The likely explanation for both the [$^{12}$CO]\/[$^{13}$CO]\\ and [$^{12}$CO]\/[C$^{18}$O]\\ ratios is an enrichment of $^{12}$C and $^{18}$O in the ISM via stellar nucleosynthesis. Higher-resolution and sensitive observations of $^{13}$CO\\ and C$^{18}$O\\ are needed to look for specific spatial variations as seen in IRAS~13120-5453 \\citep{Sliwa2017c}.\n\t\n\tThese data, along with those of other sources (see Appendix \\ref{sec:release}), will be made available to the public for download, with the intention that they be used for future analyses and comparisons with new observations.\n \\end{enumerate}\n \n\n\n\n\\begin{acknowledgements}\nWe thank the referee for their comments and suggestions that improved this manuscript. We thank K. Sakamoto for passing along the SMA $^{12}$CO $J$=3$-$2\\ observations of Arp 220. KS thanks Julia Kamenetzky for her MCMC code made available via github. KS also thanks L. Barcos-M\\~unoz, A.K. Leroy, E. Schinnerer, F. Walter, and L.K. Zschaechner for fruitful discussions. Based on observations carried out with the IRAM Interferometer PdBI. IRAM is supported by INSU\/CNRS (France), MPG (Germany) and IGN (Spain). The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica.\n\nThis paper makes use of the following ALMA data: ADS\/JAO.ALMA$\\#$2011.0.00018.SV. \nALMA is a partnership of ESO (representing its member states), NSF (USA) and \nNINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), and KASI (Republic of Korea), \nin cooperation with the Republic of Chile. The Joint ALMA Observatory is \noperated by ESO, AUI\/NRAO and NAOJ.\n\\end{acknowledgements}\n\n\\bibliographystyle{aa.bst}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\IEEEPARstart{D}{eep} discriminative models (DDMs) have been extensively studied recently to solve a variety of computer vision problems, with applications in image classification, action recognition, and object detection, to name a few.\nDDMs use deep neural networks (DNNs)~\\cite{chendeepage, Yan2014Age,chen_using_2017,huang2018mixture} to formulate the input-output mapping in discriminative models.\nDue to a shortage of well-labeled training data, \\emph{i.e.}~data without noise and imbalanced distribution problems, learning DDMs is particularly challenging. 
\nConsiderable literature has grown up around the theme of how to learn DDMs effectively~\\cite{rothe2018deep,huang_soft-margin_2017,ruiz2018fine}.\n\nOne typical approach is to learn more discriminative features through rather deep neural networks and cost-sensitive discriminative functions~\\cite{gao_age_2018,parkhi2015deep,niu_ordinal_2016,rothe2018deep,agustsson2017anchored}.\nThe other typical approaches are to reweight training samples according to estimation errors~\\cite{cui2019class,khan2019striking} or gradient directions~\\cite{ren2018learning} (\\emph{i.e.}~meta learning).\nAlthough these approaches can partially alleviate the noise and imbalanced distribution problem of training data, they are not consistent with the cognitive process of human beings; the difference is we learn new things based on previously learned knowledge.\nIn addition, these approaches do not provide a way to distinguish noisy and underrepresented examples, and thus cannot fundamentally avoid noisy and biased solutions.\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=0.9\\textwidth]{Figure1_v2.pdf}\n\t\\caption{The Motivation of considering underrepresented examples in SPL. \\textbf{(a):} The histogram shows the number of facial images of different ages, and the average entropy curve indicates the predictive uncertainty. We observe that the \\emph{underrepresented examples} always have high entropy. \\textbf{(b):} Underrepresented examples are selected only in SPUDRFS at an early pace. Because the underrepresented examples tend to have relatively large loss, they would not be selected at an early pace. \\textbf{(c):} A new self-paced learning method: easy and underrepresented examples first. This builds on the intuition that the underrepresented examples are prone to incur unfair treatment since they are the ``minority'' in training data.}\n\t\\label{Figure1}\n\\end{figure*}\n Intuitively, learning DDMs from easy to hard may be more reasonable because already-learned knowledge, in such a process, can be leveraged to learn DDMs. 
\nIn addition to this, the noisy and underrepresented examples could be recognized by virtue of already-learned knowledge.\nHence, we resort to a gradual learning strategy, \\emph{i.e.}, self-paced learning~\\cite{Kumar2010Self, jiang2015self, ma2017self, ijcai2017}.\nSo far, however, there are few studies on learning DDMs in a self-paced manner.\nThen, a natural question arises: \\emph{can SPL guide DDMs to achieve more robust and less biased solutions?}\n\n\nAn underlying problem in SPL, which is first shown by this study, is that it assumes the distribution of training data is balanced, and thus the bias of solutions may be further exacerbated when such an assumption fails.\nThe reason is that the underrepresented examples, due to \\emph{large} prediction error, would not be selected for training at early paces, thereby incurring unfair treatment.\nThis means existing SPL methods have a fairness issue.\n\n\n\nTo address the fairness issue in existing SPL methods as well as answer the above question, this paper proposes a new self-paced learning method for learning DDMs.\nIt tackles the fundamental ranking and selection problems in SPL from a new perspective: fairness.\nOn one hand, our new method keeps SPL's advantage in robustness.\nOn the other hand, it prevents seriously biased solutions induced by SPL.\nThe major contributions of this work include:\n\\begin{itemize}\n\t\\item We, for the first time, show that existing SPL methods may lead to seriously biased solutions. To address this problem, we propose a new SPL method: easy and underrepresented examples first. Combining this with a typical DDM, \\emph{i.e.}, deep regression forests (DRFs), leads to a new model---deep regression forests with consideration of underrepresented examples (SPUDRFs). The new model shows considerable performance improvement in both accuracy and fairness.\n\n\n\t\\item To further promote robustness, in SPUDRFs, we define a capped likelihood function with respect to DRFs' parameters so as to further exclude noisy examples.\n\t\\item To measure regression fairness, we define a fairness metric for regression problems, which can reflect fairness concerning some sensitive attributes. \n\t\\item For verification, we validate SPUDRFs on three computer vision tasks: (\\romannumeral1) facial age estimation, (\\romannumeral2) head pose estimation, and (\\romannumeral3) gaze estimation.\n\tExtensive experimental results on the Morph \\uppercase\\expandafter{\\romannumeral2}~\\cite{ricanek2006morph}, FG-NET~\\cite{panis2016overview}, BIWI~\\cite{fanelli2013random}, BU3DFE~\\cite{pan2016mixture} and MPIIGaze~\\cite{zhang2015appearance} datasets demonstrate the efficacy of our proposed new self-paced method.\n\tOn all the above tasks, SPUDRFs almost achieve state-of-the-art performance.\n\\end{itemize}\n\nA preliminary version of this work was published in~\\cite{pan2020self}.\nThis paper significantly improves~\\cite{pan2020self} in the following aspects. \n(\\romannumeral1) We extend our model to incorporate a capped likelihood, which could further promote robustness.\nThe robust SPUDRFs can recognize and exclude examples with labeling noise, whereas the original is unable to do so, and only places more emphasis on reliable examples. 
\n(\\romannumeral2) We extend our model to combine with various weighting schemes, whereas \\cite{pan2020self} merely adopts a mixture weighting scheme.\n(\\romannumeral3) We define the fairness metric for regression in this work and show obvious fairness improvement of SPUDRFs against the original SP-DRFs. \n(\\romannumeral4) We also evaluate SPUDRFs on a new computer vision task, \\emph{i.e.}, gaze estimation, and demonstrate its validity on the MPIIGaze dataset.\n(\\romannumeral5) Both robustness and fairness evaluation results are added, along with more comprehensive discussions.\n\nThis paper is organized as follows. Sec.~\\ref{sec:related work} introduces related work on SPL, and DDMs for facial age estimation, head pose estimation and gaze estimation. Sec.~\\ref{sec:SPUDRFs} details our proposed SPUDRFs. Sec.~\\ref{sec:robust SPUDRFs} introduces robust SPUDRFs.\nSec.~\\ref{sec:fairness metric} defines a fairness metric for regression problem.\nSec.~\\ref{sec:experiment} exhibits and analyzes the experimental results on three computer vision tasks and five related datasets. Sec.~\\ref{sec:discussion} discusses the strengths and potential weaknesses of this work, followed by Sec.~\\ref{sec:conclusion} concluding this paper with perspectives.\n\n\\section{Related Work}\n\\label{sec:related work}\nThis section reviews SPL methods and DDMs-based facial age estimation, head pose estimation and gaze estimation.\n\n\n\\subsection{Self-Paced Learning}\nSelf-paced learning, as a gradual learning method, builds on the intuition that, rather than considering all training samples simultaneously, the learning should be presented with the training data from easy to difficult~\\cite{Kumar2010Self,meng_theoretical_2017}.\nTo date, a great deal of study on self-paced learning (SPL) has been undertaken, mostly about learning conventional discriminative models, \\emph{e.g.}~SVM, logistic regressor.\nThere also exists some literature in which SPL is employed for clustering problems~\\cite{Huang2021DSMVC,Huang2021NSMVC, Guo2019adaptive}.\n\n\n\nLearning conventional discriminative models in a self-paced manner exhibits superiority over traditional methods.\nIn~\\cite{Kumar2010Self}, Kumar \\emph{et al.}~proposed the original SPL framework in a regime that suggests processing the samples in order of an easy to hard order. 
\nIn~\\cite{jiang2015self}, Zhao \\emph{et al.}~generalized the hard weighting scheme in SPL to a soft weighting scheme, which promoted discrimination accuracy.\nIn~\\cite{ma2017self}, Ma \\emph{et al.}~proposed self-paced co-training, where SPL is applied for multi-view or multi-modality problems.\nIn~\\cite{ren2018self}, Ren \\emph{et al.}~introduced capped-norm into the objective function of SPL, so as to further exclude the interference of noisy examples.\nIn fact, the above work can be cast as incorporating SPL with conventional classifiers, \\emph{e.g.}, SVM, logistic regressors, \\emph{etc.}.\nTo incorporate SPL with regression models, in~\\cite{han2017self}, Han \\emph{et al.}~learned a mixture of regressors in a self-paced manner, and added an $\\ell_{1,2}$ norm regularizer to avoid poorly conditioned linear sub-regressors.\n\n\n\nIn computer vision community, recently some researchers have realized that learning DDMs in a self-paced manner may lead to more robust solutions.\nOne of the few studies is~\\cite{ijcai2017}, where Li \\emph{et al.}~sought to enhance the learning robustness of CNNs by SPL.\nHowever, they omitted one important problem in learning discriminative models: the imbalanced distribution of training data.\n\n\n\nOur work is inspired by the existing studies~\\cite{yang2019self,jiang2014self} which take the class diversity in sample selection of SPL into consideration.\nJiang \\emph{et al.}~\\cite{jiang2014self} proposed to ensure the class diversity of samples at the early paces in self-paced training.\nYang \\emph{et al.}~\\cite{yang2019self} defined a metric called ``complexity of image category\" to measure the number of samples in each category, as well as jointly classifying difficulty, and used it to select samples.\nThe two methods mentioned above find that a lack of class diversity in sample selection may result in seriously biased solutions.\nHowever, what causes this problem has not been studied. 
\nThis work shows that the cause is fundamentally the unfairness of sample ranking in SPL, where underrepresented examples may often have large losses and not be selected at early paces.\nOn the other hand, \\cite{yang2019self,jiang2014self} are only suitable for classification, but not for regression in which the output targets are continuous and high dimensional.\nThis paper will go further in this direction, aiming to tackle the fundamental drawback in SPL---ranking unfairness---and to integrate SPL with DDMs.\n\n\n\\subsection{Facial Age Estimation}\nFacial age estimation has been extensively studied for a decade in the computer vision community.\nIn recent years, a large and growing body of literature~\\cite{niu_ordinal_2016,chen_using_2017,gao_age_2018,shen_deep_2018,li2019bridgenet} has been proposed for age estimation with varying degrees of success.\nOrdinal-based approaches~\\cite{niu_ordinal_2016,chen_using_2017} are the best-known and have demonstrated improved results.\nFor example, Niu \\emph{et al.}~proposed to estimating age through a set of sequential binary queries---each query refers to a comparison with a predefined age---to exploit the inter-relationship (ordinal information) among age labels.\nFurthermore, Gao \\emph{et al.}~\\cite{gao_age_2018} proposed DLDL-v2 to explore the underlying age distribution patterns, so as to effectively accommodate age ambiguity.\nShen \\emph{et al.}~\\cite{shen_deep_2018} used VGG-16 to extract facial features, and mapped the extracted features to age by deep regression forests (DRFs).\nLi \\emph{et al.}~\\cite{li2019bridgenet} proposed BridgeNet to predict age by mixing local regression results, where local regressors are learned on partitioned data space. \nDeng \\emph{et al.}~\\cite{deng2021pml} proposed an age classification method to deal with long-tailed distribution, where a variational margin is used to minimize the influence of head classes that misleads the prediction of tailed samples. \nOverall, these DDMs-based approaches have improved age estimation performance significantly. 
However, they plausibly ignore one problem: the interference with facial images duo to PIE (\\emph{i.e.}~pose, illumination and expression) variation, occlusion, misalignment and so forth.\nMore importantly, these approaches mostly ignore the fact that the training data is not distributed uniformally.\n\n\n\\subsection{Head Pose Estimation}\nHead pose estimation has attracted vast attention from the computer vision community for a long time.\nRecently, an increasing amount of DDMs-based methods have been proposed for head pose estimation and have dramatically boosted estimation accuracy.\nOf these, Riegler \\emph{et al.}~\\cite{riegler2013hough} utilized convolutional neural networks\n(CNNs) to extract patch features within facial images, and more precise results were achieved.\nIn~\\cite{huang2018mixture}, Huang \\emph{et al.}~proposed multi-modal deep regression networks, with multi-layer perceptron (MLP) architecture, to fuse the RGB and depth images for head pose estimation.\nIn~\\cite{wang2019deep}, Wang \\emph{et al.}~proposed a deep coarse-to-fine network to further boost pose estimation accuracy.\nIn~\\cite{ruiz2018fine}, Ruiz \\emph{et al.}~used a large, synthetically expanded head pose dataset to train a rather deep multi-loss convolutional neural networks (CNNs), gaining obvious accuracy improvement.\nIn~\\cite{kuhnke2019deep}, Kuhnke \\emph{et al.}~proposed a domain adaptation method for head pose estimation where shared label spaces across different domains are assumed.\nAlthough accuracy has improved, handling noisy examples in the above models is not straightforward. \nMoreover, the methods mentioned above seldom deal with imbalanced training data, which may widely exist in visual problems.\n\n\n\\subsection{Gaze Estimation}\nAppearance-based gaze estimation learns a mapping from facial or eye image to gaze.\nRecently, a considerable amount of DDMs based approaches have been proposed for person-independent gaze estimation.\nFor example, in~\\cite{zhang2017s}, Zhang \\emph{et al.}~used spatial weights VGG-16 to encode face image to gaze direction. \nThe spatial weights were used to flexibly suppress or enhance features from different facial regions. \nIn the literature~\\cite{krafka2016eye}, Krafka \\emph{et al.}~implemented a CNNs-based gaze tracker by mapping captured images from the left eye, right eye and face to gaze. The location and size of the head in the original image are provided for mapping networks. \nIn~\\cite{fischer2018rt}, Fischer \\emph{et al.}~adopted two backbone networks to extract features from two eye regions for gaze regression, and achieved improved robustness. \nInstead of predicting gaze directly, in~\\cite{park2018deep}, Park \\emph{et al.}~proposed another method which maps the single eye image to an intermediate pictorial representation as additional supervision to simplify 3D gaze direction estimation. \nWang \\emph{et al.}~\\cite{wang2019generalizing} proposed a Bayesian adversarial learning approach to address the appearance, head pose variations in gaze estimation, and overfitting problem.\nXiong \\emph{et al.}~\\cite{xiong2019mixed} proposed mixed effect neural networks to combine global and person specific gaze estimation results together.\nMeanwhile, other studies show that facial images can help gaze estimation.\nIn \\cite{biswas2021appearance, cheng2020gaze, cheng2020coarse}, multi-channel estimation architectures were adopted to utilize facial image and eye images jointly. 
\nThese methods outperform the ones that only use eye images as input for gaze estimation.\nIn addition to person-independent methods, there is some literature about person-specific gaze adaptation methods~\\cite{park2019few, yu2019improving, chen2020offset}, which have boosted performance dramatically when a few annotated examples are used for calibration.\nThese methods indeed improve gaze estimation results; however, whether we should learn DDMs for gaze estimation in a gradual learning manner has not been discussed, nor has there been discussion of how to deal with the imbalanced distribution of training data.\n\n\\section{Self-Paced Deep Regression Forests with Consideration on Underrepresented Samples}\n\\label{sec:SPUDRFs}\nThis section first reviews the basic concepts in DRFs, then introduces the objective formulation for SPUDRFs, as well as variant weighting strategies and an underrepresented example augmentation method, and finally details the optimization algorithm.\n\n\\subsection{Preliminaries}\n\nDeep regression forests (DRFs), as a deep regression model, connect deep neural networks (DNNs) to regression forests.\nWe start by reviewing the basic concepts in DRFs~\\cite{shen_deep_2018}.\t\n\n\n\\noindent \\textbf{Deep Regression Tree.} DRFs usually consist of a number of deep regression trees, each of which, given input-output pairs $\\left\\{\\mathbf{x}_i, y_i\\right\\}_{i=1}^N$, maps the features extracted through DNNs to the target output by passing them through a regression tree. Further, a regression tree $\\mathcal{T}$ consists of split (or decision) nodes $\\mathcal{N}$ and leaf (or prediction) nodes $\\mathcal{L}$~\\cite{shen_deep_2018} (see Fig.~\\ref{Figure1}).\nSpecifically, each split node $n \\in \\mathcal{N}$ determines whether a sample $\\mathbf{x}_i$ goes to the left or right sub-tree; each leaf node $\\ell \\in \\mathcal{L}$ represents a Gaussian distribution $p_{\\ell}(y_i)$ with mean $\\mu_l$ and variance $\\sigma^2_l$---parameters of the output distribution defined for each tree $\\mathcal{T}$.\n\n\\noindent \\textbf{Split Node.}\nEach split node represents a split function $s_{n}(\\mathbf{x}_i ; \\bm{\\Theta}) : \\mathbf{x}_i \\rightarrow[0,1]$, where $\\bm{\\Theta}$ denotes the parameters of DNNs, as in Fig.~\\ref{Figure1}(c).\nConventionally, $s_{n}(\\mathbf{x}_i ; \\bm{\\Theta})$ is formulated as $\\sigma\\left(\\mathbf{f}_{\\varphi(n)}(\\mathbf{x}_i; \\bm{\\Theta})\\right)$, where $\\sigma(\\cdot)$ denotes a sigmoid function while $\\varphi(\\cdot)$ denotes an index function specifying the $\\varphi(n)$-th element of $\\mathbf{f}(\\mathbf{x}_i; \\bm{\\Theta})$ in correspondence with the split node $n$, and $\\mathbf{f}(\\mathbf{x}_i; \\bm{\\Theta})$ denotes the features extracted through DNNs.\nAn example illustrating the structure of the DRFs is shown in Fig.~\\ref{Figure1}(c), where $\\varphi_1$ and $\\varphi_2$ are two index functions for two trees.\nFinally, the probability of the sample $\\mathbf{x}_i$ falling into the leaf node $\\ell$ is given by:\n\\begin{equation}\n\t\\label{Eq.1}\n\t\\omega_\\ell( \\mathbf{x}_i | \\bm{\\Theta})=\\prod_{n \\in \\mathcal{N}} s_{n}(\\mathbf{x}_i ; \\bm{\\Theta})^{[\\ell \\in \\mathcal{L}_{n_{\\text{left}}}]}\\left(1-s_{n}(\\mathbf{x}_i ; \\bm{\\Theta})\\right)^{\\left[\\ell \\in \\mathcal{L}_{n_{\\text{right}}}\\right]},\n\\end{equation}\nwhere $[\\cdot]$ denotes an indicator function conditioned on the argument. 
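\nBefore spelling out the remaining notation, we give a toy numerical illustration of Eq.~(\\ref{Eq.1}); it is only a sketch with made-up sigmoid outputs, not part of the actual implementation, and it assumes a single tree of depth two with split nodes $n_0$ (root), $n_1$, $n_2$ and leaves $\\ell_0,\\ldots,\\ell_3$.\n\\begin{verbatim}\n# Toy illustration of Eq. (1): routing probabilities of one sample x_i\n# in a depth-2 tree; s[n] stands for s_n(x_i; Theta), with made-up values.\ns = {'n0': 0.8, 'n1': 0.3, 'n2': 0.6}\n\nomega = {\n    'l0': s['n0'] * s['n1'],              # left at n0, left at n1\n    'l1': s['n0'] * (1 - s['n1']),        # left at n0, right at n1\n    'l2': (1 - s['n0']) * s['n2'],        # right at n0, left at n2\n    'l3': (1 - s['n0']) * (1 - s['n2']),  # right at n0, right at n2\n}\nassert abs(sum(omega.values()) - 1.0) < 1e-9  # probabilities sum to one\n\\end{verbatim}\nEach leaf probability multiplies the sigmoid output of every split node on its root-to-leaf path (or its complement when the path goes right), so the $\\omega_\\ell$ of all leaves sum to one; these probabilities are combined with the leaf distributions as described next.\n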
\nIn addition, $\\mathcal{L}_{n_\\text{left}}$ and $\\mathcal{L}_{n_\\text{right}}$ correspond to the sets of leaf nodes owned by the sub-trees $\\mathcal{T}_{n_\\text{left}}$ and $\\mathcal{T}_{n_\\text{right}}$ rooted at the left and right children ${n}_{l}$ and ${n}_{r}$ of node $n$.\n\n\t\n\\noindent \\textbf{Leaf Node.} \nFor a tree $\\mathcal{T}$, each leaf node $\\ell \\in \\mathcal{L}$ defines a Gaussian distribution over output $y_i$ conditioned on $\\left\\{\\mu_l,\\sigma^2_l\\right\\}$.\nSince each example $\\mathbf{x}_i$ has the probability $\\omega_\\ell(\\mathbf{x}_i | \\bm{\\Theta)}$ to reach leaf node $\\ell$, considering all leaf nodes, the predictive distribution over $y_i$ shall sum all leaf distribution weighted by $\\omega_\\ell(\\mathbf{x}_i | \\bm{\\Theta)}$:\n\\begin{equation}\n\t\\label{Eq.2}\n\tp_{\\mathcal{T}}(y_i | \\mathbf{x}_i ; \\bm{\\Theta}, \\bm{\\pi})=\\sum_{\\ell \\in \\mathcal{L}} \\omega_\\ell( \\mathbf{x}_i | \\bm{\\Theta)} p_{\\ell}(y_i),\n\\end{equation}\nwhere $\\bm{\\pi}$ represents the distribution parameters of all leaf nodes associated with tree $\\mathcal{T}$.\nNote that $\\bm{\\pi}$ varies along with tree $\\mathcal{T}_k$, and thus we rewrite them in terms of $\\bm{\\pi}_k$.\n\n\\noindent \\textbf{Forests of Regression Trees.}\nSince a forest $\\mathcal{F}$ consists of a number of deep regression trees $\\left\\{\\mathcal{T}_1,...,\\mathcal{T}_k\\right\\}$, the predictive output distribution shall consider all trees:\n\\begin{equation}\n\t\\label{Eq.3}\n\tp_{\\mathcal{F}}\\left(y_i|\\mathbf{x}_i,\\bm{\\Theta},\\bm{\\Pi} \\right)\n\t=\n\t\\frac{1}{K}\\sum_{k=1}^K p_{\\mathcal{T}_k}\\left(y_i|\\mathbf{x}_i; \\bm{\\Theta}, \\bm{\\pi}_k\\right),\n\\end{equation}\nwhere $K$ is the number of trees and $\\bm{\\Pi}=\\left\\{\\bm{\\pi}_1,...,\\bm{\\pi}_K\\right\\}$.\n$p_{\\mathcal{F}}\\left(y_i|\\mathbf{x}_i,\\bm{\\Theta},\\bm{\\Pi} \\right)$ denotes the likelihood that the $i^{th}$ sample has output $y_i$.\n\n\n\\subsection{Objective}\n\\label{Uncertainty}\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.50\\textwidth]{Rank1_v1.pdf}\n\t\\caption{The average rank of each age group in the $1^{st}$ pace. SP-DRFs tend to rank the underrepresented examples at the end. 
In contrast, SPUDRFs tend to rank the underrepresented examples in the front to ensure they are selected for training from the very beginning.}\n\t\\label{grouprank}\n\\end{figure}\n\n\n\\noindent\\textbf{Underrepresented Examples.} Considering underrepresented examples in SPL is one of the main contributions of this work.\nIn this work, underrepresented examples mean ``minority''.\nThe underrepresented level could be measured by predictive uncertainty (\\emph{i.e.}~entropy).\nIn fact, underrepresented examples may incur unfair treatment in early paces due to imbalanced data distribution.\nOur proposed method tackles the ranking and selection problems in SPL from a new perspective: \\emph{fairness}.\n\n\t\n\\noindent\\textbf{Entropy.} Given a sample $\\mathbf{x}_i$, its predictive uncertainty is formulated by calculating the entropy of the conditional output distribution $p_{\\mathcal{F}}\\left(y_i|\\mathbf{x}_i,\\bm{\\Theta},\\bm{\\Pi} \\right)$:\n\\begin{equation}\n\t\\label{Eq.4}\n\tH\\left [p_{\\mathcal{F}}\\left(y_i|\\mathbf{x}_i,\\bm{\\Theta},\\bm{\\Pi} \\right)\\right] = \\frac{1}{K}\\sum^K_{k=1}H\\left [p_{\\mathcal{T}_k}\\left(y_i|\\mathbf{x}_i,\\bm{\\Theta}, \\bm{\\pi}_k \\right)\\right],\n\\end{equation}\nwhere $H\\left[ \\cdot\\right ]$ denotes entropy, and the entropy which corresponds to the $k^{th}$ tree is:\n\\begin{multline}\n\t\\label{Eq.5}\n\tH\\left [p_{\\mathcal{T}_k}\\left(y_i|\\mathbf{x}_i,\\bm{\\Theta}, \\bm{\\pi}_k \\right)\\right] =\\\\ -\\int p_{\\mathcal{T}_k}\\left(y_i|\\mathbf{x}_i,\\bm{\\Theta}, \\bm{\\pi}_k \\right)\\ln p_{\\mathcal{T}_k}\\left(y_i|\\mathbf{x}_i,\\bm{\\Theta}, \\bm{\\pi}_k \\right) dy_i.\n\\end{multline}\nGiven $\\mathbf{x}_i$, the larger the entropy $H\\left [p_{\\mathcal{F}}\\left(y_i|\\mathbf{x}_i,\\bm{\\Theta},\\bm{\\Pi} \\right)\\right]$ is, the more uncertain the prediction should be, \\emph{i.e.}, the more underrepresented the sample $\\mathbf{x}_i$ is.\n\n\nThe predictive distribution $p_{\\mathcal{T}_k}\\left(y_i|\\mathbf{x}_i; \\bm{\\Theta}, \\bm{\\pi}_k\\right)$ takes the form $\\sum_{\\ell \\in \\mathcal{L}} \\omega_\\ell( \\mathbf{x}_i | \\bm{\\Theta)} p_{\\ell}(y_i)$.\nBecause the integral of mixture of Gaussians in Eq.~(\\ref{Eq.5}) is non-trivial, we resort to Monte Carlo sampling.\nHowever, because this has a large computational cost~\\cite{huber2008entropy}, we turn to use the lower bound of this integral to approximate the integration:\n\\begin{equation}\n\t\\label{Eq.6}\n\tH\\left [p_{\\mathcal{T}_k}\\left(y_i|\\mathbf{x}_i,\\bm{\\Theta}, \\bm{\\pi}_k \\right)\\right]\\approx\\frac{1}{2}\\sum_{\\ell\\in\\mathcal{L}}\\omega_\\ell(\\mathbf{x}_i|\\mathbf{\\Theta})\\left[\\ln \\left(2\\pi \\sigma_\\ell^2\\right)+1\\right].\n\\end{equation}\n\t\n\n\\noindent\\textbf{Formulation.} Let $v_i$ denote a latent variable indicating whether the $i^{th}$ sample is selected $(v_i = 1)$ or not $(v_i = 0)$ for training.\nOur objective is to jointly maximize the log likelihood function with\nrespect to DRFs' parameters $\\bm{\\Theta}$ and $\\bm{\\Pi}$, and learn the latent variables $\\mathbf{v}=\\left(v_1,...,v_N\\right)^T$.\nFig.~\\ref{grouprank} shows that the original self-paced method may miss the underrepresented examples at an early pace, resulting in ignorance.\nConsidering fairness, we prefer to select the underrepresented examples, which probably have higher predictive uncertainty (\\emph{i.e.}~entropy), particularly at an early pace.\nTherefore, we maximize the likelihood function of DRFs coupled with the sample selection term 
meanwhile considering ranking fairness,\n\\begin{equation}\n\t\\label{Eq.7}\n\t\\max_{\\bm{\\Theta},\\bm{\\Pi}, \\mathbf{v}} \\sum_{i=1}^{N} v_{i} \\left \\{ \\log p_{\\mathcal{F}}\\left(y_i|\\mathbf{x}_i,\\bm{\\Theta},\\bm{\\Pi} \\right) + \\gamma H_i \\right \\} + \\lambda\\sum_{i=1}^N v_i ,\n\\end{equation}\nwhere $\\lambda$ controls the learning pace ($\\lambda\\geq0$), $\\gamma$ is a trade-off between loss and uncertainty ($\\gamma\\geq0$), and $H_i$ denotes the entropy of the output distribution $p_{\\mathcal{F}}\\left(y_i|\\mathbf{x}_i,\\bm{\\Theta},\\bm{\\Pi} \\right)$.\nWhen $\\gamma$ decays to $0$, the objective function is equivalent to the log-likelihood function with respect to DRFs' parameters $\\bm{\\Theta}$ and $\\bm{\\Pi}$.\nIn Eq.~(\\ref{Eq.7}), each sample is weighted by $v_i$, and whether $\\log p_{\\mathcal{F}}\\left(y_i|\\mathbf{x}_i,\\bm{\\Theta},\\bm{\\Pi} \\right) + \\gamma H_i>-\\lambda$ holds determines whether the $i^{th}$ sample is selected.\nThat is, samples with a high likelihood or a high entropy are more likely to be selected.\nThe optimal $v_i^*$ is:\n\\begin{align}\n\t\\label{Eq.8}\n\tv_i^* = \\left\\{ \\begin{array}{ll}\n\t\t1 & \\textrm{if $\\log p_{\\mathcal{F}i} + \\gamma H_i > -\\lambda$}\\\\\n\t\t0 & \\textrm{otherwise}\n\t\\end{array} \\right.,\n\\end{align}\nwhere $p_{\\mathcal{F}}\\left(y_i|\\mathbf{x}_i,\\bm{\\Theta},\\bm{\\Pi} \\right)$ is denoted by $ p_{\\mathcal{F}i}$ for simplicity, and we use this notation in the following.\nBy iteratively increasing $\\lambda$ and decreasing $\\gamma$, samples are dynamically involved in training DRFs, starting with easy and underrepresented examples and ending up with all samples.\nThus, SPUDRFs tend to achieve more robust and less biased solutions.\n\n\nOne might argue that noisy and hard examples tend to have high predictive uncertainty, with the result that they are selected at an early pace.\nIn fact, from Eq.~(\\ref{Eq.8}), we observe that whether a sample is selected is determined by both its predictive uncertainty and the log likelihood of being predicted correctly.\nNoisy and hard examples probably have a relatively large loss, \\emph{i.e.}, a low log likelihood $\\log p_{\\mathcal{F}i}$, and would therefore not be selected at an early pace.\n\n\n\n\\subsection{Weighting Schemes}\nWeighting each sample according to its importance would be more reasonable in SPUDRFs.\nIn the previous section, we adopted a hard weighting scheme in SPUDRFs, as defined in Eq.~(\\ref{Eq.7}), where whether a sample is selected or not is determined by $v_i$.\nSuch a weighting scheme ignores the relative importance of individual samples.\nSPUDRFs can easily incorporate other weighting schemes, including mixture weighting and soft weighting~\\cite{jiang2014easy}.\n\n\n\\noindent\\textbf{Mixture Weighting.}\nThe mixture weighting scheme~\\cite{jiang2014easy} weights each selected sample by its importance, \\emph{i.e.}, $0\\leq v_i \\leq 1$.\nThe objective function under the mixture weighting scheme is:\n\\begin{equation}\n\t\\label{Eq.9}\n\t\\max_{\\bm{\\Theta},\\bm{\\Pi}, \\mathbf{v}} \\sum_{i=1}^{N} v_{i} \\left \\{ \\log p_{\\mathcal{F}i} + \\gamma H_i\\right \\} + \\zeta \\sum_{i=1}^N \\log\\left(v_i + \\zeta\/\\lambda\\right) ,\n\\end{equation}\nwhere $\\zeta$ is a parameter controlling the learning pace.\nWe set $\\zeta=\\left(\\frac{1}{\\lambda'}-\\frac{1}{\\lambda}\\right)^{-1}$, with $\\lambda>\\lambda'>0$, to construct a reasonable soft weighting formulation.\nThe self-paced regularizer in Eq.~(\\ref{Eq.9}) is convex with respect to $v\\in\\left[0,1\\right]$.\nThen, setting the
partial gradient of Eq.~(\\ref{Eq.9}) with respect to $v_i$ to be zero will lead the following equation:\n\\begin{equation}\n\t\\log p_{\\mathcal{F}i} + \\gamma H_i + \\frac{\\zeta}{v_i + \\zeta\/\\lambda} = 0.\n\\end{equation}\nThe optimal solution of $v_i$ is given by:\n\\begin{align}\n\tv_i^* = \\left\\{ \\begin{array}{ll}\n\t\t1 & \\textrm{if $\\log p_{\\mathcal{F}i} + \\gamma H_i \\geq -\\lambda' $}\\\\\n\t\t0 & \\textrm{if $\\log p_{\\mathcal{F}i} + \\gamma H_i \\leq -\\lambda $}\\\\\n\t\t\\frac{-\\zeta}{\\log p_{\\mathcal{F}i} + \\gamma H_i} - \\zeta\/\\lambda & \\textrm{otherwise}\n\t\\end{array} \\right.\n\t\\label{Eq.11}\n\\end{align}\nIf either the log likelihood or the entropy is too large, $v^*_i$ equals 1.\nIn addition, if the likelihood and entropy are both too small, $v^*_i$ equals 0.\nExcept for the above two cases, the soft weighting, \\emph{i.e.}, the last line of Eq.~(\\ref{Eq.11}), is adopted.\n\n\\noindent\\textbf{Soft Weighting.}\nA soft weighting scheme~\\cite{jiang2014easy} weights a selected sample with respect to its output likelihood and entropy. Such a weighting scheme includes: linear weighting scheme and logarithmic weighting scheme.\n\n\\noindent\\textbf{Linear weighting.} This scheme linearly assigns weights to samples with respect to output likelihood and entropy. The objective of SPUDRFs under linear weighting scheme is formulated as:\n\\begin{equation}\n\t\\max_{\\bm{\\Theta},\\bm{\\Pi}, \\mathbf{v}} \\sum_{i=1}^{N} v_{i} \\left \\{ \\log p_{\\mathcal{F}i} + \\gamma H_i\\right \\} - \\frac{1}{2} \\lambda \\sum_{i=1}^N \\left(v_{i}^2-2v_{i}\\right),\n\t\\label{Eq.12}\n\\end{equation}\nwhere $\\lambda>0$, $v_i\\in[0,1]$. We set the partial gradient of Eq.~(\\ref{Eq.12}) with respect to $v_{i}$ to be zero, then the optimal solution for $v_i$ is:\n\\begin{align}\n\tv_i^* = \\left\\{ \\begin{array}{ll}\n\t\t\\frac{\\log p_{\\mathcal{F}i} + \\gamma H_i + \\lambda}{\\lambda} & \\textrm{if $\\log p_{\\mathcal{F}i} + \\gamma H_i \\geq -\\lambda $}\\\\\n\t\t0 & \\textrm{otherwise}\n\t\\end{array} .\\right.\n\t\\label{Eq.13}\n\\end{align}\nThe larger the log likelihood and entropy are, the higher the weight associated with the $i^{th}$ sample should be. \n\n\\noindent\\textbf{Logarithmic weighting.} This scheme penalizes the output likelihood and entropy logarithmically. The objective of SPUDRFs under a logarithmic weighting scheme can be formulated as:\n\\begin{equation}\n\t\\max_{\\bm{\\Theta},\\bm{\\Pi}, \\mathbf{v}} \\sum_{i=1}^{N} v_{i} \\left \\{ \\log p_{\\mathcal{F}i} + \\gamma H_i\\right \\} - \\sum_{i=1}^N \\left( \\zeta v_{i} - \\frac{\\zeta^{v_i}} {\\log \\zeta} \\right),\n\t\\label{Eq.14}\n\\end{equation}\nwhere $\\zeta=1-\\lambda$, $0 < \\lambda < 1$ and $v_i\\in[0,1]$. 
Similarly, the optimal solution for $v_i$ is\n\\begin{align}\n\tv_i^* = \\left\\{ \\begin{array}{ll}\n\t\t\\frac{\\log \\left( \\zeta - \\log p_{\\mathcal{F}i} - \\gamma H_i \\right)}{\\log \\zeta} & \\textrm{if $\\log p_{\\mathcal{F}i} + \\gamma H_i \\geq -\\lambda $}\\\\\n\t\t0 & \\textrm{otherwise}\n\t\\end{array} .\\right.\n\t\\label{Eq.15}\n\\end{align}\n\n\n\n\\subsection{Underrepresented Example Augmentation}\n\nAs previously explained, the SPUDRFs method places more emphasis on underrepresented examples and may achieve less biased solutions.\nSince the intrinsic reason for SPL's biased solutions is the ignorance of underrepresented examples, we further rebalance training data via distribution reconstruction.\nSpecifically, we distinguish the underrepresented examples whose $H_i$ are larger than $\\beta$, from regular examples at each pace and augment them.\nAs the number of underrepresented examples increases through augmentation, the label distribution imbalance problem is alleviated. \n\n\n\\subsection{Optimization}\n\\label{Learning}\nTo optimize SPUDRFs, as defined in Eq.~(\\ref{Eq.7}), we propose a two-step alternative search strategy (ASS) algorithm: (\\romannumeral1) For sample selection, update $\\mathbf{v}$ with fixed $\\bm{\\Theta}$ and $\\bm{\\Pi}$ (\\romannumeral2) update $\\bm{\\Theta}$ and $\\bm{\\Pi}$ with current fixed sample weights $\\mathbf{v}$.\nIn addition, with fixed $\\mathbf{v}$, our DRFs are learned by alternatively updating $\\bm{\\Theta}$ and $\\bm{\\Pi}$.\nIn \\cite{shen_deep_2018}, the parameters $\\bm{\\Theta}$ for split nodes (\\emph{i.e.}~parameters for VGG) are updated through gradient descent since the loss is differentiable with respect to $\\bm{\\Theta}$.\nIn comparison, the parameters $\\bm{\\Pi}$ for leaf nodes are updated by virtue of variational bounding~\\cite{shen_deep_2018} when fixing $\\bm{\\Theta}$.\n\n\t\n\n\\renewcommand{\\algorithmicrequire}{ \\textbf{Input:}}\n\\renewcommand{\\algorithmicensure}{ \\textbf{Output:}}\n\\begin{algorithm}[t]\n\t\\caption{The training process of SPUDRFs.}\n\t\\label{alg:The}\n\t\\begin{algorithmic}[1]\n\t\t\\REQUIRE\n\t\t$\\mathcal{D}=\\left\\{\\mathbf{x}_i, \\mathbf{y}_i\\right\\}_{i=1}^N$.\n\t\t\\ENSURE\n\t\tModel parameters $\\bm{\\Pi}$, $\\bm{\\Theta}$.\n\t\t\\STATE Initialize $\\bm{\\Pi}^{0}$, $\\bm{\\Theta}^{0}$, $\\lambda^0$, $\\gamma^0$.\n\t\t\\STATE \\textbf{while} not converged \\textbf{do}\n\t\t\\STATE \\quad Update $\\mathbf{v}$ by Eq.~(\\ref{Eq.8}).\n\t\t\\STATE \\quad \\quad \\textbf{while} not converged \\textbf{do}\n\t\t\\STATE \\quad \\quad \\quad \\quad Randomly select a batch from $\\mathcal{D}$.\n\t\t\\STATE \\quad \\quad \\quad \\quad Calculate $H_i$ for each sample by Eq.~(\\ref{Eq.6}).\n\t\t\\STATE \\quad \\quad \\quad \\quad Update $\\bm{\\Theta}$ and $\\bm{\\Pi}$ to maximize Eq.~(\\ref{Eq.7}).\n\t\t\\STATE \\quad \\quad \\textbf{end while}\n\t\t\\STATE \\quad \\quad Increase $\\lambda$ and decrease $\\gamma$.\n\t\n\t\t\\STATE \\textbf{end while}\n\t\\end{algorithmic}\n\t\\label{alg1}\n\\end{algorithm}\n\n\n\\section{Robust Self-Paced Deep Regression Forests with Consideration on underrepresented examples}\n\\label{sec:robust SPUDRFs}\nAs has already been discussed in Sec.~\\ref{sec:SPUDRFs}, SPL tends to place more emphasis on reliable examples to achieve more robust solutions.\nHowever, SPL's intrinsic selection scheme can not exclude noisy examples, especially the examples with labeling noise.\nTo alleviate this drawback, we introduce capped-likelihood function in SPUDRFs, 
which sets an output likelihood below a threshold to zero:\n\\begin{equation}\n\t\\text{cap}(p_{\\mathcal{F}i},\\epsilon) = \\frac{\\max(p_{\\mathcal{F}i}-\\epsilon, 0)}{p_{\\mathcal{F}i}-\\epsilon}p_{\\mathcal{F}i}, \n\t\\label{cap_equation}\n\\end{equation}\nwhere $\\epsilon$ denotes the threshold and $\\epsilon>0$. \nGiven the output likelihood $p_{\\mathcal{F}_i}$, the capped likelihood $p^c_{\\mathcal{F}_i}$ is defined as $\\text{cap}\\left(p_{\\mathcal{F}_i},\\epsilon\\right)$. \nBecause the likelihood values of noisy examples are prone to be small, this operation sets their capped likelihood $p^c_{\\mathcal{F}_i}$ to zero and hence their log likelihood $\\log p^c_{\\mathcal{F}_i}$ to negative infinity. \nThus, noisy examples, in particular examples with labeling noise, would not be selected.\nSPUDRFs with the capped likelihood are referred to as robust SPUDRFs, whose objective is\n\\begin{equation}\n\t\\max_{\\bm{\\Theta},\\bm{\\Pi}, \\mathbf{v}} \\sum_{i=1}^{N} v_{i} \\left \\{ \\log p_{\\mathcal{F}i}^{c} + \\gamma H_i \\right \\} + \\lambda\\sum_{i=1}^N v_i.\n\t\\label{Eq.17}\n\\end{equation}\nThe robust SPUDRFs can be optimized using the optimization method proposed in Sec.~\\ref{Learning}. \nNote that, if $p_{\\mathcal{F}_i}\\leq\\epsilon$, the capped likelihood $p^c_{\\mathcal{F}_i}=0$, that is, $\\log p^c_{\\mathcal{F}_i}=-\\infty$. \nSince $\\lambda$ is a positive factor, maximizing Eq.~(\\ref{Eq.17}) must yield $v_i=0$, meaning that the $i^{th}$ sample would not be selected.\nBy adjusting $\\epsilon$, we can exclude a certain ratio of noisy examples and obtain more robust solutions.\n\n\n\\section{Fairness Metric}\n\\label{sec:fairness metric}\nIn addition to \\emph{accuracy}, fairness should also be an essential metric for measuring the performance of a regression model, particularly when the regression targets are related to sensitive attributes.\nFor example, in age estimation, we expect our model to treat the young and the elderly fairly, \\emph{i.e.}, not to result in large MAEs for the former but small MAEs for the latter.\nThe notion of fairness was originally defined with respect to a protected attribute such as gender, race, or age.\nHowever, the term fairness has a range of potential definitions.
Here, we adopt a notion of fairness that is for sensitive features~\\cite{fitzsimons2019general}.\n\\emph{Defining a fairness metric for regression is one of the contributions of this work.}\n\nThe present studies~\\cite{agarwal2019fair, berk2021fairness, komiyama2018nonconvex, zafar2017fairness} mostly refer to fairness constraints in regressions, for example, statistical parity or bounded group loss~\\cite{agarwal2019fair, komiyama2018nonconvex}, but seldom refers to fairness metrics.\nIn this work, we define a new fairness metric for regression models.\nTo be specific, the test dataset is divided into $K$ subsets, each of which is denoted by $\\mathbf{D}_k=\\left\\{\\mathbf{x}_i, y_i| y_i \\in \\mathcal{G}_k\\right\\}$, and $\\mathcal{G}_k$ denotes the $k^{th}$ group.\nA fair model is expected to have the same performance on all subsets, which can be described mathematically as:\n\\begin{equation}\n\t\\label{Eq1_fairness}\n\t\\mathbb{E}_{k}\\left[L \\left(\\hat{y}, y\\right)\\right]= \\mathbb{E}_{l}\\left[L\\left(\\hat{y}, y\\right)\\right] \\quad \\forall k,l \\in \\left\\{1,2,..., K\\right\\}, k\\neq l.\n\\end{equation}\nwhere $\\mathbb{E}_k\\left[\\cdot\\right]$ denotes the expectation with respect to the loss $L\\left( \\cdot \\right)$ over group $\\mathbf{D}_k$, $\\hat{y}$ is the predicted value of the model and $y$ is the real target value. Motivated by $p\\%-$rule \\cite{zafar2017fairness}, which measure classification fairness, we evaluate fairness between two subsets $\\mathbf{D}_k$ and $\\mathbf{D}_l$ as follows:\n\\begin{equation}\n\t\\label{Eq2_fairness}\n\tf\\left( \\mathbf{D}_k,\\mathbf{D} _l\\right) = \\min\\left(\\frac{\\mathbb{E}_{k}\\left[L \\left(\\hat{y}, y\\right)\\right]}{\\mathbb{E}_{l}\\left[L\\left(\\hat{y}, y\\right)\\right]} , \\frac{\\mathbb{E}_{l}\\left[L \\left(\\hat{y}, y\\right)\\right]}{\\mathbb{E}_{k}\\left[L \\left(\\hat{y}, y\\right)\\right]} \\right).\n\\end{equation}\nThe loss expectation ratio characterizes the model's fairness on every two subsets. A small ratio means the performance on such two subsets is significantly distinct, reflecting model bias against a particular subset. \nOn the contrary, a large ratio indicates the losses on $\\mathbf{D}_k$ and $\\mathbf{D}_l$ are similar, indicating the model is relatively fair for $\\mathbf{D}_k$ and $\\mathbf{D}_l$. In particularly, when the ratio is equal to 1, the model satisfies Eq.~(\\ref{Eq1_fairness}), and $\\mathbf{D}_k$ and $\\mathbf{D}_l$ are treated fairly.\n\n\n\\begin{figure}\n\t\\centering\n\t\\subfloat[]{\n\t\t\\includegraphics[width=0.48\\textwidth]{age_new_compressed.pdf}%\n\t\t\\label{fig:age_samples}}\\\\\n\t\\subfloat[]{\n\t\t\\includegraphics[width=0.48\\textwidth]{pose_new_compressed.pdf}%\n\t\t\\label{fig:head_samples}}\\\\\n\t\\subfloat[]{\n\t\t\\includegraphics[width=0.48\\textwidth]{gaze_new_compressed.pdf}%\n\t\t\\label{fig:gaze_samples}}\\\\\n\t\\caption{Example images in the Morph \\uppercase\\expandafter{\\romannumeral2}, FG-NET, BIWI, BU3DFE and MPIIGaze datasets. \\textbf{(a):} The images from the Morph \\uppercase\\expandafter{\\romannumeral2} and FG-NET dataset. \\textbf{(b):} The images from the BIWI and BU3DFE dataset. 
\\textbf{(c):} The images from the MPIIGaze dataset.}\n\t\\label{dataset_samples}\n\\end{figure}\n\n\nFinally, the regression fairness is defined as the expectation of the fairness between any two subsets,\n\\begin{equation}\n\t\\label{Eq3_fairness}\n\t\\text{FAIR} = \\mathbb{E}\\left[f\\left(\\mathbf{D}_i,\\mathbf{D}_j\\right)\\right].\n\\end{equation}\nIn SPUDRFs, when we set $\\gamma > 0$, all samples are sorted by both likelihood and entropy. As a result, easy and underrepresented samples are selected first, which means that samples from all subsets would be selected at early paces. Therefore, our model can alleviate the prejudice against particular subsets and improve regression fairness. More extensive evaluation results can be found in Sec.~\\ref{fairness_section}.\n\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\subfloat[Morph \\uppercase\\expandafter{\\romannumeral2}]{\n\t\t\\includegraphics[width=1.0\\textwidth]{SPUDRFs_validation_v3_compressed.pdf}%\n\t\t\\label{fig:spu_validation_morph}}\\\\\n\t\\subfloat[MPIIGaze]{\n\t\t\\includegraphics[width=1.0\\textwidth]{Fig3_mpii_v4_compressed.pdf}%\n\t\t\\label{fig:spu_validation_mpii}}\\\\\n\t\\caption{Visualization of the SPL process on the Morph \\uppercase\\expandafter{\\romannumeral2} and MPIIGaze datasets. \\textbf{Left:} The typical worst cases at each pace. The images in each panel are sorted by increasing pace. The two numbers below each image are the labels (left) and predicted targets (right). \\textbf{Right:} MAE comparison between SP-DRFs and SPUDRFs at each pace. The MAE bins are sorted by increasing pace.}\n\t\\label{SPUDRFs_validation}\n\\end{figure*}\n\n\n\n\t\\section{Experiment}\n\\label{sec:experiment} \n\\subsection{Datasets and Experimental Setup}\n\\label{setup}\n\\noindent\\textbf{Dataset.} There are five datasets used in our experiments, namely Morph \\uppercase\\expandafter{\\romannumeral2}, FG-NET, BIWI, BU3DFE and MPIIGaze.\n\n\n\\noindent\\textbf{Morph \\uppercase\\expandafter{\\romannumeral2}.} \tThe Morph \\uppercase\\expandafter{\\romannumeral2}~\\cite{ricanek2006morph} dataset contains $55,134$ face images of $13618$ individuals with unbalanced gender and ethnicity distributions.\nThe images have near-frontal poses, neutral expressions, and uniform illumination.\n\n\n\\noindent\\textbf{FG-NET.} The FG-NET~\\cite{panis2016overview} dataset includes $1,002$ color and grey images of $82$ people, with each subject having more than $10$ photos taken at different ages. \nSince all images were taken in an uncontrolled environment, there is significant variation in the lighting, pose, and expression (\\emph{i.e.}~PIE) of the faces in the dataset.\n\n\\noindent\\textbf{BIWI.} The BIWI dataset~\\cite{fanelli2013random} includes $15678$ images of different persons collected with a Kinect sensor, with head poses whose pitch, yaw, and roll angles mainly range within $\\pm 60^{\\circ}$, $\\pm 75^{\\circ}$ and $\\pm 50^{\\circ}$, respectively.\nThese images are from $20$ subjects, including ten males and six females, where four males have been captured twice with\/without glasses.\n\n\n\\noindent\\textbf{BU3DFE.} The BU-3DFE dataset \\cite{pan2016mixture} is collected from $100$ subjects, of whom $44$ are male and $56$ are female. Following the work \\cite{pan2016mixture}, we randomly rotated the 3D face models to produce $6000$ images with pitch and yaw angles ranging within $\\pm 30^{\\circ}$ and $\\pm 90^{\\circ}$.
\n\n\n\\noindent\\textbf{MPIIGaze.} The MPIIGaze dataset~\\cite{zhang2015appearance} includes $213659$ images from $15$ persons. The number of images for each person is between $1498$ and $34745$. The normalized gaze angle is in the range of $[-20^\\circ, +5.0^\\circ]$ degrees in vertical and $[-25^\\circ, +25^\\circ]$ degrees in horizontal.\nDue to massive deviation of image number amongst different persons, similar to \\cite{zhang2015appearance}, $1500$ images from the left and right eyes of each person were chosen for final experiment. \n\n\n\\noindent\\textbf{Reprocessing and Data Augmentation.}\nMTCNN~\\cite{zhang_joint_2016} was used for joint face detection and alignment on the Morph \\uppercase\\expandafter{\\romannumeral2} and FG-NET datasets.\nIn addition, following~\\cite{shen_deep_2018}, two methods were adopted for data augmentation (\\romannumeral1) random cropping; and (\\romannumeral2) random horizontal flipping. \nOn the BIWI and BU3DFE datasets, only random cropping was adopted for augmentation. \nOn the MPIIGaze dataset, two normalized eye images (\\emph{i.e.}~left and right) were obtained following the work~\\cite{zhang2015appearance}, and\nonly random cropping was used for data augmentation.\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=0.92\\textwidth]{Uncertainty_efficacy.pdf}\n\t\\caption{Visualization of leaf node distribution. The $1^{st}$, $3^{rd}$, and $6^{th}$ paces are chosen for visualization. For SP-DRFs, the Gaussian means of leaf nodes (denoted by red points in the second row) concentrate in a small range, resulting in seriously biased solutions. For SPUDRFs, the Gaussian means of leaf nodes (denoted by orange points in the third row) distribute in a reasonable range, resulting in lower MAEs.}\n\t\\label{Uncertainty_efficacy}\n\\end{figure*}\n\n\n\\noindent\\textbf{Parameter Setting.}\nVGG-16~\\cite{Simonyan2015} was employed as the backbone network of SPUDRFs. 
\nFor MPIIGaze, following \\cite{fischer2018rt}, two VGG-16 networks were used.\nIn addition, the pre-trained models were VGG-16 for MPIIGaze, and VGG-Face~\\cite{parkhi2015deep} for the other datasets.\nWhen training VGG-16, the batch size was $32$ for Morph~\\uppercase\\expandafter{\\romannumeral2}, BIWI and BU3DFE, $8$ for FG-NET, and $128$ for MPIIGaze.\nThe dropout ratio was $0.5$.\nThe maximum number of iterations in each pace was $80k$ for Morph~\\uppercase\\expandafter{\\romannumeral2}, $10k$ for FG-NET, $20k$ for BIWI and BU3DFE, and $5k$ for MPIIGaze.\nThe SGD optimizer was used for BIWI, and the Adam optimizer was used for the other datasets.\nThe initial learning rate was $2 \\times 10^{-5}$ for Morph~\\uppercase\\expandafter{\\romannumeral2} and BU3DFE, $1 \\times 10^{-5}$ for FG-NET, $0.2$ for BIWI, and $3 \\times 10^{-5}$ for MPIIGaze.\nWe reduced the learning rate ($\\times 0.5$) every $10k$ iterations for Morph~\\uppercase\\expandafter{\\romannumeral2}, BIWI and BU3DFE.\nHere, some hyper-parameter settings are slightly different from the Caffe version~\\cite{pan2020self}.\n\nThe hyper-parameters of DRFs were: tree number ($5$), tree depth ($6$), and output unit number of feature learning ($128$).\nThe number of iterations for updating the leaf node predictions was $20$, and the number of images used for these updates was the whole training set for MPIIGaze and $50$ times the batch size for the other datasets.\nIn the first pace, $50\\%$ of the samples were selected for training.\nHere, $\\lambda$ was set to guarantee that the $50\\%$ of samples with the largest $\\log p_{\\mathcal{F}i} + \\gamma H_i$ values were involved.\n$\\lambda'$ was set to ensure that $10\\%\\sim20\\%$ of the selected samples adopt soft weighting under the mixture weighting scheme.\nTo promote efficiency, the samples selected in the previous pace were preserved and only the rest were ranked for the current sample selection.\nIn addition, $\\gamma$ was initialized to be $15$ for Morph \\uppercase\\expandafter{\\romannumeral2} and BIWI, and $5$ for FG-NET, BU-3DFE, and MPIIGaze.\n$\\beta$ was used to recognize underrepresented examples that need to be augmented twice in each pace ($1000$ for BIWI, $400$ for BU3DFE, $2000$ for MPIIGaze).\nThe number of paces was empirically set to be $10$ for Morph \\uppercase\\expandafter{\\romannumeral2}, $6$ for BIWI and MPIIGaze, and $3$ for FG-NET and BU-3DFE. \nExcept for the first pace, an equal proportion of the remaining data was gradually involved at each pace.\n\n\n\\noindent\\textbf{Evaluation Details.} To evaluate regression accuracy, we used the mean absolute error (MAE).\nMAE is defined as $\\sum_{i=1}^{N}\\left|\\hat{y}_{i}-y_{i}\\right|\/N$, where $\\hat{y}_{i}$ represents the predicted output for the $i^{th}$ sample, and $N$ is the total number of images for testing.\nIn addition, for age estimation, CS calculates the percentage of images whose predictions fall in the range $\\left[y_{i}-L, y_{i}+L\\right]$: $CS(L)=\\sum_{i=1}^{N}\\left[\\vert\\hat{y}_{i}-y_{i}\\vert \\leq L\\right]\/N \\cdot 100 \\%$, where $[ \\cdot ]$ denotes an indicator function and $L$ is the error level. \nFurther, to measure regression fairness, we adopt our proposed fairness metric FAIR given in Sec.~\\ref{sec:fairness metric}.\n\nTo calculate the above metrics, for Morph \\uppercase\\expandafter{\\romannumeral2}, BIWI and BU3DFE, each dataset was divided into a training set ($80\\%$) and a testing set ($20\\%$). The random division was repeated $5$ times, and the reported MAEs were averaged over the $5$ runs.\nThe leave-one-person-out protocol was used for FG-NET~\\cite{shen_deep_2018} and MPIIGaze~\\cite{zhang2015appearance}, where one subject was used for testing and the remaining subjects for training.
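\n\nFor concreteness, the three metrics can be computed along the following lines. This is a minimal NumPy sketch based on the definitions above and on the target-value grouping of Sec.~\\ref{sec:fairness metric}; the function and variable names are illustrative and do not refer to a particular implementation, and every group is assumed to be non-empty with non-zero loss.\n\\begin{verbatim}\nimport itertools\nimport numpy as np\n\ndef mae(y_true, y_pred):\n    # mean absolute error: sum_i |y_hat_i - y_i| / N\n    return np.mean(np.abs(y_pred - y_true))\n\ndef cs(y_true, y_pred, L=5):\n    # cumulative score: percentage of samples with |y_hat_i - y_i| <= L\n    return 100.0 * np.mean(np.abs(y_pred - y_true) <= L)\n\ndef fair(y_true, y_pred, group_edges):\n    # FAIR: mean over all pairs of groups of the min ratio of the\n    # per-group mean absolute losses (assumes non-empty groups,\n    # non-zero per-group losses)\n    group_ids = np.digitize(y_true, group_edges)\n    losses = [np.mean(np.abs(y_pred[group_ids == g] - y_true[group_ids == g]))\n              for g in np.unique(group_ids)]\n    ratios = [min(a / b, b / a) for a, b in itertools.combinations(losses, 2)]\n    return np.mean(ratios)\n\\end{verbatim}\nHere, \\texttt{group\\_edges} contains the boundaries of the target-value groups, e.g., the decade boundaries used for Morph \\uppercase\\expandafter{\\romannumeral2} in Sec.~\\ref{fairness_section}.\n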
\n\n\n\\subsection{Validity of SP-DRFs and SPUDRFs}\n\\label{valid}\nThis section validates the original SPL method as well as the newly proposed SPL method for learning DDMs.\nIn the following, SP-DRFs denotes self-paced deep regression forests, which learn DRFs using the original SPL method.\n\n\n\n\\noindent \\textbf{Self-Paced Learning.}\nRecall that learning DDMs in a gradual manner is more consistent with the cognitive process of human beings, and that noisy examples can be distinguished by virtue of the learned knowledge.\nThat means the learning can place more emphasis on ``reliable'' examples.\nTo show this, Fig.~\\ref{SPUDRFs_validation}(a) illustrates representative face images at each learning pace of SP-DRFs on the Morph \\uppercase\\expandafter{\\romannumeral2} dataset, sorted by increasing $\\lambda$ and decreasing $\\gamma$.\nThe two numbers below each image are the actual age (left) and the predicted age (right).\nWe observe that the training images involved at each new pace become progressively more confusing and noisy compared to the images at early paces.\nSince the current model is initialized by the results obtained at the last pace, the updated model is adaptively calibrated by ``reliable'' examples rather than by confusing and noisy ones.\nThus, SP-DRFs have improved MAE and CS compared to DRFs.\nWe observe that SP-DRFs gain an improvement of $0.33$ in MAE $\\left(2.17\\rightarrow1.84\\right)$ and of $1.55\\%$ in CS $\\left(91.3\\% \\rightarrow92.85\\%\\right)$, as shown in Fig.~\\ref{morph_experiment}(a).\n\n\nSome representative eye images in the gradual learning sequence for the MPIIGaze dataset are shown in Fig.~\\ref{SPUDRFs_validation}(b). \nThe images in each panel are sorted by increasing $\\lambda$ and decreasing $\\gamma$. \nThe two numbers below each pair of images are the actual gaze direction (left) and the predicted gaze direction (right). \nThe same phenomena---the easy examples are prone to be selected at the early paces, while the confusing and noisy ones are prone to be selected at the later paces---can be observed.
\nSince the updated model at each pace is adaptively calibrated by ``reliable'' examples rather than by confusing and noisy ones, the MAE associated with each pace decreases step by step.\nFinally, the MAE of SP-DRFs decreases to $4.57$, whereas the MAE of DRFs is $4.62$ (see Table~\\ref{MPII_table}).\n\n\n\\noindent\\textbf{Considering Ranking Fairness.}\n\\label{ExpUnderSamples}\nAs was mentioned in Sec.~\\ref{sec:SPUDRFs}, the existing SPL methods may exacerbate the bias of solutions.\nFig.~\\ref{Uncertainty_efficacy} visualizes the leaf node distributions of SP-DRFs in the gradual learning process.\nThe Gaussian means $\\mu_l$ associated with the $160$ leaf nodes, where every $32$ leaf nodes belong to one tree, are plotted in each sub-figure.\nThree paces, \\emph{i.e.}~the $1^{st}$, $3^{rd}$, and $6^{th}$ pace, are randomly chosen for visualization.\nFor clarity, only pitch and yaw angles are shown.\nMeanwhile, the leaf node distributions of SPUDRFs are also visualized in Fig.~\\ref{Uncertainty_efficacy}.\n\n\nIn Fig.~\\ref{Uncertainty_efficacy}, the comparison between SP-DRFs and SPUDRFs validates our proposed new SPL method.\nIn SP-DRFs, because ranking fairness is not considered, the leaf nodes (red points in the $2^{nd}$ row) are concentrated only in the small range over which most samples are distributed, thus leading to seriously biased solutions.\nThe poor MAEs of SP-DRFs can serve as evidence for this.\nIn contrast, because ranking fairness is considered in SPUDRFs, the leaf nodes are distributed over a wide range that also covers underrepresented examples, thus improving performance.\nIn the pitch and yaw directions, SPUDRFs achieve the best performance with MAEs of $0.71$ and $0.69$, compared to SP-DRFs with MAEs of $1.05$ and $1.23$ ($47.9\\%$ and $78.3\\%$ relative improvements). \n\n\n\n\n\\subsection{Comparison with State-of-the-art Methods}\n\\label{sec:Comparison}\n\nWe compare SPUDRFs with other state-of-the-art methods on three vision tasks: age estimation, head pose estimation, and gaze estimation.\n\n\n\n\n\\noindent\\textbf{Results on Age Estimation.} The comparison between our method and other baselines on the Morph \\uppercase\\expandafter{\\romannumeral2} and FG-NET datasets is shown in Fig.~\\ref{morph_experiment}(a) and Fig.~\\ref{fgnet_experiment}(b).\nThe baseline methods include LSVR~\\cite{guo_human_2009}, RCCA \\cite{Huerta2014Facial}, OHRank~\\cite{Chang2011Ordinal}, OR-CNN \\cite{niu_ordinal_2016}, Ranking-CNN \\cite{chen_using_2017}, DRFs~\\cite{shen_deep_2018}, DLDL-v2~\\cite{gao_age_2018}, and PML~\\cite{deng2021pml}.\nThe results show some consistent trends.\nFirst, SPUDRFs have superior performance compared to conventional discriminative models, such as LSVR~\\cite{guo_human_2009} and OHRank~\\cite{Chang2011Ordinal}.\nSecond, SP-DRFs consistently outperform other DDMs.\nCompared to DRFs, our gains in MAE are $0.33$ on Morph \\uppercase\\expandafter{\\romannumeral2} and $0.12$ on FG-NET, and in CS are $1.55\\%$ and $2.91\\%$, respectively.\nThese improvements demonstrate that learning DRFs in a self-paced manner is more reasonable.\nThird, SPUDRFs outperform SP-DRFs on both MAE and CS.\n\n\nFig.~\\ref{morph_experiment}(b) shows the CS curves of SPUDRFs, SP-DRFs and other baseline methods on the Morph \\uppercase\\expandafter{\\romannumeral2} dataset, and Fig.~\\ref{fgnet_experiment}(b) shows the CS curves of different methods on the FG-NET dataset. \nOn both datasets, SPUDRFs consistently outperform other DDMs.
\nWe observe, the CS of SPUDRFs reaches $93.34\\%$ at error level $L=5$, significantly outperforming DRFs by $2.04\\%$ on the Morph \\uppercase\\expandafter{\\romannumeral2} dataset. \nWe also observe that SPUDRFs outperform DRFs by $4.09\\%$ in CS, on the FG-NET dataset.\nThe CS increase clearly validates our proposed self-paced learning method.\n\n\n\n\\begin{figure}[t] \n\t\\centering \n\t\\subfloat[]{\n\t\t\\begin{tabular}{@{}l|c|c}\n\t\t\t\\hline\n\t\t\tMethod & MAE$\\downarrow$ & CS$\\uparrow$\\\\\n\t\t\t\\hline\n\t\t\t\\hline\n\t\t\tLSVR \\cite{guo_human_2009} & 4.31 & 66.2\\% \\\\\n\t\t\tRCCA \\cite{Huerta2014Facial} & 4.25 & 71.2\\% \\\\\n\t\t\tOHRank \\cite{Chang2011Ordinal} & 3.82 & N\/A \\\\\n\t\t\tOR-CNN \\cite{niu_ordinal_2016} & 3.27 & 73.0\\% \\\\\n\t\t\tRanking-CNN \\cite{chen_using_2017} & 2.96 & 85.0\\% \\\\\n\t\t\tDLDL-v2 \\cite{gao_age_2018}& 1.97 & N\/A \\\\\n\t\t\tDRFs \\cite{shen_deep_2018} & 2.17 & 91.3\\% \\\\\n\t\t\tPML \\cite{deng2021pml} & 2.15 & N\/A \\\\\n\t\t\tSP-DRFs & 1.84 & 92.85\\% \\\\\n\t\t\tSPUDRFs & \\textbf{1.78} & \\textbf{93.34\\%} \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\t\\label{tab:morph_mae}\n\t}\\\\\n\t\\subfloat[]{\n\t\t\\includegraphics[width=0.42\\textwidth]{morph_cs5.pdf}\n\t\t\\label{fig:morph_cs}\n\t} \n\t\\caption{Quantitative comparison with state-of-the-art methods on the Morph \\uppercase\\expandafter{\\romannumeral2} dataset. \\textbf{Upper:} MAE comparison results. \\textbf{Lower:} CS comparison results.}\n\t\\label{morph_experiment}\n\\end{figure}\n\n\\begin{figure} \n\t\\centering \n\t\\subfloat[]{\n\t\t\\begin{tabular}{@{}l|c|c}\n\t\t\t\\hline\n\t\t\tMethod & MAE$\\downarrow$ & CS$\\uparrow$\\\\\n\t\t\t\\hline\n\t\t\t\\hline\n\t\t\tIIS-LDL \\cite{xin_geng_facial_2013} & 5.77 & N\/A \\\\\n\t\t\tLARR \\cite{guodong_guo_image-based_2008} & 5.07 & 68.9\\% \\\\\n\t\t\tMTWGP \\cite{Yu2010Multi} & 4.83 & 72.3\\% \\\\\n\t\t\tDIF \\cite{han_demographic_2015} & 4.80 & 74.3\\% \\\\\n\t\t\tOHRank \\cite{Chang2011Ordinal} & 4.48 & 74.4\\% \\\\\n\t\t\tCAM \\cite{Luu2013Contourlet} & 4.12 & 73.5\\% \\\\\n\t\t\tDRFs \\cite{shen_deep_2018} & 2.80 & 84.50\\% \\\\\n\t\t\tSP-DRFs& 2.68 & 87.41\\% \\\\\n\t\t\tSPUDRFs & \\textbf{2.64} & \\textbf{88.59\\%}\\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\t\\label{tab:fgnet_mae}\n\t}\\\\\n\t\\subfloat[]{\n\t\t\t\\includegraphics[width=0.42\\textwidth]{fgnet_cs_new1.pdf}\n\t\t\\label{fig:fgnet_cs}\n\t}\n\t\\caption{Quantitative comparison with state-of-the-art methods on the FG-NET dataset. \\textbf{Upper:} MAE comparison results. \\textbf{Lower:} CS comparison results.}\n\t\\label{fgnet_experiment}\n\\end{figure}\n \n\t\n\n\\noindent\\textbf{Results on Head Pose Estimation.} \nRecall that considering underrepresented examples in SPL is particularly crucial when the training data has an imbalanced distribution problem.\nSec.~\\ref{valid} has shown the effectiveness of SP-DRFs and SPUDRFs for head pose estimation tasks on the BIWI dataset.\nFig.~\\ref{Uncertainty_efficacy} shows the Gaussian means of learned leaf nodes of SP-DRFs and SPUDRFs on the BIWI dataset.\nIn SP-DRFs, because the underrepresented examples are neglected, the learned leaf nodes are distributed over a small range, resulting in seriously biased solutions.\nIn SPUDRFs, owing to consideration of underrepresented examples, the learned leaf nodes are distributed over a wide range, leading to steadily improved MAE results. 
\nThis section aims to show that considering ranking fairness in SPL can obtain more reasonable results.\n\n\n\nTab.~\\ref{headpose_table} shows the comparison results of SP-DRFs and SPUDRFs with recent head pose estimation approaches. \nBecause SVR~\\cite{drucker1997support}, RRF~\\cite{liaw2002classification} and KPLS~\\cite{al2012partial} are all conventional regression methods, they have inferior performance relative to MoDRN.\nSPUDRFs achieve the best performance with an MAE of $0.74$ on the BIWI dataset and $0.82$ on the BU-3DFE dataset, which is state-of-the-art performance. \nSuch a significant gain in MAE $(1.13\\rightarrow0.74)$ on BIWI demonstrates that considering ranking fairness in early paces when training DRFs can lead to much more reasonable results. \nThe explanation is that, as illustrated in Fig.~\\ref{Uncertainty_efficacy}, the learned leaf nodes of SPUDRFs are distributed over a wide range that can cover underrepresented examples.\n \\vspace{-0.4cm}\n \\begin{table}[h]\n \t\\centering\n \t\\caption{Quantitative comparison with state-of-the-art methods. \\textbf{Left:} Comparison results on the BIWI dataset, \\textbf{Right:} Comparison results on the BU-3DFE dataset.}\n \t\\label{headpose_table}\n \t\\begin{tabular}[h]{cc}\n \t\t\\small\n \t\t\\scalebox{1.0}{\n \t\t\t\\begin{tabular}{@{}l|c}\n \t\t\t\t\\hline\n \t\t\t\tMethod & MAE$\\downarrow$\\\\\n \t\t\t\t\\hline\n \t\t\t\t\\hline\n \t\t\t\n \t\t\t\tSVR~\\cite{drucker1997support} & 3.14 \\\\\n \t\t\t\tRRF~\\cite{liaw2002classification} & 3.06 \\\\\n \t\t\t\tKPLS~\\cite{al2012partial} & 2.88 \\\\\n \t\t\t\tSAE~\\cite{hinton2006reducing} & 1.94 \\\\\n \t\t\t\tMoDRN~\\cite{huang2018mixture} & 1.62 \\\\\n \t\t\t\tDRFs~\\cite{shen_deep_2018} & 1.33\\footnotemark[1]\\\\\n \t\t\t\tSP-DRFs & 1.13 \\\\\n \t\t\t\tSPUDRFs & \\textbf{0.74} \\\\\n \t\t\t\t\\hline\n \t\t\t\\end{tabular}\n \t\t}\n \t\t&\n \t\t\\small\n \t\t\\scalebox{1.0}{\n \t\t\t\\begin{tabular}{@{}l|c}\n \t\t\t\t\\hline\n \t\t\t\tMethod & MAE$\\downarrow$\\\\\n \t\t\t\t\\hline\n \t\t\t\t\\hline\n \t\t\t\tSVR~\\cite{drucker1997support} & 4.21 \\\\\n \t\t\t\tSAE~\\cite{hinton2006reducing} & 4.14 \\\\\n \t\t\t\tKPLS~\\cite{al2012partial} & 4.12 \\\\\n \t\t\t\tRRF~\\cite{liaw2002classification} & 4.09 \\\\\n \t\t\t\tMoDRN~\\cite{huang2018mixture} & 3.86 \\\\\n \t\t\t\tDRFs~\\cite{shen_deep_2018} & 0.99 \\\\\n \t\t\t\tSP-DRFs & 0.89 \\\\\n \t\t\t\tSPUDRFs & \\textbf{0.82} \\\\\n \t\t\t\t\\hline\n \t\t\t\\end{tabular}\n \t\t} \\\\\n \t\\end{tabular}\n \\end{table}\n\\footnotetext[1]{The reported results are better than our previous work~\\cite{pan2020self} because we used floating point labels in our current experiment but integral labels in~\\cite{pan2020self}.}\n\t\n\n\\noindent\\textbf{Results on Gaze Estimation.}\nFor gaze estimation, the accuracy comparisons between our proposed SP-DRFs, SPUDRFs and other baselines on the MPIIGaze dataset are shown in Tab.~\\ref{MPII_table}. \nBoth the MAE and standard deviation across all persons are reported. \nNote that the standard deviation of RT-GENE~\\cite{fischer2018rt}, Pict-Gaze~\\cite{park2018deep}, and Ordinal Loss~\\cite{guo2021order} are not reported because the original studies do not provide this information. 
\nAs shown in Tab.~\\ref{MPII_table}, we observe that SPUDRFs outperform all baseline methods in MAE, confirming the effectiveness of our proposed method.\nBecause RT-GENE~\\cite{fischer2018rt} directly maps the features extracted by VGG-16 to gaze through multiple FC layers, it has inferior performance relative to DRFs $(4.62\\rightarrow4.80)$. \nFor a fair comparison, we chose RT-GENE with $1$ model.\nFurther, because SPUDRFs method learn DRFs in a self-paced manner and take into account ranking fairness, it has a further gain in MAE over DRFs $(4.62\\rightarrow4.45)$. \nPict-Gaze~\\cite{park2018deep} regresses an input image to an intermediate pictorial representation and then regresses the representation to the gaze direction.\nOrdinal Loss~\\cite{guo2021order} utilizes ordinal loss with order regularization to solve the regression problem.\nThe two methods mentioned above do not take into consideration the differences amongst samples; they train all examples simultaneously and thus have inferior MAEs. \nWe observe that the MAE of Pict-Gaze~\\cite{park2018deep} and Ordinal Loss~\\cite{guo2021order} are $4.56$ and $4.49$ respectively, while ours is $4.45$.\nIt's noteworthy that SP-DRFs, when compared with DRFs, only promotes MAE slightly.\nThis is probably due to the obvious distribution difference between training data and test data in the leave-one-out setting.\n\n\n\n\\begin{table}[h]\n\t\\centering\n\t\\caption{Quantitative comparison with state-of-the-art methods on the MPIIGaze dataset.}\n\t\\label{MPII_table}\n\t\\begin{tabular}[h]{cc}\n\t\t\\small\n\t\t\\scalebox{1.0}{\n\t\t\t\\begin{tabular}{@{}l|c}\n\t\t\t\t\\hline\n\t\t\t\tMethod & MAE$\\downarrow$\\\\\n\t\t\t\t\\hline\n\t\t\t\t\\hline\n\t\t\t\tMPIIGaze~\\cite{zhang2015appearance} & 6.30$\\pm 1.80$ \\\\\n\t\t\t\tiTracker~\\cite{krafka2016eye} & 6.20$\\pm 0.85$ \\\\\n\t\t\t\tGazeNet+~\\cite{zhang2017mpiigaze} & 5.40$\\pm 0.67$\\\\\n\t\t\t\tMeNets~\\cite{xiong2019mixed} & 4.90$\\pm 0.59$\\\\\n\t\t\t\tRT-GENE~\\cite{fischer2018rt} & 4.80$\\pm -- $\\\\\n\t\t\t\tPict-Gaze~\\cite{park2018deep} & 4.56$\\pm -- $\\\\\n\t\t\t\tOrdinal Loss~\\cite{guo2021order} & 4.49$\\pm -- $\\\\\n\t\t\t\tDRFs~\\cite{shen_deep_2018} & 4.62$\\pm 0.89 $ \\\\\n\t\t\t\tSP-DRFs & 4.57$\\pm 0.78 $ \\\\\n\t\t\t\tSPUDRFs & $\\textbf{4.45}\\pm \\textbf{0.84}$ \\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t}\n\t\\end{tabular}\n\\end{table}\n\n\\begin{table*}\n\t\\centering\n\t\\caption{Quantitative Evaluation Results using Different Weighting Schemes. 
}\n\t\\begin{tabular}{c|c|c|c|c|c|c|c|c|cc}\n\t\t\\hline\\hline\n\t\t\\multirow{2}{*}{Weighting Schemes} & \\multicolumn{2}{c|}{MORPH} & \\multicolumn{2}{c|}{FGNET} & \\multicolumn{2}{c|}{BIWI} & \\multicolumn{2}{c|}{BU-3DFE} & \\multicolumn{2}{c}{MPIIGaze} \\\\ \\cline{2-11}\n\t\t& SP-DRFs & SPUDRFs & SP-DRFs & SPUDRFs & SP-DRFs & SPUDRFs & SP-DRFs & SPUDRFs & \\multicolumn{1}{c|}{SP-DRFs} & SPUDRFs \\\\ \\hline\\hline\n\t\tHard & 1.85 & \\textbf{1.78}& \\textbf{2.68}& 2.66 & 1.24 & 0.76 & 0.94 & 0.84 & \\multicolumn{1}{c|}{4.58} & \\textbf{4.45} \\\\\n\t\tLinear &1.84 & 1.80 & 2.69 & 2.66 & 1.18 & 0.79 & 0.92 & \\textbf{0.82}& \\multicolumn{1}{c|}{\\textbf{4.57}} & 4.46 \\\\\n\t\tLog & 1.85 & 1.81 & 2.68& \\textbf{2.64} & \\textbf{1.13} & 0.77 & \\textbf{0.89} & 0.82& \\multicolumn{1}{c|}{4.60} & 4.48 \\\\\n\t\tMixture & \\textbf{1.84} & 1.80 & 2.79 & 2.70 & 1.26 & \\textbf{0.74}& 0.93 & 0.86 & \\multicolumn{1}{c|}{4.58} & 4.45\\\\ \\hline\\hline\n\t\\end{tabular}\n\t\\label{ablation_experiments}\n\\end{table*}\n\n\t\\subsection{Different Weighting Schemes}\nA potential concern for SP-DRFs and SPUDRFs is that different weighting schemes could affect the estimation performance on a variety of visual tasks.\nTo evaluate this, we compare SP-DRFs\/SPUDRFs under different weighting schemes, including hard, mixture, and soft weighting. \t\nUnder all weighting schemes, $\\lambda$ was set as in Sec.~\\ref{setup}. \nUnder the mixture weighting scheme, $\\lambda'$ was set to ensure that $10\\%$ selected samples adopt soft weighting for Morph \\uppercase\\expandafter{\\romannumeral2}, and 20\\% samples for other datasets.\n\n\n\n\nTable.~\\ref{ablation_experiments} shows the comparison results.\nThe performances of SP-DRFs\/SPUDRFs with different weighting schemes are only slightly different.\nWe observe that the weights for a large proportion of examples are close to 1.\nFor example, under the log weighting scheme, only $2\\%$ of examples have weights below $0.5$.\nUnder the mixture weighting scheme, the proportion of samples whose weights are 1 can be set manually.\nWe chose to set this proportion to be $0.8\\sim0.9$ on different tasks.\nThe MAEs are not guaranteed to be better than other weighting schemes.\n\n\n\t\n\n\\subsection{Robust SPUDRFs}\nThe intuition for SPUDRFs to work better on different visual tasks is its improved robustness, \\emph{i.e.}, emphasizing more on ``reliable\" examples. \nTo further promote the robustness, we propose robust SPUDRFs in Sec.~\\ref{sec:robust SPUDRFs}, which enable SPUDRFs to handle labeling noise.\nWe added noise to labels to test the validity of robust SPUDRFs on the above datasets. \nSpecifically, we chose $10\\%$ samples in Morph \\uppercase\\expandafter{\\romannumeral2}, BIWI or MPIIGaze datasets, and added Gaussian noise $\\mathcal{N}\\left(0, 10\\right)$ to their labels. \nTo control the proportion ($0\\%\\sim20\\%$) of samples whose likelihoods are capped to be 0, \\emph{i.e.}, the portion of samples to be excluded, we set $\\epsilon$ at variant values.\n\n\n\nFig.~\\ref{capped_experiments} shows the MAE curves of SPUDRFs across variant capped proportions. \nWe observe that, when no example's likelihood is capped at 0, due to the presence of noise, the corresponding MAE ais large for each dataset. 
\nAs the capped proportion grows, the MAE gradually decreases.\nWhen the capped proportion changes to become $10\\%$, the MAE in Fig.~\\ref{capped_experiments} almost achieves minimal values, which demonstrate that robust SPUDRFs are capable of excluding noisy examples.\nWhen the capped proportion grows continuously, the MAE changes to become large. \nOne explanation is that some regular examples may be discarded when the capped proportion becomes over 10\\%.\n\n\\begin{figure}\n\t\\centering\n\t\\subfloat{\n\t\t\\includegraphics[width=0.4\\textwidth]{capped_morph_ori_new.pdf}%\n\t\t\\label{fig:capped_morph}}\\\\\n\t\\subfloat{\n\t\t\\includegraphics[width=0.4\\textwidth]{capped_biwi_ori_new.pdf}%\n\t\t\\label{fig:capped_biwi}}\\\\\n\t\\subfloat{\n\t\t\\includegraphics[width=0.4\\textwidth]{capped_mpii_ori_new.pdf}%\n\t\t\\label{fig:capped_mpii}}\n\t\\caption{The MAEs of SPUDRFs across different capped proportions evaluated on three datasets: Morph \\uppercase\\expandafter{\\romannumeral2}, BIWI, and MPIIGaze.}\n\t\\label{capped_experiments}\n\\end{figure}\n\n\n\\subsection{Fairness Improvement}\n\\label{fairness_section}\n\\begin{figure*}[t]\n\t\\centering\n\t\\subfloat{\\includegraphics[width=0.48\\textwidth]{fair_biwi_pitch_new.pdf}%\n\t\t\\label{fairness_experiments_pitch}}\n\t\\hfil\n\t\\subfloat{\\includegraphics[width=0.48\\textwidth]{fair_biwi_yaw_new.pdf}%\n\t\t\\label{fairness_experiments_yaw}}\n\t\\caption{Comparison of different methods on the BIWI dataset. In pitch and yaw directions, SP-DRFs and SPUDRFs outperform DRFs in MAE on most groups, but the MAEs of SP-DRFs are worse than DRFs on underrepresented groups.}\n\t\\label{fairness_experiments}\n\\end{figure*}\n\n\t\nThis section discusses how SPUDRFs improve regression fairness on different visual tasks. \n\n\n\nTaking into account of underrepresented examples, SPUDRFs show improved accuracy for age, pose and gaze estimation, compared to SP-DRFs.\nIn this section, we show how SPUDRFs further improve regression fairness.\nIn Sec.~\\ref{sec:fairness metric}, we define FAIR as a regression fairness metric.\nTo evaluate the regression fairness of DRFs, SP-DRFs, and SPUDRFs, we divided all the datasets mentioned above into different subsets and calculated FAIR on them.\nFor Morph \\uppercase\\expandafter{\\romannumeral2}, the entire data was divided into $7$ groups, \\emph{i.e.}, $[10, 20], [20,30], \\ldots, [70,80]$. \nA similar division was conducted on the FG-NET dataset. \nFor BIWI and BU-3DFE, the pitch, yaw, and roll directions were regarded as independent directions, and the interval for each regression group is $10^\\circ$. \nFor MPIIGaze, the pitch and yaw directions were regarded as relevant angles.\nWe set the group interval to be $5^\\circ$ in each direction, resulting in a total of 60 groups.\n\n\n\nTable.~\\ref{fairness_rule} shows the FAIR values on all the datasets. \nThe higher the FAIR is, the more fair the regression model is.\nNote that SP-DRFs always have inferior FAIR relative to DRFs.\nFor example, SP-DRFs have a FAIR of $0.43$, while DRFs have a FAIR of $0.46$ for BIWI, and SP-DRFs have a FAIR of $0.37$ while DRFs have a FAIR of $0.42$ for FG-NET. \nThe results demonstrate that SP-DRFs tend to aggravate the bias of solutions. 
\nOn the other hand, SPUDRFs have significantly improved FAIR compared to SP-DRFs.\nWe observe SPUDFRs gains improvement by $5\\%$ for FGNET and $27\\%$ for BIWI.\nThis is an evidence that SPUDRFs tend to have more fair estimation results on various visual tasks.\t\n\n\n\n\\begin{table}[h]\n\t\\centering\n\t\\caption{Regression Fairness Evaluation on Different Datasets.}\n\t\\begin{tabular}{c|c|c|c|c|c}\n\t\t\\hline\n\t\tMethods & Morph \\uppercase\\expandafter{\\romannumeral2} & FGNET & BIWI & BU-3DFE & MPIIGaze \\\\\n\t\t\\hline \\hline\n\t\tDRFs & 0.46 & 0.42 & 0.46 & 0.74 & 0.67 \\\\\n\t\tSP-DRFs & 0.44 & 0.37 & 0.43 & 0.72 & 0.67 \\\\\n\t\tSPUDRFs & \\textbf{0.48} & \\textbf{0.42} & \\textbf{0.70} & \\textbf{0.76} & \\textbf{0.69} \\\\ \\hline\n\t\\end{tabular}\n\t\\label{fairness_rule}\n\\end{table}\t\n\nSome pose estimation results of DRFs, SP-DRFs, and SPUDRFs for BIWI are shown in Fig.~\\ref{fairness_experiments}. \nThe estimation accuracy for different pose groups reveals some consistent trends.\nFirst, for most groups, SP-DRFs and SPUDRFs have superior MAEs over DRFs. \nIn Sec.~\\ref{Uncertainty}, we discussed that these gains in fairness are due to SPL, which can guide DDMs to achieve more reasonable solutions.\nSecond, SP-DRFs tend to have even worse MAEs for the underrepresented groups than DRFs, for example, $\\left[30^\\circ, 60^\\circ\\right]$ in the pitch direction and $\\left[-70^\\circ, -40^\\circ\\right]$ in the yaw direction.\nThis demonstrates that existing SPL methods have a fatal drawback: the ranking and selection schemes may incur seriously biased estimation results.\nThird, also for underrepresented groups, SPUDRFs gain significant improvement in MAE compared to DRFs. \nFor example, our gains in MAE are $87.71\\%$ and $77.68\\%$ from $[-60^\\circ, -50^\\circ]$ and $[30^\\circ, 60^\\circ]$ in pitch direction, $80.96\\%$ and $89.52\\%$ from $[-70^\\circ, -40^\\circ]$ and $[60^\\circ, 70^\\circ]$ in yaw direction.\nThis also serves as an evidence that our proposed self-paced method alleviates the fairness problem in existing SPL methods. \n\n\n\n\n\\section{Discussion}\n\\label{sec:discussion}\nIn the experiments above, we have evaluated SPUDRFs against other baseline methods on different visual tasks, such as facial age estimation, head pose estimation and gaze estimation (Sec. \\ref{sec:Comparison}).\nWe also evaluated the SPUDRFs method under different weighting schemes, its extension with capped likelihood formulation, and its performance improvement on fairness.\nOn a number of tasks and datasets, SPUDRFs and SP-DRFs outperform other baseline methods.\nThe advantage of considering ranking fairness in SPUDRFs is most obvious for BIWI.\nFor Morph II and BU-3DFE, the performance improvement when considering ranking fairness in SPUDRFs is also observable.\n\n\n\nLearning DDMs in a self-paced manner has some limitations. \nThe most noticeable one is that, in a leave-one-out setting, SP-DRFs perform only slightly better than DRFs. \nWe speculate that this is due to the data distribution difference between training set and test set.\nTherefore, whether SPL can improve the performance of DDMs may largely depend on the distribution divergence between training data and test data. 
\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nThis paper explored how self-paced learning guided deep discriminative models (DDMs) to obtain robust and less biased solutions on different visual tasks, \\emph{e.g.}~facial age estimation, head pose estimation, and gaze estimation.\nA novel self-paced method, which considers ranking fairness, was proposed.\nSpecifically, a new ranking scheme that jointly considers loss and fairness was proposed in SPL.\nSuch a method was combined with a typical DDM---deep regression forests (DRFs)---and led to a new model, namely deep regression forests with consideration on underrepresented examples (SPUDRFs).\nIn addition, SPUDRFs under different weighting schemes, their extension with capped likelihood formulation, and their performance improvement on fairness were discussed.\nExtensive experiments on three well-known computer vision tasks demonstrated the efficacy of our proposed new self-paced method.\nThe future work will include exploring how to incorporate such a method with other DDMs.\n\n\n\n\\section*{Acknowledgment}\nThe authors would like to thank Jiabei Zeng of the Institute of Computing Technology, Chinese Academy on Sciences, for the valuable advice on the gaze estimation experiments.\nThis work is partially supported by the National Key R\\&D Program of China AI2021ZD0112000, National Natural Science Fundation of China Nos.~62171111, 61806043, 61971106 and 61872068, and the Special Science Foundation of Quzhou No.~2020D013.\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec_Int}\nBose-Einstein condensates (BECs) confined in toroidal \ntraps have been the subject of many experimental studies recently~\\cite{Ryu07,Ramanathan11,Moulder12,Wright12,Marti12,Beattie13}.\nThis research covers topics such as the observation of persistent current~\\cite{Ryu07}, phase slips across a stationary barrier~\\cite{Ramanathan11}, stochastic~\\cite{Moulder12} and deterministic~\\cite{Wright12} phase slips between vortex states, the use of toroidal condensates in interferometry~\\cite{Marti12}, and the stability of superfluid flow in a spinor condensate~\\cite{Beattie13}. \nThese experiments have given rise to theoretical studies discussing, e.g., the excitation spectrum and critical velocity of a superfluid BEC~\\cite{Dubessy12} and the simulation of the experiment~\\cite{Ramanathan11} using the Gross-Pitaevskii equation~\\cite{Mathey12,Piazza12} and the truncated Wigner approximation~\\cite{Mathey12}. Most of the experimental and theoretical studies concentrate on the properties of persistent currents. \nThe phase of a toroidal BEC changes by $2\\pi k$ as the toroid is encircled, the integer $k$ being the winding number of the vortex. In a singly connected geometry a vortex with $|k|>1$ is typically unstable against splitting into\nvortices with smaller $k$. In a multiply connected geometry \n this process is suppressed for energetic reasons. \nIn Ref.~\\cite{Moulder12} it was shown experimentally that a vortex with winding number three can persist in a toroidal single-component BEC for up to a minute. \nIn other words, toroidal geometry makes it possible to avoid the fast vortex splitting taking place in a singly connected BEC and study the properties of vortices with large winding number. 
\nInstead of using a toroidal trap, a multiply connected geometry that stabilizes vortices can also be created by applying a Gaussian potential along the vortex core~\\cite{Kuopanportti10}. \n\nIn this paper, we calculate the Bogoliubov spectrum of a toroidal quasi-one-dimensional (1D) spin-1 BEC. Motivated by the experimental results of Refs.~\\cite{Moulder12,Beattie13}, we assume that the splitting of vortices occurs on a very long time scale in a spinor condensate where only one spin component is populated. The dominant instabilities can then be assumed to arise from the spin-spin interaction. For related theoretical studies on toroidal two-component condensates, see, for example, Refs.~\\cite{Smyrnakis09,Anoshkin12}. \nIn our analysis, the population of the $m_F=0$ spin component is taken to be zero initially, making it possible to calculate the excitation spectrum analytically. \nThis type of a state can be prepared straightforwardly experimentally. The proliferation of instabilities can be observed by measuring the densities of the spin components. \n\nThis paper is organized as follows. In Sec.~\\ref{sec_Ham} we define the Hamiltonian, describe briefly the calculation of the excitation spectrum, and show that the spectrum can be divided into magnetization and spin modes. In Sec.~\\ref{sec_Mag} we analyze the properties of the magnetization modes and \nillustrate how the presence of unstable modes can be seen experimentally. \nWe also compare the analytical results with numerical calculations. \nIn Sec.~\\ref{sec_Spin} we study the spin modes and their experimental observability analytically and numerically and show that a rotonlike spectrum can be realized both in rubidium and sodium condensates. \nIn Sec.~\\ref{sec_Exp} we discuss two recent experiments on toroidal BECs \nand show examples of the instabilities than can be realized in these systems. \nFinally, in Sec.~\\ref{sec_Con} we summarize our results. \n\n\\section{Energy and Hamiltonian}\n\\label{sec_Ham}\nThe order parameter of a spin-$1$ Bose-Einstein condensate reads $\\psi=(\\psi_{1},\\psi_{0},\\psi_{-1})^T$, \nwhere $T$ denotes the transpose. It fulfills the identity $\\psi^\\dag \\psi =n_{3D}$, \nwhere $n_{3D}$ is the total particle density. \nWe assume that the system is exposed to a homogeneous magnetic field oriented along the $z$ axis. \nThe energy functional becomes, then, \n\\begin{align}\n\\nonumber\n&E[\\psi]=\\int d\\mathbf{r} \\left(\\psi^\\dag(\\mathbf{r})\\hat{H}_0(\\mathbf{r})\\psi(\\mathbf{r})\\right.\\\\\n&\\left. +\\frac{1}{2}\\left\\{ g_0 n_{3D}^2(\\mathbf{r}) + g_2 [\\psi^\\dag(\\mathbf{r})\\hat{\\mathbf{F}}\\psi(\\mathbf{r})]^2\\right\\}\\right),\n\\label{eq_E}\n\\end{align}\nwhere the single-particle Hamiltonian $\\hat{H}_0$ is defined as\n\\begin{align}\n\\hat{H}_0(\\mathbf{r})=-\\frac{\\hbar^2\\nabla^2}{2m}+U(\\mathbf{r})-\\mu_{3D}-p\\hat{F}_z+q\\hat{F}_z^2, \n\\end{align}\nand $\\hat{\\mathbf{F}}=(\\hat{F}_x,\\hat{F}_y,\\hat{F}_z)$ is the (dimensionless) spin operator of a spin-1 particle, $U$ is the trapping potential, and $\\mu_{3D}$ \nis the chemical potential. The magnetic field introduces the linear and quadratic Zeeman terms, given by $p$ and $q$, respectively. The sign of $q$ can be controlled experimentally by using a linearly polarized microwave field \\cite{Gerbier06}. 
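\nFor reference, in the standard spin-1 representation where $\\hat{F}_z=\\mathrm{diag}(1,0,-1)$, the Zeeman contribution to $\\hat{H}_0$ is diagonal in the $m_F$ basis,\n\\begin{align}\n-p\\hat{F}_z+q\\hat{F}_z^2=\\mathrm{diag}(q-p,\\,0,\\,q+p),\n\\end{align}\nso that $p$ shifts the $m_F=\\pm 1$ components in opposite directions, whereas $q$ shifts them symmetrically with respect to the $m_F=0$ component.\n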
\nThe strength of the atom-atom interaction is characterized by $g_0=4\\pi \\hbar^2(a_0+2a_2)\/3m$ and $g_2=4\\pi \\hbar^2(a_2-a_0)\/3m$, where $a_F$ is the $s$-wave scattering length for two atoms colliding with total angular momentum $F$. \nThe scattering lengths of ${}^{87}$Rb used here are $a_0=101.8a_B$ and $a_2=100.4 a_B$ \\cite{vanKempen02}, measured in units of the Bohr radius $a_B$.\nFor ${}^{23}$Na the corresponding values are $a_0=50.0a_B$ and $a_2=55.1a_B$ \\cite{Crubellier99}.\n\n\nThe condensate is confined in a toroidal trap given in cylindrical coordinates as $U(r,z,\\varphi)=m \\left[\\omega_r^2 (R-r)^2+\\omega_z^2 z^2 \\right]\/2$,\nwhere $R$ is the radius of the torus and $\\omega_r,\\omega_z$ are the trapping\nfrequencies in the radial and axial directions, respectively. We assume that the condensate is quasi-1D, so that the order parameter factors as \n$\\psi(r,z,\\varphi;t)=\\psi_{r;z}(r,z)\\psi_\\varphi(\\varphi;t)$, \nwhere $\\psi_{r;z}$ is complex valued and time independent.\nThe normalization of $\\psi_{r;z}$ is chosen such that $\\int\\int r dr dz |\\psi_{r;z}(r,z)|^2=N\/2\\pi$, \nwhere $N$ is the total number of particles. This means that \n\\begin{align}\n\\|\\psi_\\varphi(t)\\|\\equiv\n\\sqrt{\\int_{0}^{2\\pi}d\\varphi\\ \\psi_\\varphi^\\dag(\\varphi;t)\\psi_\\varphi(\\varphi;t)}\n\\end{align}\nhas to be equal to $\\sqrt{2\\pi}$ for any $t$.\nBy integrating over $r$ and $z$ in Eq. \\eqref{eq_E} we obtain\n\\begin{align}\n\\nonumber\n& E_{1\\textrm{D}}[\\psi_\\varphi] = \\\\\n\\nonumber\n&\\int_{0}^{2\\pi} d\\varphi\n\\left( \\psi_\\varphi^\\dag(\\varphi)\\left(-\\epsilon \\frac{\\partial^2}{\\partial \\varphi^2} -\\mu-p\\hat{F}_z+q\\hat{F}_z^2\\right)\\psi_\\varphi(\\varphi)\\right.\\\\\n &\\left. +\\frac{n}{2}\\left\\{ g_0 \\left[\\psi_\\varphi^\\dag(\\varphi)\\psi_\\varphi(\\varphi)\\right]^2 + g_2 \\left[\\psi_\\varphi^\\dag(\\varphi)\\hat{\\mathbf{F}}\\psi_\\varphi(\\varphi)\\right]^2\\right\\}\\right),\n\\label{eq_E1D}\n\\end{align}\nwhere \n\\begin{align}\n\\label{eq_epsilon}\n\\epsilon & =\\frac{2\\pi}{N}\\frac{\\hbar^2}{2m}\\int_{0}^{\\infty} rdr\\int_{-\\infty}^{\\infty} dz\\, \\frac{1}{r^2} |\\psi_{r;z}(r,z)|^2\n\\end{align} \nand \n\\begin{align}\n\\label{eq_n}\nn =\\frac{2\\pi}{N}\\int_{0}^{\\infty} rdr\\int_{-\\infty}^{\\infty} dz\\,|\\psi_{r;z}(r,z)|^4.\n\\end{align}\nIn Eq. \\eqref{eq_E1D} we have omitted an overall factor $N\/2\\pi$ multiplying the right-hand side of this equation. \nThe chemical potential $\\mu$ contains the original chemical potential $\\mu_{3D}$ and terms coming from the integration of the kinetic and potential energies. \n The magnetization in the $z$ direction, \n\\begin{align}\nf_z=\\frac{1}{2\\pi} \\int_{0}^{2\\pi} d\\varphi\\,\\psi_\\varphi^\\dag (\\varphi;t)\\hat{F}_z\\psi_\\varphi (\\varphi;t), \n\\end{align} \nis a conserved quantity; the corresponding Lagrange multiplier can be included into $p$. \nIn the following we drop the superscript $\\varphi$ of $\\psi_\\varphi$. 
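\nTo give a feeling for the magnitudes of the effective one-dimensional parameters, the following Python sketch evaluates $g_0$ and $g_2$ from the scattering lengths quoted above, and the integrals of Eqs.~(\\ref{eq_epsilon}) and (\\ref{eq_n}) for a hypothetical Gaussian profile $\\psi_{r;z}$. The atomic mass is a standard value, and the torus radius, widths, and atom number are illustrative choices only; none of these numbers is taken from the text.\n\\begin{verbatim}\nimport numpy as np\n\nhbar = 1.054571817e-34            # J s\na_B  = 5.29177210903e-11          # Bohr radius in m\nu    = 1.66053906660e-27          # atomic mass unit in kg\nm    = 86.909 * u                 # mass of 87Rb (standard value)\n\n# couplings from the 87Rb scattering lengths quoted above\na0, a2 = 101.8 * a_B, 100.4 * a_B\ng0 = 4 * np.pi * hbar**2 * (a0 + 2 * a2) / (3 * m)\ng2 = 4 * np.pi * hbar**2 * (a2 - a0) / (3 * m)     # g2 < 0 for 87Rb\n\n# hypothetical Gaussian profile; R, widths and N are illustrative\nR, sig_r, sig_z, N = 12e-6, 1.5e-6, 1.5e-6, 1.0e5\nr = np.linspace(R - 6 * sig_r, R + 6 * sig_r, 801)\nz = np.linspace(-6 * sig_z, 6 * sig_z, 801)\nrr, zz = np.meshgrid(r, z, indexing="ij")\nprof = np.exp(-((rr - R) / sig_r)**2 - (zz / sig_z)**2)   # |psi_{r;z}|^2 / A^2\n\ndef integral(f):                  # integral of f(r,z) with measure r dr dz\n    return np.trapz(np.trapz(f * rr, z, axis=1), r)\n\nA2  = N / (2 * np.pi * integral(prof))                    # normalization A^2\neps = (2 * np.pi / N) * hbar**2 / (2 * m) * integral(A2 * prof / rr**2)\nn   = (2 * np.pi / N) * integral((A2 * prof)**2)\nprint(eps / (abs(g2) * n))        # dimensionless ratio epsilon / (|g2| n)\n\\end{verbatim}\nThe printed ratio $\\epsilon\/|g_2|n$ is the quantity that controls the spectra discussed below.\n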
\n\n\nWe assume that in the initial state the spin is parallel to the magnetic field.\nIn Ref.~\\cite{Makela11} it was argued that in a homogeneous system the most unstable states are almost always of this form.\nThis state can be written as \n\\begin{align}\n\\label{psipara}\n\\psi_\\parallel(\\varphi) = \n\\frac{1}{\\sqrt{2}}\n\\begin{pmatrix}\n e^{i k_1\\varphi}\\sqrt{1+f_z}\\\\\n0 \\\\\ne^{i\\theta} e^{i k_{-1}\\varphi}\\sqrt{1-f_z}\n\\end{pmatrix},\n\\end{align}\nwhere $\\theta$ is the relative phase and \nthe integer $k_{\\pm 1}$ is the winding number of the $m_F=\\pm 1$ component. \nThe energy and stability of $\\psi_\\parallel$ are independent of $\\theta$ and therefore we set $\\theta=0$ in the rest of this article. \nIf $k_1=1$ and $k_{-1}=0$, $\\psi_\\parallel$ describes a half-quantum vortex (Alice string); \nsee, e.g., Refs.~\\cite{Leonhardt00,Isoshima01,Hoshi08}. \nThe populations of $\\psi_\\parallel$ are time independent and the Hamiltonian giving the time evolution \nof $\\psi_\\parallel$ reads\n\\begin{align}\n\\label{Hparallel}\n\\hat{H}_\\parallel= \\left(g_0 n - \\mu\\right)\\hat{\\mathbb{I}}\n +(g_2 n f_z -p_{\\textrm{eff}})\\hat{F}_z + q_{\\textrm{eff}}\\hat{F}_z^2,\n\\end{align} \nwhere \n\\begin{align}\np_{\\textrm{eff}}=& p-\\frac{\\epsilon}{2} (k_{1}^2-k_{-1}^2),\\\\\nq_{\\textrm{eff}}=& q+\\frac{\\epsilon}{2}(k_1^2+k_{-1}^2). \n\\end{align}\nThe time evolution operator of $\\psi_\\parallel$ is $\\hat{U}_\\parallel(t)=e^{-it \\hat{H}_\\parallel\/\\hbar}$.\n\n\nWe calculate the linear excitation spectrum in a basis where $\\psi_\\parallel$ is stationary \\cite{Makela11,Makela12} using the Bogoliubov approach, that is, we define \n $\\psi(\\varphi;t)=\\psi_\\parallel(\\varphi) +\\delta\\psi(\\varphi;t)$ \n and expand the time evolution equations to first order in $\\delta\\psi$. \nWe write $\\delta\\psi=(\\delta\\psi_1,\\delta\\psi_0,\\delta\\psi_{-1})^T$ as \n\\begin{align}\n\\label{eq_Bogoliubov}\n\\delta\\psi_j(\\varphi;t) \\equiv e^{ik_j\\varphi}\\sum_{s=0}^{\\infty} \\left[u_{j;s}(t)\\,e^{i s \\varphi}- v^*_{j;s}(t)\\,e^{-i s\\varphi}\\right],\n\\end{align}\nwhere $j=0,\\pm 1$ and $k_0\\equiv 0$. Due to the toroidal geometry, $\\delta\\psi_j(\\varphi+2\\pi;t)=\\delta\\psi_j(\\varphi;t)$ \nhas to hold. As a consequence, $s$ needs to be an integer. \nIn the next two sections we analyze the excitation spectrum in detail; the actual calculation of the spectrum can be found in the appendix. \nThe normalized wave function reads \n\\begin{align}\n\\label{eq_psi}\n\\tilde{\\psi}(\\varphi;t) = c(t)[\\psi_\\parallel(\\varphi)+\\delta\\psi(\\varphi;t)],\n\\end{align}\nwhere $c(t)$ is determined by the condition $\\|\\tilde{\\psi}(t)\\|=\\sqrt{2\\pi}$.\nTo characterize the eigenmodes we define \n\\begin{align}\n\\label{eq_exp_Fz}\n\\langle\\hat{F}_z\\rangle (\\varphi;t) \\equiv \\tilde{\\psi}^\\dag(\\varphi;t)\\hat{F}_z\\tilde{\\psi}(\\varphi;t),\n\\end{align} \nso that $f_z=(1\/2\\pi) \\int_0^{2\\pi}d\\varphi\\ \\langle\\hat{F}_z\\rangle(\\varphi;t)$ for any $t$. \nFurthermore, we denote the population of the $m_F=0$ spin component by $\\rho_0$, \n$\\rho_0(\\varphi;t)=|\\tilde{\\psi}_0(\\varphi;t)|^2$. Note that here $\\langle\\hat{F}_z\\rangle$ and $\\rho_0$ are calculated in the basis where $\\psi_\\parallel$ is a stationary state. This basis and the original basis are related \nby a basis transformation that only affects the phases of the $m_F=\\pm 1$ components. \nThe densities of the spin components are thus identical in the original and new basis. 
The numerical calculations \nare done in the original basis. \n\n\nThe excitation spectrum can be divided into spin and magnetization modes.\nThe spin modes keep the value of $\\langle\\hat{F}_z\\rangle$ unchanged in time, \n$\\langle\\hat{F}_z\\rangle(\\varphi;t)=\\langle\\hat{F}_z\\rangle(\\varphi;0)\\approx f_z$, \n but rotate the spin vector by making $\\rho_0$ nonzero. The magnetization modes, on the other hand, lead to $\\varphi$-dependent $\\langle\\hat{F}_z\\rangle(\\varphi;t)$, \nbut leave $\\rho_0$ unaffected. There are in total six eigenmodes. \nWe denote them by $\\hbar\\omega_j$, where $j=1,2,3,4$ \nlabels the magnetization modes and $j=5,6$ the spin modes. \nWe denote the real and imaginary part of $\\omega_{l}$ by $\\omega^{\\textrm{r}}_{l}$ and \n$\\omega^{\\textrm{i}}_{l}$, respectively. \nThe mode labeled by $l$ is unstable if $\\omega^{\\textrm{i}}_l$ is positive. \nWe discuss first the magnetization modes. \n\n\n\n\\section{Magnetization modes}\n\\label{sec_Mag}\n\\subsection{Eigenmodes}\nWe characterize the eigenmodes by the quantities \n\\begin{align}\nk_\\pm =\\frac{1}{2}\\left(k_{1}\\pm k_{-1}\\right). \n\\end{align}\nNote that the value of $k_\\pm$ can be a half-integer. \nThe magnetization modes are independent of $q$ and can be written as\n\\begin{align}\n\\hbar\\omega_l(s)=2\\epsilon s k_{+}+\\hbar\\tilde{\\omega}_l(s), \n\\end{align} \nwhere $l=1,2,3,4$. \nThe expression for $\\tilde{\\omega}_l$ is too long to be shown here. \nThe value of $\\tilde{\\omega}_l$ depends on $k_{-}$ but is independent of $k_{+}$. \nConsequently, modes with differing $k_{+}$ but equal $k_{-}$ have identical stability. \n\nIf $f_z=0$, the eigenvalues simplify and read \n\\begin{align}\n\\nonumber\n&\\hbar\\omega_{1,2,3,4}(s)\\big|_{f_z=0} = 2\\epsilon s k_{+}\\\\\n&\\pm \\sqrt{\\epsilon s^2\\left[4\\epsilon k_{-}^2+ w \n\\pm \\sqrt{16\\epsilon k_{-}^2w +(g_0 -g_2)^2 n^2}\\right]},\n\\label{o1234fz0} \n\\end{align}\nwhere \n\\begin{align}\nw=\\epsilon s^2+(g_0+ g_2)n. \n\\end{align}\nThe signs are defined such that $++,-+,+-$, and $--$ correspond to $\\omega_1,\\omega_2,\\omega_3$, and $\\omega_4$, respectively. \nUnstable modes appear when the term inside the square brackets becomes negative. \nFor rubidium and sodium $g_0+g_2 > 0$, which guarantees that $\\omega_1$ and $\\omega_2$ are real. Only $\\omega_3$ can have a positive imaginary part. \n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[scale=.92]{fig_wi.pdf}\n\\end{center}\n\\caption{(Color online) The amplitudes of the unstable spin and magnetization modes for rubidium and sodium. Here $\\epsilon=0.75|g_2|n$, $q=2.5 |g_2|n$, $f_z=0$, and the unit of $\\omega^{\\textrm{i}}_{3,5}$ is $|g_2|n\/\\hbar$. The lines have been drawn by treating $s$ as a continuous parameter; dots indicate the actual allowed nonvanishing values of $\\omega^{\\textrm{i}}_{3,5}$. In (c) and (d) the curves are reflection symmetric with respect to \n$s=k_{+}=(k_{1}+k_{-1})\/2$. \n\\label{fig_wi}}\n\\end{figure}\nAs can be seen from Figs. \\ref{fig_wi}(a) and \\ref{fig_wi}(b), the value of $\\omega^{\\textrm{i}}_3(s)$ grows as $|k_{-}|$ increases. The allowed values of $s$ are non-negative integers. The modes corresponding to $s=0$ are always stable, but unstable modes are present for $s= 1,2,\\ldots, \\lfloor\\sqrt{4k_{-}^2-2 g_2 n\/\\epsilon}\\rfloor$, where $\\lfloor\\cdots\\rfloor$ is the floor function. \nTherefore, if there are $j$ unstable modes, they have to be the ones corresponding to $s=1,2,\\ldots ,j$. 
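\nAs a concrete illustration of this counting, consider a rubidium condensate with $\\epsilon=0.75|g_2|n$ and $(k_1,k_{-1})=(2,1)$, i.e., $k_{-}=1\/2$: in this case $\\lfloor\\sqrt{4k_{-}^2-2 g_2 n\/\\epsilon}\\rfloor=\\lfloor\\sqrt{1+8\/3}\\rfloor=1$, so only the $s=1$ magnetization mode is unstable; this is precisely the case studied numerically below (Fig.~\\ref{fig_num_mag}). 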
\n A lower bound for the value of $\\epsilon$ yielding at least one unstable mode is given by the equation $\\epsilon (4k_{-}^2-1)\\geq 2g_2 n$. In the case of a sodium BEC ($g_2 >0$) this means that the magnetization modes corresponding to $k_{-}=0$ and $|k_{-}|=1\/2$ are always stable. \n This is visualized in Fig. \\ref{fig_wi}(b), where $\\omega^{\\textrm{i}}_3(s)$ corresponding to $(k_1,k_{-1})=(0,0)$ and $(k_1,k_{-1})=(2,1)$ is seen to vanish for every $s$. \nIn a rubidium condensate $(g_2<0)$ with $k_{-}=0$ unstable modes exist if $\\epsilon\\leq 2|g_2|n$; \nif $|k_{-}|>0$, instabilities are present regardless of the value of $\\epsilon$.\nFor both rubidium and sodium the wave number $s$ of the fastest-growing instability is approximately given by the integer closest to $\\sqrt{2\/3}\\sqrt{4k_{-}^2-2 g_2 n\/\\epsilon}$. \n\n\\subsection{Experimental observability}\nThe properties of unstable magnetization modes can be studied experimentally \nby measuring $\\langle\\hat{F}_z\\rangle$. We assume that there is one dominant unstable mode and that $f_z=0$. \nThe initial time evolution of $\\langle\\hat{F}_z\\rangle$ reads, then (see the appendix), \n\\begin{align}\n\\label{eq_FzexpApprox}\n\\nonumber\n&\\langle \\hat{F}_z\\rangle (\\varphi;t) \\approx c^2(t)\\left\\{ A e^{\\omega^{\\textrm{i}}_3 t} \n\\cos\\left[\\theta+s\\left(\\varphi-\\frac{2\\epsilon k_{+} t}{\\hbar}\\right)\\right]\\right.\\\\\n&\\left.+ B e^{2\\omega^{\\textrm{i}}_3 t} \\cos\\left[2\\theta+2s\\left(\\varphi-\\frac{2\\epsilon k_{+} t}{\\hbar}\\right)\\right]\\right\\},\n\\end{align}\nwhere $c$ is the normalization factor appearing in Eq.~\\eqref{eq_psi} and $A,B,$ and $\\theta$ are defined in Eqs.~\\eqref{eq_A}, \\eqref{eq_B}, and \\eqref{eq_theta}, respectively. Because typically \n$B\\ll A$, the first term on the right-hand side \nof Eq.~\\eqref{eq_FzexpApprox} dominates over the second term during the initial time evolution. \nThis leads to $\\langle\\hat{F}_z\\rangle$ having $s$ maxima and minima. \nIf $k_{+}\\not =0$, these maximum and minimum regions rotate around the torus as time evolves, indicating that the behavior of $\\langle\\hat{F}_z\\rangle$ depends on $k_{+}$, even though the growth rate of the instabilities $\\omega^{\\textrm{i}}_3$ is independent of $k_{+}$. We study the validity of Eq.~\\eqref{eq_FzexpApprox} by considering a rubidium condensate with $\\epsilon=0.75|g_2|n$, \n$q=2.5|g_2|n$, $k_{1}=2$, and $k_{-1}=1$, corresponding to the blue dash-dotted line in Figs.~\\ref{fig_wi}(a) and~\\ref{fig_wi}(c). \nAnalytical results predict that the only unstable mode of this system is a magnetization mode corresponding to $s=1$. The numerically calculated time evolution of $\\langle\\hat{F}_z\\rangle$ is shown \nin Figs.~\\ref{fig_num_mag}(a) and \\ref{fig_num_mag}(b). \n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.85,clip]{fig_mag_k1_2_km1_1.pdf}\n\\vspace{6mm}\n\\caption{(Color online) (a) Numerically calculated $\\langle\\hat{F}_z\\rangle$ for the parameters corresponding to the blue dash-dotted line in Fig.~\\ref{fig_wi}(a), that is, a ${}^{87}$Rb condensate \nwith $\\epsilon =0.75 |g_2|n, q=2.5|g_2|n, f_z=0, k_{1}=2,$ and $k_{-1}=1$. (b) Magnification of the region bounded by the dashed vertical lines in (a). Here we plot $|\\langle\\hat{F}_z\\rangle|$ instead of $\\langle\\hat{F}_z\\rangle$ and use a logarithmic scale to make the initial growth of $|\\langle\\hat{F}_z\\rangle|$ visible. (c) Analytically calculated $\\langle\\hat{F}_z\\rangle$, see Eq.~\\eqref{eq_FzexpApprox}. 
\n\\label{fig_num_mag}\n}\n\\end{figure}\nThe $s=1$ magnetization mode can be seen to be unstable. \nThe rotation of the minimum and maximum of $\\langle\\hat{F}_z\\rangle$ around the torus is clearly visible in Fig.~\\ref{fig_num_mag}. The analytically obtained behavior of $\\langle\\hat{F}_z\\rangle$ is shown in Fig. \\ref{fig_num_mag}(c). \nBy comparing Figs.~\\ref{fig_num_mag}(b) and \\ref{fig_num_mag}(c), we see that Eq.~\\eqref{eq_FzexpApprox} describes the time evolution of $\\langle\\hat{F}_z\\rangle$ very precisely \nup to $t\\approx 10\\hbar\/|g_2|n$. \nThe only parameters in Eq.~\\eqref{eq_FzexpApprox} that are not fixed by the parameters \nused in the numerical calculation are the initial global phase and length \n$\\|\\delta\\psi(t=0)\\|$ of $\\delta\\psi(t=0)$. \nIn Fig.~\\ref{fig_num_mag}(c) we have chosen the values of these variables in such a way that the match between the numerical and analytical results is the best possible. \n\n\n\\section{Spin modes}\n\\label{sec_Spin}\n\\subsection{Eigenmodes} \nWe now turn to the spin modes. As shown in the appendix, the spin modes read\n\\begin{align}\n\\label{o56}\n&\\hbar\\omega_{5,6}(s) =2\\epsilon k_{+}(s-k_{+})\\\\\n\\nonumber \n&\\pm \\sqrt{\\left\\{\\epsilon [(s-k_+)^2-k_{-}^2]+g_2 n-q\\right\\}^2-(1-f_z^2)(g_2 n)^2},\n\\end{align}\nwhere $+$ ($-$) corresponds to $\\omega_5$ ($\\omega_6$). \nIf $k_{+}=0$, the effect of vortices can be taken into account by scaling $q\\rightarrow q+\\epsilon k_{-}^2$, {\\it i.e.}, the spin modes of a system with $(k_1,k_{-1})=(k,-k)$ and $q=\\tilde{q}$ are equal to the spin modes of a vortex-free condensate with $q=\\tilde{q}+\\epsilon k^2$. Spin modes are unstable if and only if the term inside the square root is negative. Now only $\\omega_5$ can have a positive imaginary part. \nThe fastest-growing unstable mode is obtained at $\\epsilon [(s-k_+)^2-k_{-}^2]+g_2 n-q=0$ and has the amplitude $\\hbar\\omega^{\\textrm{i}}_5(s)=|g_2|n\\sqrt{1-f_z^2}$. \nUnlike in the case of the magnetization modes, the maximal amplitude is bounded from above and is independent of the winding numbers [see Figs.~\\ref{fig_wi}(c) and \\ref{fig_wi}(d)]. By adjusting the strength of the magnetic field, the fastest-growing unstable mode can be chosen to be located at a specific value of $s$, showing that it is \neasy to adjust the stability properties experimentally. \nAt $f_z=0$ the width of the region on the $s$-axis giving positive $\\omega^{\\textrm{i}}_5$ is \n$|\\sqrt{k_{-}^2+q\/\\epsilon}-\\sqrt{k_{-}^2+q\/\\epsilon- 2g_2 n\/\\epsilon}|$. \nThis region can thus be made narrower by increasing $\\epsilon,k_{-}$, or $q$. \nSince the magnetization modes are insensitive to the magnetic field, the properties of the spin and magnetization modes can be tuned independently. \nThe winding number dependence of unstable spin modes is illustrated \nin Figs. \\ref{fig_wi}(c) and \\ref{fig_wi}(d). \n\n\\subsection{Rotonlike spectrum}\nInterestingly, by tuning $\\epsilon$ and $q$, a rotonlike spectrum can be realized (see the solid and dotted blue lines in Fig. \\ref{fig_roton}). \n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[scale=.90]{fig_roton.pdf}\n\\end{center}\n\\caption{(Color online) The real ($\\omega^{\\textrm{r}}_5$) and imaginary ($\\omega^{\\textrm{i}}_5$) component of the spin mode $\\omega_5$ for rubidium and sodium. Here $\\epsilon=0.2|g_2|n, f_z=0, k_{1}=-k_{-1}$, and $k_1$ is an arbitrary integer. 
For the blue solid and blue dotted lines $q+\\epsilon k_{-}^2 =2.8|g_2|n$ and for the orange dashed line \n$q+\\epsilon k_{-}^2 =-2|g_2|n$. The unit of $\\omega_5^{\\textrm{r},\\textrm{i}}$ is $|g_2|n\/\\hbar$. \nThe lines have been drawn by treating $s$ as a continuous parameter; dots (open circles) indicate the actual allowed nonvanishing values of $\\omega^{\\textrm{r}}_5$ ($\\omega^{\\textrm{i}}_5$). \n\\label{fig_roton}}\n\\end{figure} \nNow the phonon part of the spectrum is missing, but the roton-maxon feature is present. For $f_z=k_{+}=0$, the roton spectrum exists if \n$q\\geq \\max\\{0,2g_2 n\\}$. Because only integer values of $s$ are allowed, \nit may happen that $\\omega^{\\textrm{i}}_{5}$ is nonzero only in some interval of the $s$ axis that does not contain integers [see Figs. \\ref{fig_wi}(c) and \\ref{fig_wi}(d) for examples of this in the context of magnetization modes]. \nIn this case the rotonic excitations are stable. Alternatively, there can be unstable modes close to the roton minimum (see Fig.~\\ref{fig_roton} and Ref.~\\cite{Matuszewski10}). \nAs evidenced by the orange dashed lines in Fig. \\ref{fig_roton}, the roton spectrum can be made to vanish simply by decreasing $q$. Also the values of $s$ leading to unstable modes can be controlled by varying $q$. For example, using the parameter values corresponding to the blue solid line in Fig. \\ref{fig_roton}, we find that by decreasing (increasing) the value of $q+\\epsilon k_{-}^2$ from $2.8|g_2| n$ to $|g_2| n$ ($4|g_2| n$), the $s=3$ ($s=5$) mode can be made unstable in a rubidium condensate. This opens the way for quench experiments of the type described in Refs.~\\cite{Sadler06,Bookjans11}. \nInstead of altering $q$, instabilities can also be induced by making $\\epsilon$ smaller by changing the trapping frequencies. It is known that a rotonlike spectrum can exist in various types of BECs, such as in a dipolar condensate (see, e.g., Refs.~\\cite{Odell03,Santos03,Cherng09}), in a Rydberg-excited condensate~\\cite{Henkel10}, or in a spin-1 sodium condensate prepared in a specific state~\\cite{Matuszewski10}. \nIn the present case the rotonlike spectrum exists in both sodium and rubidium BECs, and the state \n[Eq.~\\eqref{psipara}] giving rise to it is easy to prepare experimentally. \nNote that the roton-maxon feature exists also in a vortex-free condensate and for \nany $|f_z|<1$. These results suggest that the roton-maxon character of the spectrum is the rule rather than the exception in spinor BECs. \n \n\n\n\n\\subsection{Experimental observability}\nThe properties of unstable spin modes can be studied experimentally by measuring $\\rho_0$. \nAssuming that there is one dominant unstable spin mode located at wave number $s$, we find that (see the Appendix)\n\\begin{align}\n&\\delta\\psi_0(\\varphi;t)\\propto e^{i k_+\\varphi + \\omega^{\\textrm{i}}_5 t}\n\\sin\\left[\\left(s-k_{+}\\right)\\left(\\varphi-\\frac{2\\epsilon k_{+} t}{\\hbar}\\right)+\\frac{\\tilde{\\theta}}{2}\\right].\n\\label{eq_deltapsi0}\n\\end{align}\nThe phase $\\tilde{\\theta}$ is defined in Eq.~\\eqref{eq_thetaSpin}. \nThe sign of $\\delta\\psi_0$ changes at every point where the density $\\rho_0\\propto |\\delta\\psi_0|^2$ vanishes. This is similar to the behavior of the phase of a dark soliton \\cite{Frantzeskakis10}. 
\nThe number of nodes in $\\rho_0$ is $2|s-k_{+}|$, that is, \nif $2k_{+}$ is even (odd), $\\rho_0$ has an even (odd) number of nodes.\nThe density peaks resulting from the instability rotate around the torus if $k_{+}(s-k_{+})$ is nonzero. In the special case $s=k_{+}$ the density $\\rho_0(\\varphi;t)$ is independent of \n$\\varphi$. A numerically obtained example of this is shown in Fig.~\\ref{fig_Ramanathan}(a). \nIn Fig.~\\ref{fig_num_spin} we compare numerical calculations to analytical results. \n\\begin{figure}[ht]\n\\centering\n\\includegraphics[scale=0.85,clip]{fig_rho0.pdf}\n\\vspace{5mm}\n\\caption{(Color online) (a) Numerically calculated $\\rho_0$ for a ${}^{23}$Na condensate \nwith $\\epsilon =0.75 g_2n, q=2.5g_2n, f_z=0, k_{1}=2,$ and $k_{-1}=1$, corresponding to the \nblue dash-dotted line in Fig.~\\ref{fig_wi}(d). (b) A magnification of the region bounded by the dashed vertical lines in (a). (c) Analytically calculated $\\rho_0$. \nIn (b) and (c) a logarithmic scale has been used. \n\\label{fig_num_spin} \n}\n\\end{figure}\nWe consider a sodium condensate with $\\epsilon=0.75 g_2 n,q=2.5g_2n,k_{1}=2$, and $k_{-1}=1$. \nFor these values the $s=3$ spin mode is the only unstable mode [see the blue dash-dotted line in Figs.~\\ref{fig_wi}(b) and \\ref{fig_wi}(d)]. Numerical calculations give the same result. \nBy comparing Figs.~\\ref{fig_num_spin}(b) and \\ref{fig_num_spin}(c) we see that the analytical \nexpression for $\\rho_0$ approximates the actual dynamics very precisely up to $t\\approx 15\\hbar\/g_2n$. \nAs in the case of the magnetization modes, we choose the initial length and overall phase of $\\delta\\psi(t=0)$ in such a way that the agreement between the numerical and analytical results is the best possible. \n\n\n\\section{Experiments}\n\\label{sec_Exp}\nIn this section we calculate the ratio $\\epsilon\/|g_2|n$ corresponding to two recent experiments. \nTo obtain an analytical estimate for $\\epsilon$, we assume that the particle density $|\\psi_{r;z}(r,z)|^2$ is peaked around $R$ and approximate $1\/r^2\\approx 1\/R^2$ in Eq.~\\eqref{eq_epsilon}. This gives $\\epsilon\\approx \\hbar^2\/2mR^2$. \nApproximating $\\psi_{r;z}$ by the Thomas-Fermi (TF) wavefunction yields \n\\begin{align}\nn &\\approx\\sqrt{\\frac{2 m N\\omega_r\\omega_z}{9\\pi^2g_0 R}}.\n\\end{align} \nWe see that $\\epsilon\/|g_2|n \\propto (\\omega_r\\omega_z N R^3)^{-1\/2}$, so that the properties of the excitation spectrum can be controlled by adjusting the trapping frequencies, number of particles, and the radius of the toroid.\n\nUsing the parameter values of the sodium experiment \\cite{Ramanathan11} we get $\\epsilon\\approx 0.04 g_2 n$. We study numerically the cases $(k_{1},k_{-1})=(0,0)$ and $(k_{1},k_{-1})=(1,0)$.\nWith the help of Eqs.~\\eqref{o1234fz0} and \\eqref{o56} we find that magnetization modes are stable, \nbut spin modes are unstable in both cases. If $0< q \\leq 0.04 g_2 n$, $f_z=0$, and $(k_{1},k_{-1})=(0,0)$, the unstable spin mode leads to a position-independent, homogeneous increase in $\\rho_0$. If $(k_{1},k_{-1})=(1,0)$, we get $\\rho_0(\\varphi;t)\\sim e^{2\\omega^{\\textrm{i}}_5 t}\\sin^2[(\\epsilon t+\\varphi)\/2]$. The $1$D numerical calculations \nshown in Fig. \\ref{fig_Ramanathan} confirm the validity of these analytical predictions. \nThis example illustrates that even a small $\\epsilon$ can lead to a strongly winding-number-dependent \nbehavior of $\\rho_0$. 
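\nThe stability assignments quoted above follow from a direct scan of Eqs.~\\eqref{o1234fz0} and \\eqref{o56} over the allowed integer values of $s$. A minimal Python sketch of such a scan (given only as an illustration; it works in units $\\hbar=g_2 n=1$, uses $g_0\/g_2\\approx 31$ as implied by the sodium scattering lengths of Sec.~\\ref{sec_Ham}, and takes $q=\\epsilon=0.04 g_2 n$ as in Fig.~\\ref{fig_Ramanathan}) reads\n\\begin{verbatim}\nimport numpy as np\n\n# Units: hbar = g2*n = 1 (sodium, g2 > 0); g0\/g2 = (a0+2a2)\/(a2-a0) ~ 31.4.\ng2, g0 = 1.0, 31.4\neps, q, fz = 0.04, 0.04, 0.0   # epsilon from the Ramanathan et al. estimate; q = epsilon\n\ndef im_omega3(s, k1, km1):\n    # Growth rate of the magnetization mode omega_3, Eq. labeled o1234fz0, f_z = 0.\n    kp, km = 0.5*(k1 + km1), 0.5*(k1 - km1)\n    w = eps*s**2 + (g0 + g2)\n    inner = np.sqrt(16*eps*km**2*w + (g0 - g2)**2 + 0j)\n    return (2*eps*s*kp + np.sqrt(eps*s**2*(4*eps*km**2 + w - inner) + 0j)).imag\n\ndef im_omega5(s, k1, km1):\n    # Growth rate of the spin mode omega_5, Eq. labeled o56.\n    kp, km = 0.5*(k1 + km1), 0.5*(k1 - km1)\n    rad = (eps*((s - kp)**2 - km**2) + g2 - q)**2 - (1 - fz**2)*g2**2\n    return (2*eps*kp*(s - kp) + np.sqrt(rad + 0j)).imag\n\ns = np.arange(0, 6)\nfor k1, km1 in [(0, 0), (1, 0)]:\n    print((k1, km1),\n          'unstable magnetization s:', s[im_omega3(s, k1, km1) > 1e-6],\n          'unstable spin s:', s[im_omega5(s, k1, km1) > 1e-6])\n\\end{verbatim}\nFor $(k_1,k_{-1})=(0,0)$ this scan finds no unstable magnetization modes and only the $s=0$ spin mode unstable, while for $(1,0)$ the unstable modes are the $s=0$ and $s=1$ spin modes, in line with the discussion above. 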
\n\\begin{figure}\n\\center\n\\includegraphics[scale=0.85,clip]{fig_Ramanathan.pdf}\n\\vspace{5mm}\n\\caption{(Color online) Numerically calculated $\\rho_0$ for a ${}^{23}$Na condensate with $\\epsilon=q=0.04|g_2|n$ and $f_z=0$. In (a) $k_{1}=k_{-1}=0$ and in \n(b) $k_{1}=1,k_{-1}=0$. The value of $\\epsilon$ corresponds to that of Ref.~\\cite{Ramanathan11}. \n\\label{fig_Ramanathan}}\n\\end{figure}\n\n\nThe first experimental realization of a toroidal spin-1 \nBEC was reported recently~\\cite{Beattie13}. The stability of a rubidium BEC with a vortex of winding number three in the $m_F=1$ and $m_F=0$ components was found to depend strongly on the population difference of the two components, the most unstable situation corresponding to equal populations. Although not directly comparable, our analysis agrees qualitatively with this result: The growth rate of unstable spin and magnetization modes increases as the population difference of the $m_F=1$ and $m_F=-1$ components goes to zero. \nThe parameter values of this experiment yield $\\epsilon\\approx 0.20 |g_2|n$. \nThe $s=1,2,$ and $s=3$ magnetization modes are unstable regardless of the values of the winding numbers. \n If $k_{+}=0$ and $q+\\epsilon k_{-}^2 = 2.8|g_2|n$, the spin modes have a rotonlike spectrum (see the left panel of Fig.~\\ref{fig_roton}). The $s=4$ mode can be seen to be the only unstable spin mode. \nThis is confirmed by the numerical results shown in Fig.~\\ref{fig_Beattie}(a). In this figure we have chosen \n$k_{1}=-k_{-1}=1$ and $q=2.6|g_2|n$, so that $q+\\epsilon k_{-}^2 =2.8|g_2|n$. \nBecause $k_{+}=0$, Eqs.~\\eqref{eq_FzexpApprox} and \\eqref{eq_deltapsi0} predict that the nodes of $\\rho_0$ and $\\langle\\hat{F}_z\\rangle$ do not rotate around the torus as time evolves. This is clearly the case in Fig.~\\ref{fig_Beattie}.\nThe $s=3$ magnetization mode can be seen to be the fastest-growing unstable mode. However, around $t\\approx 12\\hbar\/|g_2| n$, the $s=2$ mode becomes the dominant unstable mode. These observations agree with analytical predictions: Using Eq.~\\eqref{o1234fz0} we find that $\\hbar\\omega^{\\textrm{i}}_3(s)\/|g_2|n=0.72, 1.26$, and $1.34$ for $s=1,2$, and $s=3$, respectively. For other values of $s$ we get $\\omega^{\\textrm{i}}_3(s)=0$. \n\\begin{figure}\n\\center\n\\includegraphics[scale=0.85,clip]{fig_Beattie.pdf}\n\\vspace{5mm}\n\\caption{(Color online) Numerically calculated (a) $\\rho_0$ and (b) $\\langle\\hat{F}_z\\rangle$ for\n a ${}^{87}$Rb condensate with $\\epsilon=0.2|g_2|n,q=2.6|g_2|n,f_z=0$, and $k_{1}=-k_{-1}=1$. \n The value of $\\epsilon$ corresponds to that of Ref.~\\cite{Beattie13}.\n\\label{fig_Beattie}}\n\\end{figure}\n\n\n\n\\section{Conclusions}\n\\label{sec_Con}\nWe have calculated analytically the Bogoliubov spectrum of a toroidal spin-1 BEC that has vortices in the $m_F=\\pm 1$ spin components and is subjected to a homogeneous magnetic field. \nWe treated the strength of the magnetic field and the winding numbers of the vortices as free parameters and assumed that the population of the $m_F=0$ component vanishes. We also assumed that the system is quasi-one-dimensional. We found that the spectrum can be divided into spin and magnetization modes. Spin modes \nchange the particle density of the $m_F=0$ component but leave the particle density difference of the $m_F=1$ and $m_F=-1$ components unchanged. The magnetization modes do the opposite. \nAn important parameter characterizing the spectrum is the ratio of the kinetic to interaction energy, $\\epsilon\/|g_2|n$. 
\nThe properties of magnetization modes can be tuned by adjusting this ratio, whereas for the spin modes the strength of the magnetic field can also be used to control the spectrum. For example, a spin mode spectrum with a roton-maxon structure can be realized both in rubidium and sodium condensates by making the magnetic field strong enough. Furthermore, by changing the strength of the magnetic field or the ratio $\\epsilon\/|g_2|n$, an initially stable condensate can be made unstable. We also showed that some unstable spin modes lead to a transient dark solitonlike wave function of the $m_F=0$ spin component. \nFinally, we discussed briefly two recent experiments on toroidal BECs and \n showed examples of the instabilities that can be realized in these systems. \n\nWe studied the validity of the analytical results by numerical one-dimensional simulations, finding that the analytical expressions give a very good description of the stability of the condensate and the initial time evolution of the instabilities. \n\n\\begin{acknowledgments}\nThis research has been supported by the Alfred \\mbox{Kordelin} Foundation and the Academy of Finland through\nits Centres of Excellence Program (Project No. 251748).\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}