\section{Introduction}\n\nAutonomous Underwater Vehicles (AUVs) have found applications in a variety of underwater exploration and monitoring tasks, including high-resolution, geo-referenced optical\/acoustic ocean floor mapping and measurements of water column properties such as currents, temperature and salinity \cite{yoerger1991autonomous}. An advantage of AUVs over other methods of ocean observation is the autonomy and decoupling from a surface vessel that a self-contained robot provides.\n\nThe ability to geo-reference, or to compute the absolute position in a global reference frame, is essential for AUVs for the purposes of path planning for mission requirements, registration with independently navigated information, or revisiting a previous mission. Geo-referenced navigation is often achieved by initializing the navigation solution to GPS while on the surface and, once submerged, relying on velocity measurements from a Doppler Velocity Log (DVL). When the water depth is less than the range of the DVL (a 300kHz DVL has a range of $\sim$200m), the DVL has continuous bottom lock throughout the mission. The DVL sensor provides measurements of the seafloor-relative velocity of the AUV. By combining this information with an appropriate heading reference, the observations are placed in the global reference frame and integrated to facilitate underwater dead reckoning. 
The result is position error growth of 22m per hour (2\(\sigma\)) during diving, and 8m per hour (2\(\sigma\)) is attainable if coupled with a navigation-grade (\(>\)\$100K) Inertial Measurement Unit (IMU) \cite{napolitano2004phins}.\n\nIn cases where the seafloor depth is greater than the DVL bottom lock range, transitioning from the surface to the seafloor presents a localization problem \cite{kinseynonlinear2014}, since both GPS and DVL are unavailable in the mid-water column. Traditional solutions include range-limited Long Base Line (LBL) acoustic networks requiring deployment, Ultra Short Base Line (USBL) navigation requiring a dedicated ship, or single range navigation from an acoustic beacon attached to a ship \cite{webster2012advances} or an autonomous surface vehicle (ASV) \cite{kinsey2013-auv-asv}. In addition to requiring dedicated infrastructure, acoustic positioning also suffers from multipath returns and the need to accurately measure the sound speed profile through the water column. Acoustic methods typically give \(\mathcal{O}({10m})\) accuracy at 1km ranges \cite{kinsey2006survey,mandt2001integrating}.\n\n\begin{figure}[!t]\n \centering\n \includegraphics [width=0.45\textwidth] {pictures\/20151203-ff_ocean.jpg}\n \caption{The \textit{FlatFish} AUV \cite{albiez2015flatfish} during sea trials. Image: Jan Albiez, SENAI CIMATEC}\n \label{fig:flatfish}\n\end{figure}\n\nIMUs provide a strapdown navigation capability by providing body accelerations and rotation rates without external aiding such as GPS, acoustic ranging, or DVL velocities. However, IMUs quickly accumulate position errors, with an unaided tactical grade IMU (\(>\)\$10K) drifting at $\sim$100km per hour, and a navigation grade IMU drifting at $\sim$1km per hour \cite{titterton2004strapdown}. 
There are also cases where DVL bottom-lock is not possible because the altitude is very low, such as in inspection or docking scenarios.\n\nIn \cite{hegrenaes2011model}, a model-aiding Inertial Navigation System (INS) is applied with water-track from the DVL. Comparatively, the novel contributions of the work presented in this paper are as follows: \n\begin{enumerate}\n\t\item Utilizing and validating through experiment a manifold based Unscented Kalman Filter (UKF) which can observe and utilize the Earth rotation for heading estimation,\n\t\item Incorporating and validating a novel drag and thrust model-based aiding, which accounts for the systematic uncertainty in vehicle parameters by incorporating them as states in the UKF and\n\t\item Incorporating and validating the use of ADCP measurements in a novel form to further aid the estimation in cases of DVL bottom-lock loss.\n\end{enumerate}\n\nIMUs with low gyro bias uncertainty allow gyrocompassing, i.e. measuring the Earth rotation to estimate heading. Navigation grade IMUs with a sufficiently low bias uncertainty (as used in \cite{hegrenaes2011model}) are typically in the \(>\)\$100K USD price range. In this paper, the KVH1750 IMU, in the \(>\)\$10K USD price range, is utilized. To make an IMU of this grade usable, the biases are estimated in a fully coupled approach in the navigation filter. Real-world experiments with the \textit{FlatFish} AUV (Fig. \ref{fig:flatfish}) show that less than 1$^{\circ}$ (2$\sigma$) heading uncertainty is possible in the filter following an initialization within 15$^{\circ}$ of the true heading (possible from a magnetic sensor). Further experiments also show that the filter is capable of consistent positioning, and data denial validates the method for DVL dropouts due to very low or high altitude scenarios. 
Additionally, this work was implemented using the MTK \cite{hertzberg2013integrating} and ROCK \cite{rock} frameworks in C++\footnote{The implementation is available under an open source license at \url{https:\/\/github.com\/rock-slam\/slam-uwv_kalman_filters}}, and is capable of running in real-time on computing available on the \textit{FlatFish} AUV.\n\nThe work in this paper utilizes vehicle model-based aiding and the ADCP sensor for further ocean water current and vehicle velocity constraints. Model-aiding allows physics based constraints on the positioning, and the uncertainty in each parameter can be set to account for the systematic error associated with a system identification. Thus even a low accuracy system identification can still be used with this filter without resulting in filter overconfidence. Additionally, by modeling the vehicle parameters as time-varying, the model itself is made uncertain, as any small deviations in dynamics from the modeling equations can be absorbed by the time-varying parameters. ADCP-aiding can also support the model by providing independent vehicle velocity constraints in cases where DVL dropout occurs because the altitude is greater than the bottom-lock range. The ADCP also gives information regarding the surrounding water currents when there is a DVL dropout and the vehicle state estimation relies more on model-aiding.\nGenerally, inertial navigation is achieved using error-state filtering \cite{hegrenaes2011model}, but this is not necessary, as is shown in this paper. 
This paper presents a conceptually simpler approach, while also utilizing manifold methods \cite{forster2015manifold} to represent attitude, which is more general than other methods.\n\n\n\n\n\n\section{Model-aided Inertial filter design}\n\n\n\n\n\nOur filter design is conceptually simple, since we model all modalities in one filter and model the attitude of the vehicle as a manifold.\nWe utilize an Unscented Kalman Filter (UKF) since it does not require the Jacobians of the process or measurement models and can handle non-linearities better than an Extended Kalman Filter \cite{wan2000unscented}.\n\nThe attitude of the vehicle is an element of $SO(3)$, the group of orientations in $\mathbb{R}^3$. To estimate the attitude directly in the filter, it can be modeled either by a minimal parametrization (Euler angles) or by an over-parametrization (quaternion or rotation matrix). A minimal parametrization has singularities, i.e. small changes in the state space might require large changes in the parameters. An over-parametrization has redundant parameters and needs to be re-normalized as required. 
Both cases require special treatment in the filter, which leads to a conceptually more complex filter design.\nRepresenting the attitude as a manifold is a more general solution in which the filter operates on a locally mapped neighborhood of $SO(3)$ in $\mathbb{R}^3$ \cite{hertzberg2013integrating}.\n\n\begin{table}[h]\n\centering\n\caption{Filter state}\n\begin{tabular}{l|l}\n\hline\n\thead{Elements of \\ the state vector} & Description \\\n\hline\n$\mathbf{p}^n \in \mathbb{R}^3$\t\t& Position of the IMU in the navigation frame \\\n$\boldsymbol{\phi}^n \in \mathbb{R}^3$ \t& Attitude of the IMU in the navigation frame \\\n$\mathbf{v}^n \in \mathbb{R}^3$ \t& Velocity of the IMU in the navigation frame \\\n$\mathbf{a}^n \in \mathbb{R}^3$ \t& Acceleration of the IMU in the navigation frame \\\n$\mathbf{M}_{\text{sub}} \in \mathbb{R}^{2\times3}$ & Inertia sub-matrix of the motion model \\\n$\mathbf{D}_{l,\text{sub}} \in \mathbb{R}^{2\times3}$ & Linear damping sub-matrix of the motion model \\\n$\mathbf{D}_{q,\text{sub}} \in \mathbb{R}^{2\times3}$ & Quadratic damping sub-matrix of the motion model \\\n$\mathbf{v}_{c,v}^{n} \in \mathbb{R}^{2}$\t& \begin{tabular}{@{}l@{}}Water current velocity surrounding \\ the vehicle in navigation frame\end{tabular} \\\n$\mathbf{v}_{c,b}^{n} \in \mathbb{R}^{2}$\t& \begin{tabular}{@{}l@{}}Water current velocity below the \\ vehicle in navigation frame\end{tabular} \\\n$g^n \in \mathbb{R}$\t\t& Gravity in the navigation frame \\\n$\mathbf{b}_{\omega} \in \mathbb{R}^3$ \t& Gyroscope bias \\\n$\mathbf{b}_{a} \in \mathbb{R}^3$ \t& Accelerometer bias \\\n$\mathbf{b}_{c} \in \mathbb{R}^{2}$ \t& Bias in the ADCP measurements \\\n\hline\n\end{tabular}\n\label{state_table}\n\end{table}\n\nTable \ref{state_table} shows the state vector of the filter as an element of $\mathbb{R}^{43}$ and gives a detailed description of the higher dimensional 
elements of the state vector. The navigation frame is North-East-Down (NED).\nThe body and IMU frames are x-axis pointing forward, y-axis pointing left and z-axis pointing up. \nIn the filter design we consider the IMU frame not to be rotated with respect to the body frame.\n\n\n\n\subsection{Inertial prediction equations}\n\nThe following equations describe the prediction models for position, velocity, acceleration and attitude, applying a constant acceleration model for translation and a constant angular velocity model for rotation:\n\begin{equation}\n\mathbf{p}^n_{t} = \mathbf{p}^n_{t-1} + \mathbf{v}_{t-1}^n \delta t\n\label{eq:pred1}\n\end{equation}\n\begin{equation}\n\label{vn}\n\mathbf{v}^n_{t} = \mathbf{v}^n_{t-1} + \mathbf{a}_{t-1}^n \delta t\n\end{equation}\n\begin{equation}\n\label{an}\n\mathbf{a}^n_{t} = \mathbf{a}^n_{t-1}\n\end{equation}\n\begin{equation}\n\boldsymbol{\phi}^n_{t} = \boldsymbol{\phi}^n_{t-1} \boxplus [C^n_{b,t-1}(\boldsymbol{\omega}_{t-1}^{b} - \mathbf{b}_{\omega,t-1}) - \boldsymbol{\Omega}_{e}^{n}] \delta t\n\label{eq:pred2}\n\end{equation}\n\nwhere $\mathbf{p}^n_{t}$ is the position of the IMU in the navigation frame at time $t$, $\mathbf{v}^n_{t}$ is the velocity of the IMU in the navigation frame, $\mathbf{a}^n_{t}$ is the acceleration of the IMU in the navigation frame, $\mathbf{C}^n_{b,t}$ is the coordinate transformation from body to navigation frame, $\boldsymbol{\phi}^n_{t}$ is the attitude of the IMU in the navigation frame, $\boldsymbol{\omega}^b_t$ are the rotation rates in the body frame, $\mathbf{b}_{\omega,t}$ is the gyroscope bias and $\boldsymbol{\Omega}_{e}^{n}$ is the Earth rotation in the navigation frame. 
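For concreteness, one prediction step of \eqref{eq:pred1} to \eqref{eq:pred2} can be sketched numerically as follows (a minimal illustration with noise terms omitted; the manifold addition is realized here via the $SO(3)$ exponential map applied as a navigation-frame perturbation, and all function names are illustrative rather than taken from the actual implementation):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def exp_so3(phi):
    """Rodrigues formula: map a rotation vector to a rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3) + skew(phi)  # first-order approximation near zero
    K = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def predict(p_n, v_n, a_n, C_nb, omega_b, b_omega, Omega_e_n, dt):
    """One prediction step for eqs. (1)-(4), with C_nb the body-to-navigation
    rotation matrix standing in for the attitude state."""
    p_next = p_n + v_n * dt          # eq. (1): constant velocity over dt
    v_next = v_n + a_n * dt          # eq. (2): constant acceleration over dt
    a_next = a_n                     # eq. (3): constant acceleration model
    # eq. (4): navigation-frame rotation-vector increment, corrected for the
    # gyro bias and the Earth rotation, applied via the exponential map
    dphi = (C_nb @ (omega_b - b_omega) - Omega_e_n) * dt
    C_next = exp_so3(dphi) @ C_nb
    return p_next, v_next, a_next, C_next
```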
The $\boxplus$ operator in \eqref{eq:pred2} is a manifold based addition, as defined in \cite{hertzberg2013integrating}.\nEquations \eqref{eq:pred1} to \eqref{eq:pred2} each have corresponding prediction noise added.\n\nThe accelerometer measurements are handled with an update equation on the acceleration state as follows:\n\n\begin{align}\n\textbf{z}_{a}(t) = \mathbf{f}^b_t + \mathbf{b}_{a,t} + \mathbf{C}^{b}_{n,t}\mathbf{g}^{n}_{t} + \nu_{a}\n\end{align}\n\nwhere $\mathbf{f}^b_t$ is the specific force acting on the vehicle at time $t$, $\mathbf{b}_{a,t}$ is the accelerometer bias and $\mathbf{g}^{n}_{t}$ is the gravity vector $\begin{bmatrix} 0, 0, g_t^n \end{bmatrix}^T$ in the navigation frame.\nThe gravity state applies a constant gravity model, starting from the theoretical gravity of the WGS-84 ellipsoid earth model with a small initial uncertainty, which the filter then refines.\nThe acceleration state in the filter allows both the accelerometer and model-aiding to act on the filter in a consistent fashion, without resorting to virtual correlation terms when an acceleration state does not exist, such as in \cite{hegrenaes2011model}.\nAccelerometer and gyro bias terms are modeled as a first order Markov process as follows:\n\n\begin{equation}\n\dot{b} = -\frac{1}{\tau_{b}}(b-b_{0}) + \nu_{b}\n\label{bias_equation}\n\end{equation}\nwhere $\tau_{b}$ is the correlation time constant of the bias, $b_{0}$ is the mean bias value, and $\nu_{b}$ is a zero-mean normally distributed random variable with\n\begin{equation}\n\sigma_{b} = \sqrt{\frac{2f\sigma_{b\,drift}^2}{\tau_{b}}}\n\end{equation}\nwhere $\sigma_{b\,drift}$ is a bound on the bias drift and $f$ is the measurement frequency. 
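A discrete-time sketch of this first order Markov bias model, assuming Euler integration at the measurement frequency $f$ so that each step adds $\nu_b \, \delta t$ (the short correlation time is chosen purely to keep the illustration fast, and the function name is hypothetical):

```python
import numpy as np

def simulate_bias(tau, sigma_drift, f, n_steps, b0=0.0, seed=0):
    """Euler simulation of db/dt = -(b - b0)/tau + nu_b, where nu_b is drawn
    per sample with sigma_b = sqrt(2 * f * sigma_drift**2 / tau)."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / f
    sigma_b = np.sqrt(2.0 * f * sigma_drift**2 / tau)
    w = rng.normal(0.0, sigma_b, n_steps) * dt  # nu_b * dt per Euler step
    b = np.empty(n_steps)
    b[0] = b0
    for k in range(1, n_steps):
        b[k] = b[k - 1] - (b[k - 1] - b0) / tau * dt + w[k]
    return b

# With this choice of sigma_b the stationary spread of b stays bounded close
# to sigma_drift, instead of growing without bound like a random walk.
```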
The accelerometer and gyro biases are assumed to be zero mean.\n\n\subsection{Model-aiding update equations}\n\nIn this section we show a model-aiding measurement function using a simplified vehicle motion model for which a subset of the parameter space is part of the filter state.\nThis allows the filter to refine the parameters at runtime and to account for the systematic uncertainty in the vehicle parameters.\n\nThe nonlinear equations of motion \cite{fossen2002marine} for a rigid body with 6 DOF can be written as\n\begin{equation}\n\label{eq:motion_fossen}\n\boldsymbol{\tau} = \mathbf{M} \dot{\boldsymbol{\nu}} + \mathcal{C}(\boldsymbol{\nu}) \boldsymbol{\nu} + \mathbf{D}(\boldsymbol{\nu}) \boldsymbol{\nu} + \mathbf{g}(\mathbf{R}_{b}^n)\n\end{equation}\nwhere $\boldsymbol{\tau}$ is the vector of forces and torques, $\boldsymbol{\nu}$ is the vector of linear and angular velocities, $\mathbf{M}$ is the inertia matrix including added mass, $\mathcal{C}(\boldsymbol{\nu})$ is the Coriolis and centripetal matrix, $\mathbf{D}(\boldsymbol{\nu})$ is the hydrodynamic damping matrix and $\mathbf{g}(\mathbf{R}_{b}^n)$ is the vector of gravitational forces and moments given the rotation matrix from the body to the navigation frame $\mathbf{R}_{b}^n$.\n\n\begin{equation}\n\mathbf{g}(\mathbf{R}) = \begin{bmatrix} \mathbf{R}^{-1} \hat{k} (W-B) \\ \mathbf{r}_{G} \times \mathbf{R}^{-1}\hat{k}W - \mathbf{r}_{B} \times \mathbf{R}^{-1}\hat{k}B \end{bmatrix}\n\label{eq:gravity}\n\end{equation}\nEquation~\eqref{eq:gravity} shows how the gravitational forces and moments are calculated given the weight $W$, buoyancy $B$, center of gravity $\mathbf{r}_{G}$ and center of buoyancy $\mathbf{r}_{B}$ of the vehicle,\nwhere $\hat{k}$ is the unit vector $\begin{bmatrix} 0, 0, 1 \end{bmatrix}^T$.\n\nWe assume the Coriolis and centripetal forces as well as damping terms higher than second order are negligible for vehicles operating 
at low speeds (typically below $1.5$ m\/s).\nThis allows us to define the measurement function for the forces and torques in the body frame from \eqref{eq:motion_fossen} as\n\begin{equation}\n\textbf{z}_{\boldsymbol{\tau}}(t) = \mathbf{M}_{t} \begin{bmatrix}\mathbf{a}_{t}^{b} \\ \boldsymbol{\alpha}_{t}^{b}\end{bmatrix} + \mathbf{D}(\begin{bmatrix}\mathbf{v}_{t}^{b} \\ \boldsymbol{\omega}^b_t\end{bmatrix},t) + \mathbf{g}(\mathbf{R}_{b,t}^n) + \nu_{\boldsymbol{\tau}}\n\label{eq:meas_tau}\n\end{equation}\nwhere $\mathbf{a}_{t}^{b}$ is the linear acceleration, $\boldsymbol{\alpha}_{t}^{b}$ is the angular acceleration, $\mathbf{v}_{t}^{b}$ is the linear velocity and $\boldsymbol{\omega}^b_t$ is the angular velocity, all expressed in the body-fixed frame at time $t$. $\nu_{\boldsymbol{\tau}}$ is the random noise of the force and torque measurement, with a standard deviation given by the thruster manufacturer.\n\n$\mathbf{a}_{t}^{b}$ can be computed given the acceleration in the navigation frame $\mathbf{a}_{t}^{n}$ as\n\begin{equation}\n\mathbf{a}_{t}^{b} = \mathbf{C}_{n,t}^{b}\mathbf{a}_{t}^{n} - \boldsymbol{\omega}^b_t \times (\boldsymbol{\omega}^b_t \times \mathbf{p}^b)\n\label{eq:acc_body}\n\end{equation}\nwhere $\mathbf{C}_{n,t}^{b}$ is the coordinate transformation matrix from navigation to body frame at time $t$ and $\mathbf{p}^b$ is the position of the IMU in the body frame.\n\n$\mathbf{v}_{t}^{b}$ can be computed given the velocity in the navigation frame $\mathbf{v}_{t}^{n}$ as\n\begin{equation}\n\mathbf{v}_{t}^{b} = \mathbf{C}_{n,t}^{b} (\mathbf{v}_{t}^{n} - \mathbf{v}_{c,v,t}^{n}) - \boldsymbol{\omega}^b_t \times \mathbf{p}^b\n\label{eq:velocity_body}\n\end{equation}\nwhere $\mathbf{v}_{c,v,t}^{n}$ is the water current velocity surrounding the vehicle at time $t$.\n\nEquation~\eqref{eq:damping} shows how the damping is defined given the linear and angular velocities at time 
$t$.\n\begin{equation}\n\mathbf{D}(\boldsymbol{\nu}_{t}, t) = \mathbf{D}_{l,t} \cdot \boldsymbol{\nu}_{t} + |\boldsymbol{\nu}_{t}|^{T} \cdot \mathbf{D}_{q,t} \cdot \boldsymbol{\nu}_{t}\n\label{eq:damping}\n\end{equation}\nThe linear damping matrix $\mathbf{D}_{l,t}$, the quadratic damping matrix $\mathbf{D}_{q,t}$ and the inertia matrix $\mathbf{M}_{t}$ are time dependent, since for each of them a sub-matrix is part of the filter state.\nThe filter states $\mathbf{D}_{l,\text{sub},t}$, $\mathbf{D}_{q,\text{sub},t}$, $\mathbf{M}_{\text{sub},t}$ $\in \mathbb{R}^{2\times3}$ are defined by removing the rows 3 to 6 and columns 3 to 5 from the full damping and inertia matrices $\mathbf{D}_{l}$, $\mathbf{D}_{q}$, $\mathbf{M}$ $\in \mathbb{R}^{6\times6}$. In other words, we model the $x,xy,x\psi,yx,y,y\psi$ terms of the matrices in the filter, where $\psi$ is the yaw, because we expect these terms to have the major impact on the horizontal accelerations and velocities for an AUV keeping roll and pitch stable. It would be easy to extend the filter state and add more model terms; however, this is a trade-off between the additional benefit, the computational complexity and potential filter instability.\n\nThe damping and inertia state prediction models have a base time varying component, with a timescale of around one hour, modeled as in \eqref{bias_equation}.\nThe vehicle parameters are initialized using a prior system identification, with the means of the states set at these values in the first order Markov process equation. Since the vehicle parameters are states in the filter, the systematic uncertainty in their error can be accounted for, which acts like a bias rather than a noise. 
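The sub-matrix selection described above can be illustrated as follows (a minimal sketch with 0-based indices; the helper name is hypothetical, and the remaining entries are assumed to stay at their prior system identification values):

```python
import numpy as np

# Rows 1-2 and columns 1, 2, 6 (1-based) survive the reduction: the
# x, xy, x-psi, yx, y, y-psi terms acting on horizontal motion and yaw.
ROWS = [0, 1]
COLS = [0, 1, 5]

def embed_submatrix(full_prior, sub):
    """Overwrite the estimated 2x3 block of a full 6x6 model matrix,
    keeping all other entries at their prior system-identification values."""
    full = full_prior.copy()
    full[np.ix_(ROWS, COLS)] = sub
    return full
```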
This allows even a low accuracy system identification, or very crude estimates of the parameter values, to still be used for estimation without resulting in overconfidence, since the bias error has a stronger and different effect than simply increasing the uncertainty in the vehicle model noise. \nThis also allows the vehicle modeling to adapt to different scenarios, such as surfacing or changes to the vehicle following the system identification, while constraining the value range of the parameters by utilizing the first order Markov process model. \nAlthough these parameters could have been modeled as constants, allowing them to have a time-varying component acts as a way to implement ``model uncertainty''.\nIn this way, the robustness of the filter improves as we no longer fully trust our model to be a perfect representation of the true dynamics, which is certainly the case when applying a simplified, computationally tractable model to the real world in real time.\n\n\subsection{ADCP-aiding update equations}\n\nGiven the 3D velocities output from the ADCP, the observation function for each ADCP measurement is\n\begin{equation}\n\textbf{z}_{c,i}(t) = \textbf{C}_{n,t}^{b}(- \textbf{v}_{t}^{n} + \frac{d_{max}-d_{i}}{d_{max}}\textbf{v}_{c,v,t}^{n} + \frac{d_{i}}{d_{max}}\textbf{v}_{c,b,t}^{n}) + \textbf{b}_{c,t} + \nu_{c}\n\label{z_adcp}\n\end{equation}\n\n\nwhere $\textbf{z}_{c,i}$ is the ADCP measured current vector in the i$^{th}$ measurement cell, $\textbf{C}_{n,t}^{b}$ is the coordinate transform from navigation\/world frame to ADCP\/body frame at time $t$,\n$\textbf{v}_{t}^{n}$ is the vehicle velocity in the world\/navigation frame, $\textbf{v}_{c,v,t}^{n}$ is the water current velocity surrounding the vehicle, $\textbf{v}_{c,b,t}^{n}$ is the water current velocity at the maximum ADCP range, $\textbf{b}_{c,t}$ is the bias in the ADCP measurement and $\nu_{c}$ is the random noise in the ADCP measurement, with a standard deviation 
given by the sensor manufacturer.\n\nTo reduce the number of filter states, the vertical velocity of the water currents is not estimated. The ADCP measurement model is a depth dependent function of two water current states, which linearly interpolates between them. The states are located at the vehicle position and at a water volume at the end of the ADCP measurement range. The water velocity and the ADCP bias state prediction models have a base time varying component, with a timescale of around one hour for the water current and half an hour for the bias, modeled as in \eqref{bias_equation}. In addition to this, the water velocity state will vary more given spatial motion through a water current vector field. This component scales the process model uncertainty of the water velocity state according to the vehicle velocity. In this way, if the vehicle is slowly traveling through the water current vector field, it can account for the spatial scale of the water currents, which can depend on the environment. For example, water currents near complex bathymetry or strong wind and tides can contribute to smaller spatial scale water current velocity changes, compared to the case of the mid-water ocean \cite{medagoda2015autonomous}.\n\n\section{Results}\n\n\n\n\n\nAll experiments were conducted using the \textit{FlatFish} AUV \cite{albiez2015flatfish} shown in Fig. \ref{fig:flatfish}.\nAs relevant sensors for our experiments, the vehicle is equipped with a KVH 1750 IMU, a Rowe SeaProfiler DualFrequency 300\/1200 kHz ADCP\/DVL, a Paroscientific 8CDP700-I pressure sensor, a u-blox PAM-7Q GPS receiver and six 60N Enitech ring thrusters. 
For heading evaluation purposes we also use a Tritech Gemini 720i Multibeam Imaging Sonar attached to the AUV.\nThe data sets were collected during the sea trials of the second phase of the \textit{FlatFish} project close to the shore of Salvador (Brazil) during April 2017.\n\nSince the experiments took place in the open ocean, a fiber optic tether was attached to the vehicle in all data sets for safety reasons. As a result, even though the vehicle model parameters were estimated with a prior system identification, there is a large error associated with the model given the tether, amounting to $\sim$20\% uncertainty in the parameter values. Nonetheless, the filter is robustly capable of accounting for this increase in the uncertainty of the vehicle model parameters. This allows the filter to adaptively change the parameters while keeping them in a constrained range through the use of the first order Markov process model.\nThe filter is capable of running at 14$\times$ real-time with an integration frequency of 100 Hz on computing available on the \textit{FlatFish} AUV.\n\n\subsection{Heading estimation experiment}\n\label{sec:heading_experiment}\n\n \n \n\nIn this data set we show that the filter is able to find its true heading without a global positioning reference, given an initial guess.\nThe mission consists of an initialization phase on the surface followed by a submerged phase before resurfacing. During the initialization phase the vehicle moves for around 8 minutes along a straight line in order to estimate its true heading and position by incorporating GPS measurements. 
In the submerged phase the vehicle changes its heading to face the target coordinate and follows a straight line for about 112 meters to reach it.\n\n\begin{figure}[!h]\n \centering\n \includegraphics [width=0.45\textwidth,trim=0 0 0 5,clip] {pictures\/random_init_min_max_heading_with_and_without_gps.pdf}\n \caption{The plots show the estimated heading during the mission given different filter configurations and initial headings distributed over 30$^{\circ}$.\n\t The green crosses show independent landmark based heading measurements. 200 seconds into the mission the heading offset was corrected, resulting in the short change of attitude.}\n \label{fig:heading_comp}\n\end{figure}\n\nFig. \ref{fig:heading_comp} shows six runs of the same data set in different filter configurations. Three GPS-aided runs were performed with initial headings distributed over 30$^{\circ}$: one with a close initial guess (black line), one with a 15$^{\circ}$ positive offset (cyan line) and one with a 15$^{\circ}$ negative offset (blue line). With the help of the GPS measurements the estimated headings converge in the first 5 minutes.\nThe three runs not integrating a global position reference, starting with the same heading distribution, show that the filter is able to find its true heading by observing the rotation of the earth (gyrocompassing), relying only on inertial and velocity measurements.\nAfter 15 minutes the GPS-aided and the non-GPS-aided estimated headings have converged with an uncertainty below 0.5$^{\circ}$ (1$\sigma$).\nInitial errors $>$15$^{\circ}$ will converge as well given more time. 
Initial errors close to 180$^{\circ}$, however, are critical.\nThe green crosses show multiple independent measurements of the expected vehicle heading based on landmarks (poles) visible in the multibeam imaging sonar on the vehicle.\nThe average difference between the landmark based headings and the filter estimates is below 1$^{\circ}$.\nWe expect the uncertainties of these measurements to be within 5$^{\circ}$ due to the uncertainties associated with the pole positions in surveyed maps and in the sonar images.\n\n\begin{figure}[!h]\n \centering\n \includegraphics [width=0.45\textwidth,trim=0 0 0 5,clip] {pictures\/random_init_min_max_heading_convergence_with_and_without_gps.pdf}\n \caption{The blue solid line is the error in heading with integrated GPS measurements.\n\t The red solid line is the error in heading without the integration of GPS measurements.\n\t The dashed lines are the corresponding uncertainties (1$\sigma$).}\n \label{fig:heading_diff}\n\end{figure}\n\nUsing the GPS-aided heading with a close initial guess shown in Fig. \ref{fig:heading_comp} (black solid line) as ground truth, we can have a closer look in Fig. \ref{fig:heading_diff} at the uncertainties and how the estimates improve.\nIn Fig. \ref{fig:heading_diff} both filter configurations start with an offset of -15$^{\circ}$ to the ground truth and an initial uncertainty of 30$^{\circ}$ (1$\sigma$).\nThe GPS-aided heading estimate converges, as expected, quickly to the ground truth while staying in the 1$\sigma$ bound.\nFor the heading estimate without global positioning reference we can see that the strong offset and high uncertainty in the beginning lead to a fast compensation in the correct direction with an overshoot slightly exceeding the 1$\sigma$ bound. 
As the experiment progresses we can see that observing different orientations helps to estimate the gyroscope bias and therefore to detect the error between the expected and the observed rotation of the earth given the current orientation. We have shown that our filter is able to estimate its true heading by observing the rotation of the earth and that observations from different attitudes help to improve the process.\n\n\subsection{Repeated square path experiment}\n\n \n \n \n\nIn this experiment we show how the filter performs when the vehicle travels a longer distance of 1 km without horizontal position aiding measurements, such as GPS.\nThe vehicle followed a square trajectory with an edge length of 50 meters, repeated 5 times, for $\sim$1 hour. After resurfacing, the position difference to the GPS ground truth is within 0.5\% of the traveled distance.\n\nStarting with an initialization phase (same as in \ref{sec:heading_experiment}) on the surface, to estimate its heading and position using GPS measurements, the vehicle submerges to 10 m depth, performs the mission and surfaces at the end.\nThe blue line in Fig. \ref{fig:square_trajectory} shows the trajectory of the vehicle from minute 20 to minute 80 in the mission, i.e. 
1 minute before submerging and 2 minutes after surfacing.\nThe red dots are the GPS measurements including outliers.\n\n\begin{figure}[!ht]\n \centering\n \includegraphics [width=0.45\textwidth,trim=0 0 0 5,clip] {pictures\/5x50m_square_trajectory.pdf}\n \caption{The blue solid line shows the trajectory of the vehicle performing a 50 meter square trajectory 5 times at a depth of 10 meters.\n\t After traveling the distance of 1 km the horizontal (North\/East plane) position difference is within 5 meters (0.5\% of distance traveled).}\n \label{fig:square_trajectory}\n\end{figure}\n\nThe pose filter used on the vehicle at the time the data set was created was not aware of the drift and the initial error in heading.\nOur filter can correct the heading by observing the rotation of the earth and compensate for DVL dropouts utilizing the motion model.\nHowever, during the mission a fiber optic tether was attached to the vehicle, which represents an unmodeled source of error.\n\n\begin{figure}[!ht]\n \centering\n \includegraphics [width=0.45\textwidth,trim=0 0 0 -2,clip] {pictures\/5x50m_square_horizontal_pos_error.pdf}\n \caption{The blue solid line shows the horizontal (North\/East plane) position difference with respect to the GPS measurements (including outliers).\n\t The red and magenta dashed lines represent the corresponding uncertainty (1$\sigma$ and 2$\sigma$).}\n \label{fig:square_pos_diff}\n\end{figure}\n\nThe blue line in Fig. 
\ref{fig:square_pos_diff} shows the position difference on the North\/East plane with respect to the GPS measurements (including outliers).\nDuring the first 20 minutes of the mission the GPS measurements are integrated in the filter, allowing initialization.\nAfter resurfacing (minute 78 and onward) the GPS measurements are not integrated, allowing us to observe the difference to the ground truth.\nAfter traveling a distance of 1 km the position difference is within 5 meters (0.5\% of distance traveled) and within the 2$\sigma$ bound of the position uncertainty.\n\n\begin{figure}[!ht]\n \centering\n \includegraphics [width=0.45\textwidth,trim=0 0 0 5,clip] {pictures\/5x50m_square_water_current.pdf}\n \caption{Estimated water current in north (red) and east (blue) direction. The dashed lines represent the corresponding uncertainties (2$\sigma$).}\n \label{fig:5x50m_square_water_current}\n\end{figure}\n\nIn the case that ADCP measurements are not available, the filter estimates the water currents only from the difference between the motion model based velocity and the DVL based velocity, as modeled in \eqref{eq:velocity_body}.\nFig. \ref{fig:5x50m_square_water_current} shows the estimated water current velocities in North and East direction during this experiment without the aiding of ADCP measurements.\nDuring the first 20 minutes the uncertainties of the water current velocities stay constant, since we apply the model-aiding measurements with an increased uncertainty in case the vehicle is surfaced.\nWhen the mission starts and the vehicle submerges (starting around minute 21) to a depth of 10 meters, we can see that the estimated water flow changes from the one at the surface and that its velocity continuously increases during the 1 hour mission. 
The uncertainties reduce during this phase since we trust the model more when submerged.\nThe impact of the tether attached to the vehicle is seen as an unmodeled but estimated drag, which changes depending on the direction the vehicle travels.\n\n\begin{figure}[t]\n \centering\n \includegraphics [width=0.45\textwidth] {pictures\/5x50m_square_damping_terms.pdf}\n \caption{Linear damping in x (red) and y (blue) direction in the body frame. The dashed lines represent the corresponding uncertainties (2$\sigma$).}\n \label{fig:5x50m_damping}\n\end{figure}\n\nFig. \ref{fig:5x50m_damping} shows the linear damping terms on the x- and y-axes in the body frame of the vehicle and how they are refined during the mission.\nBecause the vehicle travels mainly in the forward direction during the mission, the damping term on the x-axis is refined and the corresponding uncertainty reduces more compared to the y-axis damping term. The uncertainty reduction reaches a limit, however, due to observability, and the first-order Markov process model ensures that the parameters become neither overconfident nor unconstrained. 
In this way, the model parameters can adapt with time to new conditions and implicitly represent some uncertainty in the model equations themselves.\n\n\n\subsection{Square path with ADCP}\n\n\begin{figure}[!ht]\n \centering\n \includegraphics [width=0.45\textwidth,trim=5 10 5 25,clip] {pictures\/adcp_traj.pdf}\n \caption{The solid blue line shows the trajectory of the vehicle performing a square path at a depth of 2 meters while surfacing at each corner.}\n \label{fig:adcp_traj}\n\end{figure}\n\n\begin{figure*}[!ht]\n \centering\n \includegraphics [width=0.75\textwidth,trim=0 5 0 0,clip] {pictures\/adcp_compare.pdf}\n \caption{Square path with ADCP - The position uncertainties and differences from the ground truth are compared for different data denials.}\n \label{fig:adcp_compare}\n\end{figure*}\n\n\begin{table}[]\n\caption{Filter position difference from ground truth and estimated uncertainty}\n\label{adcp_table}\n\centering\n\begin{tabular}{|l|l|l|}\n\hline\nFilter measurements used & \thead{Estimated uncertainty\\\\after 1000 seconds} & \thead{Position difference \\\\from ground truth\\\\ after 1000 seconds} \\\\ \hline\nInertial + ADCP & 50.9 m (2$\sigma$) & 22.0 m \\\\ \hline\nInertial + model & 45.7 m (2$\sigma$) & 21.5 m \\\\ \hline\nInertial + model + ADCP & 32.3 m (2$\sigma$) & 16.1 m \\\\ \hline\n\end{tabular}\n\end{table}\n\nThis mission undergoes a 600-second initialization phase on the surface (as in \ref{sec:heading_experiment}), then 1000 seconds of data denial to show the performance of the filter in different scenarios. During the data denial phase, the vehicle completes a square trajectory and surfaces at the corners. The ground truth trajectory is shown in Fig. \ref{fig:adcp_traj}. The ground truth is determined using inertial, DVL, GPS, ADCP and model-aiding.\n\nSince this mission also includes ADCP measurements interleaved with DVL, the ADCP-aiding update is applied. 
During this mission, there are cases where the downward-facing DVL drops out due to very low altitude (between 0 and 2 m during the mission), and there are collisions with the sandy bottom. Despite this challenging data set, the filter is capable of estimating the position of the vehicle, validated by the smooth trajectory without sudden corrections at the GPS measurements during the corner surfacing shown in Fig. \ref{fig:adcp_traj}.\n\nWith the full measurement filter (without data denial), the filter is able to handle DVL dropouts, which could be the case in low-altitude scenarios such as inspection or docking, by letting the model-aiding fill in during these time periods. Data denial further validates the filter performance in DVL loss scenarios, as shown in Fig. \ref{fig:adcp_compare}. In cases of DVL bottom-lock loss due to altitude being too high (simulated through data denial), the combination of ADCP and model-aiding gives the best solution, compared to either ADCP or model-aiding alone.\n\nThe position estimate differences compared to the ground truth for these data denials are consistent with the 2$\sigma$ uncertainty bounds, while remaining stable. At approximately 400 seconds following data denial, the filter with only ADCP and inertial measurements appears to slightly exceed the 2$\sigma$ bounds, due to a low-altitude section with very few valid ADCP measurements, and some ADCP outliers are incorporated into the filter since the innovation gate increases due to inertial-only dead-reckoning. The ground truth also increases in uncertainty at this stage due to the lack of DVL measurements, relying more on the model-aiding. Following further measurements, the filter recovers and is able to reduce the difference between the filter estimate and the ground truth. 
This is possible since the water current estimate will not vary significantly on this timescale, so that the vehicle can use this state when ADCP measurements become available again to estimate the velocity and thus the position of the vehicle.\n\nThe ADCP-aiding typically performed worse in this case than the model-aiding, but this can be attributed to the low altitude, where there are very few valid ADCP measurements available. Nonetheless, incorporating these ADCP measurements into the model-aiding improved on the performance of either option. In addition to providing another source of velocity-aiding information, the ADCP also allows an independent source of information regarding the water currents surrounding the vehicle, which is required to transform the water-relative velocity of the vehicle model to the navigation-frame position used in the filter.\n\nThe results are further quantitatively compared in Table \ref{adcp_table}. The combination of the ADCP-aiding and model-aiding results in a significant improvement compared to model-aiding alone, reducing position uncertainty from 45.7 m (2$\sigma$) to 32.3 m (2$\sigma$) during 1000 seconds of data denial.\n\n\n\n\n\section{Conclusions}\n\n\nThe filter designed and implemented in this paper would be appropriate for general AUV navigation, despite not using a navigation-grade IMU. In comparison to \cite{hegrenaes2011model}, the primary insight in the design of this filter is the incorporation of the acceleration state and the addition of many parameters as states to account for their correlated errors, while modeling them with a first-order Markov process to constrain the change the filter can apply. The engineering design trade-off is that adding too many states will unnecessarily add computational complexity and potential filtering instability. 
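As a minimal illustration of the first-order Markov (Gauss-Markov) parameter modeling discussed above, the following Python sketch discretizes such a process and propagates the variance of one parameter state. The time constant and steady-state standard deviation are illustrative assumptions, not the values used on the \textit{FlatFish} filter.

```python
import math

def gauss_markov_discretize(tau, sigma_ss, dt):
    """Discretize a first-order Gauss-Markov process  x' = -x/tau + w.

    Returns the state-transition factor phi and the process-noise variance Q,
    chosen so that the propagated variance converges to sigma_ss**2: the
    parameter state stays bounded (neither overconfident nor unconstrained)."""
    phi = math.exp(-dt / tau)
    Q = sigma_ss**2 * (1.0 - phi**2)
    return phi, Q

# Illustrative values: 1 h correlation time, 10% steady-state sigma, 100 Hz step.
phi, Q = gauss_markov_discretize(tau=3600.0, sigma_ss=0.1, dt=0.01)

# Propagate the variance of a damping-parameter state from near-zero:
P = 1e-6
for _ in range(2_000_000):
    P = phi**2 * P + Q
print(P)  # approaches sigma_ss**2 = 0.01 instead of growing without bound
```

With a pure random-walk model the variance would grow without limit, and with a constant parameter it would collapse toward zero; the Gauss-Markov choice is what lets the damping terms adapt while their uncertainty saturates.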
\n\nThis furthers the state of the art in robust filter design for INS, model-aiding and ADCP measurements, capable of real-time performance, consistency and stability as outlined in the experiments, while remaining conceptually simple. \nThis paper has shown a manifold-based UKF that applies a novel strategy for inertial, model-aiding and ADCP measurement incorporation.\nThe filter is capable of observing and utilizing the Earth's rotation for heading estimation to within 1$^{\circ}$ (2$\sigma$) by estimating the KVH 1750 IMU biases. \nThe drag and thrust model-aiding accounts for the correlated nature of vehicle model parameter errors by applying them as states in the filter. The usage of the model-aiding is validated through observing that the filter remains consistent and does not become overconfident or unstable in the real-world experiments, despite uncertain vehicle model parameters. \n\nIt is hypothesized that the usage of time-varying first-order Markov processes to model these parameters acts as a way to implement ``model uncertainty'', improving the robustness of the filter as we no longer fully trust our model to be a perfect representation of the true dynamics, which is most definitely the case when applying a simplified and computationally tractable model for real-time usage in the real world.\nADCP-aiding provides further information for the model-aiding in the case of DVL bottom-lock loss. The importance of water current estimation is highlighted in underwater navigation in the absence of external aiding, justifying the use of the model-aiding and ADCP sensor. Through data denial, scenarios with no DVL bottom lock are shown to be consistently estimated. 
Additionally, this work was implemented using the MTK and ROCK frameworks in C++, and is capable of running at 14$\times$ real-time on the computing available on the \textit{FlatFish} AUV.\n\n\nFuture work would include full spatiotemporal real-time ADCP-based methods to more accurately model and observe the water current state around the vehicle. This requires implementing a mapping approach, such as the work from \cite{medagoda2016mid} \cite{medagoda2015autonomous}.\nThe primary source of bias uncertainty for the KVH 1750 IMU is due to temperature change. If the temperature of the IMU can be controlled, or this bias can be calibrated with further experiments, then the performance can be further improved. Further heading evaluation will be possible with better ground truth, such as a visual confirmation or by utilizing an independent heading estimator such as an iXblue PHINS, so that a more accurate heading comparison can be undertaken. The error in the alignment of sensors could also be further compensated, perhaps by adding states to the filter similar to the strategy for other systematic biases. Finally, further experiments and implementations in a variety of scenarios are planned to further test and refine the proposed filtering strategy.\n\n\n\n\n\n\addtolength{\textheight}{-12cm} \n \n \n \n \n \n\n\n\n\n\n\n\n\n\n\n\section*{Acknowledgement}\n\nWe would like to thank Shell and SENAI CIMATEC for the opportunity to test the presented work on \textit{FlatFish}.\n\nWe would also like to thank all colleagues of the \textit{FlatFish} team for their support and Javier Hidalgo-Carri\'o for his review.\n\nThis work was supported in part by the EurEx-SiLaNa project\n(grant No. 
50NA1704) which is funded by the German Federal\nMinistry of Economics and Technology (BMWi).\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\n\label{sec:intro}\nDislocations can alter different stages of the precipitation process\nin crystalline solids, which consists of nucleation, growth and\ncoarsening \cite{Larche_1979,Wagner_Kampmann_1991}. Distortion of the\nlattice in the proximity of a dislocation can enhance nucleation in\nseveral ways \cite{Porter_Easterling_1981,Christian_2002}. The main\neffect is the reduction in the volume strain energy associated with\nthe phase transformation. Nucleation on dislocations can also be\nhelped by solute segregation, which raises the local concentration of\nthe solute in the vicinity of a dislocation, caused by migration of\nsolutes toward the dislocation, the Cottrell atmosphere effect. When\nthe Cottrell atmosphere becomes supersaturated, nucleation of a new\nphase may occur followed by growth of the nucleus. Moreover, a dislocation\ncan aid the growth of an embryo beyond its critical size by providing\na diffusion passage with a lower activation energy.\n\n\nPrecipitation of a second phase along dislocation lines has been\nobserved in a number of alloys\n\cite{Aaronson_et_al_1971,Aaron_Aaronson_1971}. For example, in\nAl-Zn-Mg alloys, dislocations not only induce and enhance nucleation\nand growth of the coherent second-phase MgZn$_2$ precipitates, but\nalso produce a spatial precipitate size gradient around them\n\cite{Allen_Vandesande_1978,Deschamps_et_al_1999,Deschamps_Brechet_1999}.\nCahn \cite{Cahn_1957} provided the first quantitative model for\nnucleation of a second phase on dislocations in solids. In Cahn's\nmodel, it is assumed that a cross-section of the nucleus is circular,\nwhich is strictly valid for a screw dislocation\n\cite{Larche_1979}. 
Also, it is posited that the nucleus is incoherent\nwith the matrix so that a constant interfacial energy can be allotted\nto the boundary between the new phase and the matrix. An incoherent\nparticle interface with the matrix has a different\natomic configuration than that of either phase. The matrix is\nan isotropic elastic material and the formation of the precipitate\nreleases the elastic energy initially stored in its volume. Moreover,\nthe matrix energy is assumed to remain constant during precipitation. In\nthis model, besides the usual volume and surface energy terms in the\nexpression for the total free energy of formation of a nucleus of a\ngiven size, there is a term representing the strain energy of the\ndislocation in the region currently occupied by the new phase. Cahn's\nmodel predicts that both a larger Burgers vector and\na more negative chemical free energy change between the precipitate\nand the matrix induce higher nucleation rates, in agreement with\nexperiment \cite{Aaronson_et_al_1971,Aaron_Aaronson_1971}.\n\n\nThe segregation phenomenon around dislocations, i.e. the Cottrell\natmosphere effect, has been observed among others in Fe-Al alloys\ndoped with boron atoms \cite{Blavette_et_al_1999} and in silicon\ncontaining arsenic impurities \cite{Thompson_et_al_2007}, in\nqualitative agreement with Cottrell and Bilby's predictions\n\cite{Cottrell_Bilby_1949}. Cottrell and Bilby considered segregation\nof impurities to straight-edge dislocations with the Coulomb-like\ninteraction potential of the form $\phi=A\sin \theta\/r$, where $A$\ncontains the elasticity constants and the Burgers vector, and\n$(r,\theta)$ are the polar coordinates. Cottrell and Bilby ignored the\nflow due to concentration gradients and solved the simplified diffusion\nequation in the presence of the aforementioned potential field. 
The\nmodel predicts that the total number of impurity atoms removed from\nsolution to the dislocation increases with time $t$ according to $N(t)\n\sim t^{2\/3}$, which is in good agreement with the early stages of\nsegregation of impurities to dislocations, e.g. in iron containing\ncarbon and nitrogen \cite{harper_1951}. A critical review of the\nBilby-Cottrell model, its shortcomings and its improvements is given\nin \cite{Bullough_Newman_1970}.\n\n\nThe object of our present study is the diffusion-controlled growth of\na new phase, i.e., a post-nucleation process in the presence of a\ndislocation field rather than the segregation effect. As in Cahn's\nnucleation model \cite{Cahn_1957}, we consider an incoherent\nsecond-phase precipitate growing under the action of a screw\ndislocation field. This entails that the stress field due to the\ndislocation is pure shear. The equations used for diffusion-controlled\ngrowth are radially symmetric. These equations for growth of a second phase in a solid or from a\nsupercooled liquid have been solved, in the absence of an external field,\nby Frank \cite{Frank_1950} and discussed by Carslaw and Jaeger\n\cite{Carslaw_Jaeger_1959}. The exact analytical solutions of the\nequations and their various approximations thereof have been\nsystematized and evaluated by Aaron et al. \cite{Aaron_et_al_1970},\nwhich included the relations for growth of planar\nprecipitates. Applications of these solutions to materials can\nbe found in many publications, e.g. more recent papers on growth of\nquasi-crystalline phase in Zr-base metallic glasses\n\cite{Koster_et_al_1996} and growth of Laves phase in Zircaloy\n\cite{Massih_et_al_2003}. We should also mention another\ntheoretical approach to the problem of nucleation and growth of an\nincoherent second-phase particle in the presence of a dislocation\nfield \cite{Sundar_Hoyt_1991}. 
Sundar and Hoyt \n\cite{Sundar_Hoyt_1991} introduced the dislocation field, as in Cahn\n\cite{Cahn_1957}, in the nucleation part of the model,\nwhile for the growth part the steady-state solution of the concentration field\n(Laplace equation) for elliptical particles was utilized.\n\n\nThe organization of this paper is as follows. The formulation of the\nproblem, the governing equations and the formal solutions are given in\nsection \ref{sec:formul}. Solutions of specific cases are presented in\nsection \ref{sec:comp}, where the supersaturation as a function of the\ngrowth coefficient is evaluated as well as the spatial variation of\nthe concentration field in the presence of a dislocation. In section\n\ref{sec:disc}, besides a brief discourse on the issue of interaction\nbetween point defects and dislocations, we calculate the\nsize-dependence of the concentration at the curved precipitate\/matrix interface\nfor the problem under consideration. We have carried out our\ncalculations in space dimensions $d=2$ and $d=3$. Some mathematical\nanalyses for $d=3$ are relegated to appendix \ref{sec:appa}.\n\n\n\n\section{Formulation and general solutions}\n\label{sec:formul}\n\n We consider the problem of growth of the new phase, with radial\nsymmetry (radius $r$), governed by the diffusion of a single entity,\n$u\equiv u(r,t)$, which is a function of space and time $(r,t)$. $u$\ncan be either matter (solvent or solute) or heat (the latent heat of\nformation of new phase). The diffusion in the presence of an external\nfield obeys the Smoluchowski equation \cite{Chandra_1943} of the form\n\begin{eqnarray}\n \label{eqn:smolu}\n \frac{\partial u}{\partial\nt} & = & \nabla \cdot \mathbf{J}, \\\\\n\label{eqn:smolu-flux}\n \mathbf{J} & = & D(\nabla u-\beta\mathbf{F}u),\n\end{eqnarray}\n\noindent\nwhere $D$ is the diffusivity, $\beta=1\/k_BT$, $k_B$ the Boltzmann\nconstant, $T$ the temperature, and $\mathbf{F}$ is an external field\nof force. 
The force can be local (e.g., stresses due to dislocation\ncores in crystalline solids) or caused externally by an applied field (e.g., electric field\nacting on charged particles). If the acting force is conservative, it\ncan be obtained from a potential $\phi$ through $\mathbf{F}=-\nabla\n\phi$. The considered geometric condition applies\nto the case of second-phase particles growing in a\nsolid solution under phase transformation \cite{Massih_et_al_2003} or\ndroplets growing either from vapour or from a second liquid\n\cite{Frank_1950}. A steady state is reached when\n$\mathbf{J}=\mathrm{const.}=0$, resulting in\n$u=u_0\exp(-\beta\phi)$.\n\nHere, we suppose that the diffusion field is along\nthe core of the dislocation line and that a cross-section of the\nprecipitate (nucleus), perpendicular to the dislocation, is circular,\ni.e., the precipitate surrounds the dislocation. Furthermore, we treat\nthe matrix and solution as linear elastic isotropic media. The elastic\npotential energy of a stationary dislocation of length $l$ is given by\n\cite{Kittel_1996,Friedel_1967}\n\begin{equation}\n \label{eqn:screw}\n \phi = A\ln\frac{r}{r_0}, \qquad \qquad \qquad \textrm{for}\quad r\ge r_0\n\end{equation}\n\noindent\nwhere $A=Gb^2l\/4\pi$ for a screw dislocation, $G$ is the elastic shear\nmodulus of the crystal, $b$ the magnitude of the Burgers vector, $\nu$\nPoisson's ratio, and $r_0$ is the usual effective core radius. Also,\nwe assume that the dislocation's elastic energy is relaxed within the\nvolume occupied by the precipitate and that the precipitate is\nincoherent with the matrix. Hence the interaction energy between the\nelastic field of the screw dislocation and the elastic field of the solute\nis zero. 
In the case of an edge dislocation and coherent\nprecipitate\/matrix interface, this interaction is non-negligible.\n\n\nWe study the effect of the potential field (\\ref{eqn:screw}) on\ndiffusing atoms in solid solution using the Smoluchowski\nequation (\\ref{eqn:smolu}). The governing\nequation in spherical symmetry, in $d$ spatial dimension, with $B \\equiv \\beta A$, is \n\\begin{equation}\n \\label{eqn:pde-1}\n \\frac{1}{D}\\frac{\\partial u}{\\partial t}=\n \\frac{\\partial^2 u}{\\partial r^2}+(d-1+B)\\frac{1}{r}\\frac{\\partial\nu}{\\partial r}+(d-2)B\\frac{u}{r^2}.\n\\end{equation}\n\\noindent\n Making a usual change of variable to the\ndimensionless reduced radius $s=r\/\\sqrt{Dt}$, the partial differential\nequation (\\ref{eqn:pde-1}) is reduced to an ordinary differential\nequation of the form\n\\begin{equation}\n \\label{eqn:ode-1}\n \\frac{{\\rm d^2} u}{{\\rm d}s^2}+\\Big(\\frac{s}{2}+\\frac{d-1+B}{s}\\Big)\\frac{{\\rm\nd}u}{{\\rm d}s}\n+(d-2)B\\frac{u}{s^2}=0,\n\\end{equation}\n\\noindent\nwith the boundary conditions, $u(\\infty) = u_m$, and $u(2\\lambda) =\nu_s$, where $u_m$ is the mean (far-field) solute concentration in the\nmatrix and $u_s$ is the concentration in the matrix at the\nnew-phase\/matrix interface determined from thermodynamics of new\nphase, i.e., phase equilibrium and the capillary effect. Moreover,\nthe conservation of flux at the interface radius $R=2\\lambda\\sqrt{Dt}$ gives\n\\begin{equation}\n \\label{eqn:flux-1}\n K_d R^{d-1}\\vert \\mathbf{J}\\vert_{r=R} = q \\frac{{\\rm d}V_d}{{\\rm d}t},\n\\end{equation}\n\\noindent\nwhere $K_d=2\\pi^{d\/2}\/\\Gamma(d\/2)$, $\\Gamma(x)$ the usual\n$\\Gamma$-function, $V_d=2\\pi^{d\/2}R^d\/d\\Gamma(d\/2)$, and $q$ the\namount of the diffusing entity ejected at the boundary of the growing\nphase per unit volume of the latter (new phase) formed. 
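Before passing to the $s$-space form of the flux condition, the steady-state remark above can be made concrete with a short numerical sketch (illustrative only, not part of the original analysis): for the logarithmic potential (\ref{eqn:screw}), the zero-flux condition $\mathbf{J}=0$ gives the power-law, Cottrell-atmosphere-like profile $u=u_0(r/r_0)^{-B}$, which the finite-difference check below confirms.

```python
import math

def beta_phi(r, B, r0=1.0):
    """Dimensionless screw-dislocation potential: beta*phi = B*ln(r/r0)."""
    return B * math.log(r / r0)

def u_steady(r, B, u0=1.0, r0=1.0):
    """Zero-flux steady state u = u0*exp(-beta*phi) = u0*(r/r0)**(-B),
    a power-law solute profile around the dislocation."""
    return u0 * math.exp(-beta_phi(r, B, r0))

def flux(r, B, h=1e-6):
    """J/D = du/dr - beta*F*u, with beta*F = -d(beta*phi)/dr = -B/r."""
    dudr = (u_steady(r + h, B) - u_steady(r - h, B)) / (2.0 * h)
    return dudr - (-B / r) * u_steady(r, B)

# The flux vanishes (to finite-difference accuracy) for any r > r0 and B:
for B in (0.5, 1.0, 2.0):
    for r in (1.5, 2.0, 5.0):
        assert abs(flux(r, B)) < 1e-6
```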
In $s$-space,\nequation (\ref{eqn:flux-1}) is written as \n\begin{equation}\n \label{eqn:conserve-flux}\n \Big(\frac{{\rm d}u}{{\rm d}s}\Big)_{s=2\lambda} = -\Big(\frac{Bu_s}{2\lambda}+q\lambda\Big).\n\end{equation}\n\noindent\nThe boundary condition $u(2\lambda) = u_s$ and equation (\ref{eqn:conserve-flux})\nwill provide a relationship between $u_s$ and $u_m$ through $\lambda$.\n\nFor $d=2$, equation (\ref{eqn:ode-1}) simplifies considerably, and we find \n\begin{equation}\n \label{eqn:sol-2d}\n u(s)=u_m+\frac{(Bu_m+2q\lambda^2)\lambda^B e^{\lambda^2}\Gamma(-B\/2,s^2\/4)}\n {2-B\lambda^Be^{\lambda^2}\Gamma(-B\/2,\lambda^2)},\n\end{equation}\n\noindent\nwhere we utilized $u(\infty) = u_m$ and equation (\ref{eqn:conserve-flux}). Here\n $\Gamma(a,z)$ is the incomplete\ngamma function defined by the integral $\Gamma(a,z)=\int_z^\infty\nt^{a-1}e^{-t}dt$ \cite{Abramowitz_Stegun_1964}. The yet unknown parameter\n$\lambda$ is found from relation (\ref{eqn:sol-2d}) at $u(2\lambda) =\nu_s$ for a set of input parameters $u_s$, $u_m$, $q$, and $B$, through which the concentration\nfield, equation (\ref{eqn:sol-2d}), and the growth of the second phase\n($R=2\lambda\sqrt{Dt}$) are determined.\n\n\nLet us consider the case of $d=3$, that is, assume that the\npotential in equation (\ref{eqn:screw}) is meaningful for a\nspherically symmetric system. In this case, for $B \ne 0$, the point $z=0$ is\na regular singularity of equation\n(\ref{eqn:ode-1}), while $z=\infty$ is an irregular singularity for\nthis equation, see appendix \ref{sec:appa} for further\nconsideration. 
Nevertheless, for\n$d=3$, the general solution of equation (\ref{eqn:ode-1}) is expressed in the form\n\begin{equation}\n \label{eqn:sol-3d}\n u(s)=\n2C_1\,{_1\!F}_1\Big(-\frac{1}{2};\frac{1+B}{2};-\frac{s^2}{4}\Big)s^{-1}\n+ 2^{B}C_2\,{_1\!F}_1\Big(-\frac{B}{2};\frac{3-B}{2};-\frac{s^2}{4}\Big) s^{-B},\n\end{equation}\n\noindent\nwhere ${_1\!F}_1(a;b;z)$ is Kummer's confluent hypergeometric function,\nsometimes denoted by $M(a,b,z)$ \cite{Abramowitz_Stegun_1964}. The\nintegration constants $C_1$ and $C_2$ in equation (\ref{eqn:sol-3d}) can be determined\nby invoking equation (\ref{eqn:conserve-flux}) and also the condition\n$u(\infty)=u_m$, cf. appendix \ref{sec:appa}.\n\n\section{Computations}\n\label{sec:comp}\n\nTo study the growth behavior of a second phase in a solid solution\nunder the action of a screw dislocation field, we attempt to compute the\ngrowth rate constant as a function of the supersaturation parameter $k$,\ndefined as $k \equiv (u_s-u_m)\/q_u$ with $q_u=u_p-u_s$, where $u_p$\n is the composition of the nucleus \cite{Aaron_et_al_1970}. 
For $d=2$,\ni.e., a cylindrical second-phase platelet, equation (\\ref{eqn:sol-2d}) with\n$u(2\\lambda) = u_s$ yields\n\n\\begin{equation}\n k = \\Bigg[\\frac{2\\lambda^2+Bu_m(u_p-u_s)^{-1}}{2-B\\lambda^B e^{\\lambda^2}\\Gamma\\big(-B\/2,\\lambda^2\\big)}\\Bigg]\n \\lambda^B e^{\\lambda^2}\\Gamma\\big(-B\/2,\\lambda^2\\big).\n\\label{eqn:supsat-2d-exact}\n\\end{equation}\n\\noindent\nFor $B=0$, the relations obtained by Frank \\cite{Frank_1950} are recovered,\nnamely\n\\begin{eqnarray}\n u(z) & = & u_m + q_u \\lambda^2e^{\\lambda^2}E_1(z^2\/4), \\label{eqn:sol-0d2} \\\\\n k & = & \\lambda^2 e^{\\lambda^2}E_1(\\lambda^2), \\label{eqn:flux-0d2}\n\\end{eqnarray}\n\\noindent\nwhere $E_1(x)$ is the exponential integral of order one, related to the\nincomplete gamma function through the identity\n$E_n(x)=x^{n-1}\\Gamma(1-n,x)$ \\cite{Abramowitz_Stegun_1964}.\n\nFrom equation (\\ref{eqn:supsat-2d-exact}), it is seen that a complete separation of the supersaturation parameter\n$k \\equiv (u_s-u_m)(u_p-u_s)^{-1}$ is not possible for $B \\ne\n0$. However, for $u_s << u_p$ (a reasonable proviso) we write\n\\begin{equation}\n k = \\Big(\\lambda^2 + \\frac{B}{2}\\,\\epsilon\\Big)\\, \n \\lambda^B e^{\\lambda^2}\\Gamma\\big(-B\/2,\\lambda^2\\big)+\\mathcal{O}(\\epsilon^2),\n\\label{eqn:supsat-2d-approx}\n\\end{equation}\n\\noindent\nwith $\\epsilon \\equiv u_s\/u_p$. 
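To give a numerical feel for the $B=0$ limit, the following Python sketch (illustrative only; the series truncation and bisection bracket are assumptions of this sketch, not of the paper) evaluates Frank's relation (\ref{eqn:flux-0d2}) and inverts it for the growth coefficient $\lambda$ at a given supersaturation $k$.

```python
import math

def E1(x):
    """Exponential integral E1(x) via its convergent series
    E1(x) = -gamma - ln(x) + sum_{n>=1} (-1)^(n+1) x^n / (n*n!),
    adequate for the moderate arguments used here."""
    euler_gamma = 0.5772156649015329
    total, term = 0.0, 1.0
    for n in range(1, 60):
        term *= -x / n                 # term = (-x)^n / n!
        total += -term / n             # adds (-1)^(n+1) x^n / (n*n!)
    return -euler_gamma - math.log(x) + total

def k_of_lambda(lam):
    """Frank's d=2, B=0 relation (eqn:flux-0d2): k = lam^2 e^{lam^2} E1(lam^2)."""
    return lam**2 * math.exp(lam**2) * E1(lam**2)

def lambda_of_k(k, lo=1e-6, hi=3.0, iters=200):
    """k is monotonically increasing in lambda, so bisection suffices."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if k_of_lambda(mid) < k:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(k_of_lambda(1.0), 4))   # ~0.5963
print(lambda_of_k(0.3))             # ~0.45
```

For $B \ne 0$ the same inversion applies to relation (\ref{eqn:supsat-2d-approx}), with the incomplete gamma function in place of $E_1$.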
For $B=1$, equations\n(\ref{eqn:sol-2d}) and (\ref{eqn:supsat-2d-approx}) yield, respectively\n\begin{eqnarray}\n u(z) & = & u_m + \frac{2\lambda\,e^{\lambda^2}\,(u_m + 2q_u\lambda^2)\,E_{3\/2}(z^2\/4)}\n{[2-e^{\lambda^2} E_{3\/2}(\lambda^2)]z},\n \label{eqn:sol-1d2} \\\\\n k & = & \Big(\lambda^2 + \frac{\epsilon}{2}\Big)\,e^{\lambda^2}E_{3\/2}(\lambda^2)+\mathcal{O}(\epsilon^2).\n\label{eqn:flux-1d2}\n\end{eqnarray}\n\noindent\nSimilarly for $B=2$, we have \n\begin{eqnarray}\n u(z) & = & u_m + \frac{4\lambda^2e^{\lambda^2}(u_m + q_u \lambda^2)E_2(z^2\/4)\n}{[1-e^{\lambda^2} E_{2}(\lambda^2)]z^2},\n \label{eqn:sol-2d2} \\\\\n k & = & (\lambda^2 + \epsilon)\,e^{\lambda^2}E_2(\lambda^2)+\mathcal{O}(\epsilon^2).\n\label{eqn:flux-2d2}\n\end{eqnarray}\n\noindent\n\nWe have plotted the growth coefficient $\lambda=R\/2\sqrt{Dt}$ as a\nfunction of the supersaturation parameter $k$ in figure\n\ref{fig:k-2d} and the spatial variation of the concentration field\nin figure \ref{fig:u-2d} for $d=2$ and several values of $B$. The\ncomputations are performed to $\mathcal{O}(\epsilon^2)$ with\n$\epsilon=0.01$. Figure \ref{fig:k-2d} shows that $\lambda$ is an\nincreasing function of $k$; and also, as $B$ is raised, $\lambda$ is\nelevated. This means that an increase in the amplitude of the dislocation\nforce (e.g., the magnitude of the Burgers vector) enhances second-phase\ngrowth in an alloy.\n\nFigure \ref{fig:u-2d} displays the reduced concentration versus the\nreduced radius $z=r\/\sqrt{Dt}$ for $\lambda=1$. The reduced\nconcentration is calculated via equation (\ref{eqn:sol-2d}). It is seen that for\n$z \lesssim 1.6$ the concentration is enriched with increasing $B$,\nwhereas for $z \gtrsim 1.6$, it is vice versa. So, for\n$\lambda=1$, the crossover $z$-value is $z_c \approx 1.6$. 
Also, as\n$\lambda$ is reduced, $z_c$ is decreased.\n\n\begin{figure}[htbp]\n\begin{center}\n \includegraphics[width=0.80\textwidth]{fig\/lambda_k2d}\n \caption{ Growth coefficient $\lambda$ as a function of supersaturation $k$ \n at various levels of dislocation force\namplitude $B$ for a circular plate ($d=2$) and $u_s=0.01u_p$.}\n\label{fig:k-2d}\n \end{center}\n\end{figure}\n\begin{figure}[htbp]\n \begin{center}\n \includegraphics[width=0.80\textwidth]{fig\/u2d}\n\caption{Reduced concentration field as a function of reduced distance\n from the surface of the circular plate ($d=2$) at various levels of dislocation force\namplitude $B$ and at $\lambda=1$.}\n\label{fig:u-2d}\n\end{center}\n\end{figure}\n\nFor $d=3$, i.e., a spherical second-phase particle in the absence of a\ndislocation field ($B=0$), we find\n\begin{eqnarray}\n u(z) & = & u_m + 2 q_u\n\lambda^3 e^{\lambda^2}\Big[\frac{2e^{-z^2\/4}}{z}\n-\sqrt{\pi}\,\mathrm{erfc}(z\/2)\Big],\n \label{eqn:sol-3d-b0} \\\\\n k & = & 2\lambda^2\Big[1-\sqrt{\pi}\,\lambda\, e^{\lambda^2}\mathrm{erfc}(\lambda)\Big].\n\label{eqn:flux-3d-b0}\n\end{eqnarray}\n\noindent\nThis corresponds to the results obtained by Frank\n\cite{Frank_1950}.\n\n For $d=3$ and $B=2$, equation (\ref{eqn:ode-1}) is\nsimplified and an analytical solution can be found, resulting in\n\begin{eqnarray}\n u(z) & = & \Bigg(\frac{e^{z^2\/4}(z^2+2)\Big[\sqrt{\pi}\lambda\ne^{\lambda^2}\Big(\mathrm{erf}(\frac{z}{2})-\mathrm{erf}(\lambda)\Big)-1\Big]\n+2\lambda e^{\lambda^2}z}{\sqrt{\pi}\lambda e^{\lambda^2}\mathrm{erfc}(\lambda)-1}\Bigg)\n\frac{e^{-z^2\/4}}{z^2}u_m +\n\nonumber\\\\\n & & + \;\n \Bigg(\frac{\lambda^3 e^{\lambda^2}\Big[2z-\sqrt{\pi}e^{z^2\/4}(z^2+2)\mathrm{erfc}(\frac{z}{2})\Big]}\n{\sqrt{\pi}\lambda e^{\lambda^2}\mathrm{erfc}(\lambda)-1}\Bigg)\frac{e^{-z^2\/4}}{z^2}q_u.\n \label{eqn:sol-3d2}\n\end{eqnarray}\n\noindent\nPutting $u(2\lambda)=u_s$, we 
obtain\n\\begin{equation}\n k = \\frac{1+2\\lambda^2\\Big(1-\\sqrt{\\pi}\\lambda\\,e^{\\lambda^2}\\mathrm{erfc}(\\lambda)\\Big)}\n{2\\lambda^2\\Big(\\sqrt{\\pi}\\lambda\\,e^{\\lambda^2}\\mathrm{erfc}(\\lambda)-1\\Big)}\\;\\frac{u_m}{q_u}+\n\\frac{2\\lambda^2-(1+2\\lambda^2)\\sqrt{\\pi}\\lambda\\,e^{\\lambda^2}\\mathrm{erfc}(\\lambda)}\n{2\\Big(\\sqrt{\\pi}\\lambda\\,e^{\\lambda^2}\\mathrm{erfc}(\\lambda)-1\\Big)}.\n\\label{eqn:k-3d-2e}\n\\end{equation}\nFor $u_s << u_p$, we write\n\\begin{equation}\n k =\n-2\\lambda^4+\\sqrt{\\pi}\\lambda^3(1+2\\lambda^2)\\,e^{\\lambda^2}\\mathrm{erfc}(\\lambda)\n + \\Big(1-2\\lambda^2 + 2\\sqrt{\\pi}\\lambda^3\\,e^{\\lambda^2}\\mathrm{erfc}(\\lambda)\\Big)\\epsilon\n+\\mathcal{O}(\\epsilon^2).\n\\label{eqn:k-3d-2a}\n\\end{equation}\n\\noindent\n\n\nGeneral analytical expressions of $u(z)$ and $k$, in terms of confluent\nhypergeometric functions, can also be found for even values of $B$ as\ndetailed in appendix \\ref{sec:appa}. Furthermore, asymptotic forms of\n$u(z)$ for large and small $z$ can be calculated, see appendix\n\\ref{sec:appa} for analysis of $z>>1$. Figure \\ref{fig:k-23d} compares $k$\nversus $\\lambda$ for $d=2$ and $d=3$ in the absence of dislocation\nfield ($B=0$).\n\n\\begin{figure}[htbp]\n\\begin{center}\n \\includegraphics[width=0.80\\textwidth]{fig\/lambda_k2d3d}\n \\caption{Growth coefficient $\\lambda$ as a function of supersaturation parameter $k$ \n at $B=0$ for a circular plate ($d=2$) versus a sphere ($d=3$).}\n\\label{fig:k-23d}\n \\end{center}\n\\end{figure}\n\n\\section{Discussion}\n \\label{sec:disc}\n\nThe potential energy in equation (\\ref{eqn:screw}) describes the\nelastic energy of the dislocation relaxed within the volume\noccupied by the second-phase precipitate \\cite{Cahn_1957}. It was treated\nhere as an external field affecting the diffusion-limited growth of\nsecond-phase precipitate. 
The interaction energy of impurities in a\ncrystalline solid with dislocations depends on the specific model or\nconfiguration of a solute atom and a matrix which is used. Commonly,\nit is assumed that the solute acts as an elastic center of\ndilatation. It is a fictitious sphere of radius $R^\prime$ embedded\nconcentrically in a spherical hole of radius $R$ cut in the matrix. If\nthe elastic constants of the solute and matrix are the same, the work\ndone in inserting the atom in the presence of a dislocation is\n$w=p\Delta v$, where $p$ is the hydrostatic pressure and $\Delta v$ is\nthe difference between the volume of the hole in the matrix and the\nsphere of the fictitious impurity. For a screw dislocation $p=0$,\nwhile near an edge dislocation\n$p=\frac{(1+\nu)bG\sin\theta}{3\pi(1-\nu)r}$ for an impurity with\npolar coordinates $(r,\theta)$ with respect to the dislocation $0z$,\nhence $w \propto \Delta v\sin\theta\/r$\n\cite{Cottrell_Bilby_1949}. Using a nonlinear elastic theory\n\cite{Nabarro_1987}, a screw dislocation may also interact with the\nspherical impurity with the interaction energy $w \propto \Delta\nv\/r^2$. Moreover, accounting for the differences in the elastic\nconstants of a solute and a matrix, the solute will relieve shear\nstrain energy as well as dilatation energy, which will also interact\nwith a screw dislocation with a potential $w \propto \Delta v\/r^2$\n\cite{Friedel_1967}. Indeed, Friedel \cite{Friedel_1967} has\nformulated that by introducing a dislocation into a solid solution of\nuniform concentration $c_0$, the interaction energy between the\ndislocation and solute atoms can be written as $w \backsimeq\nw_0(b\/\delta)^n f(\theta)$, where $\delta$ is the distance between the two\ndefects, $w_0$ the binding energy when $\delta=b$, and $f(\theta)$\naccounts for the angular dependence of the interaction along the\ndislocation. 
Also, $n=1$ for size effects and $n=2$ for effects due to\ndifferences in elastic constants. The discussed model for the\ninteraction energy between solute atoms and dislocations has been used\nto study the precipitation process on dislocations by a number of\nworkers in the past \\cite{Ham_1959,Bullough_Newman_1962} and is thoroughly\nreviewed in \\cite{Bullough_Newman_1970}. These studies concern\nprimarily the overall phase transformation (precipitation of a new\nphase) rather than the growth of a new phase considered in our\nnote; that is, they used boundary conditions different from the\nones used here.\n\n\nLet us now link the supersaturation parameter $k$ to an experimental\nsituation. For this purpose, the value of $u_s$, i.e. the\nconcentration at the interface between the second-phase and the matrix,\nshould be known. The capillary effect leads to a relationship between\n$u_s$ and the equilibrium composition $u_{eq}$ (the solubility line in a\nphase diagram). To obtain this relationship, we consider the incoherent\nnucleation of a second-phase on a dislocation \\`a la Cahn\n\\cite{Cahn_1957}. A Burgers loop around the dislocation in the matrix\nmaterial surrounding the incoherent second-phase (circular plate) will have\na closure mismatch equal to $b$. Following Cahn, on forming the\nincoherent plate of radius $R$, the total free energy change per unit\nlength is\n\\begin{equation}\n\\label{eqn:cahn-fe}\n\\mathcal{G}=-\\pi R^2 \\Delta g_v +2\\pi\\gamma\nR-A^\\prime\\ln(R\/r_0),\n\\end{equation}\n\\noindent\nwhere $\\Delta g_v$ is the volume free energy of\nformation, $\\gamma$ the interfacial energy, and the last term is the\ndislocation energy, with $A^\\prime=Gb^2\/4\\pi$ for screw dislocations,\ncf. equation (\\ref{eqn:screw}). 
Setting ${\\rm d}\\mathcal{G}\/{\\rm d}R=0$ yields\n\\begin{equation}\n \\label{eqn:crit-rad}\n R = \\frac{\\gamma}{2\\Delta g_v}\\Big(1 \\pm \\sqrt{1-\\alpha}\\Big),\n\\end{equation}\n\\noindent\nwhere $\\alpha=2A^\\prime \\Delta g_v\/\\pi\\gamma^2$. So, if $\\alpha>1$,\nthe nucleation is barrierless, i.e., the phase transition kinetics is\ngoverned by growth kinetics alone, which is the subject of our\ninvestigation here. If, however, $\\alpha < 1$, there is an energy\nbarrier: $\\mathcal{G}$ has a local minimum at $R=R_0$, which\ncorresponds to the negative sign in equation (\\ref{eqn:crit-rad}),\nfollowed by a maximum at $R=R^\\ast$ corresponding to the positive sign\nin this equation. The local minimum corresponds to a subcritical\nmetastable particle of the second-phase surrounding the dislocation\nline, and it is similar to the Cottrell atmosphere of solute atoms in\na segregation problem. When $\\alpha = 0$, corresponding to $B=0$, the\ntwo phases are in equilibrium and the maximum in $\\mathcal{G}$ is\ninfinite, as for homogeneous nucleation.\n\n\nFor a dilute regular solution, $\\Delta\ng_v=(k_BT\/V_p)\\ln(u_s\/u_{eq})$, where $V_p$ is the atomic volume of\nthe precipitate compound, $u_s$ is the concentration of the matrix at\na curved particle\/matrix interface and $u_{eq}$ that at a flat\ninterface, which is in equilibrium with the solute concentration in\nthe matrix. Equation (\\ref{eqn:crit-rad}) gives\n$\\Delta g_v=\\gamma\/R-A^\\prime\/2\\pi R^2$. Hence, for a dilute regular\nsolution, we write\n\\begin{equation}\n \\label{eqn:gibbs-thom}\n u_s=u_{eq}\\exp{\\Big[\\frac{\\zeta}{R}\\Big(1-\\frac{\\eta}{R}\\Big)\\Big]},\n\\end{equation}\n\\noindent\nwhere $\\zeta = \\beta V_p\\gamma$, $\\beta=1\/k_BT$ and $\\eta = A^\\prime\/2\\pi\\gamma$. 
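The two stationary points of equation (\\ref{eqn:crit-rad}) and the barrierless condition $\\alpha>1$ can be sketched in a few lines (Python; the value of $\\Delta g_v$ used in the example below is hypothetical, chosen only for illustration):

```python
import math

def critical_radii(gamma, dgv, Aprime):
    """Stationary points of Cahn's free energy (eq. cahn-fe / crit-rad).

    Returns (R0, Rstar): the subcritical local minimum and the nucleation
    barrier maximum, or None when alpha > 1 (barrierless nucleation).
    """
    alpha = 2.0 * Aprime * dgv / (math.pi * gamma**2)
    if alpha > 1.0:
        return None                               # no barrier: growth-limited only
    root = math.sqrt(1.0 - alpha)
    R0 = gamma / (2.0 * dgv) * (1.0 - root)       # metastable minimum (negative sign)
    Rstar = gamma / (2.0 * dgv) * (1.0 + root)    # barrier maximum (positive sign)
    return R0, Rstar
```

For example, with $\\gamma=0.2$ Jm$^{-2}$, $A^\\prime=2\\times10^{-10}$ N and a hypothetical $\\Delta g_v=10^8$ Jm$^{-3}$, one finds $\\alpha\\approx0.32$ and both radii of nanometre order.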
Subsequently, the\nsupersaturation parameter is expressed by\n\\begin{equation}\n \\label{eqn:supsat}\n k = \\frac{u_{eq}\\exp[{\\frac{\\zeta}{R}(1-\\frac{\\eta}{R})}]-u_m}{u_p-u_{eq}\\exp[\\frac{\\zeta}{R}(1-\\frac{\\eta}{R})]}.\n\\end{equation}\n\\noindent\nTaking the following typical values: $\\gamma=0.2$ Jm$^{-2}$, $G=40$\nGPa, and $b=0.25$ nm, we find $A^\\prime \\approx 2.0\\times10^{-10}$ N and\n$\\eta=0.16$ nm. Figure \\ref{fig:thoms-2d} depicts $u_s\/u_{eq}$, from\nequation (\\ref{eqn:gibbs-thom}), as a function of the scaled radius\n$R\/\\zeta$ for $V_p=1.66 \\times 10^{-29}$ m$^3$, $\\eta=0$ and\n$\\eta=0.16$ nm at $T=600$ K. Equation\n(\\ref{eqn:gibbs-thom}) is analogous to the Gibbs-Thomson-Freundlich\nrelationship \\cite{Christian_2002}, generalized to include a dislocation defect.\n\n\nRecalling now the values used for the interaction parameter $B$ in the\ncomputations presented in the foregoing section, we note that for\n$B=2$ and the above numerical values for $G$ and $b$ at $T=1000$ K, we\nfind $l\\approx 0.14$ nm, which is close to the calculated value of\n$\\eta$.\n\nIn Cahn's model, the assumption that all the strain energy of the\ndislocation within the volume occupied by the nucleus can be relaxed\nto zero demands that the nucleus be incoherent. For a coherent nucleus\nforming on or in the proximity of dislocations, this supposition no longer\nholds. Instead, it is necessary to calculate the elastic interaction\nenergy between the nucleus and the matrix, which for an edge\ndislocation takes the form $Gb^2\/[4\\pi(1-\\nu)r]$ for the energy\ndensity per unit length \\cite{Barnett_1971}. In the same manner, to\nextend our calculations to the growth of a coherent precipitate, we must\nemploy this kind of potential energy, i.e. 
the potential energy of the\nform $\\phi(r)=-A\\ln(r\/r_0)+C \\sin\\theta\/r$, in the governing kinetic\nequation rather than relation (\\ref{eqn:screw}).\n\\begin{figure}[htbp] \n\\begin{center}\n\\includegraphics[width=0.80\\textwidth]{fig\/thoms_2d} \\\\\n\\caption{The size dependence of the concentration at the curved\nprecipitate\/matrix interface $u_s$ relative to that of the flat\ninterface $u_{eq}$ for the set of parameter values given in the\ntext, cf. eq. (\\ref{eqn:gibbs-thom}).}\n\\label{fig:thoms-2d}\n\\end{center}\n\\end{figure}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
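The constants quoted in the discussion above ($A^\\prime\\approx2.0\\times10^{-10}$ N, $\\eta\\approx0.16$ nm) can be reproduced numerically; a minimal sketch evaluating equation (\\ref{eqn:gibbs-thom}) with the stated values:

```python
import math

kB = 1.380649e-23                          # Boltzmann constant, J/K
G, b, gamma = 40e9, 0.25e-9, 0.2           # Pa, m, J/m^2 (values from the text)
Vp, T = 1.66e-29, 600.0                    # m^3, K

Aprime = G * b**2 / (4 * math.pi)          # screw-dislocation prefactor, ~2.0e-10 N
eta = Aprime / (2 * math.pi * gamma)       # ~0.16 nm
zeta = Vp * gamma / (kB * T)               # capillary length scale

def us_over_ueq(R):
    """Gibbs-Thomson-Freundlich ratio with the dislocation correction."""
    return math.exp(zeta / R * (1.0 - eta / R))
```

For $R \gg \\eta$ the dislocation correction vanishes and the classical Gibbs-Thomson behaviour $u_s/u_{eq} = \\exp(\\zeta/R)$ is recovered.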
Clusterability indicates how well each method recovers appropriate states, and classification assesses how informative our representations are for downstream tasks. \n\n\\subsection{Evaluation: Clusterability}\nMany real-world time series data have underlying multi-category structure, naturally leading to representations with clustering properties. Encoding such general priors is a property of a good representation \\citep{bengio2013representation}.\nIn this section, we assess the distribution of the representations in the encoding space. If information of the latent state is properly learned and encoded by the framework, the representations of signals from the same underlying state should cluster together. Figures \\ref{fig:tcl_dist}, \\ref{fig:trip_dist}, and \\ref{fig:cpc_dist} show an example of this distribution for simulated data across compared approaches. Each plot is a 2-dimensional t-SNE visualization of the representations where each data point in the scatter plot is an encoding $Z \\in R^{10}$ that represents a window of size $\\delta=50$ of a simulated time series. \nWe can see that without any information about the hidden states, representations learned using TNC cluster windows from the same hidden state better than the alternative approaches. 
The results show that CPC and Triplet Loss have difficulty separating time series that are generated from non-linear auto-regressive moving average (NARMA) models with variable regression parameters.\n\n\\begin{figure}[!h]\n\\begin{subfigure}{.32\\textwidth}\n\\includegraphics[scale=0.32]{TCL\/figures\/encoding_distribution.pdf}\n\\caption{TNC representations}\n\\label{fig:tcl_dist}\n\\end{subfigure}\n\\begin{subfigure}{.32\\textwidth}\n\\includegraphics[scale=0.32]{TCL\/figures\/encoding_distribution_trip.pdf}\n\\caption{T-loss representations}\n\\label{fig:trip_dist}\n\\end{subfigure}\n\\begin{subfigure}{.32\\textwidth}\n \\centering\n\\includegraphics[scale=.32]{TCL\/figures\/encoding_distribution_cpc.pdf}\n\\caption{CPC representations}\n\\label{fig:cpc_dist}\n\\end{subfigure}\n\\caption{t-SNE visualization of signal representations for the simulated dataset across all baselines. Each data point in the plot represents a 10-dimensional representation of a window of time series of size $\\delta=50$, and the color indicates the latent state of the signal window. See Appendix \\ref{app:plots} for similar plots from different datasets.}\n\\label{fig:distribution}\n\\end{figure}\n\n\nTo compare the representation clusters' consistency for each baseline, we use two very common cluster validity indices, namely, the Silhouette score and the Davies-Bouldin index. We use K-means clustering in the representation space to measure these clusterability scores. The Silhouette score measures the similarity of each sample to its own cluster, compared to other clusters. The values can range from $-1$ to $+1$, and a greater score implies better cohesion. The Davies-Bouldin Index measures intra-cluster similarity and inter-cluster differences. This is a positive index score, where smaller values indicate low within-cluster scatter and large separation between clusters. 
Therefore, a lower score represents better clusterability (more details on the cluster validity scores and how they are calculated can be found in Appendix \\ref{app:clustering_metrics}). \n\n\\begin{table}[!h]\n\\begin{tabular}{lcccccc}\n &\\multicolumn{2}{c}{Simulation} & \\multicolumn{2}{c}{ECG Waveform} & \\multicolumn{2}{c}{HAR}\\\\\n \\toprule\n Method & Silhouette $\\uparrow$ & DBI $\\downarrow$ & Silhouette $\\uparrow$ & DBI $\\downarrow$ & Silhouette $\\uparrow$ & DBI $\\downarrow$\\\\\n \\midrule\n \\bf{TNC} & {\\bf{0.71$\\pm$0.01}} & {\\bf{0.36$\\pm$0.01}} & {\\bf{0.44$\\pm$0.02}} & {\\bf{0.74$\\pm$0.04}} & {\\bf{0.61$\\pm$0.02}} & {\\bf{0.52$\\pm$0.04}} \\\\\n CPC & 0.51$\\pm$0.03 & 0.84$\\pm$0.06 & 0.26$\\pm$0.02 & 1.44$\\pm$0.04 & 0.58$\\pm$0.02 & 0.57$\\pm$0.05\\\\\n T-Loss & 0.61$\\pm$0.08 & 0.64$\\pm$0.12 & 0.25$\\pm$0.01 & 1.30$\\pm$0.03 & 0.17$\\pm$0.01 & 1.76$\\pm$0.20\\\\\n \\midrule\n K-means & 0.01$\\pm$0.019 & 7.23$\\pm$0.14 & 0.19$\\pm$0.11 & 3.65$\\pm$0.48 & 0.12$\\pm$0.40 & 2.66$\\pm$0.05\\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Clustering quality of representations in the encoding space for multiple datasets. }\n \\label{tab:sil_score}\n\\end{table}\n\n\nTable \\ref{tab:sil_score} summarizes the scores for all baselines and across all datasets, demonstrating that TNC is superior in learning representations that can distinguish the latent dynamics of time series. CPC performs closely to Triplet loss on waveform data but performs poorly on the simulated dataset, where signals are highly non-stationary, and transitions are less predictable. However, for the HAR dataset, CPC clusters the states very well because most activities are recorded in a specific order, empowering predictive coding. Triplet loss performs reasonably well in the simulated setting; however, it fails to distinguish states $0$ and $2$, where signals come from autoregressive models with different parameters and have a relatively similar generative process. 
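The Silhouette computation described above can be sketched in pure NumPy (a minimal illustration of the definition; \texttt{sklearn.metrics} provides \texttt{silhouette\_score} and \texttt{davies\_bouldin\_score} as production implementations):

```python
import numpy as np

def silhouette(Z, labels):
    """Mean Silhouette score of encodings Z (n, m) under cluster labels (n,).

    Per sample: a = mean intra-cluster distance, b = mean distance to the
    nearest other cluster; the sample score is (b - a) / max(a, b).
    """
    Z, labels = np.asarray(Z, float), np.asarray(labels)
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)  # pairwise distances
    n, scores = len(Z), []
    for i in range(n):
        own = (labels == labels[i]) & (np.arange(n) != i)
        a = D[i, own].mean()
        b = min(D[i, labels == c].mean() for c in np.unique(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

Two tight, well-separated clusters score close to $+1$; overlapping clusters drive the score toward $0$ or below.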
Performing K-means on the original time series generally does not generate coherent clusters, as demonstrated by the scores. However, the performance is slightly better in time series like the ECG waveforms, where the signals are formed by consistent shapelets, and therefore the DTW measures similarity more accurately.\n\n\n\n\\subsection{Evaluation: Classification}\nWe further evaluate the quality of the encodings using a classification task. We train a linear classifier to evaluate how well the representations can be used to classify hidden states. The performance of all baselines is compared to a supervised classifier, composed of an encoder and a classifier with identical architectures to that of the unsupervised models, and a K-nearest neighbor classifier that uses DTW metric. The performance is reported as the prediction accuracy and the area under the precision-recall curve (AUPRC) score since AUPRC is a more accurate reflection of model performance for imbalance classification settings like the waveform dataset.\n\n\\begin{table}[!h]\n \\centering\n \\begin{tabular}{lcccccc}\n &\\multicolumn{2}{c}{Simulation}& \\multicolumn{2}{c}{ECG Waveform}& \\multicolumn{2}{c}{HAR}\\\\\n \\toprule\n Method & AUPRC & Accuracy & AUPRC & Accuracy & AUPRC & Accuracy\\\\\n \\midrule\n \\bf{TNC}& {\\bf{0.99$\\pm$0.00}} & {\\bf{97.52$\\pm$0.13}} & {\\bf{0.55$\\pm$0.01}} & {\\bf{77.79$\\pm$0.84}} & {\\bf{0.94$\\pm$0.007}} & {\\bf{88.32$\\pm$0.12}}\\\\\n CPC & 0.69$\\pm$0.06 & 70.26$\\pm$6.48 & 0.42$\\pm$0.01 & 68.64$\\pm$0.49 & 0.93$\\pm$0.006 & 86.43$\\pm$1.41\\\\\n T-Loss & 0.78$\\pm$0.01 & 76.66$\\pm$1.40 & 0.47$\\pm$0.00 & 75.51$\\pm$1.26 & 0.71$\\pm$0.007 & 63.60$\\pm$3.37 \\\\\n \\midrule\n KNN & 0.42$\\pm$0.00 & 55.53$\\pm$0.65 & 0.38$\\pm$0.06 & 54.76$\\pm$5.46 & 0.75$\\pm$0.01& 84.85$\\pm$0.84\\\\\n \\midrule\n Supervised& {\\bf{0.99$\\pm$0.00}} & {\\bf{98.56$\\pm$0.13}} & {\\bf{0.67$\\pm$0.01}} & {\\bf{94.81$\\pm$0.28}} & {\\bf{0.98$\\pm$0.00}} & 
{\\bf{92.03$\\pm$2.48}}\\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Performance of all baselines in classifying the underlying hidden states of the time series, measured as the accuracy and AUPRC score.}\n \\label{tab:classification_result}\n\\end{table}\n\n\n\nTable \\ref{tab:classification_result} demonstrates the classification performance for all datasets. The performance of the classifiers that use TNC representations is closer to the end-to-end supervised model in comparison to CPC and Triplet Loss. This provides further evidence that our encodings capture informative parts of the time series and are generalizable to be used for downstream tasks. In datasets like the HAR, where an inherent ordering usually exists in the time series, CPC performs reasonably. However, in datasets with increased non-stationarity, the performance drops. Triplet Loss is also a powerful framework, but since it samples positive examples from overlapping windows of time series, it is prone to encoding the overlap and, therefore, failing to learn more general representations. TNC, on the other hand, samples similar windows from a wider distribution, defined by the temporal neighborhood, where many of the neighboring signals do not necessarily overlap. \nThe lower performance of the CPC and Triplet Loss methods can also be partly attributed to the fact that none of these methods explicitly accounts for the potential sampling bias that arises when randomly selected negative examples are similar to the reference $W_t$.\n\n\n\n\n\\subsection{Evaluation: Trajectory}\n\n\\begin{figure}[!h]\n \\centering\n \\includegraphics[scale=0.2]{TCL\/figures\/embedding_trajectory_hm.pdf}\n \\caption{Trajectory of a signal encoding from the simulated dataset. The top plot shows the original time series with shaded regions indicating the underlying state. The bottom plot shows the 10-dimensional encoding of the sliding windows $W_t$ where $\\delta=50$. 
}\n \\label{fig:trajectory}\n\\end{figure}\n\n\nThis section investigates the trajectories of our learned encodings over time to understand how the state transitions are captured and modeled in the representation space. This is an important property for non-stationary time series where underlying states change over time, and capturing those changes is critical in many application domains such as healthcare. Figure \\ref{fig:trajectory} shows a sample from the simulated dataset. The top panel shows the signal measurements over time, and the shaded regions indicate the underlying latent states. The bottom panel illustrates the $10$-dimensional representation of a sliding window $W_t$ estimated over time. From the bottom panel of Figure \\ref{fig:trajectory}, we can see that the encoding pattern changes at state transitions and settles into a different pattern, corresponding to the new state. This change happens at every transition, and we can see the distinct patterns for all $4$ underlying states in the representations. This analysis of the trajectory of change could be very informative for the users' post-analysis; for instance, in clinical applications, it could help clinicians visualize the evolution of the patient state over time and plan treatment based on the state progression.\n\n\n\\section{Related work}\nWhile integral for many applications, unsupervised representation learning has been far less studied for time series \\citep{langkvist2014review}, compared to other domains such as vision or natural language processing \\citep{denton2017unsupervised, radford2015unsupervised, gutmann2012noise, wang2015unsupervised}. 
One of the earliest approaches to unsupervised end-to-end representation learning in time series is the use of auto-encoders \\citep{choi2016multi, amiriparian2017sequence, malhotra2017timenet} and seq-to-seq models \\citep{lyu2018improving}, with the objective of training an encoder jointly with a decoder that reconstructs the input signal from its learned representation. Using fully generative models like variational auto-encoders is also useful for imposing properties like disentanglement, which helps with the interpretability of the representations \\citep{dezfouli2019disentangled}. However, in many cases, like for high-frequency physiological signals, the reconstruction of complex time series can be challenging; therefore, newer approaches are designed to avoid this step. Contrastive Predictive Coding \\citep{oord2018representation, lowe2019putting} learns representations by predicting the future in latent space, eliminating the need to reconstruct the full input. The representations are such that the mutual information between the original signal and the concept vector is maximally preserved using a lower bound approximation and a contrastive loss. Similarly, in Time Contrastive Learning \\citep{hyvarinen2016unsupervised}, a contrastive loss is used to predict the segment-ID of multivariate time-series as a way to extract representations. \\cite{franceschi2019unsupervised} employs time-based negative sampling and a triplet loss to learn scalable representations for multivariate time series. Some other approaches use inherent similarities in temporal data to learn representations without supervision. For instance, in similarity-preserving representation learning \\citep{lei2017similarity}, learned encodings are constrained to preserve the pairwise similarities that exist in the time domain, measured by DTW distance. 
Another group of approaches combines reconstruction loss with clustering objectives to cluster similar temporal patterns in the encoding space \\citep{ma2019learning}.\n\nIn healthcare, learning representations of rich temporal medical data is extremely important for understanding patients' underlying health conditions. However, most of the existing approaches for learning representations are designed for specific downstream tasks and\nrequire labeling by experts \\citep{choi2016medical, choi2016learning, tonekaboni2020went}. Examples of work related to representation learning in clinical ML include computational phenotyping for discovering subgroups of patients with similar underlying disease mechanisms from temporal clinical data \\citep{lasko2013computational, suresh2018learning, schulam2015clustering}, and disease progression modeling, for learning the hidden vector of comorbidities representing a disease over time \\citep{wang2014unsupervised, alaa2018forecasting}.\n\n\n\\section{Introduction}\n\nReal-world time-series data is high dimensional, complex, and has unique properties that bring about many challenges for data modeling \\citep{yang200610}. In addition, these signals are often sparsely labeled, making it even more challenging for supervised learning tasks. Unsupervised representation learning can extract informative low-dimensional representations from raw time series by leveraging the data's inherent structure, without the need for explicit supervision. These representations are more generalizable and robust, as they are less specialized for solving a single supervised task. 
Unsupervised representation learning is well studied in domains such as vision \\citep{donahue2019large, denton2017unsupervised, radford2015unsupervised} and natural language processing \\citep{radford2017learning, young2018recent, mikolov2013efficient}, but has been underexplored in the literature for time series settings.\nFrameworks designed for time series need to be efficient and scalable because signals encountered in practice can be long, high dimensional, and high frequency. Moreover, they should account for and be able to model the dynamic changes that occur within samples, i.e., the non-stationarity of signals. \n\n\nThe ability to model the dynamic nature of time series data is especially valuable in medicine. Health care data is often organized as a time series, with multiple data types, collected from various sources at different sampling frequencies, and riddled with artifacts and missing values. Throughout their stay at the hospital or within the disease progression period, patients transition gradually between distinct clinical states, with periods of relative stability, improvement, or unexpected deterioration, requiring escalation of care that alters the patient's trajectory. A particular challenge in medical time-series data is the lack of well-defined or available labels that are needed for identifying the underlying clinical state of an individual or for training models aimed at extracting low-dimensional representations of these states.\nFor instance, in the context of critical care, a patient's stay in the critical care unit (CCU) is captured continuously via streaming physiological signals by the bedside monitor. Obtaining labels for the patient's state for extended periods of these signals is practically impossible as the underlying physiological state can be unknown even to the clinicians. This further motivates the use of unsupervised representation learning in these contexts. 
Learning rich representations can be crucial in facilitating the tracking of disease progression, predicting the future trajectories of the patients, and tailoring treatments to these underlying states. \n\nIn this paper, we propose a self-supervised framework for learning representations for complex multivariate non-stationary time series. This approach, called Temporal Neighborhood Coding (TNC), is designed for temporal settings where the latent distribution of the signals changes over time, and it aims to capture the progression of the underlying temporal dynamics. TNC is efficient, easily scalable to high dimensions, and can be used in different time series settings. We assess the quality of the learned representations on multiple datasets and show that the representations are general and transferable to many downstream tasks such as classification and clustering. We further demonstrate that our method outperforms existing approaches for unsupervised representation learning, and it even performs closely to supervised techniques in classification tasks. The contributions of this work are three-fold:\n\\begin{enumerate}\n \\item We present a novel neighborhood-based unsupervised learning framework for \\emph{non-stationary} multivariate time series data.\n \\item We introduce the concept of a temporal neighborhood with stationary properties as the distribution of similar windows in time. The neighborhood boundaries are determined automatically using the properties of the signal and statistical testing.\n \\item We incorporate concepts from Positive Unlabeled Learning, specifically, sample weight adjustment, to account for potential bias introduced in sampling negative examples for the contrastive loss.\n\\end{enumerate}\n\n\\section{Method}\n\nWe introduce a framework for learning representations that encode the underlying state of a multivariate, non-stationary time series. 
Our self-supervised approach, TNC, takes advantage of the local smoothness of the generative process of signals to learn generalizable representations for windows of time series. This is done by ensuring that in the representation space, the distribution of signals proximal in time is distinguishable from the distribution of signals far away, i.e., proximity in time is identifiable in the encoding space. We represent our multivariate time series signals as ${X}\\in \\mathbb{R}^{D\\times T}$, where $D$ is the number of features and $T$ is the number of measurements over time.\n$X_{[t-\\frac{\\delta}{2}, t+\\frac{\\delta}{2}]}$ represents a window of time series of length $\\delta$, centered around time $t$, that includes measurements of all features taken in the interval $[t-\\frac{\\delta}{2}, t+\\frac{\\delta}{2}]$. Throughout the paper, we refer to this window as $W_t$ for notational simplicity. Our goal is to learn the underlying representation of $W_t$, and by sliding this window over time, we can obtain the trajectory of the underlying states of the signal.\n\nWe define the temporal neighborhood ($N_t$) of a window $W_t$ as the set of all windows with centroids $t^*$, sampled from a normal distribution $t^*\\sim\\mathcal{N}(t,\\eta\\cdot\\delta)$, where $\\mathcal{N}$ is a Gaussian centered at $t$, $\\delta$ is the size of the window, and $\\eta$ is the parameter that defines the range of the neighborhood. Relying on the local smoothness of a signal's generative process, the neighborhood distribution is characterized as a Gaussian to model the gradual transition in temporal data, and intuitively, it approximates the distribution of samples that are similar to $W_t$. The $\\eta$ parameter determines the neighborhood range and depends on the signal characteristics and how gradually the time series's statistical properties change over time. 
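The neighborhood sampling just described can be sketched as follows (a NumPy sketch; we interpret $\\eta\\cdot\\delta$ as the standard deviation of the centroid distribution, and the function name is ours, not from the paper):

```python
import numpy as np

def sample_windows(X, t, delta, eta, n_samples, rng=None):
    """Draw windows W_{t*} with centroids t* ~ N(t, eta * delta) from X of shape (D, T)."""
    rng = np.random.default_rng(0) if rng is None else rng
    T = X.shape[1]
    half = delta // 2
    centers = rng.normal(loc=t, scale=eta * delta, size=n_samples).round().astype(int)
    centers = np.clip(centers, half, T - half)   # keep every window inside the signal
    return np.stack([X[:, c - half:c + half] for c in centers])
```

Non-neighboring windows can be drawn analogously by rejecting centroids that fall inside the neighborhood range.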
This can be set by domain experts based on prior knowledge of the signal behavior, or for more robust estimation, it can be determined by analyzing the stationarity properties of the signal for every $W_t$. Since the neighborhood represents similar samples, the range should identify the approximate time span within which the signal remains stationary, and the generative process does not change. For this purpose, we use the Augmented Dickey-Fuller (ADF) statistical test to determine this region for every window. Proper estimation of the neighborhood range is an integral part of the TNC framework. If $\\eta$ is too small, many samples from within a neighborhood will overlap, and therefore the encoder would only learn to encode the overlapping information. On the other hand, if $\\eta$ is too big, the neighborhood would span over multiple underlying states, and therefore the encoder would fail to distinguish the variation among these states. Using the ADF test, we can automatically adjust the neighborhood for every window based on the signal behavior. More details on this test and how it is used to estimate $\\eta$ are described in section \\ref{sec:neighbourhood}. \n\n\nNow, assuming windows within a neighborhood possess similar properties, signals outside of this neighborhood, denoted as $\\bar{N_t}$, are considered non-neighboring windows. Samples from $\\bar{N_t}$ are likely to be different from $W_t$, and can be considered as negative samples in the context of a contrastive learning framework. However, this assumption can suffer from the problem of \\emph{sampling bias}, common in most contrastive learning approaches \\citep{chuang2020debiased, saunshi2019theoretical}. This bias occurs because randomly drawing negative examples from the data distribution may result in negative samples that are actually similar to the reference. 
This can significantly impact the learning framework's performance, but little work has been done on addressing this issue \\citep{chuang2020debiased}. In our context, this can happen when there are windows from $\\bar{N_t}$ that are far away from $W_t$, but have the same underlying state. To alleviate this bias in the TNC framework, we consider samples from $\\bar{N_t}$ as unlabeled samples, as opposed to negative, and use ideas from Positive-Unlabeled (PU) learning to accurately measure the loss function. In reality, even though samples within a neighborhood are all similar, we cannot make the assumption that samples outside this region are necessarily different. For instance, in the presence of long-term seasonalities, signals can exhibit similar properties at distant times. In a healthcare context, consider, for example, a stable patient who undergoes a critical episode but then returns to a stable state. \n\nIn PU learning, a classifier is learned using labeled data drawn from the positive class ($P$) and unlabeled data ($U$) that is a mixture of positive and negative samples with a positive class prior $\\pi$ \\citep{du2014analysis, kiryo2017positive, du2014class}. Existing PU learning methods fall under two categories based on how they handle the unlabeled data:\n1) methods that identify negative samples from the unlabeled cohort \\citep{li2003learning};\n2) methods that treat the unlabeled data as negative samples with smaller weights \\citep{lee2003learning, elkan2008learning}.\nIn the second category, unlabeled samples should be properly weighted in the loss term in order to train an unbiased classifier. \\cite{elkan2008learning} introduces a simple and efficient way of approximating the expectation of a loss function by assigning individual weights $w$ to training examples from the unlabeled cohort. 
This means each sample from the neighborhood is treated as a positive example with unit weight, while each sample from $\\bar{N}$ is treated as a combination of a positive example with weight $w$ and a negative example with complementary weight $1-w$. In the original paper \\citep{elkan2008learning}, the weight is defined as the probability for a sample from the unlabeled set to be a positive sample, i.e. $w = p(y = 1|x)$ for $x \\in U$. In the TNC framework, this weight represents the probability of having samples similar to $W_t$ in $\\bar{N}$. By incorporating weight adjustment into the TNC loss (Equation \\ref{eq:obj}), we account for possible positive samples that occur in the non-neighboring distribution. $w$ can be approximated using prior knowledge of the underlying state distribution or tuned as a hyperparameter. Appendix \\ref{app:w} explains how the weight parameter is selected for our different experiment setups and also demonstrates the impact of weight adjustment on performance for downstream tasks.\n\nAfter defining the neighborhood distribution, we train an objective function that encourages a distinction between the representation of samples of the same neighborhood from the outside samples. An ideal encoder preserves the neighborhood properties in the encoding space. Therefore representations $Z_l=Enc(W_l)$ of samples from a neighborhood $W_l \\in N_t$, can be distinguished from representation $Z_k=Enc(W_k)$ of samples from outside the neighborhood $W_k \\in \\bar{N}_t$.\nTNC is composed of two main components:\n\n\\begin{enumerate}\n \\item An Encoder $Enc(W_t)$ that maps $W_t\\in \\mathbb{R}^{D\\times\\delta}$ to a representation $Z_t\\in\\mathbb{R}^M$, in a lower dimensional space ($M \\ll D\\times\\delta$), where $ D\\times\\delta$ is the total number of measurements in $W_t$.\n \\item A Discriminator $\\mathcal{D}(Z_t,Z)$ that approximates the probability of $Z$ being the representation of a window in $N_t$. 
More specifically, it receives two samples from the encoding space and predicts the probability of those samples belonging to the same temporal neighborhood. \n\\end{enumerate}\n\nTNC is a general framework; therefore, it is agnostic to the nature of the time series and the architecture of the encoder. The encoder can be any parametric model that is well-suited to the signal properties \\citep{oord2016wavenet, bai2018empirical, fawaz2019deep}. For the Discriminator $\\mathcal{D}(Z_t,Z)$, we use a simple multi-headed binary classifier that outputs $1$ if $Z$ and $Z_t$ are representations of neighbors in time, and $0$ otherwise. The experiments section describes the architectural details of the models in more depth. \n\n\\begin{figure}\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[scale=0.4]{TCL\/figures\/tnc_explain_a.pdf} \n \\caption{Neighborhood samples}\n \\label{fig:explanation_a}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[scale=0.4]{TCL\/figures\/tnc_explain_b.pdf} \n \\caption{Non-neighboring samples}\n \\label{fig:explanation_b}\n\\end{subfigure}\n\\caption{Overview of the TNC framework components. For each sample window $W_t$ (indicated with the dashed black box), we first define the neighborhood distribution. The encoder learns the distribution of windows sampled from $N_t$ and $\\bar{N}_t$ in the representation space. Samples from these distributions are then fed into the discriminator alongside $Z_t$ to predict the probability of the windows being in the same neighborhood.}\n\\label{fig:explanation}\n\\end{figure}\n\nFigure \\ref{fig:explanation} provides an overview of the TNC framework. We formalize the objective function of our unsupervised learning framework in Equation \\ref{eq:obj}. 
In essence, we want the Discriminator's probability estimates to be accurate, i.e., close to $1$ for representations of neighboring samples and close to $0$ for windows far apart. Samples from the non-neighboring region ($\\bar{N}_t$) are weight-adjusted using the $w_t$ parameters to account for positive samples in this distribution.\n\n\\begin{equation}\\label{eq:obj}\n \\mathcal{L} = -\\mathbb{E}_{W_t \\sim X}[ \n \\mathbb{E}_{W_l \\sim N_t}[\\log \\underbrace{\\mystrut{1.5ex}\\mathcal{D} (Z_t, Z_l)}_{\\makebox[0pt]{\\text{$\\mathcal{D}(Enc(W_t), Enc(W_l))$}}}] + \n \\mathbb{E}_{W_k \\sim {\\bar{N}_t}} [(1-w_t) \\times \\log{\\underbrace{\\mystrut{1.5ex}(1-\\mathcal{D}(Z_t, Z_k))}_{\\makebox[0pt]{\\text{$1-\\mathcal{D}(Enc(W_t), Enc(W_k))$}}}} + \n w_t \\times \\log{\\mathcal{D}(Z_t, Z_k)}]\n ]\n\\end{equation}{}\n\\vspace{-3mm}\n\n\nWe train the encoder and the discriminator jointly by optimizing this objective. Note that the Discriminator is only used during training and is not needed at inference. Like the encoder, it can be approximated using any parametric model; however, the more complex the Discriminator, the harder it becomes to interpret the latent space's decision boundaries, since similarities can then be captured by complex nonlinear relationships. \n\n\\paragraph{Defining the neighborhood parameter using the ADF test:}\\label{sec:neighbourhood} \nAs mentioned earlier, the neighborhood range can be specified using the characteristics of the data. In non-stationary time series, the generative process of the signal changes over time. We define the temporal neighborhood around every window as the region where the signal is relatively stationary. Since a signal may remain in an underlying state for an unknown amount of time, each window's neighborhood range may vary in size and must be adapted to the signal's behavior. 
\nTo that end, we use the Augmented Dickey-Fuller (ADF) statistical test to derive the neighborhood range $\\eta$. The ADF test belongs to the family of ``unit root'' tests and assesses the stationarity of a time series. For every $W_t$, we want to find the widest neighborhood around that window within which the signal is stationary. Starting from $\\eta=1$, we gradually increase the neighborhood size, measuring the $p$-value of the test at every step. Once the $p$-value rises above a threshold ($0.01$ in our setting), the test fails to reject the null hypothesis of a unit root, suggesting that the signal is no longer stationary within the extended region. This way, we find the widest neighborhood within which the signal remains relatively stationary. Note that the window size $\\delta$ is constant throughout the experiment, and during ADF testing, we only adjust the neighborhood's width.\n\n\n\\section{Experiments}\nWe evaluate our framework on multiple time series datasets with dynamic latent states that change over time.\nWe compare classification performance and clusterability against two state-of-the-art approaches for unsupervised representation learning for time series: \n\\begin{inparaenum}\n \\item Contrastive Predictive Coding (CPC) \\citep{oord2018representation}, which uses predictive coding principles to train the encoder with a probabilistic contrastive loss.\n \\item Triplet-Loss (T-Loss), introduced in \\citep{franceschi2019unsupervised}, which employs time-based negative sampling and a triplet loss to learn representations for time series windows. The triplet loss objective encourages similar time series to have similar representations by minimizing the pairwise distance between positive samples (subseries) while maximizing it for negative ones. 
\n\\end{inparaenum}\n(See Appendix \\ref{app:baseline_imp} for more details on each baseline.)\n\nFor a fair comparison, and to ensure that differences in performance are not due to differences in model architecture, the same encoder network is used across all compared baselines. Our objective is to compare the performance of the learning frameworks, agnostic to the choice of encoder. Therefore, we selected simple architectures to evaluate how well each framework uses the limited capacity of a simple encoder to learn meaningful representations.\nWe assess the generalizability of the representations by 1) evaluating clusterability in the encoding space and 2) using the representations for a downstream classification task. \nIn addition to the baselines mentioned above, we also compare clusterability performance with unsupervised K-means and classification performance with a K-Nearest Neighbor classifier, using Dynamic Time Warping (DTW) to measure time series distance. All models are implemented using PyTorch $1.3.1$ and trained on a machine with a Quadro 400 GPU\\footnote{Code implementation can be found at \\url{https:\/\/github.com\/sanatonek\/TNC_representation_learning}}. Below we describe the datasets for our experiments in more detail.\n\n\n\\subsection{Simulated data}\nThe simulated dataset is designed to replicate very long, non-stationary, and high-frequency time series for which the underlying dynamics change over time. Each generated time series consists of $2000$ measurements of $3$ features, generated from $4$ different underlying states. We use a Hidden Markov Model (HMM) to generate the random latent states over time; in each state, the time series is generated from a different generative process, including Gaussian Processes (GPs) with different kernel functions and Nonlinear Auto-regressive Moving Average models with different sets of parameters ($\\alpha$ and $\\beta$). 
In addition, to further resemble realistic (e.g., clinical) time series, two of the features are always correlated. More details about this dataset are provided in Appendix \\ref{app:simulated_data}. \nFor this experimental setup, we use a bidirectional, single-layer recurrent neural network encoder. We selected this simple architecture because it handles time series with variable lengths and easily extends to higher-dimensional inputs. The encoder maps multi-dimensional signal windows of size $\\delta=50$ into $10$-dimensional representation vectors. The window size is selected such that it is long enough to contain information about the underlying state but not so long that it spans multiple underlying states. A more detailed discussion of the window size choice is presented in Appendix \\ref{app:window_size}. \n\n\n\\subsection{Clinical waveform data}\nFor a real-world clinical experiment, we use the MIT-BIH Atrial Fibrillation dataset \\citep{moody1983new}.\nThis dataset includes $25$ long-term Electrocardiogram (ECG) recordings ($10$ hours in duration) of human subjects with atrial fibrillation. Each recording consists of two ECG signals, each sampled at 250~Hz. The signals are annotated over time with the following rhythm types: 1) atrial fibrillation, 2) atrial flutter, 3) AV junctional rhythm, and 4) all other rhythms. Our goal in this experiment is to identify the underlying type of arrhythmia for each sample without any information about the labels. This dataset is particularly interesting, and makes the experiment challenging, due to the following properties:\n\n\n\\begin{itemize}\n \\item The underlying heart rhythm changes over time in each sample. This is an opportunity to evaluate how different representation learning frameworks handle alternating classes in non-stationary settings; \n \\item The dataset is highly imbalanced, with atrial flutter and AV junctional rhythm present in fewer than $0.1\\%$ of the measurements. 
Data imbalance poses many challenges for downstream classification, further motivating the use of unsupervised representation learning;\n \\item The dataset has samples from a small number of individuals, but over an extended period (around 5 million data points). This realistic scenario, common in healthcare data, shows that our framework remains effective in settings with a limited number of samples.\n\\end{itemize}\n\nThe simple RNN encoder architecture used for the other experimental setups cannot model the high-frequency ECG measurements. Therefore, inspired by state-of-the-art architectures for ECG classification, the encoder $Enc$ used in this experiment is a 2-channel, 1-dimensional strided convolutional neural network that runs directly on the ECG waveforms. We use six convolutional layers with a total down-sampling factor of 16. The window size is $2500$ samples, meaning that each convolutional filter covers at least half a second of ECG recording, and the representations are summarized in a 64-dimensional vector.\n\n\\subsection{Human Activity Recognition (HAR) data}\nHuman Activity Recognition (HAR) is the problem of predicting the type of activity from temporal accelerometer and gyroscope measurements. We use the HAR dataset from the UCI Machine Learning Repository\\footnote{\\url{https:\/\/archive.ics.uci.edu\/ml\/datasets\/human+activity+recognition+using+smartphones}}, which includes data collected from 30 individuals using a smartphone. Each person performs six activities: 1) walking, 2) walking upstairs, 3) walking downstairs, 4) sitting, 5) standing, and 6) laying. The time series measurements are pre-processed to extract 561 features. For our purpose, we concatenate the activity samples from every individual over time using the subject identifier to build the full time series for each subject, which includes continuous activity changes. Similar to the simulated data setting, we use a single-layer RNN encoder. 
The selected window size is $4$, representing about 15 seconds of recording, and the representations are encoded in a $10$-dimensional vector space. 
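Across all three setups, training minimizes the same weight-adjusted objective of Equation \\ref{eq:obj}. As a concrete illustration, a minimal NumPy sketch of the per-anchor loss follows; it assumes the discriminator outputs are already probabilities, and all names are illustrative rather than taken from the released code:

```python
import numpy as np

def tnc_loss(d_pos, d_neg, w):
    """Per-anchor TNC loss (cf. Equation eq:obj).

    d_pos: discriminator probabilities D(Z_t, Z_l) for windows drawn from N_t.
    d_neg: discriminator probabilities D(Z_t, Z_k) for windows drawn from the
           non-neighboring region, treated as unlabeled and weight-adjusted by w.
    """
    pos_term = np.log(d_pos).mean()
    # Each non-neighboring sample counts as a negative with weight (1 - w)
    # and as a positive with weight w, following the PU-learning adjustment.
    neg_term = ((1 - w) * np.log(1 - d_neg) + w * np.log(d_neg)).mean()
    return -(pos_term + neg_term)
```

With $w=0$, this reduces to the standard contrastive form in which every non-neighboring window is treated as a true negative.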
\\section{Appendix}\n\n\n\\subsection{Simulated Dataset}\\label{app:simulated_data}\n\n\\begin{figure}[!h]\n \\centering\n \\includegraphics[scale=0.42]{TCL\/figures\/simulation_sample.pdf}\n \\caption{A normalized time series sample from the simulated dataset. Each row represents a single feature, and the shaded regions indicate one of the $4$ underlying simulated states.}\n \\label{fig:sim_data_example}\n\\end{figure}{}\n\nThe simulated time series consists of $3$ features generated from different underlying hidden states. Figure \\ref{fig:sim_data_example} shows a sample from this dataset. Each panel in the figure shows one of the features, and the shaded regions indicate the underlying state of the signal in that period. We use a Hidden Markov Model (HMM) to generate these random latent states over time. 
The transition probability is set to $5\\%$ for switching to each alternative state and $85\\%$ for remaining in the current state. In each state, the time series is generated from a different signal distribution. Table \\ref{tab:state_signal_distribution} describes the generative process of each signal feature in each state. Note that features $1$ and $2$ are always correlated, mainly to mimic realistic clinical time series; for example, physiological measurements such as pulse rate and heart rate are correlated. \n\n\\begin{table}[h]\n \\centering\n \\begin{tabular}{lllll}\n \\toprule\n & State 1 & State 2 & State 3 & State 4\\\\\n \\midrule\n Feature 1 & GP (periodic) & NARMA$_\\alpha$ & GP (Squared Exp.) & NARMA$_\\beta$ \\\\\n Feature 2 & GP (periodic) & NARMA$_\\alpha$ & GP (Squared Exp.) & NARMA$_\\beta$ \\\\\n Feature 3 & GP (Squared Exp.) & NARMA$_\\beta$ & GP (periodic) & NARMA$_\\alpha$ \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Signal distributions for each time series feature of the simulated dataset}\n \\label{tab:state_signal_distribution}\n\\end{table}{}\n\nIn state $1$, the correlated features are generated by a Gaussian Process (GP) with a periodic kernel. Feature 3, which is uncorrelated with the other two, comes from another GP with a squared exponential kernel. In addition to GPs, we also use multiple Non-Linear Auto-Regressive Moving Average (NARMA) time series models. 
The update equations for NARMA$_\\alpha$ and NARMA$_\\beta$ are shown in Equations \\ref{eq:narma_alpha} and \\ref{eq:narma_beta}.\n\n\\begin{equation}\n\\text{NARMA}_\\alpha: y(k+1) = 0.3 y(k) + 0.05 y(k) \\sum_{i=0}^{n-1} y(k-i) + 1.5 u(k-(n-1)) u(k) + 0.1\n \\label{eq:narma_alpha}\n\\end{equation}\n\n\\begin{equation}\n\\text{NARMA}_\\beta: y(k+1) = 0.1 y(k) + 0.25 y(k) \\sum_{i=0}^{n-1} y(k-i) + 2.5 u(k-(n-1)) u(k) -0.005\n \\label{eq:narma_beta}\n\\end{equation}\n\nWhite Gaussian noise with $\\sigma=0.3$ is added to all signals; overall, the dataset consists of $500$ instances of $T=2000$ measurements each.\n\n\n\n\\subsection{Baseline implementation details}\\label{app:baseline_imp}\nImplementations of all baselines are included in the code base for reproducibility, and hyper-parameters for all baselines are tuned using cross-validation.\n\n\n\\paragraph{Contrastive Predictive Coding (CPC):} The CPC baseline first processes the sequential signal windows using an encoder $Z_t = Enc(X_t)$ with an architecture similar to the encoders of the other baselines. Next, an autoregressive model $g_{ar}$ aggregates the information in $Z_{\\leq t}$ and summarizes it into a context latent representation $c_t = g_{ar}(Z_{\\leq t})$. In our implementation, the autoregressive model is a single-layer, unidirectional recurrent neural network with GRU cells and a hidden size equal to the encoding size. As in the original paper, the density ratio is estimated using a linear transformation, and the model is trained for one-step-ahead prediction.\n\\vspace{-2mm}\n\\paragraph{Triplet-Loss (T-Loss):} The triplet loss baseline is implemented using the original code made available by the authors on Github\\footnote{\\url{https:\/\/github.com\/White-Link\/UnsupervisedScalableRepresentationLearningTimeSeries}}. 
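For reference, the triplet objective underlying T-Loss can be sketched as follows (a simplified, illustrative form using dot-product similarity; the actual baseline follows the authors' released code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def triplet_loss(z_ref, z_pos, z_negs):
    """Simplified time-based triplet loss: the reference representation is
    pulled toward the positive subseries and pushed away from each of the
    negatively sampled subseries."""
    loss = -np.log(sigmoid(z_ref @ z_pos))
    for z_neg in z_negs:
        loss -= np.log(sigmoid(-(z_ref @ z_neg)))
    return loss
```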
\n\\vspace{-2mm}\n\\paragraph{KNN and K-means:} These two baselines for classification and clustering are implemented using the tslearn library\\footnote{\\url{https:\/\/tslearn.readthedocs.io\/en\/stable\/index.html}}, which provides distance metrics such as DTW. Note that evaluating DTW is computationally expensive, and the tslearn implementation is not optimized; therefore, for the waveform data with windows of size 2500, we down-sampled the signals by a factor of two. \n\n\n\\subsection{TNC implementation extra details}\\label{app:TNC_imp}\nTo define the neighborhood range in the TNC framework, as mentioned earlier, we use the Augmented Dickey-Fuller (ADF) statistical test to determine the range ($\\eta$) as the region over which the signal remains stationary. More precisely, we gradually increase the range, from a single window size up to 3 times the window size (the upper limit we set), and repeatedly perform the ADF test. We use the $p$-value from this statistical test to determine whether the null hypothesis of a unit root can be rejected; rejection indicates that the signal is stationary. At the point where the $p$-value exceeds our defined threshold ($0.01$), we can no longer assume that the signal is stationary, and this is where we set the $\\eta$ parameter. Once the neighborhood is defined, non-neighboring samples are drawn from the distribution of windows at least $4\\times \\eta$ away from $W_t$, ensuring a low likelihood of their belonging to the neighborhood. Note that for the ADF implementation, we use the statsmodels library\\footnote{\\url{https:\/\/www.statsmodels.org\/dev\/generated\/statsmodels.tsa.stattools.adfuller.html}}. Unfortunately, this implementation is not optimized and does not support GPU computation; therefore, evaluating the neighborhood range using ADF slows down TNC training. 
As a future direction, we are working on an optimized implementation of the ADF score for our framework.\n\n\n\n\n\\subsection{Selecting the window size}\\label{app:window_size}\n\nThe window size $\\delta$ is an important factor in the performance of a representation learning framework, not only for TNC but also for similar baselines such as CPC and triplet loss. Overall, the window size should be selected such that it is long enough to contain information about the underlying state, but not so long that it spans multiple underlying states. In our settings, we have selected the window sizes based on our prior knowledge of the signals. For instance, in the case of an ECG signal, the selected window size is equivalent to 7 seconds of recording, which is short enough that the ECG remains in a stable state and yet contains enough information to determine that underlying state. Our understanding of the time series data can help us select an appropriate window size, but we can also experiment with different values of $\\delta$ to tune this parameter. Table \\ref{tab:window_size} shows classification performance results for the simulation setup under different window sizes. 
We can clearly see the drop in performance for all baseline methods when the window size is too small or too large.\n\n\\begin{table}[!h]\n \\centering\n \\begin{tabular}{lcccccc}\n &\\multicolumn{2}{c}{$\\delta=10$}& \\multicolumn{2}{c}{$\\delta=50$}& \\multicolumn{2}{c}{$\\delta=100$} \\\\\n \\toprule\n & AUPRC & Accuracy & AUPRC & Accuracy & AUPRC & Accuracy \\\\\n \\midrule\n TNC & 0.74 $\\pm$ 0.01 & 71.60 $\\pm$ 0.59 & 0.99 $\\pm$ 0.00 & 97.52 $\\pm$ 0.13 & 0.84 $\\pm$ 0.11 & 84.25 $\\pm$ 9.08\\\\\n CPC & 0.49 $\\pm$ 0.02 & 51.85 $\\pm$ 1.81 & 0.69 $\\pm$ 0.06 & 70.26 $\\pm$ 6.48 & 0.49 $\\pm$ 0.05 & 56.65 $\\pm$ 0.81\\\\\n T-Loss & 0.48 $\\pm$ 0.06 & 56.70 $\\pm$ 1.07 & 0.78 $\\pm$ 0.01 & 76.66 $\\pm$ 1.14 & 0.73 $\\pm$ 0.008 & 73.29 $\\pm$ 1.58 \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Downstream classification performance for different window sizes $\\delta$ on the simulated dataset.}\n \\label{tab:window_size}\n\\end{table}\n\n\n\\subsection{Clustering metrics}\\label{app:clustering_metrics}\nMost cluster validity measures assess certain structural properties of a clustering result. In our evaluation, we have used two measures, namely the Silhouette score and the Davies-Bouldin index, to evaluate the clustering quality of the representations. Davies-Bouldin measures intra-cluster similarity (coherence) and inter-cluster differences (separation).\nLet $\\mathcal{C} = \\{\\mathcal{C}_1,..., \\mathcal{C}_k\\}$ be a clustering of a set $D$ of objects. The Davies-Bouldin score is evaluated as follows: \n\n\\begin{equation}\n DB = \\frac{1}{k}\\sum_{i=1}^{k} \\max_{j \\neq i} \\frac{s(\\mathcal{C}_i)+s(\\mathcal{C}_j)}{\\delta(\\mathcal{C}_i,\\mathcal{C}_j)}\n \\label{eq:db_score}\n\\end{equation}\n\nwhere $s(\\mathcal{C}_i)$ measures the scatter within cluster $\\mathcal{C}_i$, and $\\delta(\\mathcal{C}_i,\\mathcal{C}_j)$ is a cluster-to-cluster distance measure. On the other hand, the silhouette score measures how similar an object is to its own cluster \\emph{compared} to other clusters. 
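For reference, the Davies-Bouldin computation can be sketched in NumPy using Euclidean scatter and centroid distances (one common instantiation of the scatter and cluster-to-cluster distance measures; illustrative only):

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin index: mean over clusters of the worst-case ratio of
    within-cluster scatter to between-centroid distance (lower is better)."""
    ids = np.unique(labels)
    centroids = np.array([X[labels == c].mean(axis=0) for c in ids])
    # s_i: mean distance of cluster members to their centroid
    s = np.array([np.linalg.norm(X[labels == c] - mu, axis=1).mean()
                  for c, mu in zip(ids, centroids)])
    k = len(ids)
    db = 0.0
    for i in range(k):
        ratios = [(s[i] + s[j]) / np.linalg.norm(centroids[i] - centroids[j])
                  for j in range(k) if j != i]
        db += max(ratios)      # worst-case overlap for cluster i
    return db / k
```

Tight, well-separated clusters drive both the numerator down and the denominator up, so a lower index indicates a better clustering.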
Both measures are commonly used for the evaluation of clustering algorithms. A comparison of the two metrics has shown that the Silhouette index produces slightly more accurate results in some cases; however, the Davies-Bouldin index is generally much less complex to compute \\cite{petrovic2006comparison}. \n\n\n\\subsection{Setting the weights for PU learning}\\label{app:w}\n\nAs mentioned in the Experiment section, the weight parameter in the loss is the probability of sampling a positive window from the non-neighboring region. One way to set this parameter is to use prior knowledge of the number and the distribution of underlying states. Another way is to treat it as a hyperparameter. Table \\ref{tab:weight} shows the TNC loss for different weight parameters. The loss column reports the value measured by Equation \\ref{eq:obj}, and the accuracy shows how well the discriminator distinguishes neighboring samples from non-neighboring ones for settings with different weight parameters. To also assess the impact of re-weighting the loss on downstream classification performance, we compared these performance measures for weighted and non-weighted settings. 
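To illustrate how such a weight typically enters a PU-style discriminator objective (a generic sketch in our notation, not necessarily term-for-term identical to Equation \\ref{eq:obj}):

```python
import numpy as np

def pu_weighted_loss(d_pos, d_unl, w):
    """PU-style cross-entropy: neighbors are positives, non-neighbors are
    unlabeled; with probability w an unlabeled window is actually positive.

    d_pos : discriminator outputs in (0, 1) for neighboring windows
    d_unl : discriminator outputs in (0, 1) for non-neighboring windows
    w     : prior probability that a non-neighboring window is positive
    """
    pos_term = -np.log(d_pos).mean()
    # unlabeled windows are treated as a w / (1 - w) mixture of the two classes
    unl_term = -(w * np.log(d_unl) + (1 - w) * np.log(1 - d_unl)).mean()
    return pos_term + unl_term
```

Setting `w = 0` recovers the naive objective that treats every non-neighboring window as a true negative.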
Table \\ref{tab:weight_TF} presents these results and confirms that weighting the loss for non-neighboring samples improves the quality of the learned representations.\n\n\\begin{table}[!h]\n \\centering\n \\begin{tabular}{lcccccc}\n &\\multicolumn{2}{c}{Simulation}& \\multicolumn{2}{c}{ECG Waveform}& \\multicolumn{2}{c}{HAR}\\\\\n \\toprule\n Weight & Loss & Accuracy & Loss & Accuracy & Loss & Accuracy\\\\\n \\midrule\n 0.2 & 0.582$\\pm$0.002 & 74.29$\\pm$0.61 & 0.631$\\pm$0.011 & 60.44$\\pm$2.56 & 0.475$\\pm$0.004 & 85.75$\\pm$0.5 \\\\\n 0.1 & 0.571$\\pm$0.011 & 75.41$\\pm$0.37 & 0.637$\\pm$0.011 & 63.67$\\pm$1.29 & 0.413$\\pm$0.003 & 88.21$\\pm$1.29\\\\\n 0.05 & 0.576$\\pm$0.002 & 75.73$\\pm$0.24 & 0.622$\\pm$0.023 & 66.04$\\pm$3.46 & 0.383$\\pm$0.001 & 87.33$\\pm$0.17\\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Training the TNC framework using different weight parameters. The loss is the value measured by Equation \\ref{eq:obj}, and the Accuracy is the accuracy of the discriminator.}\n \\label{tab:weight}\n\\end{table}\n\n\n\n\n\\begin{table}[!h]\n \\centering\n \\begin{tabular}{lccc}\n Weighting? &\\multicolumn{1}{c}{Simulation}& \\multicolumn{1}{c}{ECG Waveform}& \\multicolumn{1}{c}{HAR}\\\\\n \\toprule\n True & {\\bf{97.52$\\pm$0.13}} & {\\bf{77.79$\\pm$0.84}} & {\\bf{88.32$\\pm$0.12}}\\\\\n False & 97.17$\\pm$0.44 & 75.26$\\pm$1.48 & 75.25$\\pm$13.6\\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Downstream classification accuracy on simulated data with the TNC framework, using two different weighting strategies: 1) trained with weight adjustment, 2) trained with $w=0$.}\n \\label{tab:weight_TF}\n\\end{table}\n\n\n\n\\subsection{Supplementary Figures}\\label{app:plots}\n\n\n\\subsubsection{Clinical waveform data}\nIn order to understand what the TNC framework encodes from the high-dimensional ECG signals, we visualize the trajectory of the representations of an individual sample over time. 
Figure \\ref{fig:trajectory_wf} shows this example, where the top two rows are ECG signals from two recording leads and the bottom row shows the representation vectors. We see that around second $40$, the pattern in the representations changes as a result of an artifact in one of the signals. With help from our clinical expert, we also tried to interpret different patterns in the encoding space. For instance, between times $80$ and $130$, where features 0-10 become more activated, the heart rate (HR) has increased. An increase in HR appears as an increased frequency in the ECG signals and is one of the indicators of arrhythmia that we believe TNC has captured. Figure \\ref{fig:distribution_wf} shows the distribution of the latent encoding of ECG signals for different baselines, with colors indicating the arrhythmia class. \n\n\\begin{figure}[!h]\n\\begin{subfigure}{.32\\textwidth}\n\\includegraphics[scale=0.32]{TCL\/figures\/encoding_distribution_wf.pdf}\n\\end{subfigure}\n\\begin{subfigure}{.32\\textwidth}\n\\includegraphics[scale=0.32]{TCL\/figures\/encoding_distribution_wf_trip.pdf}\n\\end{subfigure}\n\\begin{subfigure}{.32\\textwidth}\n \\centering\n\\includegraphics[scale=.32]{TCL\/figures\/encoding_distribution_wf_cpc.pdf}\n\\end{subfigure}\n\\caption{t-SNE visualization of {\\bf{waveform}} signal representations for unsupervised representation learning baselines. Each point in the plot is a 64-dimensional representation of a window of time series, with the color indicating the latent state.}\n\\label{fig:distribution_wf}\n\\end{figure}\n\n\\begin{figure}[!h]\n \\centering\n \\includegraphics[scale=0.2]{TCL\/figures\/embedding_trajectory_hm_wf.pdf}\n \\caption{Trajectory of a {\\bf{waveform}} signal encoding. The top two plots show the ECG recordings from 2 ECG leads. The bottom plot shows the 64-dimensional encoding of the sliding windows $W_t$ where $\\delta=2500$. 
}\n \\label{fig:trajectory_wf}\n\\end{figure}\n\n\n\n\n\\subsubsection{HAR data}\nFigures \\ref{fig:distribution_har} and \\ref{fig:trajectory_har} show plots similar to those presented in the previous section, but for the HAR dataset. As shown in Figure \\ref{fig:trajectory_har}, the underlying states of the signal are clearly captured by the TNC framework as different patterns in the latent representations.\n\n\\begin{figure}[!h]\n\\begin{subfigure}{.32\\textwidth}\n\\includegraphics[scale=0.32]{TCL\/figures\/encoding_distribution_har.pdf}\n\\end{subfigure}\n\\begin{subfigure}{.32\\textwidth}\n\\includegraphics[scale=0.32]{TCL\/figures\/encoding_distribution_har_trip.pdf}\n\\end{subfigure}\n\\begin{subfigure}{.32\\textwidth}\n \\centering\n\\includegraphics[scale=.32]{TCL\/figures\/encoding_distribution_har_cpc.pdf}\n\\end{subfigure}\n\\caption{t-SNE visualization of {\\bf{HAR}} signal representations for all baselines. Each point in the plot is a 10-dimensional representation of a window of $\\delta=4$, with colors indicating latent states.}\n\\label{fig:distribution_har}\n\\end{figure}\n\n\\begin{figure}[!h]\n \\centering\n \\includegraphics[scale=0.2]{TCL\/figures\/embedding_trajectory_hm_har.pdf}\n \\caption{Trajectory of a {\\bf{HAR}} signal encoding. The top plot shows the original time series with shaded regions indicating the underlying state. The bottom plot shows the 10-dimensional encoding of the sliding windows $W_t$ where $\\delta=4$. }\n \\label{fig:trajectory_har}\n\\end{figure}\n\n\n\n\n\\subsubsection{Simulation data}\nIn addition to the initial experiment, we also show the trajectory of the encoding for a smaller encoding size ($3$). In this setting, there are 4 underlying states in the signal, but only 3 dimensions in the encoding.\n\n\\begin{figure}[!h]\n \\centering\n \\includegraphics[scale=0.2]{TCL\/figures\/embedding_trajectory_3.pdf}\n \\caption{Trajectory of a {\\bf{simulation}} signal encoding. 
The top plots shows the signals and the bottom plot shows the 3 dimensional encoding of the sliding windows $W_t$ where $\\delta=50$. }\n \\label{fig:trajectory_sim_3}\n\\end{figure}\n\\subsubsection*{Acknowledgments}\nResources used in preparing this research were provided, in part, by the Government of Canada through CIFAR, and companies sponsoring the Vector Institute. This research was undertaken, in part, thanks to funding from the Canadian Institute of Health Research (CIHR) and the Natural Sciences and Engineering Research Council of Canada (NSERC).\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\nNovae occur in binary systems in which a Roche lobe filling secondary is\nlosing hydrogen-rich material through the inner Lagrangian point onto a\nwhite dwarf (WD) primary. Mass transfer can also occur in long period \nsystems if the secondary has a significant wind, {\\it e.g.} the giant \nsecondary in RS Oph or V407 Cyg. Core material is mixed into the accreted \nmaterial and is violently ejected into space when the pressure at the \nWD-accretion interface becomes great enough to initiate a thermonuclear \nrunaway (TNR). Novae eject, into the interstellar medium (ISM), a \nmixture of material accreted from the companion star, highly processed \nmaterial from the underlying WD, and products of nucleosynthesis occurring \nduring the TNR. As a result of the TNR, up to 10$^{-4}$ M$_{\\odot}$ of \nmaterial can be ejected from the WD enriched in C, N, O, Ne, Mg, Al \nand other species \n\\btxt{\\citep{2006NuPhA.777..550J}}\nat $v \\sim 10^2 - 10^4$ km s$^{-1}$. Any remaining \nhydrogen still bound to the WD continues to burn in hydrostatic \nequilibrium until it is consumed or ejected via a wind.\n\nInitially, the radiative output of a nova occurs in the optical but as the\nphotosphere of the WD recedes, the spectral energy distribution\nshifts to higher energies \\citep{1978ARA&A..16..171G}. 
The rate of the \noptical decline defines a nova's primary characteristics\n\\citep[{\\it e.g.},][and references therein]{Warner2008}, namely the time \nto decline 2 magnitudes from visual maximum, t$_2$. The decline \nrate depends on the amount of mass ejected, its velocity, composition,\nand whether it runs into circumbinary material. The bolometric luminosity \nduring the outburst is high, near or exceeding the Eddington limit\n(for the fastest novae), and thus additional \nmaterial is ejected via a strong stellar wind\n\\citep{1998MNRAS.300..931S,2001MNRAS.320..103S}. In some novae the \ncollision between this fast wind and the initial exploded mass or any\npre-existing circumbinary material can produce X-ray emission from shocks. \nThe emission from this early X-ray phase is hard, has a low luminosity, \nof order 10$^{33-35}$ erg s$^{-1}$, and declines relatively rapidly \n\\citep{1998ApJ...499..395B,2001MNRAS.326L..13O}. As fuel continues to burn, \nmass loss causes the photosphere of the WD to shrink \n\\citep{1985ApJ...294..263M}. \n\\btxt{The effective temperature increases, \npeaking in the soft X-rays, at (2-8)$\\times$10$^5$ K \n\\citep{1996ApJ...456..788K,1996ApJ...463L..21S,2010ApJ...717..363R}}. \nOnce the ejecta have cleared sufficiently, and if the line of sight extinction \nis not severe, some novae exhibit characteristics similar to the\nSuper Soft X-ray binary sources \n\\citep[SSSs:][]{1997ARA&A..35...69K} with strong and soft, \nE$_{peak}$ $<$ 1 keV, X-ray emission. This point in nova evolution is \ncalled the SSS phase. At low spectral resolution, the UV\/X-ray\nspectral energy distributions (SEDs) resemble\nblackbodies, but higher resolution {\\it Chandra}\\ or {\\it XMM}\\ grating observations\nreveal a significantly more complex picture. The spectra frequently have\nP-Cygni profiles or emission lines superimposed on a line-blanketed\natmosphere. 
Models sophisticated enough to interpret the high-resolution\ndata are only now becoming available \\citep{2010AN....331..175V}. Once \nnuclear burning ends, the X-ray light curve rapidly declines as the WD cools, \nmarking the end of the SSS phase and the outburst. At some point mass\ntransfer resumes and eventually another eruption occurs. These are\ncalled classical novae (CNe) until a second outburst is observed, after which they\nbecome recurrent novae (RNe). \n\\btxt{Detailed reviews of nova evolution are \npresented by \\citet{Starrfield08} and \\citet{2010AN....331..160B}.\n\\citet{2010AN....331..169H} discuss the theoretical implications of X-ray \nobservations of novae while \\citet{2010ApJS..187..275S} discusses the \ncurrent understanding of the RN class.}\n\nAn important, but not the sole, \ndriver of the nova phenomenon is the mass of the WD. \nExplosions on larger mass WDs expel less mass but at higher velocities. \nThey have larger luminosities, are in outburst for less time, and (should)\nhave shorter recurrence times than novae on lower mass WDs. \nHigh mass ($>$ 1.25 M$_{\\odot}$) WDs reach TNR ignition more rapidly\nthan low mass WDs and thus do not have the chance to accrete as much\nmaterial. They also reach higher peak temperatures during the TNR, leading\nto a more energetic explosion. However, other factors are believed to \nplay important roles leading to a nova event. These include\nthe composition of the WD, either CO or ONe, \nthe initial temperature of the WD during accretion, the mass accretion \nrate \\citep{2005ApJ...623..398Y}, the composition of the accreted \nmaterial \\citep{2000AIPC..522..379S}, and the mixing history\nof the core\/envelope. All impact \nhow much mass can be transferred to the WD before \nan outburst begins. Models show that different combinations of these \ncharacteristics can reproduce a wide range of nova outbursts \n\\citep{2005ApJ...623..398Y,WoodStar11}. 
\nUnfortunately, very few of these parameters\nhave been observationally verified in any nova.\n\nThe X-ray regime is a crucial component for the study of novae, providing \ninsight into TNR burning processes, WD mass and composition, accretion and \nmixing mechanisms, dust grain formation and destruction, and mass loss \nprocesses. In addition, high mass novae such as RNe are potential\nSN Ia progenitors via the single degenerate scenario \n\\citep[e.g.][]{2008A&A...484L...9W,2010ASPC..429..173W,2010Ap&SS.329..287M}.\nTo make progress in understanding the physics of these important astrophysical\nphenomena, observations of a large number of novae are required \nto sample all the contributing \nfactors. Prior to the launch of {\\it Swift}, the general X-ray temporal\nevolution of novae was far from complete \nas only a few novae had been observed at more than one epoch in X-rays. \n\n{\\it Swift}\\ is an excellent facility for studying novae as it has a superb\nsoft X-ray response with its XRT instrument \\citep{2005SSRv..120..165B}.\n\\citet{2007ApJ...663..505N} show how the XRT favorably compares with \ncurrently available X-ray instruments. \n{\\it Swift}\\ also has a co-aligned UV\/optical instrument, UVOT\n\\citep[see][for details]{2005SSRv..120...95R}, which provides either 6-filter \nphotometry or low-resolution grism spectroscopy. The other \n{\\it Swift}\\ instrument is a $\\gamma$-ray detector, BAT. \n\\btxt{However, novae are generally not strong $\\gamma$-ray \nsources \\citep{2005NuPhA.758..721H}.}\nThe decay of $^{22}$Na\n(half-life 2.6 yrs) generates a 1275 keV emission line, but only $>$ \n1.25 M$_{\\odot}$ WDs are predicted to produce sufficient \n$^{22}$Na during the TNR. 
This line \nhas not yet been definitively detected by satellites \\citep{Hernanz08},\nbut there is a recent claim by \\citet{2010ApJ...723L..84S}\nthat their models with Compton decay of $^{22}$Na can account for\nthe hard X-ray flux in V2491 Cyg provided an exceptionally large amount\nof $^{22}$Na, 3$\\times$10$^{-5}$M$_{\\odot}$, was synthesized.\nAnother $\\gamma$-ray emission mechanism is electron-positron \nannihilation very early in the outburst, but this is expected to be\ndetectable only in nearby novae. \nThe symbiotic RN RS Oph, at 1.6 kpc, was clearly detected \nin the lowest energy channels of the {\\it Swift}\/BAT \\citep{2008A&A...485..223S},\nbut that emission is consistent with that from high temperature\nshocks as the outburst ejecta plow into the pre-existing red giant\nwind. Recently, the symbiotic RN V407 Cyg was detected in the GeV band \nby {\\it Fermi-LAT} \\citep{2010Sci...329..817A} only a few days after\nvisual maximum. \\citet{2010Sci...329..817A} show that the $\\gamma$-ray\nemission can be explained by either Compton scattering of infrared photons \nin the red giant wind or $\\pi^0$ decay from proton-proton collisions.\n\\citet{2011arXiv1101.6013L} predict that $\\pi^0$ $\\gamma$-rays\nwill be created in the high circumbinary densities of very long orbital \nperiod systems such as V407 Cyg, with a period of $\\sim$ 43 years \n\\citep{1990MNRAS.242..653M,2011arXiv1109.5397S}.\n\n{\\it Swift}\\ has a rapid-response ToO procedure and flexible scheduling, \nwhich is critical in obtaining well-sampled X-ray light curves of \ntransient events. 
Initial {\\it Swift}\\ results for 11 novae were \npresented in \\citet{2007ApJ...663..505N}.\nSince that time, significantly more data have been obtained by the \n{\\it Swift}\\ Nova-CV group\\footnote{The current members of the group and \nobservation strategy are provided at \\url{http:\/\/www.swift.ac.uk\/nova-cv\/}.},\nwhich has devised an observing strategy to efficiently utilize the\nsatellite's unique capabilities and maximize the science return by \nobserving interesting and bright novae with low extinction\nrecently discovered in the Milky Way and Magellanic Clouds. In five \nyears, {\\it Swift}\\ has performed multiple visits for \\totalswift\\ classical and \nrecurrent Galactic\/Magellanic Cloud novae, totaling well over 2 Ms of \nexposure time. \n\nHere we present a summary of all the Galactic\/Magellanic Cloud {\\it Swift}\\\nnova observations from launch (2004 November 20) to 2010 July 31 using\nthe XRT (0.3-10 keV) X-ray instrument (count rates and hardness ratios) \nand the available UVOT (1700-8000\\AA) filter photometry. {\\it Swift}\\ \nobservations of novae in the M31 group are reported in \n\\citet{2010A&A...523A..89H}, \\citet{2010AN....331..187P} and \nreferences therein. We combine the {\\it Swift}\\ Galactic\/Magellanic Cloud \ndata with archival pointed observations of CNe and RNe from {\\it ROSAT}, \n{\\it XMM}, {\\it Chandra}, {\\it BeppoSax}, {\\it RXTE}, and {\\it ASCA} to produce\nthe most comprehensive X-ray sample of local novae. \nThe sample includes \\totalSSS\\ systems that were observed during the SSS phase.\n\nIn Section 2, we summarize the properties of the \\total\\ novae in the\nX-ray sample. The averaged {\\it Swift}\\ XRT count rates and UVOT magnitudes\nfor each observational session are also provided. 
Studies of high-frequency \nphenomena in individual objects are either left for future\nwork or have previously been published \\citep[V458 Vul, V2491 Cyg, V598 Pup, \nRS Oph, V407 Cyg, and V723 Cas in][ respectively]{2009AJ....137.4160N,\n2010MNRAS.401..121P,2009A&A...507..923P,2011ApJ...727..124O,\n2011A&A...527A..98S,2008AJ....135.1328N}. \nSections 3 and 4 detail the observations and results \nduring the hard and SSS phases, respectively. A discussion \nfollows in Section 5 articulating trends between\nthe SSS duration and t$_2$, expansion velocity of the ejecta, and\norbital period, plus the role of SSS emission in \ndust-forming novae. Also included is a discussion of the origin of the \ndifferent variability observed in the X-ray and UV light curves of the \n{\\it Swift}\\ sources. Optical characteristics indicative of SSS emission\nin CNe and RNe are also presented. The last section, Section 6, \nprovides a summary of this work.\n\n\\section{THE X-RAY DATA SET}\n\n\\subsection{Characteristics}\n\nTable \\ref{chartable} presents the primary characteristics of all the \nGalactic\/Magellanic Cloud novae with pointed X-ray observations prior\nto July 31st, 2010. In addition to the {\\it Swift}\\ data, the sample includes \nall the publicly available pointed observations from the {\\it ROSAT}, {\\it XMM}, \n{\\it Chandra}, {\\it BeppoSax}, {\\it RXTE}, and {\\it ASCA} archives.\nThe columns give the nova name, visual magnitude at maximum, Julian date of\nvisual maximum, time to decline two magnitudes from visual maximum,\nthe Full-Width at Half-Maximum, FWHM, of H$\\alpha$ or H$\\beta$ taken near\nvisual maximum, E(B-V) and averaged Galactic hydrogen column density, N$_H$, \nalong the line of sight, proposed orbital period, estimated distance, \nwhether the nova was observed to form dust, and whether the nova is a \nknown RN. The numbers in parentheses are the literature \nreferences given in the table notes. 
The names of novae with {\\it Swift}\\ \nobservations are shown in {\\it bold}.\n\n\\begin{deluxetable}{lllrrrrrrll}\n\\tablecaption{Observable Characteristics of \nGalactic\/Magellanic Cloud novae with X-ray observations\\label{chartable}}\n\\rotate\n\\tablewidth{700pt}\n\\tabletypesize{\\scriptsize}\n\\tablehead{\n\\colhead{Name\\tablenotemark{a}} & \\colhead{V$_{max}$\\tablenotemark{b}} & \n\\colhead{Date\\tablenotemark{c}} & \\colhead{t$_2$\\tablenotemark{d}} & \n\\colhead{FWHM\\tablenotemark{e}} & \\colhead{E(B-V)} & \n\\colhead{N$_H$\\tablenotemark{f}} & \\colhead{Period} & \n\\colhead{D} & \\colhead{Dust?\\tablenotemark{g}} & \\colhead{RN?} \\\\ \n\\colhead{} & \\colhead{(mag)} & \\colhead{(JD)} & \\colhead{(d)} & \n\\colhead{(km s$^{-1}$)} & \\colhead{(mag)} & \\colhead{(cm$^{-2}$)} &\n\\colhead{(d)} & \\colhead{(kpc)} & \\colhead{} & \\colhead{}\n} \n\\startdata\nCI Aql & 8.83 (1) & 2451665.5 (1) & 32 (2) & 2300 (3) & 0.8$\\pm0.2$ (4) & 1.2e+22 & 0.62 (4) & 6.25$\\pm5$ (4) & N & Y \\\\\n{\\bf CSS081007}\\tablenotemark{h} & \\nodata & 2454596.5\\tablenotemark{i} & \\nodata & \\nodata & 0.146 & 1.1e+21 & 1.77 (5) & 4.45$\\pm1.95$ (6) & \\nodata & \\nodata \\\\\nGQ Mus & 7.2 (7) & 2445352.5 (7) & 18 (7) & 1000 (8) & 0.45 (9) & 3.8e+21 & 0.059375 (10) & 4.8$\\pm1$ (9) & N (7) & \\nodata \\\\\nIM Nor & 7.84 (11) & 2452289 (2) & 50 (2) & 1150 (12) & 0.8$\\pm0.2$ (4) & 8e+21 & 0.102 (13) & 4.25$\\pm3.4$ (4) & N & Y \\\\\n{\\bf KT Eri} & 5.42 (14) & 2455150.17 (14) & 6.6 (14) & 3000 (15) & 0.08 (15) & 5.5e+20 & \\nodata & 6.5 (15) & N & M \\\\\n{\\bf LMC 1995} & 10.7 (16) & 2449778.5 (16) & 15$\\pm2$ (17) & \\nodata & 0.15 (203) & 7.8e+20 & \\nodata & 50 & \\nodata & \\nodata \\\\\nLMC 2000 & 11.45 (18) & 2451737.5 (18) & 9$\\pm2$ (19) & 1700 (20) & 0.15 (203) & 7.8e+20 & \\nodata & 50 & \\nodata & \\nodata \\\\\n{\\bf LMC 2005} & 11.5 (21) & 2453700.5 (21) & 63 (22) & 900 (23) & 0.15 (203) & 1e+21 & \\nodata & 50 & M (24) & \\nodata \\\\\n{\\bf LMC 2009a} & 10.6 
(25) & 2454867.5 (25) & 4$\\pm1$ & 3900 (25) & 0.15 (203) & 5.7e+20 & 1.19 (26) & 50 & N & Y \\\\\n{\\bf SMC 2005} & 10.4 (27) & 2453588.5 (27) & \\nodata & 3200 (28) & \\nodata & 5e+20 & \\nodata & 61 & \\nodata & \\nodata \\\\\n{\\bf QY Mus} & 8.1 (29) & 2454739.90 (29) & 60: & \\nodata & 0.71 (30) & 4.2e+21 & \\nodata & \\nodata & M & \\nodata \\\\\n{\\bf RS Oph} & 4.5 (31) & 2453779.44 (14) & 7.9 (14) & 3930 (31) & 0.73 (32) & 2.25e+21 & 456 (33) & 1.6$\\pm0.3$ (33) & N (34) & Y \\\\\n{\\bf U Sco} & 8.05 (35) & 2455224.94 (35) & 1.2 (36) & 7600 (37) & 0.2$\\pm0.1$ (4) & 1.2e+21 & 1.23056 (36) & 12$\\pm2$ (4) & N & Y \\\\\n{\\bf V1047 Cen} & 8.5 (38) & 2453614.5 (39) & 6 (40) & 840 (38) & \\nodata & 1.4e+22 & \\nodata & \\nodata & \\nodata & \\nodata \\\\\n{\\bf V1065 Cen} & 8.2 (41) & 2454123.5 (41) & 11 (42) & 2700 (43) & 0.5$\\pm0.1$ (42) & 3.75e+21 & \\nodata & 9.05$\\pm2.8$ (42) & Y (42) & \\nodata \\\\\nV1187 Sco & 7.4 (44) & 2453220.5 (44) & 7: (45) & 3000 (44) & 1.56 (44) & 8.0e+21 & \\nodata & 4.9$\\pm0.5$ (44) & N & \\nodata \\\\\n{\\bf V1188 Sco} & 8.7 (46) & 2453577.5 (46) & 7 (40) & 1730 (47) & \\nodata & 5.0e+21 & \\nodata & 7.5 (39) & \\nodata & \\nodata \\\\\n{\\bf V1213 Cen} & 8.53 (48) & 2454959.5 (48) & 11$\\pm2$ (49) & 2300 (50) & 2.07 (30) & 1.0e+22 & \\nodata & \\nodata & \\nodata & \\nodata \\\\\n{\\bf V1280 Sco} & 3.79 (51) & 2454147.65 (14) & 21 (52) & 640 (53) & 0.36 (54) & 1.6e+21 & \\nodata & 1.6$\\pm0.4$ (54) & Y (54) & \\nodata \\\\\n{\\bf V1281 Sco} & 8.8 (55) & 2454152.21 (55) & 15:& 1800 (56) & 0.7 (57) & 3.2e+21 & \\nodata & \\nodata & N & \\nodata \\\\\n{\\bf V1309 Sco} & 7.1 (58) & 2454714.5 (58) & 23$\\pm2$ (59) & 670 (60) & 1.2 (30) & 4.0e+21 & \\nodata & \\nodata & \\nodata & \\nodata \\\\\n{\\bf V1494 Aql} & 3.8 (61) & 2451515.5 (61) & 6.6$\\pm0.5$ (61) & 1200 (62) & 0.6 (63) & 3.6e+21 & 0.13467 (64) & 1.6$\\pm0.1$ (63) & N & \\nodata \\\\\n{\\bf V1663 Aql} & 10.5 (65) & 2453531.5 (65) & 17 (66) & 1900 (67) & 2: (68) & 
1.6e+22 & \\nodata & 8.9$\\pm3.6$ (69) & N & \\nodata \\\\\nV1974 Cyg & 4.3 (70) & 2448654.5 (70) & 17 (71) & 2000 (19) & 0.36$\\pm0.04$ (71) & 2.7e+21 & 0.081263 (70) & 1.8$\\pm0.1$ (72) & N & \\nodata \\\\\n{\\bf V2361 Cyg} & 9.3 (73) & 2453412.5 (73) & 6 (40) & 3200 (74) & 1.2: (75) & 7.0e+21 & \\nodata & \\nodata & Y (40) & \\nodata \\\\\n{\\bf V2362 Cyg} & 7.8 (76) & 2453831.5 (76) & 9 (77) & 1850 (78) & 0.575$\\pm0.015$ (79) & 4.4e+21 & 0.06577 (80) & 7.75$\\pm3$ (77) & Y (81) & \\nodata \\\\\n{\\bf V2467 Cyg} & 6.7 (82) & 2454176.27 (82) & 7 (83) & 950 (82) & 1.5 (84) & 1.4e+22 & 0.159 (85) & 3.1$\\pm0.5$ (86) & M (87) & \\nodata \\\\\n{\\bf V2468 Cyg} & 7.4 (88) & 2454534.2 (88) & 10: & 1000 (88) & 0.77 (89) & 1.0e+22 & 0.242 (90) & \\nodata & N & \\nodata \\\\\n{\\bf V2491 Cyg} & 7.54 (91) & 2454567.86 (91) & 4.6 (92) & 4860 (93) & 0.43 (94) & 4.7e+21 & 0.09580: (95) & 10.5 (96) & N & M \\\\\nV2487 Oph & 9.5 (97) & 2450979.5 (97) & 6.3 (98) & 10000 (98) & 0.38$\\pm0.08$ (98) & 2.0e+21 & \\nodata & 27.5$\\pm3$ (99) & N (100) & Y (101) \\\\\n{\\bf V2540 Oph} & 8.5 (102) & 2452295.5 (102) & \\nodata & \\nodata & \\nodata & 2.3e+21 & 0.284781 (103) & 5.2$\\pm0.8$ (103) & N & \\nodata \\\\\nV2575 Oph & 11.1 (104) & 2453778.8 (104) & 20: & 560 (104) & 1.4 (105) & 3.3e+21 & \\nodata & \\nodata & N (105) & \\nodata \\\\\n{\\bf V2576 Oph} & 9.2 (106) & 2453832.5 (106) & 8: & 1470 (106) & 0.25 (107) & 2.6e+21 & \\nodata & \\nodata & N & \\nodata \\\\\n{\\bf V2615 Oph} & 8.52 (108) & 2454187.5 (108) & 26.5 (108) & 800 (109) & 0.9 (108) & 3.1e+21 & \\nodata & 3.7$\\pm0.2$ (108) & Y (110) & \\nodata \\\\\n{\\bf V2670 Oph} & 9.9 (111) & 2454613.11 (111) & 15: & 600 (112) & 1.3: (113) & 2.9e+21 & \\nodata & \\nodata & N (114) & \\nodata \\\\\n{\\bf V2671 Oph} & 11.1 (115) & 2454617.5 (115) & 8: & 1210 (116) & 2.0 (117) & 3.3e+21 & \\nodata & \\nodata & M (117) & \\nodata \\\\\n{\\bf V2672 Oph} & 10.0 (118) & 2455060.02 (118) & 2.3 (119) & 8000 (118) & 1.6$\\pm0.1$ (119) 
& 4.0e+21 & \\nodata & 19$\\pm2$ (119) & \\nodata & M \\\\\nV351 Pup & 6.5 (120) & 2448617.5 (120) & 16 (121) & \\nodata & 0.72$\\pm0.1$ (122) & 6.2e+21 & 0.1182 (123) & 2.7$\\pm0.7$ (122) & N & \\nodata \\\\\n{\\bf V382 Nor} & 8.9 (124) & 2453447.5 (124) & 12 (40) & 1850 (23) & \\nodata & 1.7e+22 & \\nodata & \\nodata & \\nodata & \\nodata \\\\\nV382 Vel & 2.85 (125) & 2451320.5 (125) & 4.5 (126) & 2400 (126) & 0.05: (126) & 3.4e+21 & 0.146126 (127) & 1.68$\\pm0.3$ (126) & N & \\nodata \\\\\n{\\bf V407 Cyg} & 6.8 (128) & 2455266.314 (128) & 5.9 (129) & 2760 (129) & 0.5$\\pm0.05$ (130) & 8.8e+21 & 15595 (131) & 2.7 (131) & \\nodata & Y \\\\\n{\\bf V458 Vul} & 8.24 (132) & 2454322.39 (132) & 7 (133) & 1750 (134) & 0.6 (135) & 3.6e+21 & 0.06812255 (136) & 8.5$\\pm1.8$ (133) & N (135) & \\nodata \\\\\n{\\bf V459 Vul} & 7.57 (137) & 2454461.5 (137) & 18 (138) & 910 (139) & 1.0 (140) & 5.5e+21 & \\nodata & 3.65$\\pm1.35$ (138) & Y (140) & \\nodata \\\\\nV4633 Sgr & 7.8 (141) & 2450895.5 (141) & 19$\\pm3$ (142) & 1700 (143) & 0.21 (142) & 1.4e+21 & 0.125576 (144) & 8.9$\\pm2.5$ (142) & N & \\nodata \\\\\n{\\bf V4643 Sgr} & 8.07 (145) & 2451965.867 (145) & 4.8 (146) & 4700 (147) & 1.67 (148) & 1.4e+22 & \\nodata & 3 (148) & N & \\nodata \\\\\n{\\bf V4743 Sgr} & 5.0 (149) & 2452537.5 (149) & 9 (150) & 2400 (149) & 0.25 (151) & 1.2e+21 & 0.281 (152) & 3.9$\\pm0.3$ (151) & N & \\nodata \\\\\n{\\bf V4745 Sgr} & 7.41 (153) & 2452747.5 (153) & 8.6 (154) & 1600 (155) & 0.1 (154) & 9.0e+20 & 0.20782 (156) & 14$\\pm5$ (154) & \\nodata & \\nodata \\\\\n{\\bf V476 Sct} & 10.3 (157) & 2453643.5 (157) & 15 (158) & \\nodata & 1.9 (158) & 1.2e+22 & \\nodata & 4$\\pm1$ (158) & M (159) & \\nodata \\\\\n{\\bf V477 Sct} & 9.8 (160) & 2453655.5 (160) & 3 (160) & 2900 (161) & 1.2: (162) & 4e+21 & \\nodata & \\nodata & M (163) & \\nodata \\\\\n{\\bf V5114 Sgr} & 8.38 (164) & 2453081.5 (164) & 11 (165) & 2000 (23) & \\nodata & 1.5e+21 & \\nodata & 7.7$\\pm0.7$ (165) & N (166) & \\nodata 
\\\\\n{\\bf V5115 Sgr} & 7.7 (167) & 2453459.5 (167) & 7 (40) & 1300 (168) & 0.53 (169) & 2.3e+21 & \\nodata & \\nodata & N (169) & \\nodata \\\\\n{\\bf V5116 Sgr} & 8.15 (170) & 2453556.91 (170) & 6.5 (171) & 970 (172) & 0.25 (173) & 1.5e+21 & 0.1238 (171) & 11$\\pm3$ (173) & N (174) & \\nodata \\\\\n{\\bf V5558 Sgr} & 6.53 (175) & 2454291.5 (175) & 125 (176) & 1000 (177) & 0.80 (178) & 1.6e+22 & \\nodata & 1.3$\\pm0.3$ (176) & N (179) & \\nodata \\\\\n{\\bf V5579 Sgr} & 5.56 (180) & 2454579.62 (180) & 7: & 1500 (23) & 1.2 (181) & 3.3e+21 & \\nodata & \\nodata & Y (181) & \\nodata \\\\\n{\\bf V5583 Sgr} & 7.43 (182) & 2455051.07 (182) & 5: & 2300 (182) & 0.39 (30) & 2.0e+21 & \\nodata & 10.5 & \\nodata & \\nodata \\\\\n{\\bf V574 Pup} & 6.93 (183) & 2453332.22 (183) & 13 (184) & 2800 (184) & 0.5$\\pm0.1$ & 6.2e+21 & \\nodata & 6.5$\\pm1$ & M (185) & \\nodata \\\\\n{\\bf V597 Pup} & 7.0 (186) & 2454418.75 (186) & 3: & 1800 (187) & 0.3 (188) & 5.0e+21 & 0.11119 (189) & \\nodata & N (188) & \\nodata \\\\\n{\\bf V598 Pup} & 3.46 (14) & 2454257.79 (14) & 9$\\pm1$ (190) & \\nodata & 0.16 (190) & 1.4e+21 & \\nodata & 2.95$\\pm0.8$ (190) & \\nodata & \\nodata \\\\\n{\\bf V679 Car} & 7.55 (191) & 2454797.77 (191) & 20: & \\nodata & \\nodata & 1.3e+22 & \\nodata & \\nodata & \\nodata & \\nodata \\\\\n{\\bf V723 Cas} & 7.1 (192) & 2450069.0 (192) & 263 (2) & 600 (193) & 0.5 (194) & 2.35e+21 & 0.69 (195) & 3.86$\\pm0.23$ (196) & N & \\nodata \\\\\nV838 Her & 5 (197) & 2448340.5 (197) & 2 (198) & \\nodata & 0.5$\\pm0.1$ (198) & 2.6e+21 & 0.2975 (199) & 3$\\pm1$ (198) & Y (200) & \\nodata \\\\\n{\\bf XMMSL1 J06}\\tablenotemark{j} & 12 (201) & 2453643.5 (202) & 8$\\pm2$ (202) & \\nodata & 0.15 (203) & 8.7e+20 & \\nodata & 50 & \\nodata & \\nodata \\\\\n\\enddata\n\\tablenotetext{a}{Novae with {\\it Swift}\\ observations are presented in {\\it bold}.}\n\\tablenotetext{b}{Visual maximum.}\n\\tablenotetext{c}{Date of visual maximum.}\n\\tablenotetext{d}{As measured from the visual 
light curve. A \":\" indicates\nan uncertain value due to an estimate from the AAVSO light curve.}\n\\tablenotetext{e}{Of Balmer lines measured at or near visual maximum.}\n\\tablenotetext{f}{Average Galactic N$_H$ within 0.5$\\arcdeg$ of the nova \nposition as given in the HEASARC N$_H$ tool.}\n\\tablenotetext{g}{Dust forming nova? (Y)es, (N)o, or (M)aybe.\nNovae with \"N\" but no dust reference were sufficiently observed\nbut no dust was specifically reported in any of the references.\n}\n\\tablenotetext{h}{Full nova name is CSS081007030559+054715.}\n\\tablenotetext{i}{An averaged date based on available photometry.}\n\\tablenotetext{j}{Full nova name is XMMSL1 J060636.2-694933.} \\\\\n\\tablecomments{Numbers in parentheses are the reference codes.} \n\\tablerefs{\n1 = \\citet{2000IAUC.7411....3H};\n2 = \\citet{2010AJ....140...34S};\n3 = \\citet{2000IAUC.7409....1T};\n4 = \\citet{2010ApJS..187..275S};\n5 = \\citet{2010AN....331..156B};\n6 = \\citet{2008ATel.1847....1S};\n7 = \\citet{1984MNRAS.211..421W};\n8 = \\citet{1983IAUC.3766....1C};\n9 = \\citet{1984A&A...137..307K};\n10 = \\citet{1989ApJ...339L..41D};\n11 = \\citet{2002IAUC.7791....2K};\n12 = \\citet{2002IAUC.7799....3D};\n13 = \\citet{2003MNRAS.343..313W};\n14 = \\citet{2010ApJ...724..480H};\n15 = \\citet{2009ATel.2327....1R};\n16 = \\citet{1995IAUC.6143....2L};\n17 = \\citet{2004IBVS.5582....1L};\n18 = \\citet{2000IAUC.7453....1L};\n19 = \\citet{2003A&A...405..703G};\n20 = \\citet{2000IAUC.7457....1D};\n21 = \\citet{2005IAUC.8635....1L};\n22 = \\citet{2007JAVSO..35..359L};\n23 = Average of our SMARTS spectra;\n24 = Evidence from our SMARTS IR lightcurve;\n25 = \\citet{2009IAUC.9019....1L};\n26 = \\citet{2009ATel.2001....1B};\n27 = \\citet{2005IAUC.8582....2L};\n28 = \\citet{2005IAUC.8582....3M};\n29 = \\citet{2008IAUC.8990....2L};\n30 = \\citet{1998ApJ...500..525S};\n31 = \\citet{2007ApJ...665L..63B};\n32 = \\citet{1985IAUC.4067....2S};\n33 = \\citet{2008ApJ...673.1067N};\n34 = 
\\citet{2007ApJ...671L.157E};\n35 = \\citet{2010ATel.2419....1S};\n36 = \\citet{2001A&A...378..132E};\n37 = \\citet{2010ATel.2411....1A};\n38 = \\citet{2005IAUC.8596....1L};\n39 = \\citet{2007ApJ...663..505N};\n40 = \\citet{2007ApJ...662..552H};\n41 = \\citet{2007IAUC.8800....1L};\n42 = \\citet{2010AJ....140.1347H};\n43 = \\citet{2007IAUC.8800....2W};\n44 = \\citet{2006ApJ...638..987L};\n45 = \\citet{2004AAS...20515003S};\n46 = \\citet{2005IAUC.8574....1P};\n47 = \\citet{2005IAUC.8576....2N};\n48 = \\citet{2009IAUC.9043....1P};\n49 = From AAVSO lightcurve;\n50 = \\citet{2009IAUC.9043....2P};\n51 = \\citet{2007IAUC.8807....1Y};\n52 = \\citet{2008MNRAS.391.1874D};\n53 = \\citet{2007CBET..852....1M};\n54 = \\citet{2008A&A...487..223C};\n55 = \\citet{2007IAUC.8810....1Y};\n56 = \\citet{2007IAUC.8812....2N};\n57 = \\citet{2007IAUC.8846....2R};\n58 = \\citet{2008IAUC.8972....1N};\n59 = From AAVSO light curve;\n60 = \\citet{2008IAUC.8972....2N};\n61 = \\citet{2000A&A...355L...9K};\n62 = \\citet{1999IAUC.7324....1F};\n63 = \\citet{2003A&A...404..997I};\n64 = \\citet{2001IAUC.7665....2B};\n65 = \\citet{2005IAUC.8540....1P};\n66 = \\citet{2006JBAA..116..320B};\n67 = Average of our SMARTS spectra;\n68 = \\citet{2005IAUC.8640....2P};\n69 = \\citet{2007ApJ...669.1150L};\n70 = \\citet{1994ApJ...431L..47D};\n71 = \\citet{1996AJ....111..869A};\n72 = \\citet{1997A&A...318..908C};\n73 = \\citet{2005IAUC.8483....1N};\n74 = \\citet{2005IAUC.8484....1N};\n75 = \\citet{2005IAUC.8524....2R};\n76 = \\citet{2006CBET..466....2W};\n77 = \\citet{2008A&A...479L..51K};\n78 = \\citet{2006ATel..792....1C};\n79 = \\citet{2006IAUC.8702....2S};\n80 = \\citet{2009ATel.2137....1B};\n81 = \\citet{2008AJ....136.1815L};\n82 = \\citet{2007IAUC.8821....1N};\n83 = \\citet{2009AAS...21349125L};\n84 = \\citet{2007IAUC.8848....1M};\n85 = \\citet{2008ATel.1723....1S};\n86 = \\citet{2009AN....330...77P};\n87 = \\citet{WoodStar11};\n88 = \\citet{2008IAUC.8927....2N};\n89 = \\citet{2008IAUC.8936....2R};\n90 = 
\\citet{2009ATel.2157....1S};\n91 = \\citet{2008IAUC.8934....1N};\n92 = \\citet{2008ATel.1485....1T};\n93 = \\citet{2008ATel.1475....1T};\n94 = \\citet{2008IAUC.8938....2R};\n95 = \\citet{2008ATel.1514....1B} but Darnley et al. (2011, submitted) do not find convincing evidence of this period in their data.;\n96 = \\citet{2008CBET.1379....1H};\n97 = \\citet{1998IAUC.6941....1N};\n98 = \\citet{2000ApJ...541..791L};\n99 = \\citet{2000ApJ...541..791L};\n100 = \\citet{1998IAUC.7049....1R};\n101 = \\citet{2009AJ....138.1230P};\n102 = \\citet{2002IAUC.7809....1R};\n103 = \\citet{2005PASA...22..298A};\n104 = \\citet{2006IAUC.8671....1P};\n105 = \\citet{2006IAUC.8710....2R};\n106 = \\citet{2006IAUC.8700....1W};\n107 = \\citet{2006IAUC.8730....5L};\n108 = \\citet{2008MNRAS.387..344M};\n109 = \\citet{2007IAUC.8824....1N};\n110 = \\citet{2007IAUC.8846....2R};\n111 = \\citet{2008IAUC.8948....2A};\n112 = \\citet{2008IAUC.8948....3N};\n113 = \\citet{2008IAUC.8956....1R};\n114 = \\citet{2008IAUC.8998....3S};\n115 = \\citet{2008IAUC.8950....1N};\n116 = \\citet{2008CBET.1448....1H};\n117 = \\citet{2008IAUC.8957....1R};\n118 = \\citet{2009IAUC.9064....1N};\n119 = \\citet{2010MNRAS.tmp.1484M};\n120 = \\citet{1992IAUC.5422....1C};\n121 = \\citet{1996ApJ...466..410O};\n122 = \\citet{1996MNRAS.279..280S};\n123 = \\citet{2001MNRAS.328..159W};\n124 = \\citet{2005IAUC.8497....1L};\n125 = \\citet{1999IAUC.7176....1L};\n126 = \\citet{2002A&A...390..155D};\n127 = \\citet{2006AJ....131.2628B};\n128 = \\citet{2010CBET.2199....1H};\n129 = \\citet{2011MNRAS.410L..52M};\n130 = \\citet{2011A&A...527A..98S};\n131 = \\citet{1990MNRAS.242..653M};\n132 = \\citet{2007CBET.1029....1M};\n133 = \\citet{2008Ap&SS.315...79P};\n134 = \\citet{2007IAUC.8862....2B};\n135 = \\citet{2007IAUC.8883....1L};\n136 = \\citet{2010MNRAS.407L..21R};\n137 = \\citet{2007CBET.1183....1M};\n138 = \\citet{2010NewA...15..170P};\n139 = \\citet{2007CBET.1181....1N};\n140 = \\citet{2008IAUC.8936....3R};\n141 = 
\\citet{1998IAUC.6847....2N};\n142 = \\citet{2001MNRAS.328.1169L};\n143 = \\citet{1998IAUC.6848....1D};\n144 = \\citet{2008MNRAS.387..289L};\n145 = \\citet{2001IAUC.7590....2K};\n146 = \\citet{2001IBVS.5138....1B};\n147 = \\citet{2001IAUC.7589....2A};\n148 = \\citet{2008AstL...34..249B};\n149 = \\citet{2002IAUC.7975....2K};\n150 = \\citet{2003MNRAS.344..521M};\n151 = \\citet{2007AAS...210.0402V};\n152 = \\citet{2003IAUC.8176....3W};\n153 = \\citet{2003IAUC.8126....2L};\n154 = \\citet{2005A&A...429..599C};\n155 = \\citet{2003IAUC.8132....2K};\n156 = \\citet{2006MNRAS.371..459D};\n157 = \\citet{2005IAUC.8607....1S};\n158 = \\citet{2006MNRAS.369.1755M};\n159 = \\citet{2005IAUC.8638....1P};\n160 = \\citet{2006A&A...452..567M};\n161 = \\citet{2005IAUC.8617....1P};\n162 = \\citet{2005IAUC.8644....1M};\n163 = \\citet{2005IAUC.8644....1M};\n164 = \\citet{2004IAUC.8306....1N};\n165 = \\citet{2006A&A...459..875E};\n166 = \\citet{2004IAUC.8368....3L};\n167 = \\citet{2005IAUC.8502....1S};\n168 = \\citet{2005IAUC.8501....2K};\n169 = \\citet{2005IAUC.8523....4R};\n170 = \\citet{2005IAUC.8559....2G};\n171 = \\citet{2008A&A...478..815D};\n172 = \\citet{2005IAUC.8559....1L};\n173 = \\citet{2008ApJ...675L..93S};\n174 = \\citet{2005IAUC.8579....3R};\n175 = \\citet{2007CBET.1010....1M};\n176 = \\citet{2008NewA...13..557P};\n177 = \\citet{2007CBET.1006....1I};\n178 = \\citet{2007IAUC.8884....2R};\n179 = \\citet{2009AAS...21442806R};\n180 = \\citet{2008CBET.1352....1M};\n181 = \\citet{2008IAUC.8948....1R};\n182 = \\citet{2009IAUC.9061....1N};\n183 = \\citet{2004IAUC.8445....3S};\n184 = \\citet{2005IBVS.5638....1S};\n185 = \\citet{Heltonthesis};\n186 = \\citet{2007IAUC.8895....1P};\n187 = \\citet{2007IAUC.8896....2N};\n188 = \\citet{2008IAUC.8911....2N};\n189 = \\citet{2009MNRAS.397..979W};\n190 = \\citet{2008A&A...482L...1R};\n191 = \\citet{2008IAUC.8999....1W};\n192 = \\citet{1996A&A...315..166M};\n193 = \\citet{1996IAUC.6365....2I};\n194 = \\citet{2008AJ....135.1328N};\n195 = 
\\citet{2007AstBu..62..125G};\n196 = \\citet{2009AJ....138.1090L};\n197 = \\citet{1991IAUC.5222....1S};\n198 = \\citet{1996MNRAS.282..563V};\n199 = \\citet{1994ApJ...420..830S};\n200 = \\citet{1992ApJ...384L..41W};\n201 = \\citet{2009A&A...506.1309R};\n202 = \\citet{2009A&A...506.1309R};\n203 = Standard LMC value.\n}\n\\end{deluxetable}\n\nAlthough P-Cygni absorption profiles provide the best values for\nthe early velocities of the ejecta, they are not nearly as well reported \nin the literature as FWHMs of Balmer lines near maximum. Since nearly \nevery nova has a FWHM citation as part of the spectroscopic confirmation of \nthe initial visual detection, they are used as the expansion velocity\nproxy. Expansion velocities provide another way to classify a nova since \nmore massive WDs eject less mass at greater velocities than low-mass WDs. \nThis characteristic can be preferable to t$_2$ since the rate of decline \ncan be difficult to determine for novae with secondary maxima, dust \nformation, or poorly sampled early light curves. Both FWHM \nand t$_2$ are used as simple proxies for the WD mass.\\footnote{While many other parameters also affect these observables, such as \nthe accretion rate, these parameters are generally not known for specific\nnovae and thus their contributions to the secular evolution cannot be\ndetermined.}\nThe N$_H$ values were \nobtained from the HEASARC N$_H$ \ntool\\footnote{http:\/\/heasarc.gsfc.nasa.gov\/cgi-bin\/Tools\/w3nh\/w3nh.pl}\nusing the averaged LAB \\citep{2005A&A...440..775K} and DL \n\\citep{1990ARA&A..28..215D} maps within a 0.5$\\arcdeg$ area around each nova.\n\n\\begin{figure*}[htbp]\n\\plotone{newt2vsfwhm.ps}\n\\caption{The t$_2$ vs. FWHM near maximum for the novae in the sample. Filled\ncircles are known RNe. Half filled circles are suspected\nRNe based on their characteristics. Circles with asterisks inside\nindicate dusty novae. 
The distribution histograms for t$_2$ and the FWHM \nare also shown in the secondary graphs. The dotted lines in the t$_2$ \nhistogram show the boundaries between the \"very fast\", \"fast\", \"moderately \nfast\", and \"slow\" light curve classifications \\citep{Warner2008}. The \nmajority of our novae belong to the \"fast\" or \"very fast\" classifications.\n\\label{t2vsfwhm}}\n\\end{figure*}\n\nFigure \\ref{t2vsfwhm} shows the t$_2$ and FWHM distribution for all the novae \nwith both values. The 7 filled circles are known RNe while the 3\nhalf filled circles are suspected RNe. Dusty novae have an asterisk\ninside their circle symbols. As expected, the RNe tend \ntoward large FWHM and fast t$_2$ times. In this sample the dusty novae \nare scattered throughout the FWHM-t$_2$ phase space showing no particular \npreference for any type of nova. Figure \\ref{t2vsfwhm} also shows\nthat there is a wide dispersion in the FWHM-t$_2$ relation, e.g. novae\nwith t$_2$ of 10 days have FWHM values between 1000 and 3000 km s$^{-1}$.\n\nThe top panel of Figure \\ref{t2vsfwhm} shows the distribution of t$_2$ in \none day bins. Using the light curve classifications of \\citet{Warner2008},\nthe sample is heavily weighted toward very fast (t$_2$ $<$ 10 days) and \nfast (11 $\\leq$ t$_2$ $\\leq$ 25 days) novae. These are intrinsically more luminous,\nwith a larger rise from quiescence to maximum light. The peak is \nat 8 days, with a median t$_2$ of 9 days. There are only 5 novae \nin the entire sample with t$_2$ times greater than 50 days: \nIM Nor, LMC 2005, QY Mus, V723 Cas, and V5558 Sgr. The far\nright panel in Figure \\ref{t2vsfwhm} gives the distribution for \nFWHM in 500 km s$^{-1}$ bins. The majority of the novae in the sample\nhave low expansion velocities with the peak in the 1500-2000 km s$^{-1}$ bin. \nThe median FWHM is 1800 km s$^{-1}$. 
There are only 5 novae with \nFWHM $\\geq$ 4000 km s$^{-1}$ in the sample and all but V4643 Sgr \nare RNe or suspected RNe.\n\nThe X-ray sample is biased toward fast novae for multiple reasons. \nThe bulk of the observations are from {\\it Swift}, and {\\it Swift}\\ has only been \noperational for 5 years. Fast systems, like the RN RS Oph, will rise \nand fall on time-scales of months (see Section \\ref{var}), while slow \nnovae, such as V1280 Sco, have not yet had sufficient time to evolve \ninto soft X-ray sources (and may never do so) and are therefore underrepresented. \nSlow novae also require more observing time \nto be monitored over their lifetime, particularly if the same coverage of\nthe X-ray evolution is desired. Allocations of {\\it Swift}\\ observing time\nover multiple cycles are difficult to justify and execute unless a compelling\nscientific rationale is forthcoming, such as unusual or significant \nspectral variations (see Section \\ref{fex}), count rate oscillations, \nabundance pattern changes, etc.\nSlow and old novae (many tens of months post-outburst)\nare generally sampled once a year in part due to their slow evolution.\nHowever, the main reason the sample depicted in Fig. \\ref{t2vsfwhm}\nfavors fast novae is the strong \nselection effect toward outbursts on high-mass WDs. While high-mass WDs, \n{\\it e.g.} $\\geq$ 1.2 M$_{\\odot}$, are relatively rare in the field,\nthe time-scale between outbursts is significantly shorter than for low-mass \nWDs, meaning they dominate any observational sample\n\\citep{1994ApJ...425..797L}. Finally, high-mass WDs give rise to more\nluminous outbursts, and the {\\it Swift}\\ Nova-CV group has a V $<$ 8 magnitude\nselection criterion, which preferentially selects brighter sources.\n\n\\begin{figure*}[htbp]\n\\plotone{dusthistogram.ps}\n\\caption{The distribution of dusty novae in the sample. 
The cross-hashed\nregion is for novae that showed strong dust characteristics; however, the\npresence of dust in these systems has not been spectrophotometrically \ncorroborated at IR wavelengths. The majority of novae in our X-ray\nselected sample (Table \\ref{chartable}) did not form dust.\n\\label{dusthistogram}}\n\\end{figure*}\n\nFigure \\ref{dusthistogram} shows the distribution of our sample \n(Table \\ref{chartable}) with respect to dust formation frequency. \nOnly $\\sim$ 16\\% of the novae in the sample had clear indications\nin the literature of dust formation in the ejecta. The dust formation \nfrequency increases to 31\\% when including the 7 novae where dust \nlikely formed based on characteristics of the visual light curve but has not \nyet been confirmed by a measured SED excess in the thermal- and mid-infrared, \n{\\it e.g.} the \"maybe\"s in column 10 of Table \\ref{chartable}.\nThis is consistent with the dust formation frequency expected for the general \npopulation of novae, which ranges from 18\\% to $\\gtrsim 40$\\%. The lower limit\nis set by \\citet{2010AJ....140...34S} who find that 18\\% of 93 well sampled \nAmerican Association of Variable Star Observers (AAVSO) novae have\nthe large dip in their visual light curves indicative of strong \ndust formation \\citep[see][]{1998PASP..110....3G}. The upper limit\nis from a recent \\textit{Spitzer} survey of IR bright novae that finds\nmany novae have weak dust emission signatures with little or no dip in the \nvisual light curve, especially at late epochs (many 100s of days post-outburst) \nwhen emission from the dust envelope is a few $\\mu$Jy\n\\citep{WoodStar11,Heltonthesis}.\n\nIn order to obtain the best X-ray and UV data, it is desirable to target\nnovae with low extinction along the line of sight. However, determining\nthe extinction early in the outburst is challenging. N$_H$ maps are crude\nsince they sample large regions of the sky. 
The region size used to \nderive the N$_H$ values in Table \\ref{chartable} was 0.5$^{\\arcdeg}$.\nTypically just a handful of sight lines are available in regions of this\nrelatively small size. The problem is exacerbated in inhomogeneous areas \nlike the Galactic plane where most novae are found. The \nextinction maps of \\citet{1998ApJ...500..525S} can be used to obtain E(B-V) \nsince their spatial resolution is significantly higher. However, the\n\\citet{1998ApJ...500..525S} maps suffer from large errors in the Galactic\nplane, $|b| <$ 5$^{\\arcdeg}$. Maps also give the total column along the \nGalactic line of sight, with no information on its distribution with \ndistance, and thus provide only an upper limit. E(B-V) can also be determined by\nindirect methods but these require either high resolution spectroscopy \nto measure ISM absorption lines \n\\citep[{\\it e.g.} \\ion{Na}{1}~D $\\lambda$5890\\AA;][]{munari97}, \nthe relative strengths of optical and near-IR \\ion{O}{1} lines\n\\citep{rudy89}, or extensive B and V photometry during\nthe early outburst \\citep[{\\it e.g.} intrinsic (B-V) at V$_{max}$ or\nt$_2$;][]{1987A&AS...70..125V}. Finally, \nE(B-V) estimates can be affected by other factors occurring during \nthe outburst such as dust formation or intrinsic absorption from the ejecta\nwhile the expanding material is still dense.\n\nIt is therefore desirable to check \nthat the general relationship between N$_H$ and E(B-V) holds for novae. \nFigure \\ref{ebmvnh} shows N$_H$ versus E(B-V) for the novae in this \npaper, with the dotted line showing the average Milky Way extinction law, \nE(B-V) = N$_H$\/4.8$\\times$10$^{21}$ cm$^{-2}$ \\citep{1978ApJ...224..132B}. \nThe circles represent novae with Galactic latitudes, \n$|b| \\geq$ 5$^{\\arcdeg}$ while pluses are novae found within the disk,\n$|b| <$ 5$^{\\arcdeg}$. Filled circles are Magellanic novae. Error bars are \nshown for all sources when available in the literature. 
There is good \nagreement with the relationship for novae with E(B-V) $\\leq$0.6 and \nN$_H$ $\\leq$ 2.9$\\times$10$^{21}$ cm$^{-2}$, with a correlation coefficient of 0.85. \nThese are primarily novae found outside of the Galactic disk and thus\nfit the relationship well. Novae with these low extinction values and\ncolumn densities are ideal for soft X-ray detection. The relationship \nbreaks down at larger values, with a lower correlation coefficient of 0.64 \nfor the entire sample, which is dominated by novae embedded within\nthe Galactic disk. Novae with E(B-V) values greater than 1.5 generally \nmake poor SSS candidates due to the large extinction.\n\nThe maximum magnitude vs. rate of decline relationship of\n\\citet{1995ApJ...452..704D} provides an estimate of the distances\nfor the Galactic novae in Table \\ref{chartable}. The distance estimates \nrange from the relatively nearby V1280 Sco ($\\sim$ 1 kpc) \nto V2576 Oph ($\\sim$ 28 kpc) on the other side of the Galaxy.\nThe median Galactic distance from this relationship is 5.5 kpc\nfor this sample.\n\n\\begin{figure*}[htbp]\n\\plotone{ebmvnh.ps}\n\\caption{Local N$_H$ value versus the estimated E($B-V$). The values\nare from Table~\\ref{chartable}. The dotted line shows the E(B-V) vs. N$_H$ \nrelationship of \\citet{1978ApJ...224..132B}. Circles are $|b| \\geq$ \n5$^{\\arcdeg}$ novae, the $|b| <$ 5$^{\\arcdeg}$ novae are shown as pluses,\nand filled circles are Magellanic novae. Error bars are given when\navailable.\n\\label{ebmvnh}}\n\\end{figure*}\n\n\\subsection{X-ray evolution\\label{xrayevol}}\n\nAll the available {\\it Swift}\\ XRT and UVOT data of novae in the public archive\nup to 2010 July 31 are presented in Table \\ref{fullswift}. The data were\nprimarily obtained from pointed observations, but a few serendipitous \nobservations are also included. The full data \nset is available in the electronic edition with only V1281 Sco shown as an\nexample here. 
The columns provide the {\\it Swift}\\ observation identification,\nexposure time, day of the observation from visual maximum (see Table \n\\ref{chartable}), XRT total (0.3-10 keV) count rate, the Hard (1-10 keV)\nto Soft (0.3-1 keV) hardness ratio, HR1, the Soft and Hard band count rates,\nthe (Hard-Soft)\/(Hard+Soft) hardness ratio, HR2, and the uvw2 \n($\\lambda_c$ = 1928\\AA), uvm2 (2246\\AA), uvw1 (2600\\AA), $u$ (3465\\AA), \n$b$ (4392\\AA), and $v$ (5468\\AA) UVOT filter magnitudes if available. \nThe UVOT magnitudes do not include the systematic photometric calibration \nerrors from \\citet[][Table 6]{2008MNRAS.383..627P}.\n\nThere is one row in the table per observation ID; however, this is not a \nfixed unit of time: most observation IDs are less than 0.13 days \nin duration, and the median exposure time is 1.76 ks. For this exposure time, \nour 3$\\sigma$ detection limit is 0.0037 counts s$^{-1}$ (0.3-10 keV, corrected \nfor typical PSF coverage). This corresponds to an unabsorbed flux limit in \nthe same band, assuming absorption by N$_H$ = 3$\\times$ 10$^{20}$ cm$^{-2}$, of \n1.5$\\times$ 10$^{-13}$ and 2.0$\\times$ 10$^{-13}$ erg cm$^{-2}$ s$^{-1}$ for\na 5 keV optically thin thermal spectrum and a 50 eV blackbody spectrum, \nrespectively.\n\nTo create a self-consistent dataset for Table~\\ref{fullswift} we used the \nsoftware described by \\citet{2009MNRAS.397.1177E,2007A&A...469..379E}.\nThis extracts source and background event lists from the data (using an \nannular source region where necessary to eliminate pile up), and then bins \nthese data to form the light curve, applying corrections for pile up, bad \npixels, and the finite size of the source region as necessary. \nSince novae tend to be soft, we chose the energy bands for the\nhardness ratio to be 0.3--1 keV and 1--10 keV. 
There is also evidence\nthat, for very soft sources, pile up occurs at lower count rates than for\nhard sources; we thus set the threshold at which pile up is considered a\nrisk to be 0.3 (80) count s$^{-1}$ in PC (WT) mode \\citep[the defaults from]\n[are 0.6 and 150 respectively]{2007A&A...469..379E}.\n\nWe chose to group the data in one bin per {\\it Swift}\\ observation. In the\ncurrent version of the online software (for this binning mode), background\nsubtraction is only carried out using Gaussian statistics, and does not\nproduce upper limits if this results in a non-detection. We thus took\nthe `detailed' light curves produced by the web tools, which include the\nnumber of measured counts in each bin, the exposure time, and the\ncorrection factor (accounting for pile up etc.). Following the approach\nof \\citet{2007A&A...469..379E} for other binning methods, for any bins with\nfewer than 15 detected source counts we used the Bayesian method of\n\\citet{1991ApJ...374..344K} to determine whether the source was\ndetected at the 3$\\sigma$ level. If this was not the case, a 3$\\sigma$\nupper limit was produced using this Bayesian method; otherwise, a data\npoint with a standard 1$\\sigma$ uncertainty was produced using the \n\\citet{1991ApJ...374..344K} approach.\n\nThe hardness ratios were always calculated using Gaussian statistics, \nunless one band had zero detected source photons: in this case no ratio\ncould be produced. 
The hardness ratios were defined as HR1 = H\/S and \nHR2 = (H-S)\/(H+S) where H = 1.0-10 keV and S = 0.3-1.0 keV.\n\n\\begin{deluxetable}{cccccccccccccc}\n\\tablecaption{{\\it Swift}\\ XRT\/UVOT data for novae in the archive\\label{fullswift}}\n\\tablewidth{0pt}\n\\tabletypesize{\\scriptsize}\n\\rotate\n\\tablecolumns{14}\n\\tablehead{\n\\colhead{ObsID} & \\colhead{Exp} & \\colhead{Day\\tablenotemark{a}} & \n\\colhead{CR\\tablenotemark{b}} & \n\\colhead{HR1\\tablenotemark{c}} & \\colhead{Soft} &\n\\colhead{Hard} & \\colhead{HR2\\tablenotemark{c}} &\n\\colhead{uvw2} & \\colhead{uvm2} & \\colhead{uvw1} & \\colhead{u} & \n\\colhead{b} & \\colhead{v} \\\\ \n\\colhead{} & \\colhead{(ksec)} & \\colhead{(d)} & \\colhead{(ct s$^{-1}$)} &\n\\colhead{} & \\colhead{(ct s$^{-1}$)} & \\colhead{(ct s$^{-1}$)} &\n\\colhead{} & \\colhead{(mag)} & \\colhead{(mag)} & \\colhead{(mag)} & \n\\colhead{(mag)} & \\colhead{(mag)} & \\colhead{(mag)}\n}\n\\startdata\n\\cutinhead{V1281\\,Sco}\n00030891001 & 3.87& 2.95&$<$ 0.0030 & \\nodata & \\nodata&\\nodata&\\nodata&\\nodata&\\nodata &\\nodata&\\nodata&\\nodata&\\nodata \\\\\n00037164001 & 5.24& 338.66& 0.1634$^{+0.0079}_{-0.0079}$ & 0.0090$\\pm$0.0074 & 0.1619& 0.0015 &-0.98 &\\nodata&\\nodata &\\nodata&\\nodata&\\nodata&\\nodata \\\\\n00037164002 & 3.45& 344.07& 0.2429$^{+0.0120}_{-0.0120}$ & 0.0062$\\pm$0.0057 & 0.2414& 0.0015 &-0.99 &19.50 &19.64 &18.20 &\\nodata&\\nodata&\\nodata \\\\\n00037164003 & 4.24& 351.05& 0.6376$^{+0.0282}_{-0.0282}$ & 0.0047$\\pm$0.0053 & 0.6346& 0.0030 &-0.99 &20.32 &\\nodata &\\nodata&\\nodata&\\nodata&\\nodata \\\\\n00037164005 & 1.66& 361.10& 0.2727$^{+0.0185}_{-0.0185}$ & 0.0012$\\pm$0.0081 & 0.2723& 0.0003 &-1.00 &20.43 &\\nodata &\\nodata&\\nodata&\\nodata&\\nodata \\\\\n00037164006 & 2.89& 366.41& 0.2284$^{+0.0129}_{-0.0129}$ & 0.0097$\\pm$0.0089 & 0.2262& 0.0022 &-0.98 &20.42 &\\nodata &\\nodata&\\nodata&\\nodata&\\nodata \\\\\n00037164007 & 2.02& 432.69& 0.0853$^{+0.0079}_{-0.0079}$ & 
0.0002$\\pm$0.0063 & 0.0853& 0.0000 &-1.00 &20.22 &\\nodata &19.11 &\\nodata&\\nodata&\\nodata \\\\\n00090248001 & 4.68& 819.99&$<$ 0.0013 & \\nodata & \\nodata&\\nodata&\\nodata&20.32 &$>$20.56&20.07 &\\nodata&\\nodata&19.30 \\\\\n\\enddata\n\\tablenotetext{a}{Days after visual maximum, see Table \\ref{chartable}.}\n\\tablenotetext{b}{Corrected for PSF losses and bad columns.\nThe 3$\\sigma$ upper limits are given when there is no detection \nat better than 3$\\sigma$.}\n\\tablenotetext{c}{Hardness ratios defined as HR1=H\/S and HR2=(H-S)\/(H+S) with \nHard(H)=1-10\\,keV and Soft(S)=0.3-1\\,keV.}\n\\tablecomments{Table \\ref{fullswift} is published in its entirety in the \nelectronic edition of the {\\it Astrophysical Journal}. A portion is shown \nhere for guidance regarding its form and content.}\n\\end{deluxetable}\n\n\\begin{figure*}[htbp]\n\\includegraphics[angle=90,scale=0.60]{sssgood.ps}\n\\caption{The X-ray epochs for {\\it Swift}\\ sources with the best SSS\nphase coverage.\nThe novae are arranged by increasing optical emission line FWHM with \nthe FWHM values shown to either the left or right of the source. \"U\" is used \nfor novae with unknown FWHM velocities. Refer to Table \\ref{colordescriptions}\nfor a summary of the color coding. \\label{sssgood}}\n\\end{figure*}\n\n\\begin{figure*}[htbp]\n\\includegraphics[angle=90,scale=0.60]{sssbad.ps}\n\\caption{Same as Figure \\ref{sssgood} except for the {\\it Swift}\\ sources without\nsignificant SSS detections. \\label{sssbad}}\n\\end{figure*}\n\nFigures \\ref{sssgood} and \\ref{sssbad} show the X-ray observations of the \n{\\it Swift}\\ novae as a function of time since visual maximum. The novae are \norganized from bottom to top in increasing FWHM values (Table \n\\ref{chartable}), with the FWHM alternating on the left and right sides\nof the figures. Novae with unknown FWHM are labeled \"U\" and placed\nat the bottom. 
Figure \\ref{sssgood} shows the novae with confirmed SSS \nemission while Figure \\ref{sssbad} shows the\nnovae with no current SSS detections. Note that some novae in \nFigure \\ref{sssbad}, particularly the slowly evolving ones V5558 Sgr \nand V2468 Cyg,\nmay eventually evolve into the SSS phase. Figure \\ref{sssother} is the same \nbut for well studied SSS novae observed prior to the launch of {\\it Swift};\nthese novae typically have much poorer observational coverage.\nThe black stars are the individual {\\it Swift}\\ observations. \nThe figures also contain supplemental observations obtained with other\nX-ray facilities, {\\it Chandra}, {\\it XMM}, {\\it ASCA}, {\\it RXTE}, \n{\\it BeppoSax}, and {\\it ROSAT}, which are shown as circles, downward pointing \ntriangles, upward pointing triangles, yellow squares, diamonds, and red \nsquares, respectively. The colors associated with each bar give the \ntype of emission observed based on the hardness ratio. Red bars indicate\ntime intervals when the HR2 of an individual source was $\\lesssim -0.3$ \nand the relative uncertainty was $<$ 5\\%; the photons in this\ncase are primarily soft and these regions are associated with the SSS phase.\nOrange bars designate observations with the same hardness ratio but \nlarger errors. Yellow\nmarks gaps between observations during which the hard\/soft change occurred.\nThe orange and yellow regions represent the maximum limits of the soft \nphase since the transition occurred at some point during these times. \nSection \\ref{sssphase} describes the SSS phase in greater detail. \nGreen regions show times when the overall detected spectrum was \nhard, HR2 $>$ -0.3, and Section \\ref{xrayevol} discusses this phase. \nFinally, blue represents \ntimes of non-detections. 
Table \\ref{colordescriptions} also gives the \ncolor descriptions for Figures \\ref{sssgood} - \\ref{sssother}.\n\n\\begin{deluxetable}{lll}\n\\tablecaption{Figures \\ref{sssgood} - \\ref{sssother} detection\ndefinitions, descriptions, and symbol legend\\label{colordescriptions}}\n\\tablewidth{0pt}\n\\tabletypesize{\\scriptsize}\n\\tablehead{\n\\colhead{Color} & \\colhead{HR2\\tablenotemark{a} and error} & \n\\colhead{X-ray emission} \n}\n\\startdata\nBlue & \\nodata & Undetected \\\\\nGreen & $>$-0.3 & Hard \\\\\nYellow & \\nodata & Transition between Green and Orange\/Red classification. \\\\\nOrange & $\\lesssim$-0.3 and $>$ 5\\% error & Soft but with large uncertainty, highly variable during initial rise. \\\\\nRed & $\\lesssim$-0.3 and $<$ 5\\% error & Soft X-rays \\\\\n\\hline\n\\multicolumn{3}{c}{Symbol legend} \\\\\n\\hline\n{\\it Swift} & stars & \\\\\n{\\it Chandra} & circles & \\\\\n{\\it XMM} & downward pointing triangles & \\\\\n{\\it ASCA} & upward pointing triangles & \\\\\n{\\it RXTE} & yellow squares & \\\\\n{\\it BeppoSax} & diamonds & \\\\\n{\\it ROSAT} & red squares & \\\\\n\\enddata\n\\tablenotetext{a}{HR2=(H-S)\/(H+S) with\nHard(H)=1-10\\,keV and Soft(S)=0.3-1\\,keV.}\n\\end{deluxetable}\n\nSeveral trends are evident in Figures \\ref{sssgood} - \\ref{sssother}.\nAs the FWHM decreases, the novae in the sample become SSS later and remain \nin the SSS phase longer. This behavior is consistent with \nlarger expansion velocity novae originating on higher mass WDs\n\\citep{2009cfdd.confE.199S}. In addition, the early, hard detections \nare generally only observed in the high FWHM novae. The trends evident in \nFigures \\ref{sssgood} and \\ref{sssother} allow for a straightforward \ninterpretation of Figure \\ref{sssbad}: fast novae (loci at the top \nof the panel) are infrequently observed in the SSS phase as \nearly X-ray observations of these systems are often absent. 
The slower \nnovae at the bottom of the figure either have not been followed with \nsufficient temporal coverage late in their evolution, have \nnot yet reached the SSS phase, or ceased nuclear burning before\ntheir ejecta cleared sufficiently to observe SSS emission.\n\nA note of caution about using Figures \\ref{sssgood} and \\ref{sssbad} to\ndetermine nuclear burning time scales is appropriate. These figures \nare based only on the strength and error of HR2 as provided in \nTable \\ref{fullswift}, which uses a fixed hardness threshold for\nall novae in the table.\nNovae that have significant intrinsic hard emission, such as V407 Cyg,\nmay not be classified as SSSs by this definition even though they \nhave soft X-ray light curves typical of nuclear burning and cessation\non the WD (see Section \\ref{v407cygSSS}). \nHigh extinction will have a similar effect. The red regions\ngenerally also overestimate the duration of the SSS since that phase is\nalso defined by a tremendous rise in the soft X-ray count rate.\nSections \\ref{S:ton} and \\ref{S:toff} provide the determination of nuclear \nburning time scales for the X-ray nova sample.\n\n\\begin{figure*}[htbp]\n\\includegraphics[angle=90,scale=0.60]{non_swift.ps}\n\\caption{Same as Figure \\ref{sssgood} but for pre-{\\it Swift}\\ \nSSS novae.\\label{sssother}}\n\\end{figure*}\n\nFor completeness, Table \\ref{othertable} gives a summary of all the \npublicly available, pointed {\\it XMM}\\ and {\\it Chandra}\\ nova observations.\nThe columns are the nova name, the observational \nidentifier, the exposure time, Julian date and day after visual \nmaximum of the observation, and a short comment on the result of \nthe observation. The instrument setup is also given in the 2nd \ncolumn for the {\\it Chandra}\\ observations. In some cases this data set \nprovides important information on the SSS status of some sources\ndue to missing or weak {\\it Swift}\\ detections. 
\nAn example would be the {\\it XMM}\\ observations\nof V574 Pup which confirms that there was a strong SSS during the \ninterval between 2005 and 2007 when there were no {\\it Swift}\\ data \n\\citep{Heltonthesis}.\n\n\\begin{deluxetable}{llrcrl}\n\\tablecaption{Pointed Chandra and XMM observations of recent novae \n\\label{othertable}}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{Name} & \\colhead{Obs ID\\tablenotemark{a}} & \n\\colhead{Exp (ks)} & \\colhead{JD} & \\colhead{Days\\tablenotemark{b}} & \n\\colhead{Result\\tablenotemark{c}}\n}\n\\startdata\nCI Aql & 2465 (ACIS-S) & 2.2 & 2452062 & 396 & Faint (1) \\\\\n & 2492 (ACIS-S) & 19.9 & 2452123 & 457 & Faint (1) \\\\\n & 0652760201 & 26.9 & 2455577 & 3912 & NPA \\\\\nCSS081007\\tablenotemark{d} & 9970 (HRC-S\/LETG) & 35.2 & 2454818 & 222:& SSS (2) \\\\ \nIM Nor & 3434 (ACIS-S) & 5.6 & 2452317 & 28 & Not detected (3) \\\\ \n & 2672 (ACIS-S) & 4.9 & 2452425 & 136 & Faint and hard (3) \\\\\nKT Eri & 12097 (HRC-S\/LETG)& 15.2 & 2455219 & 69 & SSS (4) \\\\\n & 12100 (HRC-S\/LETG)& 5.1 & 2455227 & 77 & SSS \\\\\n & 12101 (HRC-S\/LETG)& 5.1 & 2455233 & 83 & SSS \\\\\n\t & 12203 (HRC-S\/LETG)& 5.1 & 2455307 & 157 & SSS \\\\\nNova LMC 2000 & 0127720201 & 16.3 & 2451751 & 14 & Faint and hard (5) \\\\\n & 0127720301 & 10.0 & 2451785 & 48 & Hard (5) \\\\\n\t & 0127720401 & 10.5 & 2451998 & 291 & Not detected (5) \\\\\nNova LMC 2009a& 0610000301 & 37.7 & 2454957 & 90 & SSS \\\\\n & 0610000501 & 58.1 & 2455032 & 165 & SSS \\\\\n & 0604590301 & 31.9 & 2455063 & 196 & SSS \\\\ \n & 0604590401 & 51.2 & 2455097 & 230 & SSS \\\\ \nNova SMC 2005 & 0311590601 & 11.6 & 2453807 & 219 & Not detected \\\\\nU Sco & 12102 (HRC-S\/LETG)& 23.2 & 2455241 & 17 & SSS (6) \\\\\n & 0650300201 & 63.8 & 2455247 & 23 & SSS (7) \\\\\n & 0561580301 & 62.8 & 2455259 & 35 & SSS (7) \\\\\nV1065 Cen & 0555690301 & 9.4 & 2454837 & 714 & Not detected \\\\\nV1187 Sco & 4532 (ACIS-S) & 5.2 & 2453305 & 96 & $<$ 2keV + NVII line \\\\\n & 4533 (HRC-S\/LETG) & 
26.1 & 2453401 & 181 & Not detected \\\\\n & 0404431101 & 4.7 & 2454161 & 941 & Not detected \\\\\n & 0404430301 & 9.4 & 2454161 & 941 & Not detected \\\\\n & 0555691001 & 7.1 & 2454904 & 1684 & Not detected \\\\ \nV1280 Sco & 0555690601 & 4.5 & 2454903 & 756 & 1-keV emission \\\\\nV2361 Cyg & 0405600101 & 11.1 & 2453868 & 456 & Not detected \\\\\n & 0405600401 & 14.9 & 2454028 & 616 & Not detected \\\\\nV2362 Cyg & 0506050101 & 9.9 & 2454225 & 394 & thermal plasma (8) \\\\\n & 0550190501 & 27.9 & 2454821 & 990 & Very weak \\\\\nV2467 Cyg & 0555690501 & 7.0 & 2454780 & 605 & SSS \\\\\nV2487 Oph & 0085580401 & 8.3 & 2451965 & 986 & thermal plasma (9)\\\\\n & 0085581401 & 8.1 & 2452157 & 1178 & thermal plasma (9)\\\\\n & 0085581701 & 7.6 & 2452331 & 1352 & thermal plasma (9)\\\\\n & 0085582001 & 8.5 & 2452541 & 1562 & thermal plasma + Fe K$\\alpha$ line (9)\\\\\nV2491 Cyg & 0552270501 & 39.3 & 2454606 & 39 & SSS (10) \\\\\n & 0552270601 & 30.0 & 2454616 & 49 & SSS (11) \\\\\nV2575 Oph & 0506050201 & 14.9 & 2454347 & 569 & Not detected \\\\ \nV2576 Oph & 0506050301 & 11.5 & 2454376 & 544 & Not detected \\\\\nV2615 Oph & 0555690401 & 9.7 & 2454922 & 735 & Not detected \\\\\nV351 Pup & 0304010101 & 51.8 & 2453525 & 4908 & Faint \\\\\nV458 Vul & 0555691401 & 11.7 & 2454780 & 459 & weak 1-keV emission \\\\\nV4633 Sgr & 0085580301 & 10.2 & 2451828 & 933 & weak (12) \\\\\n & 0085581201 & 7.3 & 2451977 & 1082 & weak (12)\\\\\n\t & 0085581301 & 11.6 & 2452159 & 1264 & weak (12)\\\\\nV4643 Sgr & 0148090101 & 11.9 & 2452716 & 750 & Not detected \\\\\n & 0148090501 & 11.0 & 2452894 & 928 & Not detected \\\\\nV5114 Sgr & 0404430401 & 7.9 & 2454167 & 1086 & Not detected \\\\\n & 0404431201 & 3.6 & 2454167 & 1086 & Not detected \\\\\nV5115 Sgr & 0405600301 & 9.2 & 2454005 & 566 & weak SSS \\\\\n & 0550190201 & 14.9 & 2454925 & 1486 & weak detection \\\\ \nV5116 Sgr & 0405600201 & 12.9 & 2454164 & 608 & SSS (13) \\\\\n & 7462 (HRC-S\/LETG) & 35.2 & 2454336 & 780 & SSS (14) \\\\\n & 
0550190101 & 26.6 & 2454893 & 1337 & Not detected \\\\ \nV574 Pup & 0404430201 & 16.6 & 2454203 & 872 & SSS \\\\\nV598 Pup & 0510010901 & 5.5 & 2454402 & 146 & SSS (15) \\\\\nXMMSL1 J060636\\tablenotemark{d} & 0510010501 & 8.9 & 2454270 & 627 & SSS (16) \\\\\n\\enddata\n\\tablenotetext{a}{{\\it Chandra}\\ observations have four-digit IDs and are followed\nby the instrument configuration. {\\it XMM}\\ observations have ten-digit IDs.}\n\\tablenotetext{b}{Days since visual maximum, see Table \\ref{chartable}.}\n\\tablenotetext{c}{The number in parentheses is the code to the published data.\nNPA stands for ``Not Publicly Available'' and indicates proprietary observations\nat the time of this publication.}\n\\tablenotetext{d}{Full nova names are CSS081007030559+054715 and\nXMMSL1 J060636.2-694933.} \\\\\n\\tablecomments{\n(1) \\citet{2002A&A...387..944G};\n(2) \\citet{2009ATel.1910....1N};\n(3) \\citet{2005ApJ...620..938O};\n(4) \\citet{2010ATel.2418....1N};\n(5) \\citet{2003A&A...405..703G};\n(6) \\citet{2010ATel.2451....1O};\n(7) \\citet{2010ATel.2469....1N};\n(8) \\citet{2007ATel.1226....1H};\n(9) \\citet{2007ASPC..372..519F}; \n(10) \\citet{2008ATel.1561....1N};\n(11) \\citet{2008ATel.1573....1N};\n(12) \\citet{2007ApJ...664..467H};\n(13) \\citet{2008ApJ...675L..93S};\n(14) \\citet{2007ATel.1202....1N};\n(15) \\citet{2008A&A...482L...1R};\n(16) \\citet{2009A&A...506.1309R}.\n}\n\\end{deluxetable}\n\n\\section{THE EARLY HARD X-RAY PHASE}\n\nSome novae have hard X-ray emission, {\\it i.e.} $>$ 1 keV, early \nin the outburst. These novae tend to be fast or recurrent novae. This \ninitial hard emission is thought to arise from shock-heated gas \ninside the ejecta or from collisions with external material, {\\it e.g.} the \nwind of the red giant secondary in RS Oph \\citep{2006ApJ...652..629B,\n2006Natur.442..276S,2007ApJ...665..654V, 2009ApJ...691..418D}.
Early hard \nX-ray emission observed in the very fast nova V838 Her has been attributed \nto intra-ejecta shocks from a secularly increasing ejection velocity \n\\citep{1992Natur.356..222L,1994MNRAS.271..155O}. Much later in the outburst,\nwhen nuclear burning has ceased, hard X-rays can again dominate. These hard\nX-rays come from line emission from the ejected shell and\/or emission \nfrom the accretion disk \\citep{2002AIPC..637..345K}, or in the case of\nRS Oph, the re-emergence of the declining shocked wind emission once the \nSSS emission has faded \\citep{2008ASPC..401..269B}.\n\nEvery nova with a FWHM $\\ge$ 3000 km s$^{-1}$ and observations within 100 \ndays after visual maximum in the {\\it Swift}\\ sample exhibited hard X-rays. \nThis detection rate is partially due to the fact that many of these novae \nwere high-interest targets, {\\it e.g.} very bright at visual maximum (KT Eri),\nextreme ejection velocity (V2672 Oph), RNe (RS Oph, V407 Cyg and\nU Sco), detected prior to outbursts as an X-ray source (V2491 Cyg), etc.; \nthus their early X-ray evolution was well documented. In addition, \na higher cadence of observations during the early phases greatly \nincreased the probability of discovery. \n\nThe evidence of initial hard X-ray emission for slow novae is sparse,\nas few were well sampled early in their outbursts. Only V458 Vul\n\\citep{2009AJ....137.4160N,2009PASJ...61S..69T} had early \nobservations which showed a hard component with a duration of hundreds\nof days from its first observation $\\sim$ 70 days after visual maximum.\nThe lack of significant evidence of hard emission in the early outburst \nof slow novae is consistent with\nshocks, either within the ejecta or with a pre-outburst ambient medium, \nbeing the primary source of early hard X-ray emission in the\nfaster novae. Slower novae have lower ejection speeds\nand thus should have either weaker or delayed shock emission\n\\citep[see equ. 3 in][]{2006ApJ...652..629B}.
Hard X-rays\nwere also detected late in the outburst of novae with extreme and multiple \nejection events.\nThe best example of this is V2362 Cyg \nwhich at the time of its unusually bright secondary maximum had\nalready doubled the width of its emission lines and was detected \nas a hard X-ray source \\citep{2008AJ....136.1815L}. \nThe slow nova V5558 Sgr was similarly \na late hard source. Its early light curve was marked by numerous \nsecondary maxima similar to V723 Cas \\citep{2008NewA...13..557P}. \n\nAnother interesting case is the slow nova V1280 Sco which was detected \nmultiple times between days 834 and 939 after outburst as an X-ray source. \n\\citet{2009ATel.2063....1N} found that the X-ray count rate was \nrelatively low and the SED was best fit with multiple thermal plasma \nmodels consistent with line emission. They attributed the lines \nto shock heating of the ejecta, but this is difficult to reconcile \nwith how rapidly shock emission declines. \\citet{2010ApJ...724..480H} showed\nthat V1280 Sco had two bright secondary peaks after maximum. Thus, it is\npossible that this nova experienced additional ejection events later in\nthe outburst that contributed the necessary energy to power shocks.\nContemporary optical spectra from our Small and Moderate Aperture\nResearch Telescope System (SMARTS) nova monitoring program\nshow that the photosphere of V1280 Sco remains optically thick with \nP-Cygni profiles still present more than 4 years after outburst. \nAlternatively, the line emission may be from circumstellar gas \nphotoionized by the initial X-ray pulse of the explosion. Given the \nrelative proximity of V1280 Sco, ranging from 0.63$\\pm$0.10 kpc\n\\citep{2010ApJ...724..480H} to 1.6 kpc \\citep{2008A&A...487..223C}, any X-ray \nemission lines would be much brighter than those of most novae in our sample, \nwhich has a larger median distance of 5.5 kpc.
Unfortunately, V1280 Sco was X-ray \nfaint, making it impossible to determine the source of its X-ray emission.\n\n\n\\section{THE SSS PHASE\\label{sssphase}}\n\n\\subsection{Rise to X-ray Maximum and the ``Turn-on'' Time\\label{S:ton}}\n\nThe unprecedented temporal coverage of the early outburst in X-rays with\n{\\it Swift}\\ has fully revealed a new phenomenon during the rise to X-ray maximum.\nPrior to {\\it Swift}, V1974 Cyg had the best-sampled X-ray light curve\n\\citep[see Fig. 1 in ][]{1996ApJ...456..788K}. The 18 {\\it ROSAT}\\ \nobservations showed a slow and monotonic rise to maximum. This light \ncurve evolution was expected as the obscuration from the ejecta clears \nand the effective temperature of the WD photosphere increases\n\\citep{1985ApJ...294..263M}. However, {\\it Chandra}\\ observations of\nV1494 Aql \\citep{2003ApJ...584..448D} and V4743 Sgr\n\\citep{2003ApJ...594L.127N} hinted that this transition was not as smooth\nas previously observed, with short-term ``bursts'', periodic oscillations,\nand sudden declines. \n\nWith daily and sometimes hourly {\\it Swift}\\ coverage,\nthe rise to X-ray maximum is unequivocally highly chaotic, with large \nchanges in the count rate evident in all well observed {\\it Swift}\\ \nnovae to date. Figure \\ref{kteriearlylc} illustrates this \nphenomenon in KT Eri from the data available in Table \\ref{fullswift}.\nDuring the initial rise to X-ray maximum, it exhibited large\noscillations. The numerous large declines are even more dramatic when the\nobservational data sets are not grouped by observation\nID number as in Table \\ref{fullswift} \nbut broken into small increments (Walter et al. in prep).\nAt 76 days after visual maximum the variability became much smaller \nand the count rate stabilized around $\\sim$150 ct s$^{-1}$.
\nIn addition to KT Eri \\citep{2010ATel.2392....1B},\nRS Oph \\citep{2011ApJ...727..124O}, U Sco \\citep{2010ATel.2430....1S},\nnova LMC 2009a \\citep{2009ATel.2025....1B,Bode2011},\nV2672 Oph \\citep{2009ATel.2173....1S}, \nV2491 Cyg \\citep{2010MNRAS.401..121P,2011arXiv1103.4543N}, \nand V458 Vul \\citep{2009AJ....137.4160N} \nall showed this large-amplitude variability. The first three novae \nare known RNe, while the next two and KT Eri are suspected to be\nRNe based on their observational characteristics. \nThe fact that the less energetic V458 Vul \nalso exhibited this phenomenon indicates that it is not just associated\nwith very fast or recurrent novae. \nSee Section \\ref{var} for further discussion of nova variability.\n\n\\begin{figure*}[htbp]\n\\plotone{KTEriearlylc.ps}\n\\caption{The early X-ray light curve of KT Eri in days since visual maximum.\nThe top panel shows the count rate and the lower panel gives the hardness\nratio, HR1. Dotted lines are added to the top panel to emphasize the\nvariability. Prior to day 65 KT Eri was faint and hard.\nBetween days 65 and 75 the source transitioned to the bright SSS phase\nwith large-amplitude oscillations in the count rate and some corresponding\nchanges in HR1. After day 76 both the count rate and the\nhardness ratio stabilized significantly but still showed variability\n(see Section \\ref{var}).\n\\label{kteriearlylc}}\n\\end{figure*}\n\n\\begin{figure*}[htbp]\n\\plotone{turnonvsvel.ps}\n\\caption{SSS turn-on time of novae (Table \\ref{timescales})\nas a function of the ejection velocity (estimated from the FWHMs\nin Table \\ref{chartable}). \nFilled circles are known RNe. Half filled circles\nare suspected RNe based on their characteristics.\nFrom the top to the bottom the lines\nshow the relationship from Shore (2008; Eqn. 
9.2) for ejected masses\nof 1$\\times$10$^{-3}$, 1$\\times$10$^{-4}$, 1$\\times$10$^{-5}$,\n1$\\times$10$^{-6}$, and 1$\\times$10$^{-7}$ M$_{\\odot}$, respectively.\nThe downward and upward arrows are estimated upper and lower limits.\n\\label{velturnon}}\n\\end{figure*}\n\nThe emergence of the SSS, referred to as ``turn-on'' time or t$_{on}$ \nhereafter, provides information on the mass of the ejected shell. \nThe turn-on times for the novae in this sample are given in \nTable \\ref{timescales}. The t$_{on}$ time is defined as the time \nafter visual maximum when HR2 $<$ -0.8 and there is a significant\nincrease in the soft count rate. Similarly, the ``turn-off'' \ntime (t$_{off}$) is defined as the time after t$_{on}$ when the\nhardness ratio becomes harder, HR2 $>$ -0.8, and the soft count rate\ndeclines rapidly as nuclear burning ends. Note that these\ndefinitions should not be confused with the SSS phases as shown in \nFigures \\ref{sssgood} - \\ref{sssother} as t$_{on}$ and t$_{off}$ also\ninclude the change in the soft count rate. SSS emission can only be \nobserved once the ejecta column density has declined \nsufficiently for the source to be seen. With the expansion\nvelocity and turn-on time, upper limits on the ejected mass can be\nestablished. \\citet{Shore08} gives the relationship (see Equation 9.2),\n\\begin{equation}\nM_{eject} \\sim 6 \\times 10^{-7} \\phi N_{H}(22) v_{exp}(1000)^2 t_{on}^2 M_{\\odot}\n\\end{equation}\nwhere $\\phi$ is the filling factor, N$_H$(22) is the column density \nin units of 10$^{22}$ cm$^{-2}$, v$_{exp}$(1000) is the expansion velocity in \nunits of 1000\\ km\\ s$^{-1}$, and t$_{on}$ is the soft X-ray turn-on time\nin days; the relation assumes spherical geometry.
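Equation 1 can be evaluated directly once the FWHM is converted to an expansion velocity via v_exp = FWHM/2.355, as done in this study. A minimal sketch, with the fiducial filling factor of 0.1 and column density of 10^22 cm^-2 adopted here as defaults (the function name and the illustrative input values are ours):

```python
def ejected_mass(fwhm_kms, t_on_days, phi=0.1, nh22=1.0):
    """Evaluate Equation 1 (Shore 2008, Eq. 9.2):
    M_eject ~ 6e-7 * phi * N_H(22) * v_exp(1000)^2 * t_on^2 [M_sun],
    with v_exp = FWHM / 2.355 converted to units of 1000 km/s."""
    v_exp_1000 = fwhm_kms / 2.355 / 1000.0
    return 6e-7 * phi * nh22 * v_exp_1000**2 * t_on_days**2

# FWHM = 2355 km/s gives v_exp(1000) = 1 exactly; with t_on = 10 d,
# phi = 0.1 and N_H = 1e22 cm^-2 the relation yields ~6e-6 M_sun.
print(ejected_mass(2355.0, 10.0))
```

The defaults only encode the fiducial choices discussed in the text; any other filling factor or column density can be passed explicitly.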
In this study, $\\phi$ = 0.1 and a\ncolumn density of 10$^{22}$ cm$^{-2}$ is used as the minimum N$_H$ for\nthe ejected shell to become transparent to soft X-rays.\nThe expansion velocity is determined from v$_{exp}$ = FWHM\/2.355\n\\citep{2010PASP..122..898M} where FWHM is the width of the Balmer\nlines near visual maximum as given in Table \\ref{chartable}.\nUsing the t$_{on}$ times from Table \\ref{timescales}, \nFigure \\ref{velturnon} shows the estimated ejected masses as a function \nof ejection velocity. \n\\btxt{Note that the velocities derived from these FWHMs are lower limits\nas the X-ray opacity in the ejecta depends on faster material. This has\nthe effect of shifting all the points in Figure \\ref{velturnon} to the \nright.}\nAccordingly the fastest novae, at the bottom \nright, U Sco and V2672 Oph, must have ejected much less than 10$^{-5}$ \nM$_{\\odot}$ otherwise they would not have been observed as SSS sources \nso early after outburst. This inference is consistent with independent \nejected mass estimates \n\\citep[{\\it e.g.} U Sco,][]{2000AJ....119.1359A,2010AJ....140.1860D,\n2010ApJ...720L.195D}.\nConversely, novae in the upper left corner must eject a significant \namount of material. Large mass ejection events are also inferred from \nthe optical spectra of novae like V1280 Sco which still showed P-Cygni \nlines 3 years after outburst \\citep{2010PASJ...62L...5S} and a year \nlater in our recent SMARTS spectroscopy. \n\nNote that external extinction from the ISM is not taken into account \nin Figure \\ref{velturnon} nor is the evolution of the \neffective temperature of the WD photosphere. Novae with large extinction\nmay never be observed in the SSS phase while a slow increase in the \nWD temperature after the ejecta has sufficiently cleared will delay\nthe onset of t$_{on}$ resulting in an overestimate of the ejected mass\nderived from Equ. 1. 
Both factors along with deviations\nfrom the underlying assumptions such as different filling factors\nand non-spherical symmetry, can lead to different mass values given\nin Figure \\ref{velturnon}. These limitations explain why two novae\nwith the same ejection velocities, V2468 Cyg and V5558 Sgr at \n425 km s$^{-1}$, can have divergent mass estimates due to different\nturn-on times.\n\n\\begin{deluxetable}{lcc}\n\\tablecaption{SSS X-ray time scales\\label{timescales}}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{Name} & \\colhead{turn-on} & \\colhead{turn-off} \\\\\n\\colhead{} & \\colhead{(d)} & \\colhead{(d)}\n}\n\\startdata\nCI Aql & \\nodata & $<$396 \\\\\nCSS 081007 & 185$\\pm$68 & 314$\\pm$68 \\\\\nGQ Mus & \\nodata & 3484.5$\\pm$159.5 \\\\\nIM Nor & $>$28 & $<$136 \\\\\nKT Eri & 71$\\pm$1 & 280$\\pm$10 \\\\\nLMC 1995 & $<$1087 & 2545$\\pm$426 \\\\\nLMC 2000 & $>$48 & $<$291 \\\\\nLMC 2009a & 95$\\pm$5 & 270$\\pm$10 \\\\\nRS Oph & 35$\\pm$5 & 70$\\pm$2 \\\\\nU Sco & 23$\\pm$1 & 34$\\pm$1 \\\\\nV1047 Cen & $>$144 & $<$972 \\\\\nV1065 Cen & \\nodata & $<$744\\tablenotemark{a} \\\\\nV1187 Sco & \\nodata & $<$181\\tablenotemark{a} \\\\\nV1213 Cen & $<$322 & $>$494 \\\\\nV1280 Sco & $>$928 & \\nodata \\\\\nV1281 Sco & $<$339 & 627$\\pm$194 \\\\\nV1494 Aql & 217.5$\\pm$30.5 & 515.5$\\pm$211.5 \\\\\nV1974 Cyg & 201$\\pm$54 & 561.5$\\pm$50.5 \\\\\nV2361 Cyg & \\nodata & $<$456 \\\\\nV2362 Cyg & \\nodata & $<$990 \\\\\nV2467 Cyg & $<$456 & 702$\\pm$97 \\\\\nV2468 Cyg & $<$586 & \\nodata \\\\\nV2487 Oph & \\nodata & $<$986 \\\\\nV2491 Cyg & 40$\\pm$2 & 44$\\pm$1 \\\\\nV2672 Oph & 22$\\pm$2 & 28$\\pm$2 \\\\\nV351 Pup & \\nodata & $<$490 \\\\\nV382 Vel & $<$185 & 245.5$\\pm$22.5 \\\\\nV407 Cyg & 15$\\pm$5 & 30$\\pm$5 \\\\\nV458 Vul & 406$\\pm$4 & $>$1051 \\\\\nV4633 Sgr & \\nodata & $<$934 \\\\\nV4743 Sgr & 115$\\pm$65 & 634$\\pm$108 \\\\\nV5114 Sgr & $<$1086 & \\nodata \\\\\nV5115 Sgr & $<$546 & 882$\\pm$336 \\\\\nV5116 Sgr & 332.75$\\pm$275.25 & 938$\\pm$126 
\\\\\nV5558 Sgr & $>$850\\tablenotemark{b} & \\nodata \\\\\nV5583 Sgr & $<$81 & 149$\\pm$68 \\\\\nV574 Pup & 571$\\pm$302 & 1192.5$\\pm$82.5 \\\\\nV597 Pup & 143$\\pm$23 & 455$\\pm$15 \\\\\nV598 Pup & \\nodata & $<$127 \\\\\nV723 Cas & $<$3698 & $>$5308 \\\\\nV838 Her & \\nodata & $<$365 \\\\\nXMMSL1 J060636 & \\nodata & $<$291 \\\\\n\\enddata\n\\tablecomments{t$_{on}$ and t$_{off}$ bracket the time after visual maximum\nwhen the hardness ratio HR2 is softer than -0.8.}\n\\tablenotetext{a}{Evolution of $[$\\ion{Fe}{7}$]$ (6087\\AA) and lack of \n$[$\\ion{Fe}{10}$]$ (6375\\AA) in our SMARTS optical\nspectra are consistent with this upper limit from the X-ray non-detection.\nSee Section \\ref{fex}.}\n\\tablenotetext{b}{Optical spectra are slowly becoming more ionized, which\nis consistent with slowly increasing SSS emission observed with {\\it Swift}.}\n\\end{deluxetable}\n\n\\subsection{Turn-off time\\label{S:toff}}\n\nTable \\ref{timescales} shows t$_{off}$ times or upper\/lower limits\nfor the novae in our sample. If optical light curve decline times,\n{\\it e.g.} t$_2$, are used as \nsimple proxies for WD masses, then \nthere should be a relationship between \nt$_2$ and the duration of the SSS phase. In Figure \\ref{t2turnoff} the\nturn-off time, t$_{off}$, is shown versus t$_2$.\nOverplotted as the solid line is the turn-off versus decline relationship\nof \\citet[][Equ. 31]{2010ApJ...709..680H} where t$_3$ was converted \nto t$_2$ using Equ. 7 in \\citet{2007ApJ...662..552H}.\nThe combined uncertainties of both equations are represented by \nthe two dotted lines. \\citet{2010ApJ...709..680H} find that the time when\nnuclear burning ends is $\\propto$ t$_{break}^{1.5}$ (Equ. 26), where\nt$_{break}$ is the time of the steepening of their model free-free \noptical-IR light curves. This relationship is derived using a series of \nsteady-state models with a decreasing envelope mass to fit the observed \nmultiwavelength light curves.
The X-ray and UV light curves are fit with \nblackbodies while the optical and IR curves use optically thin, free-free \nemission. The parameters of the model are the WD mass, composition of\nthe WD envelope, and its mass prior to outburst. While the general trend\nis similar, the observed data do not fit the \\citet{2010ApJ...709..680H} \nrelationship, especially when the sample is expanded to include the novae \nwith only upper or lower limits.\n\nThe relationship derived by \\citet{2010ApJ...709..680H} utilizes the\nt$_2$ derived from the $y$ band light curve instead of the $V$ band as in \nthis paper. The $y$ band is used by \\citet{2010ApJ...709..680H} since it \ngenerally samples the continuum whereas the $V$ band can have a contribution\nin the red wing from strong H$\\alpha$ line emission. However, the difference\nin filters cannot explain the poor agreement between the data and the\nrelationship in Fig. \\ref{t2turnoff} since similar numbers of\nnovae fall above and below the line. If a contribution from H$\\alpha$\nin $V$ were significant, then the disagreement would not be symmetric.\n\nSimilarly, Figure \\ref{velturnoff} shows the relationship between the \nFWHM and turn-off time with the dotted line depicting the \n\\citet{2003A&A...405..703G} turn-off vs. velocity relation. This relationship\nwas derived from all the SSS nova data available at the time, which \ncomprised only four well-constrained SSS novae and four novae with \nturn-off limits. With \nthe significantly larger sample currently available it is clear that \nthere is not a tight fit to the relationship.\nThis discrepancy is particularly acute for the slower novae in our sample, \nwhich have turned off much sooner than expected.
These figures illustrate \nthat the gross behavior of novae is still poorly understood\nand confirm that the observational characteristics of an individual\nnova are governed by more than just the WD mass.\n\n\\begin{figure*}[htbp]\n\\plotone{newturnoffvst2.ps}\n\\caption{SSS turn-off time as a function of t$_2$ time with the\n\\citet{2010ApJ...709..680H} relationship (solid line) and its\nassociated uncertainty (dotted lines) overplotted. Upper and lower\nlimits are also shown. \nFilled circles are known RNe. Half filled circles \nare suspected RNe based on their characteristics.\n\\label{t2turnoff}}\n\\end{figure*}\n\n\\begin{figure*}[htbp]\n\\plotone{turnoffvsvel.ps}\n\\caption{SSS turn-off time as a function of the FWHM of H$\\alpha$ or\nH$\\beta$ near visual maximum. The relationship of \\citet{2003A&A...405..703G}\nis shown as the dotted line. Upper and lower limits are also shown.\nFilled circles are known RNe. Half filled circles\nare suspected RNe based on their characteristics.\n\\label{velturnoff}}\n\\end{figure*}\n\nAccurate determinations of the duration of nuclear burning\ncan also provide an independent ejected mass estimate. Recently,\n\\citet{2010ApJ...712L.143S} found that the ejected mass depends only\non the total radiated energy, E$_{rad}$, and does not\nrequire knowledge of the geometry and structure of the shell,\nas other methods do. E$_{rad}$ is not a trivial value to determine\nas it depends on the bolometric luminosity of the source and the\nduration of the outburst. {\\it Swift}\\ observations can potentially\ndetermine the bolometric flux when the bulk of the emission falls\nin a narrow wavelength region, such as during the early, optically thick\nphases in the UV and optical, or later in the soft X-rays during the\nSSS phase. Estimates of the luminosity during both phases require\nan accurate determination of the extinction and the distance.\nPerhaps the best example to use the \\citet{2010ApJ...712L.143S}\ntechnique on is RS Oph.
With a bolometric luminosity of\n3$\\times$10$^{4}$ L$_{\\odot}$ from TMAP atmosphere models\n\\citep{2011ApJ...727..124O} and a t$_{off}$ of 70 days, the estimated\nejected mass is $\\sim$2$\\times$10$^{-6}$ M$_{\\odot}$. This is consistent\nwith the low mass estimates from the radio\n\\citep[(4$\\pm$2)$\\times$10$^{-7}$ M$_{\\odot}$;][]{2009MNRAS.395.1533E}\nand hydro-dynamical models of the X-ray behavior\n\\citep[1.1$\\times$10$^{-6}$ M$_{\\odot}$;][]{1992MNRAS.255..683O} and\n\\citep[$\\sim$5$\\times$10$^{-6}$ M$_{\\odot}$;][]{2009ApJ...691..418D}.\n\n\\subsubsection{SSS phase durations}\n\nFigure \\ref{histogram}a shows the distribution of the duration of the SSS \nphase for this sample of novae. Since there are still relatively few \nnovae with well established turn-off times, a coarse histogram with only three \nbins is used. The bins have durations of less than one year, between \none and three years, and greater than three years. Due to large \nuncertainties in their exact turn-off times, ten of the sample novae \ncannot be placed within a single bin and thus are shown as the smaller \ncross-hatched columns between the bins in which they might belong.\nOf the \\totallimitSSS\\ novae with detected SSS emission or with strong \nlimits on the duration of the SSS phase, 89\\% have turned off in under \n3 years. There are only four novae, GQ Mus, LMC 1995, V574 Pup, and \nV723 Cas, with detected SSS emission beyond 3 years. V458 Vul and \nV1213 Cen were still SSSs at their last observations and could also \nexceed 3 years. A similar rapid turn-off was inferred from a search of\nthe {\\it ROSAT}\\ archive of novae with SSS detections. \\citet{2001A&A...373..542O} \nfound only 3 SSS novae among the 39 Galactic and Magellanic cloud novae \nin the {\\it ROSAT}\\ archive observed at least once within 10 years after \nvisual maximum.
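The radiated energy feeding the RS Oph estimate above follows from the quoted luminosity and burning duration, under the simplifying assumption of a roughly constant bolometric luminosity over the SSS phase; a short illustrative sketch (function name and the constant-luminosity assumption are ours):

```python
L_SUN = 3.846e33   # solar luminosity, erg/s
DAY = 86400.0      # seconds per day

def radiated_energy(l_bol_lsun, t_off_days):
    """Total radiated energy E_rad = L_bol * t_off, assuming an
    approximately constant bolometric luminosity while burning."""
    return l_bol_lsun * L_SUN * t_off_days * DAY

# RS Oph: L_bol ~ 3e4 L_sun (TMAP fits), t_off ~ 70 d
# -> E_rad ~ 7e44 erg, the input to the ejected-mass estimate.
print(f"{radiated_energy(3e4, 70.0):.1e} erg")
```
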
The median age of the 19 novae with documented turn-off \ntimes from this sample is 1.4 years.\n\n\n\\begin{figure*}[htbp]\n\\plottwo{histogram.ps}{M31histogram.ps}\n\\caption{Distribution of the durations of well established SSS novae \nin the Galaxy\/Magellanic Clouds (left figure) and M31 (right figure)\nfrom \\citet{2010A&A...523A..89H}.\nThe three duration bins are less than one year, between one and three years, \nand greater than three years. The hashed areas include the novae with \nonly limits on their turn-off times that preclude placing them in a \nspecific bin.\n\\label{histogram}}\n\\end{figure*}\n\n\nThe situation is different for nova surveys of M31 \n\\citep{2007A&A...465..375P,2010A&A...523A..89H} where SSSs identified as \nclassical novae 5-10 years after outburst are fairly common, {\\it e.g.}\n1995-05b, 1995-11c, and 1999-10a. Figure \\ref{histogram}b shows the same\nbins as before but with the 18 M31 novae detected as SSSs given in Table 9 \nof \\citet{2010A&A...523A..89H}. The difference can be explained by the\npredominance of slower novae in the M31 sample. The mean t$_2$ time \nof the nine M31 novae with reported decline times in the \n\\citet{2010A&A...523A..89H} SSS sample is 31 days whereas the peak \nfor our sample is significantly faster at 8 days (Figure \\ref{t2vsfwhm}). \nThe discrepancy in speed class between the two samples is due to selection \neffects. By design, the Galactic\/Magellanic sample consists primarily \nof bright and hence faster novae. M31 surveys sample the entire \ngalaxy but with fewer {\\it Chandra}, {\\it XMM}\\ and {\\it Swift}\\ observations that are \nrandomly scattered in time.
The M31 strategy finds many novae, since\nthe observed M31 nova rate, $\\sim$30 novae yr$^{-1}$ \n\\citep{1989AJ.....97.1622C}, is greater than that of the Milky Way\n\\citep[$\\sim$5 novae yr$^{-1}$;][]{1997ApJ...487..226S}. However, with\nlimited time sampling, slower novae with longer SSS phases are easier to\ndetect than fast novae with rapid turn-offs.\n\n{\\it ROSAT}\\ detected 2 SSS novae out of 21 Galactic novae for a Milky Way \ndetection frequency of 9.5\\% \\citep{2001A&A...373..542O}. \nIf the 4 RNe in the {\\it ROSAT}\\ list\nare discarded because their observations were taken $\\gtrsim$ 1 year\nafter outburst, the detection frequency increases to 11.8\\%. The M31\nsurvey has a similarly low SSS detection frequency of 6.5\\% \n\\citep{2010AN....331..187P}. These two results show that it is difficult\nto catch novae during their SSS phase via random time sampling. \nHowever, a more systematic approach that 1) targets only \nbright, low-extinction novae and 2) obtains multiple\nobservations early in the outburst may have a greater detection\nfrequency. Indeed, {\\it Swift}\\ has a significantly greater SSS detection rate of \n$\\sim$ 45\\% during its five years of operation with this more systematic\napproach. \n\n\\subsection{SSS emission in the hard X-ray spectrum of \nV407 Cyg\\label{v407cygSSS}}\n\nIn the initial analysis of the {\\it Swift}\\ data in \\citet{2011A&A...527A..98S} \na second soft component was required to fit some of the {\\it Swift}\\ X-ray spectra.\nHowever, there were insufficient counts to distinguish between a blackbody \nand an optically thin plasma model. Assuming a distance of 2.7 kpc\n\\citep{1990MNRAS.242..653M}, the unabsorbed flux of the soft component\nin the day $<$30 model of Table 3 in \\citet{2011A&A...527A..98S} gives a \nblackbody luminosity of 2$\\times$10$^{37}$ erg s$^{-1}$, which is reasonable\nfor nuclear burning on a WD.
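The quoted blackbody luminosity is a 4 pi d^2 scaling of the unabsorbed model flux. The sketch below illustrates that conversion; the flux value used is back-computed from the quoted luminosity and distance for illustration only, not taken from the spectral fits:

```python
import math

PC = 3.086e18  # parsec in cm

def luminosity(flux_cgs, d_kpc):
    """L = 4 * pi * d^2 * F for an unabsorbed flux in erg/cm^2/s."""
    d_cm = d_kpc * 1e3 * PC
    return 4.0 * math.pi * d_cm**2 * flux_cgs

# At d = 2.7 kpc, an unabsorbed soft-component flux of
# ~2.3e-8 erg/cm^2/s corresponds to L ~ 2e37 erg/s (illustrative).
print(f"{luminosity(2.3e-8, 2.7):.1e} erg/s")
```
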
To investigate whether the soft emission\nin V407 Cyg can be attributed to nuclear burning, we reanalyze the {\\it Swift}\\\nX-ray data with twice as many time bins as previously used. Figure\n\\ref{v407cygmodel} shows the results. As in \\citet{2011A&A...527A..98S},\nthe model abundances are allowed to vary, but the temperatures are not \nsignificantly different if the abundances are constrained to be solar.\nThe data prior to day 10 and after day 50 can be fit with a single optically \nthin plasma model. The remaining 4 time bins all require a soft component,\nwhich in this analysis is assumed to be a blackbody. Both the derived\nN$_H$ and the optically thin component temperature decline with time\nin the models. The blackbody effective temperature increases until the\nday 36 bin and declines in the day 45 bin, although the error bars are\nlarge enough that it could be constant over the last two dates. The \nderived luminosities (over the 0.3--10~keV X-ray band) \nfor the four dates with blackbody components are \n2.3$\\times$10$^{42}$ erg s$^{-1}$, 9.3$\\times$10$^{37}$ erg s$^{-1}$,\n1.9$\\times$10$^{35}$ erg s$^{-1}$, and 3.1$\\times$10$^{35}$ erg s$^{-1}$,\nrespectively, assuming a distance of 2.7 kpc. The extreme luminosity \nfor the day 16 bin cannot be considered reliable, given the very low \nfitted temperature of $\\sim$25~eV, below the XRT 0.3~keV low-energy cutoff.\nNevertheless, the results of fitting blackbodies to the {\\it Swift}\\ V407 Cyg \ndata are consistent with a scenario where the nuclear burning proceeded \non the WD surface near the Eddington limit until about 30 days after \nvisual maximum. The fuel was consumed after that point, leading to a \nrapid drop in the luminosity. Thus, although V407 Cyg was not a \ntrue SSS, its soft photon light curve was consistent \nwith the expected evolution seen in other novae.\n\n\\begin{figure*}[htbp]\n\\plotone{v407cygmodel.ps}\n\\caption{Results of model fits to the {\\it Swift}\\ V407 Cyg data set. 
The top left panel shows the total 0.3-10 keV (squares) and soft 0.3-1 keV (circles) light curves. The derived N$_H$ column for the 6 date bins is shown in the top right panel. The Mekal temperature of the hotter, optically thin plasma model is shown in the middle left panel, while the right middle panel shows the temperature of the blackbody fit to the softer component. A second, soft component is not needed in the first and last date bins. The bottom panels show the observed (left) and unabsorbed (right) fluxes. Squares give the total from all components while the circles show just the blackbody contributions. The right axis of the last panel also shows the corresponding 0.3-10 keV luminosity assuming a 2.7 kpc distance.\n\\label{v407cygmodel}}\n\\end{figure*}\n\n\n\\section{DISCUSSION} \n\n\\subsection{Orbital period and turn-off time}\n\n\\citet{2003A&A...405..703G} found a correlation between the orbital period and X-ray turn-off time. However, at that time only four novae (GQ Mus, V1974 Cyg, V1494 Aql, and V382 Vel) had both well-determined periods and X-ray turn-off times, with limits available for CI Aql and U Sco. The observed trend implied that novae with short orbital periods had the longest duration SSS phases. \\citet{2003A&A...405..703G} attributed this relationship to a feedback loop between the WD and its secondary. The luminous X-rays produced during the SSS phase excessively heat the facing side of the secondary in short period systems. The energy added to the outer layers of the secondary causes it to expand, producing higher mass loss and leading to enhanced accretion of material onto the WD.\n\nSince 2003, the turn-off times of \\newturnoffperiod\\ additional novae with known periods have been determined. There are also strong limits on the turn-off times of \\newturnoffperiodlimit\\ other novae with known orbital periods. 
Inclusion of this expanded sample, shown in Figure \\ref{periodvstoff}, causes the trend between orbital period and duration of the SSS phase noted by \\citet{2003A&A...405..703G} to disappear. The new distribution, with an increased sample size, shows no discernible correlation. Orbital separation apparently has no effect on the duration of nuclear burning. \n\n\\begin{figure*}[htbp]\n\\plotone{newperiod.ps}\n\\caption{SSS turn-off time as a function of orbital period for novae with well established turn-off times and novae with good upper (i.e. still in the SSS phase) and lower limits. Filled circles are known RNe. Half filled circles are suspected RNe based on their characteristics. The top plot shows the distribution histogram of our sample (solid line) and of all the known novae (dotted line) from Table 2.5 in \\citet{Warner2008}.\n\\label{periodvstoff}}\n\\end{figure*}\n\nTo see if the lack of a trend could be explained by a non-representative sample of novae, the top panel of Figure \\ref{periodvstoff} shows the distribution of our sample in 1 hour orbital period bins as the solid line. The distribution of all novae with known orbital periods, from the updated \\citet{Warner2008} compilation, is shown as the dotted line and demonstrates that the SSS sample is a consistent sub-sample of the known nova period distribution.\n\n\\citet{2010AJ....139.1831S} claim a similar relationship between turn-off time and orbital period, albeit in highly magnetized systems. They find that of the eight novae with quiescent luminosities $>$10$\\times$ brighter than pre-eruption, all have long SSS phases, short orbital periods, highly magnetized WDs, and very slow declines during quiescence. Similar to \\citet{2003A&A...405..703G}, \\citet{2010AJ....139.1831S} propose that nuclear burning on the WD is prolonged by increased accretion from the close secondary, but in this case efficiently funneled onto the WD by the strong magnetic fields. 
The 8 novae \\citet{2010AJ....139.1831S} cite are CP Pup, RW UMi, T Pyx, V1500 Cyg, GQ Mus, V1974 Cyg, V723 Cas, and V4633 Sgr. \n\nThe hypothesis that these specific characteristics enhance the SSS duration can be directly evaluated using V4633 Sgr, GQ Mus, V1974 Cyg, and V723 Cas, since they all have X-ray observations within the first 3 years of outburst. For CP Pup, RW UMi, T Pyx, and V1500 Cyg the assertion of a long lasting SSS emission phase depends on secondary evidence, as none had any direct X-ray observations during outburst; we therefore exclude these 4 sources from the test.\n\nThe first X-ray observation of V4633 Sgr was obtained 934 days after visual maximum, but it and subsequent observations were of a hard source, implying that any SSS emission was missed. With an upper limit of 2.5 years for its SSS emission, V4633 Sgr cannot be considered a long-lived SSS nova based on the distribution shown in Figure \\ref{histogram}a. The SSS duration in V1974 Cyg was even shorter and much better constrained at 1.53$\\pm$0.14 years. In addition, V1974 Cyg was not ``excessively'' luminous in outburst as alleged in \\citet{2010AJ....139.1831S}. Its early UV plus optical fluxes were consistent with the Eddington luminosity of a WD with a mass range of 0.9-1.4 M$_{\\sun}$ \\citep{1994ApJ...421..344S}. The later ``excessive'' X-ray luminosities of \\citet{1998ApJ...499..395B} were derived from blackbody fits, which are known to predict higher luminosities than model atmospheres fit to the same data. While V723 Cas has the longest SSS duration known among novae ($\\gtrsim 15$ yrs), its orbital period is very long at 16.62 hrs, significantly longer than that of GQ Mus, 1.43 hrs. The claim of magnetic activity in V723 Cas is based on the different periodicities observed in the early light curve indicating an intermediate polar (IP). 
However, the multiple periodicities used as evidence by \\citet{2010AJ....139.1831S} were from data obtained early in the outburst while the nova ejecta were still clearing \\citep{1998CoSka..28..121C}. Photometry obtained at this early stage of development frequently results in noisy periodograms. Data obtained later in the outburst by \\citet{2007AstBu..62..125G} and over the last 4 years from our own photometric monitoring \\citep[Hamilton C., private communication,][]{2007AAS...210.0404S} reveal a well defined 16.7 hr period with a large $\\sim$ 1.5 magnitude amplitude in the UV, optical and NIR bands. There is no other evidence in the literature to support the claim that V723 Cas is magnetic. Of the 4 novae with supporting X-ray observations, only GQ Mus fully matches the criteria of a long lasting SSS on a magnetic WD in a short period system. \n\nWith our expanded X-ray sample there are 3 additional novae with well constrained SSS durations that can potentially be used to test the hypothesis. V4743 Sgr \\citep{2006AJ....132..608K}, V597 Pup \\citep{2009MNRAS.397..979W}, and V2467 Cyg \\citep{2008ATel.1723....1S} are IP candidates and thus believed to have strong magnetic fields. The orbital periods for V597 Pup and V2467 Cyg are relatively short at 2.66 and 3.8 hrs, respectively, but the period of V4743 Sgr is much longer at 6.74 hrs (see Table \\ref{chartable}). While the turn-off times for these novae are all longer than one year, they are not exceptionally long, with durations of 1.74$\\pm$0.29, 1.25$\\pm$0.04, and 1.85$\\pm$0.33 years for V4743 Sgr, V597 Pup, and V2467 Cyg, respectively. 
Thus the data available do not imply that short orbital periods or strong magnetic fields produce significantly longer SSS phases than those of the average novae in our sample.\n\nAn interesting question is why there is no trend between orbital period and SSS duration, since the underlying assumption of enhanced accretion due to heating of the secondary is sound. One reason would be that there is no significant enhancement in the mass transfer rate from the illuminated secondary, perhaps because of shielding by a thick disk. Another possibility is that there is an effect but it is subtle and affected by other variables such as the strength of the magnetic field, composition of the accreted material, WD mass, etc. A third possibility is that an accretion disk cannot form under the harsh conditions of the SSS phase, which inhibits additional mass transfer. More observations of novae with different characteristics are required in order to understand the underlying physics.\n\n\\subsection{Dusty novae}\n\nThe creation, evolution, and eventual destruction of dust occur on relatively rapid time-scales in novae, making them excellent objects for understanding dust grain formation. One curious aspect of dust in novae is how grains can grow within the harsh photoionizing environment. A correlation of the recent \\textit{Spitzer} spectroscopic observations of dusty novae \\citep[see][for examples]{WoodStar11,Heltonthesis} with this large X-ray sample can bring insight into why most novae do not form dust and the reasons for the large differences in the composition and amounts of dust in the novae that do.\n\nIn general, it is believed that grain growth occurs within dense clumps in the ejecta where the grains are shielded from hard radiation. Spectroscopy and direct imaging show that nova shells are inherently clumpy \\citep{1997AJ....114..258S,OBB08}. 
Grain formation inside dense clumps also explains the higher frequency of dust in slow novae \\citep[see Table 13.1 in][]{ER08}, as they eject more material at lower velocities and suffer greater remnant shaping than fast novae, and thus provide more protection for grain formation. However, even fast novae with small ejected masses have shown some dust formation, such as V838 Her \\citep{2007ApJ...657..453S}. A contrary view has been proposed in which ionization actually promotes dust formation via the accretion of grain clusters through induced dipole interactions \\citep{2004A&A...417..695S}.\n\nKnown and likely dusty novae represent 31\\% of the X-ray sample, but only two, V2467 Cyg and V574 Pup, were also SSSs. While there were no characteristic dips in either visual light curve indicating significant dust formation \\citep{2009AAS...21349125L,2005IBVS.5638....1S}, both novae showed evidence of some dust formation from the presence of weak silicate emission features in the late \\textit{Spitzer} mid-IR spectra \\citep{WoodStar11}. In V2467 Cyg the first {\\it Swift}\\ X-ray detection was 458 days after maximum. It was weak but dominated by soft photons. The following {\\it Swift}\\ observation on day 558 revealed that the nova was still soft but also almost 3 times brighter. The {\\it Spitzer} spectra showing weak dust features were taken between these {\\it Swift}\\ observations, around day 480. V574 Pup was detected as a SSS by {\\it XMM}\\ and {\\it Swift}\\ 872 and 1116 days after visual maximum, respectively. {\\it Spitzer} observations taken around the same time as the {\\it Swift}\\ data showed the same weak silicate emission features seen in V2467 Cyg. The X-ray observations confirm that dust, albeit weak, can exist in the ejecta when the amount of photoionizing radiation is at its peak. 
Detailed photoionization modeling of these novae is required to determine if clumps existed in the ejecta during this time and if the conditions were sufficient to protect the dust grains.\n\nThere are also two strong dust formers in the sample with hard X-ray emission. V2362 Cyg was detected numerous times by {\\it Swift}\\ \\citep{2008AJ....136.1815L} and twice with {\\it XMM}\\ \\citep{2007ATel.1226....1H}, but none of the observations were consistent with a SSS. However, V2362 Cyg had significant dust emission at the times of the {\\it Swift}\\ and first {\\it XMM}\\ observations. The dust likely formed in the later extraordinary mass ejection event that produced the large secondary peak in the light curve and increased ejection velocities. The additional material would have absorbed the soft X-ray emission and delayed the onset of any SSS phase. In the last {\\it XMM}\\ observation it was extremely faint, indicating that any SSS phase was over by 990 days after maximum. V1280 Sco was detected as an X-ray source late in its outburst, but the X-rays were relatively hard and faint \\citep{2009ATel.2063....1N}. V1280 Sco has yet to be observed as a SSS and its internal extinction is still large. In both V2362 Cyg and V1280 Sco, grain growth was likely enhanced to produce the large dust events by the effective shielding of the large mass ejections.\n\n\\subsection{Variability during SSS phase \\label{var}}\n\nAt the maximum effective temperature, (2-8)$\\times$10$^5$ K, the bulk of the emission in a nova outburst emerges as X-rays, primarily soft ones. Assuming the external column is low enough and the effective temperature is suitably high, this X-ray emission can be detected. The theory of constant bolometric luminosity predicts that at X-ray maximum the light curve should be relatively constant, since one is observing the majority of the emitted flux. 
Constant bolometric luminosity has been observationally verified in the early phase of the outburst from combined UV, optical and near-IR light curve data, {\\it e.g.} FH Ser \\citep{1974ApJ...189..303G}, V1668 Cyg \\citep{1981MNRAS.197..107S} and LMC 1988\\#1 \\citep{1998MNRAS.300..931S}. However, the expected X-ray plateau in all well studied {\\it Swift}\\ novae has been far from constant. In addition, the rise to X-ray maximum shows large amplitude oscillations. What is the source of the variability during both phases?\n\nOne important caveat when discussing the {\\it Swift}\\ data is that the XRT count rate is not a direct measure of the bolometric flux, but only of the portion emitted between 0.3 and 10 keV. During the SSS phase the vast majority of photons are emitted within this range, but if the effective temperature varies due to photospheric expansion or contraction, the XRT count rate will change even if the source has a constant bolometric luminosity \\citep[see also][]{2011ApJ...727..124O}. Figure \\ref{xrtevol} illustrates how the estimated XRT count rate varies as a function of effective temperature for simple blackbody models (WebPIMMS\\footnote{http:\/\/heasarc.nasa.gov\/Tools\/w3pimms.html}) assuming a constant luminosity and column density; see Section \\ref{Xrayvar}. A decline from 500,000 K to 400,000 K drops the total {\\it Swift}\\ XRT count rate by a factor of 6. The change in HR1 is almost a factor of 10, while there is essentially no change in the HR2 hardness ratio. Thus changes in temperature might in principle account for the observed X-ray oscillations; see Section \\ref{teffvar}. Why the temperature or radius of the WD photosphere would change on the observed time scales remains an open question, however. 
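The steep temperature sensitivity described above can be reproduced to first order with a toy blackbody photon-count integral. This is not WebPIMMS: no absorption or XRT response is folded in, so only the direction and steepness of the trend are meaningful, not the quoted factors:

```python
import math

K_B_KEV = 8.617e-8  # Boltzmann constant, keV per kelvin

def band_rate(kT, e_lo, e_hi, n=4000):
    """Relative blackbody photon rate in [e_lo, e_hi] keV.

    Planck photon spectrum n(E) ~ E^2 / (exp(E/kT) - 1), integrated
    with the trapezoidal rule; the normalization is arbitrary.
    """
    de = (e_hi - e_lo) / n
    total = 0.0
    for i in range(n + 1):
        e = e_lo + i * de
        w = 0.5 if i in (0, n) else 1.0
        total += w * e * e / math.expm1(e / kT) * de
    return total

def hr1(kT):
    """Hardness ratio HR1 = H/S, with H = 1-10 keV and S = 0.3-1 keV."""
    return band_rate(kT, 1.0, 10.0) / band_rate(kT, 0.3, 1.0)

for t_eff in (4.0e5, 5.0e5):  # effective temperatures in kelvin
    kT = t_eff * K_B_KEV
    print(f"T = {t_eff:.0e} K: soft rate ~ {band_rate(kT, 0.3, 1.0):.3e}, "
          f"HR1 ~ {hr1(kT):.2e}")
```

Because the 0.3-1 keV band sits deep in the Wien tail at these temperatures, a modest change in $T_{\rm eff}$ changes both the soft-band count rate and HR1 by large factors, which is the behavior Figure \ref{xrtevol} quantifies with the full WebPIMMS models.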
The next sections outline possible explanations for the variations seen during the SSS phase.\n\n\\subsubsection{Variable visibility of the WD\\label{Xrayvar}}\n\nFigures 2 and 5 in \\citet{2011ApJ...727..124O} show in exquisite detail the rapid and extreme variability in the X-ray light curve and hardness ratio evolution in RS Oph. In general the trend was for RS Oph to be softer at high X-ray flux, but counter examples were also observed. \\citet{2011ApJ...727..124O} cite variable visibility of the hot WD as a possible explanation of the observed phenomena. Changes in the extinction can come either from variable ionization of the ejecta, leading to changing extinction at higher ionization states, or from neutral absorption by high density clumps passing through the line of sight. Changes in the ionization structure of the ejecta are unlikely given the rapid hour to day time-scales, but such time-scales are consistent with the crossing times of small, dense clumps traveling across the line of sight assuming transverse velocities of a few percent of the radial velocity. There is evidence for this at other wavelengths. For example, a sudden absorption component that appeared in the Balmer lines of V2214 Oph in July 1988 was interpreted by \\citet{1991ApJ...376..721W} as the passage of an absorbing clump in front of the emitting region. Both types of absorption changes should be manifest as a hardening of the X-ray spectrum, or an increase in the hardness ratio with increasing soft flux emission, consistent with the counter examples of \\citet{2011ApJ...727..124O}.\n\nAs a test of the neutral absorption theory we use the model results from a recent photoionization analysis in WebPIMMS to determine the count rates and hardness ratios for different column densities and simulate the effect of clumps. 
The photoionization models require two components, high density clumps embedded within a larger diffuse medium \\citep[see][for details]{2007ApJ...657..453S,2010AJ....140.1347H}, to fit the emission lines of the ejected shell. For convenience we use the May 24th, 1991 model parameters for V838 Her in Table 2 of \\citet{2007ApJ...657..453S}. The model uses a blackbody with an effective temperature of 200,000 K to photoionize a two component spherical shell. The model shell has a clump-to-diffuse density ratio of 3 with a radius equal to the expansion velocity multiplied by the time since outburst. To facilitate comparisons with the results in Figure \\ref{xrtevol}, the same unabsorbed bolometric flux is assumed. WebPIMMS predicts a {\\it Swift}\\ soft band count rate of 5.3$\\times$10$^{-3}$ ct s$^{-1}$ through the lower density diffuse gas (N$_H$ = 1.2$\\times$10$^{21}$ cm$^{-2}$) and 8.5$\\times$10$^{-6}$ ct s$^{-1}$ through the higher density clumps (N$_H$ = 3.7$\\times$10$^{21}$ cm$^{-2}$). While the total count rate declines by over 100$\\times$, the HR2 hardness ratio does not change for this particular model. The HR2 can vary significantly when using different model parameters such as lower initial densities or higher clump-to-diffuse density ratios. Care is required when using hardness ratios of low resolution data. In a SSS source with a hard X-ray component, such as RS Oph, the hardness ratio will increase if the soft component decreases for any reason, not just due to absorption. \n\nAnother problem with variable visibility in RNe and very fast CNe is that the amount of mass ejected is very low, minimizing any effect the ejecta have on the obscuration of the WD. The effect should be greater in slower novae with more ejected mass, such as V458 Vul. 
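The qualitative effect of a clump crossing the line of sight can be sketched with a toy attenuation calculation. This is not WebPIMMS: it folds a 200,000 K blackbody photon spectrum through a crude power-law stand-in for tabulated photoelectric cross sections ($\sigma \approx 2\times10^{-22}(E/1\,{\rm keV})^{-3}$ cm$^2$ is an assumed approximation), so only the direction and rough scale of the decline are meaningful:

```python
import math

K_B_KEV = 8.617e-8    # Boltzmann constant, keV per kelvin
SIGMA_1KEV = 2.0e-22  # assumed cross section at 1 keV, cm^2 per H atom

def transmitted_rate(n_h, kT, e_lo=0.3, e_hi=1.0, n=4000):
    """Relative soft-band photon rate after photoelectric absorption.

    Blackbody photon spectrum ~ E^2/(exp(E/kT)-1) attenuated by
    exp(-N_H * sigma(E)), with sigma(E) ~ E^-3 (crude approximation).
    """
    de = (e_hi - e_lo) / n
    total = 0.0
    for i in range(n + 1):
        e = e_lo + i * de
        w = 0.5 if i in (0, n) else 1.0
        sigma = SIGMA_1KEV * e**-3
        total += w * e * e / math.expm1(e / kT) * math.exp(-n_h * sigma) * de
    return total

kT = 2.0e5 * K_B_KEV                    # 200,000 K, as in the V838 Her model
diffuse = transmitted_rate(1.2e21, kT)  # diffuse-gas column
clump = transmitted_rate(3.7e21, kT)    # clump column
print(f"decline factor ~ {diffuse / clump:.1e}")
```

Even this crude sketch drops the soft-band rate by orders of magnitude when the clump column replaces the diffuse one, in line with the over-100$\times$ decline WebPIMMS predicts for the fitted columns.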
In addition, \\citet{2011ApJ...727..124O} find that in RS Oph the ratio of high flux states to low flux states as a function of energy is not consistent with either type of variable visibility of the WD. Rather, the best fit comes from an increase in the effective temperature and a declining radius; see Section \\ref{teffvar}.\n\n\n\\begin{figure*}[htbp]\n\\plotone{webpimms.ps}\n\\caption{The logarithmic X-ray count rates, hardness ratios and logarithmic $uvw2$ count rates as a function of blackbody temperature as calculated by webPIMMS. An unabsorbed, bolometric flux of 3.3$\\times$10$^{-8}$ erg s$^{-1}$ cm$^{-2}$ (1$\\times$10$^{38}$ erg s$^{-1}$ at 5 kpc) and N$_H$ of 3$\\times$10$^{21}$ cm$^{-2}$ was used in all models. The top panel shows the soft (0.3-1 keV, solid line and filled circles) and hard (1-10 keV, dashed line and triangles) count rates. The soft contribution dominates at all effective temperatures. The middle panels show the hardness ratios HR1(=H\/S) and HR2(=(H-S)\/(H+S)). The bottom panel shows how the $uvw2$ count rate increases as the blackbody temperature declines. \\label{xrtevol}}\n\\end{figure*}\n\n\\subsubsection{Periodic oscillations\\label{sssperiods}}\n\nThere are several proposed explanations of the periodic X-ray variations. In the X-ray light curve of V1494 Aql, \\citet{2003ApJ...584..448D} found periodicities that they attributed to non-radial g$^+$-mode pulsations. Similar oscillations have been observed in V4743 Sgr \\citep{2003ApJ...594L.127N,2010MNRAS.405.2668D}.\n\nThe factor of almost ten decline in the {\\it XMM}\\ X-ray light curve of V5116 Sgr was interpreted by \\citet{2008ApJ...675L..93S} as a partial eclipse of the WD since its duration was consistent with the orbital period. Finer binning of the day 762, 764, and 810 {\\it Swift}\\ observations of V5116 Sgr reveals the presence of a 500-800 second oscillation. 
This X-ray periodicity is significantly shorter than the 2.97 h orbital period found by \\citet{2008A&A...478..815D}. In addition, the day 810 data show a strong flare that increases the count rate by a factor of three with no significant change in the hardness ratio. This was similar to the flare seen in V1494 Aql \\citep{2003ApJ...584..448D}. No other flares were seen in the V5116 Sgr data set.\n\nOther orbital periods have been detected with {\\it Swift}. U Sco is a high inclination system with deep eclipses and an orbital period of 1.23 days \\citep{2001A&A...378..132E}. Deep eclipses were observed in the 2010 outburst in the {\\it Swift}\\ UVOT light curves, while the XRT light curves showed generally lower flux levels during the UV eclipses but did not otherwise exhibit clear eclipse signatures \\citep{2010ATel.2442....1O}. A 1.19 day orbital period was deduced from the {\\it Swift}\\ UVOT light curves of the RN LMC 2009a \\citep{2009ATel.2001....1B}. This orbital period was also observed in the XRT light curve during the SSS phase, but with a lag with respect to the UV\/optical of 0.24 days \\citep{Bode2011}.\n\nThe X-ray behavior of CSS 081007:030559+054715 was extremely unusual. This odd source was discovered well after optical maximum by the Catalina Real-time Transient Survey \\citep{2008ATel.1835....1P}. Its X-ray spectra were extremely soft, consistent with the low extinction toward its position high above the Galactic plane ($b$ = -43.7$\\arcdeg$), a region where novae are generally not found. Figure \\ref{csslc} shows the {\\it Swift}\\ XRT\/UVOT light curves compiled from the data in Table 2. To first order both light curves are in phase, with significant variability superimposed over three major maxima. \\citet{2010AN....331..156B} report that the {\\it Swift}\\ light curves are unique, with a 1.77 day periodicity. 
They speculate that the period is due to obscuration of the X-ray source in a high inclination system with a 1.77 day orbital period.\n\nOscillations significantly shorter than the hours-to-days of typical novae orbital periods have also been detected with {\\it Swift}. Oscillations of order 35 s have been observed in RS Oph \\citep{2006ATel..770....1O,2011ApJ...727..124O} and KT Eri \\citep{2010ATel.2423....1B}. Some WDs have rotation periods in this range \\citep[{\\it e.g.} 33 s in AE Aqr;][]{2008PASJ...60..387T}. It seems unlikely that RS Oph and KT Eri should both have nearly identical rotation periods unless the pulsations are tied to the mass of the WD, which for both novae is predicted to be near the Chandrasekhar limit. Another reason the observed variability might not be associated with the rotating WD is that the $\\sim$ 35 second periodicity is not always detected in the {\\it Swift}\\ and {\\it Chandra}\\ X-ray light curves. The 35 second pulsations could instead be due to a nuclear burning instability on the WD surface \\citep[see ][]{2011ApJ...727..124O}. If so, the period is a function of WD mass, and perhaps indicates that the WDs in RS Oph and KT Eri are near the Chandrasekhar mass.\n\n\\subsubsection{Temperature variations\\label{teffvar}}\n\nLong-lived SSSs, such as Cal 83, have non-periodic X-ray on\/off states. \\citet{2000A&A...354L..37R} speculate that the decline in X-ray flux is due to accretion disk interactions, such as an increase in the mass accretion rate causing the WD photosphere to expand and shifting the SED into the EUV. These sources then become optically brighter from the irradiation of the accretion disk and secondary by the larger WD photosphere. The source remains X-ray faint until the WD photosphere shrinks back to its original size. Figure \\ref{v458vullc} shows similar behavior in the {\\it Swift}\\ X-ray and UV light curves of V458 Vul compiled from the data in Table 2. 
The 100$\\times$ decline in the X-ray light curve is matched by a 1.5 magnitude increase in the UV light curve. Figure \\ref{xrtevol} shows that similar X-ray and UV variations can be achieved by large declines in the effective temperature. For example, a decline from 700,000 K to 500,000 K produces a factor of 85 decline in the X-ray count rate and a 1.1 magnitude $uvw1$ band increase. If the underlying phenomenon in V458 Vul is the same as proposed for RX J0513.9-6951 \\citep{2000A&A...354L..37R} and Cal 83 \\citep{2002A&A...387..944G} and the accretion disk has been reestablished, V458 Vul should have an orbital period of order one day to produce an accretion rate high enough to drive stable nuclear burning. However, \\citet{2010MNRAS.407L..21R} find a short orbital period of $\\sim$ 98 minutes, implying that V458 Vul will not have a long term SSS phase.\n\n\\begin{figure*}[htbp]\n\\plotone{csslc.ps}\n\\caption{The X-ray and uvw2 light curves of the peculiar nova CSS 081007:030559+054715. The X-ray and UV evolution are in phase.\n\\label{csslc}}\n\\end{figure*}\n\n\\begin{figure*}[htbp]\n\\plotone{v458vullc.ps}\n\\caption{{\\it Swift}\\ X-ray (top panel) and uvw1 (bottom panel) light curve for V458 Vul. The X axis is the number of days after visual maximum. Prior to day 400, V458 Vul was in transition to X-ray maximum. After day 400 the majority of the X-ray observations had a count rate of $\\sim$ 1 ct s$^{-1}$. However, during the later phase there are three periods where the X-ray counts declined by about a factor of 100. During these times the uvw1 ($\\lambda_c$ = 2600\\AA) photometric brightness increased by a magnitude.\n\\label{v458vullc}}\n\\end{figure*}\n\n\\subsection{Estimating time scales in a variable environment}\n\nThe variability of novae also raises questions about how confident one can be in the determination of turn-off times. 
A prime example can be seen in the X-ray light curve of V458 Vul in Figure \\ref{v458vullc}. If monitoring had stopped following the four observations between days 450 and 480, the subsequent recovery would never have been found, and it would have been concluded that V458 Vul had a turn-off time of 1.2 years instead of $\\gtrsim$ 2.9 years. While this could in principle be a significant problem for the determination of turn-off times in most novae, it is likely that the phenomenon observed in V458 Vul is rare. The X-ray behavior of V458 Vul, a 100$\\times$ decline in flux and a subsequent recovery, is the \\textit{only} case observed in the \\totallimitswiftSSS\\ novae studied by \\textit{Swift} with SSS emission. \\citet{2001A&A...373..542O} found no similar ``reborn'' SSSs in their review of the {\\it ROSAT}\\ all sky survey, although some of the novae in M31 previously thought to be RNe with very rapid outburst time scales may actually be normal novae with on\/off states similar to those in V458 Vul. Since the sudden X-ray declines in V458 Vul also had corresponding UV rises, if such sources exist in M31, they should be easily found with X-ray and UV capable facilities such as {\\it Swift}\\ and {\\it XMM}.\n\n\\subsection{SSS in RNe and the light curve plateau\\label{RNplateau}}\n\nThe plateau in the visible light curves of RNe is speculated to arise from reradiation of the SSS emission by an accretion disk, which dominates the emission after the free-free emission has faded \\citep{2008ASPC..401..206H}. Once nuclear burning ends and the accretion disk is no longer irradiated, the light curve continues its decline to quiescence. Figures 17.1 and 17.2 show that the optical plateaus are nearly coincident with the SSS emission in RS Oph and U Sco.\n\nThe other well observed RNe in the {\\it Swift}\\ archive are novae LMC 2009a and V407 Cyg. LMC 2009a was previously seen in outburst in 1971 \\citep{2009IAUC.9019....1L}. 
Its SSS phase was much longer than those of RS Oph and U Sco, ending 270 days after maximum. Unfortunately, the V band light curve compiled from the AAVSO archives and our own SMARTS photometry does not extend beyond 110 days after visual maximum, so we cannot determine whether an optical plateau was observed later in this outburst (see Figure 17.3). However, the {\\it Swift}\\ uvw2 and SMARTS B band light curves are relatively flat during the SSS phase (see Bode et al. submitted), indicating LMC 2009a did go through an optical plateau phase. The data are not as extensive for V407 Cyg, but the rise in the soft X-ray emission consistent with nuclear burning on the WD (see Section \\ref{v407cygSSS}) is coincident with a short plateau in the optical light curve, as shown in Figure 17.4.\n\nThere are three other novae with well observed SSS phases in the {\\it Swift}\\ archive that are suspected to be RNe based on their outburst characteristics. The novae are V2491 Cyg, KT Eri, and V2672 Oph. Figure 17.5 shows that there is no indication of a plateau in V2491 Cyg while it was a SSS. However, the SSS phase in V2491 Cyg was extremely short, $<$10 days, which may not be sufficient time to produce a noticeable optical plateau, or the accretion disk may not have reformed this early in the outburst.\n\nThe early outburst spectra of KT Eri were indicative of the He\/N class, with high expansion velocities typical of RNe \\citep{2009ATel.2327....1R}. KT Eri also had short time-scale X-ray light curve modulations similar to those of RS Oph, see Section \\ref{sssperiods} and \\citet{2010ATel.2392....1B}, while Bode et al. 
(2011, in prep) draw attention to KT Eri's similarities with the X-ray behavior of LMC 2009a. The X-ray and V band observations are shown in Figure 17.6. The AAVSO V band light curve shows a flattening at 80 days after visual maximum, or about 10 days after KT Eri became a SSS, implying there was an optical plateau.\n\nThe case for V2672 Oph as a RN is based on its extreme expansion velocities at maximum \\citep{2009IAUC.9064....2A} and early radio synchrotron emission similar to that observed in RS Oph \\citep{2009ATel.2195....1K}. \\citet{2010MNRAS.tmp.1484M} also find many similarities between V2672 Oph and U Sco. Unfortunately, the X-ray and optical observations were hampered by the faintness at visual maximum and the relatively large column density. Based on the hardness ratio, V2672 Oph was in its SSS phase between days 15 and 30 after visual maximum (Figure 17.7). The AAVSO V band light curve is supplemented with SMARTS V band photometry, which shows a plateau between days 10 and 50 after visual maximum.\n\nOf the 4 known RNe and 3 suspected RNe, there are sufficient optical data to reveal the presence of a plateau in six. Of those six, all but V2491 Cyg have evidence of an optical plateau correlated with the X-ray SSS emission. However, \\citet{2010ApJS..187..275S} finds that not all Galactic RNe have optical plateaus. It is interesting to note that if the plateau phase is caused by reradiation off an accretion disk, as suggested by \\citet{2008ASPC..401..206H}, the inclination of the system appears to have no effect on the presence or strength of the plateau. One would expect the effect to be stronger in more face-on systems like RS Oph, $i$ = 39$^{+1}_{-10}$$^{\\circ}$ \\citep{2009ApJ...703.1955R}, than in edge-on systems such as U Sco, $i = 82.7\\pm2.9^{\\circ}$ \\citep{2001MNRAS.327.1323T}. Regardless of the root cause of optical plateaus, their presence can clearly be used as a proxy signature of SSS emission. 
However, it should be stressed that while optical plateaus likely indicate soft X-ray emission, the start and end of this phase in the optical light curve do not necessarily correspond to the turn-on and turn-off times of the SSS phase. The optical and X-ray timescales are only weakly correlated, e.g. Fig. \ref{t2turnoff}, and the two phases do not always align in the RNe and suspected RNe in this sample (Figs. 17.1-17.7).

Optical/NIR plateaus should only be observed in RNe and other fast novae that eject very little mass. In slower novae the later spectra ({\it i.e.} several tens of weeks after maximum light) are dominated by hydrogen recombination and nebular line emission, effectively hiding any irradiation effects. The continuum from the WD or a hot accretion disk can only be observed after the ejecta have sufficiently cleared.

{\bf Fig. Set}
\figsetnum{17}
\figsettitle{X-ray and optical evolution}


\figsetgrpnum{17.1}
\figsetgrptitle{RS Oph}
\figsetplot{f17_1.eps}
\figsetgrpnote{X-ray and optical evolution of RS Oph. The top panel is the {\it Swift}\ XRT (0.3-10 keV) count rate and the middle panel is the hardness ratio, (H-S)/(H+S), where H = 1-10 keV and S = 0.3-1 keV. The bottom panel shows the AAVSO V band light curve. \label{rsophplat}}


\figsetgrpnum{17.2}
\figsetgrptitle{U Sco}
\figsetplot{f17_2.eps}
\figsetgrpnote{X-ray and optical evolution of U Sco. The top panel is the {\it Swift}\ XRT (0.3-10 keV) count rate and the middle panel is the hardness ratio, (H-S)/(H+S), where H = 1-10 keV and S = 0.3-1 keV. The bottom panel shows the AAVSO V band light curve. \label{uscoplat}}


\figsetgrpnum{17.3}
\figsetgrptitle{Nova LMC 2009a}
\figsetplot{f17_3.eps}
\figsetgrpnote{X-ray and optical evolution of Nova LMC 2009a.
The top panel is the {\it Swift}\ XRT (0.3-10 keV) count rate and the middle panel is the hardness ratio, (H-S)/(H+S), where H = 1-10 keV and S = 0.3-1 keV. The bottom panel shows the AAVSO V band light curve and includes our own SMARTS photometry. \label{nlmc09plat}}


\figsetgrpnum{17.4}
\figsetgrptitle{V407 Cyg}
\figsetplot{f17_4.eps}
\figsetgrpnote{X-ray and optical evolution of V407 Cyg. The top panel is the {\it Swift}\ XRT (0.3-10 keV) count rate and the middle panel is the hardness ratio, (H-S)/(H+S), where H = 1-10 keV and S = 0.3-1 keV. The bottom panel shows the AAVSO V band light curve. To accentuate the soft contribution to the total in the top panel, the squares show the soft, 0.3-1 keV, light curve. The V band light curve includes the AAVSO data (filled circles) and the photometry of \citet{2011MNRAS.410L..52M} (diamonds). \label{v407cygplat}}


\figsetgrpnum{17.5}
\figsetgrptitle{V2491 Cyg}
\figsetplot{f17_5.eps}
\figsetgrpnote{X-ray and optical evolution of V2491 Cyg. The top panel is the {\it Swift}\ XRT (0.3-10 keV) count rate and the middle panel is the hardness ratio, (H-S)/(H+S), where H = 1-10 keV and S = 0.3-1 keV. The bottom panel shows the AAVSO V band light curve. \label{v2491cygplat}}


\figsetgrpnum{17.6}
\figsetgrptitle{KT Eri}
\figsetplot{f17_6.eps}
\figsetgrpnote{X-ray and optical evolution of KT Eri. The top panel is the {\it Swift}\ XRT (0.3-10 keV) count rate and the middle panel is the hardness ratio, (H-S)/(H+S), where H = 1-10 keV and S = 0.3-1 keV. The bottom panel shows the AAVSO V band light curve. The gaps in the light curves are due to KT Eri being behind the Sun. \label{kteriplat}}


\figsetgrpnum{17.7}
\figsetgrptitle{V2672 Oph}
\figsetplot{f17_7.eps}
\figsetgrpnote{X-ray and optical evolution of V2672 Oph.
The top panel is the {\it Swift}\ XRT (0.3-10 keV) count rate and the middle panel is the hardness ratio, (H-S)/(H+S), where H = 1-10 keV and S = 0.3-1 keV. The bottom panel shows the AAVSO V band light curve and includes our own SMARTS photometry. \label{v2672ophplat}}


\begin{figure*}[htbp]
\plotone{f17_1.eps}
\caption{X-ray and optical evolution of RS Oph. The top panel is the {\it Swift}\ XRT (0.3-10 keV) count rate and the middle panel is the hardness ratio, (H-S)/(H+S), where H = 1-10 keV and S = 0.3-1 keV. The bottom panel shows the AAVSO V band light curve. Similar figures for U Sco, Nova LMC 2009a, V407 Cyg, V2491 Cyg, KT Eri, and V2672 Oph are available in the electronic edition. \label{rsophplat}}
\end{figure*}

\subsection{SSS proxies at other wavelengths: The $[$\ion{Fe}{10}$]$\ line\label{fex}}

\btxt{\citet{2001AJ....121.1126V} used the evolution of the UV emission line light curves developed by \citet{1996ApJ...463L..21S} for V1974 Cyg to estimate turn-off times. This allowed \citet{2001AJ....121.1126V} to determine the nuclear burning timescales of five novae with no pointed X-ray observations but significant amounts of {\it IUE} data. Unfortunately, it is currently difficult to obtain sufficient UV emission line data to utilize this technique, while the optical plateau (\S \ref{RNplateau}) only applies to fast and recurrent novae. Another X-ray proxy is needed for slower novae.}

The emergence of the coronal $[$\ion{Fe}{10}$]$\ 6375\AA\ line in the nebular spectra of novae has long been recognized as a strong indication of photoionization of the ejecta by a hot source \citep[e.g.,][]{1989ApJ...341..968K}. With an ionization potential of 235 eV, an ejected shell must be highly ionized by a hot WD to produce $[$\ion{Fe}{10}$]$.
While shocks can also produce high temperatures, in all but the RS Oph-type RNe they contribute only very early in the outburst and are insignificant during the later nebular phase, when $[$\ion{Fe}{10}$]$\ is typically observed. For example, strong $[$\ion{Fe}{10}$]$\ and [\ion{Fe}{14}] 5303\AA\ emission has been observed in RS Oph in all outbursts with adequate spectroscopic coverage \citep{2009ApJ...703.1955R}. However, these lines appear well before the SSS phase begins. A relationship between $[$\ion{Fe}{10}$]$\ and soft X-ray emission has not been demonstrated previously, but the case can be strengthened with our larger nova sample.

Seven novae with confirmed SSS emission, GQ Mus \citep{1989ApJ...341..968K}, V1974 Cyg \citep{1995A&A...294..488R}, V1494 Aql \citep{2003A&A...404..997I}, V723 Cas \citep{2008AJ....135.1328N}, V574 Pup \citep{Heltonthesis}, V597 Pup, and V1213 Cen \citep{2010ATel.2904....1S}, all had strong $[$\ion{Fe}{10}$]$\ lines in their late nebular spectra. Example spectra of V597 Pup and V1213 Cen from our SMARTS archive are shown in Figure \ref{fexplots}. In addition, extensive optical spectra from our Steward Observatory northern hemisphere nova monitoring campaign show that V2467 Cyg may also have had weak $[$\ion{Fe}{10}$]$\ emission at the same time it was a SSS, but this cannot be confirmed due to nearby \ion{O}{1} lines. These novae clearly show that the presence of strong $[$\ion{Fe}{10}$]$\ in the optical spectrum is indicative of underlying SSS emission. To our knowledge there has never been a nova with strong $[$\ion{Fe}{10}$]$\ emission that was not also a SSS during contemporaneous X-ray observations. While additional optical and X-ray observations are needed to fully test this hypothesis, ground based spectroscopic monitoring is a powerful tool for detecting SSS novae from $[$\ion{Fe}{10}$]$\ emission in novae with significant ejected mass.
The RNe and very fast CNe with rapid turn-on/off times do not remain strong photoionization sources long enough to produce $[$\ion{Fe}{10}$]$\ in their meager ejected shells.

\begin{figure*}[htbp]
\plottwo{v1213cen_100627.ps}{v597pup080326.ps}
\caption{$[$\ion{Fe}{10}$]$\ 6375\AA\ emission in V1213 Cen (left) and V597 Pup (right) obtained on June 27th, 2010 (415 days from visual maximum) and March 26th, 2008 (133 days from visual maximum), respectively. The $[$\ion{Fe}{7}$]$ 6087\AA\ line is also visible in both spectra. Strong $[$\ion{Fe}{10}$]$\ emission relative to $[$\ion{Fe}{7}$]$ is a hallmark of novae in their SSS phase. \label{fexplots}}
\end{figure*}


\section{SUMMARY}

Over the last decade our knowledge of the X-ray behavior of novae has increased dramatically with the launch of the latest generation of X-ray facilities. Observations of novae when they are radiating the majority of their flux in the soft X-ray band provide critical insight into the behavior of the WD and TNR processes. Currently \totalSSS\ Galactic/Magellanic novae have been observed as SSSs, of which \totalswiftSSS\ classifications have come from over 2 Ms of {\it Swift}\ observations during the last five years.

This large sample shows that individual novae can differ significantly from fits to smaller ensemble data sets, such as the t$_2$ relationship of \citet{2010ApJ...709..680H} and the expansion velocity relationship of \citet{2003A&A...405..703G}. Surprisingly, there is also no relationship between orbital period and the duration of nuclear burning. This large data set confirms that many factors are in play in the evolution of the SSS phase.

The duration of nuclear burning on the WD is short, with 89\% of the novae in this expanded sample turning off within 3 years. The median duration of the sample is 1.4 years.
This contrasts with the same distribution in M31, which peaks at longer-burning novae. The difference is likely a selection effect between the two surveys.

The new {\it Swift}\ data are also challenging our understanding of novae with highly variable X-ray light curves, both during the rise to and at X-ray maximum. Various mechanisms are likely at work to produce the variability. Additional observations are warranted not only to help decipher the current peculiar observations but also to be sure that we have captured the full range of variability behaviors, both periodic and non-periodic, that novae may yet produce. Long {\it XMM}\ and {\it Chandra}\ grating observations can explore the short term oscillations more effectively than {\it Swift}, whereas {\it Swift}\ can easily track the long term behavior such as turn-on and turn-off times. In addition, simultaneous X-ray/UV observations, only available through {\it XMM}\ and {\it Swift}, will continue to be a powerful tool to test the evolution of the emission from the WD during the outburst.

To date no strong dust-forming novae have been detected as a SSS. V2362 Cyg did have detectable soft X-ray photons, but it was not similar to any of the other SSS novae. While V574 Pup and V2467 Cyg were in the SSS phase, they had IR features indicating weak silicate dust emission. V1280 Sco had a large DQ Her-like dust event but also ejected so much material, and at such a low velocity, that it is still optically thick several years after visual maximum. Any SSS phase will not be detected until this material clears.

There are optical behaviors that track SSS emission in novae. For the RNe with well defined plateaus in their optical light curves, RS Oph and U Sco, the X-ray light curves reach maximum around the same time. However, not all RNe and suspected RNe in the sample had optical plateaus even though they had well documented observations during X-ray maximum.
An optical spectroscopic signature indicative of an SSS phase is the presence of strong $[$\ion{Fe}{10}$]$\ 6375\AA\ emission. In the sample, all novae with $[$\ion{Fe}{10}$]$\ that were subsequently observed in the X-ray were SSSs. These were slower novae that ejected significantly more material than the RNe. The converse of the $[$\ion{Fe}{10}$]$\ relationship does not hold, since the source may turn off before $[$\ion{Fe}{10}$]$\ can be created in the ejecta. While neither optical plateaus nor $[$\ion{Fe}{10}$]$\ emission has yet been shown to be simultaneous with SSS emission, these relationships offer excellent opportunities to use ground-based monitoring to coordinate X-ray observations during the important SSS phase.

Additional X-ray data need to be collected since the sample is statistically meager, with only \totalSSS\ known SSS novae, and is smaller still for novae with early, hard X-ray detections. Trends can be difficult to confirm given the wide range of behavior observed during the different X-ray phases. With the sample heavily biased toward fast and recurrent novae, effort should be expended on novae that are not currently well represented in the X-ray sample, such as slow and dust forming novae. Monitoring of the two slow novae that have been detected as X-ray sources, V5558 Sgr and V1280 Sco, but have not yet evolved to a SSS state will help in understanding slow systems. Likewise, {\it Swift}\ monitoring of the two long lasting SSSs, V723 Cas and V458 Vul, is also of interest since they are rare and, thus, important to our understanding of why they persist.

Finally, it is important to continue to collect X-ray observations of novae and build on this sample.
This analysis shows that each nova is in some ways unique and that attempts to predict nova behavior based on a relationship to a single observational value, {\it e.g.} t$_2$ versus the nuclear burning timescale, are fraught with difficulties. Some of these problems can be addressed by expanding the sample to include regions of the parameter space that are not well represented. This X-ray sample includes few slow novae, which likely explains the differences between the nuclear burning timescales of the Milky Way and M31 surveys. It is equally important to obtain numerous, high quality data for all bright novae through their evolution and at different wavelengths, from X-ray to radio. Multiwavelength observations are critical to properly interpret nova phenomena, such as the apparent early turn-off in V458 Vul, and to verify periodicities seen in the X-ray, particularly potential orbital periods. With the understanding that comes from a few well observed novae like RS Oph and U Sco, the entire nova data set can be anchored to nova theory. These large data sets also reveal new phenomena, such as the strong X-ray variability, that are not appreciated in novae with sparser observations or detected at other wavelengths.

\acknowledgments

This research has made use of data obtained from NASA's {\it Swift}\ satellite. We thank Neil Gehrels and the {\it Swift}\ team for generous allotments of ToO and fill-in time. We acknowledge funding support from NASA NNH08ZDA001N1. Stony Brook University's initial participation in the SMARTS consortium was made possible by generous contributions from the Dean of Arts and Sciences, the Provost, and the Vice President for Research of Stony Brook University. We acknowledge with thanks the variable star observations from the AAVSO International Database contributed by observers worldwide and used in this research. JPO, KP, PE \& AB acknowledge the support of the STFC.
SS acknowledges partial support from NASA and NSF grants to ASU. JJD was supported by NASA contract NAS8-39073 to the {\it Chandra}\ X-ray Center.

{\it Facilities:} \facility{Swift(UVOT/XRT)}, \facility{AAVSO}, \facility{CTIO:1.3m}, \facility{CTIO:1.5m}, \facility{Bok(B\&C spectrograph)}, \facility{Spitzer(IRS)}