diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzziygz" "b/data_all_eng_slimpj/shuffled/split2/finalzziygz" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzziygz" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec:intro}\n\n\nThe recent development of micro-fluidic based devices has pushed the\nscientific community to reconsider classical issues in fluid dynamics\nsuch as the hydrodynamical behavior of micro-scale bodies\nin the creeping regime. For instance, propulsion of microswimmers,\nsee e.g.~\\cite{fauci2006biofluidmechanics,lauga2009hydrodynamics,guasto2012fluid,elgeti2014physics}, \nhas been addressed for the production of energy oriented to \nmicro-devices~\\citep{di2010bacterial,sokolov2010swimming} \nand for the self-propulsion of micro-robots~\\citep{pak2012micropropulsion,alouges2013optimally}. \nThe microswimmer's behavior in presence of confining surfaces is also \ncrucial for biofilm formation. Most studies~\\citep{pratt1998genetic, rusconi2010laminar}\nfocused on the biofilm formation at solid-liquid interfaces.\nBiofilm formation near air-liquid interfaces is also a problem of significant concern, since\nthe liquid-air biofilm can be advantageous for aerobic microorganisms, \nproviding them access to oxygen (from the air) and nutrients \n(from the liquid) at the same time~\\citep{constantin2009bacterial, koza2009characterization}.\nSeveral species of flagellated bacteria are known to form biofilms at liquid\/air interfaces, e.g. \nBacillus subtilis, Bacillus cereus and Pseudomonas fluorescens. \nFor instance, the preparation of the well known Natto from Japanese cooking, consisting of fermented soy beens, involves \nB. subtilis in significant concentrations~\\citep{chantawannakul2002characterization}. In contrast, some strains of B. cereus are known to be harmful \nto humans and cause foodborne illness~\\citep{ehling2004bacillus}. P. fluorescens, instead, is able to contaminate heparinized saline flushes used in cancer therapy~\\citep{gershman2008multistate}. Despite its biological implications, the mechanics of flagellated swimmers close to a liquid-air interface is much less explored than the case of a liquid-solid interface.\n\n\nAt micro-scales viscous effects overwhelm inertia leading to the development\nof apparently counterintuitive swimming strategies as proved by the \nScallop theorem~\\citep{purcell1977life,lauga2011life}. In a nutshell, \nif the swimmer deforms through a sequence of body configurations which are\nperiodic and time-reversible (reciprocal motion), its average motion is zero. \nThe reciprocal motion can be exploited for locomotion only when \nnon-linear or memory effects are relevant.\nExamples are the non-Newtonian behavior of\nthe fluid~\\citep{qiu2014swimming,lauga2009life,keim2012fluid} or motions occurring close to \ndeformable interfaces~\\citep{trouilloud2008soft}. Microorganisms developed \nmany different strategies based on nonreciprocal effects to overcome the above \nrestrictions. 
The \\textit{Spiroplasma} deforms its cytoskeleton by propagating \npairs of kinks~\\citep{trachtenberg2003bacterial,yang2009kinematics}.\nOther microswimmers exploit the wavy motion of their cilia~\\citep{maxey2011biomimetics},\nas done by \\textit{Paramecium}~\\citep{jana2012paramecium}.\n\nHowever, many microorganisms take advantage of single or multiple \nflagella, such as the \\textit{Caulobacter crescentus}, which has a single \n(right-handed) helical filament driven by a rotary motor~\\citep{li2006low}, and the \n\\textit{Escherichia coli}, which has multiple flagella~\\citep{berg2004coli}. \nThe flagellar motor which activates and controls the filament rotation is \nable to switch between the two rotation directions~\\citep{wang2014switching}, \nand, as a first approximation, the torque applied to the filament is \nconstant~\\citep{berg2004coli}. \nIn the case of \n\\textit{Escherichia coli}, the flagella arrange in a bundle, characterized by \na left-handed rotation of the motor, \nwhich confers on the bacterium a smooth forward motion (\\textit{run} phase).\nThe flagella are also able to invert their rotation direction in \norder to bring the bacterium to rest and change its direction \n(\\textit{tumble} phase).\nDue to their internal structure, the filaments can only assume twelve prescribed \nshapes (helical polymorphic states), but only one of them is the \n\\lq\\lq normal state\\rq\\rq, i.e. the one most frequently observed during the \\textit{run} \nphase~\\citep{darnton2007force,vogel2010force}.\nHence, the flagellar bundle can be \nmodeled as a single rigid filament~\\citep{phan1987boundary,shum2010modelling} \nrotating around the bacterial head axis.\n\n\nThe main purpose of this work is an extensive analysis of the hydrodynamical\nbehavior of a flagellated microswimmer close to air-liquid interfaces, with particular \nattention to free surfaces. \nFor the sake of definiteness, we focus on a simplified geometry modeling the configuration of E. coli, whose swimming is probably the most widely studied from both the experimental and the numerical point of view. Nevertheless, in order to explore the effect of different geometrical parameters, several modified configurations are also discussed.\n\n\nFlagellated swimmers close to a free surface have already been addressed by simplified models, see e.g.~\\cite{crowdy2011two, di2011swimming, lopez2014dynamics}, where two-dimensionality, resistive force theory, and multipole expansion techniques were exploited, respectively.\nTo the best of our knowledge, the present work provides the first fully three-dimensional numerical simulation of a swimmer in the presence of a liquid-air interface.\nGiven the linearity of\nthe Stokes equations, which are the appropriate model for creeping flows, the numerical\napproach exploits the Boundary Element Method (BEM). The BEM \ncan easily handle the complex geometry of the microswimmer and account for\ndifferent boundary conditions. Moreover, a flat and \ninfinitely extended free surface and\/or solid wall (when needed for comparisons) \ncan be easily included by considering the appropriate Green's function, thus \navoiding any undesired effect of the numerical truncation of the domain.\n\nThere is a wide body of literature dealing with the motion of \nmicroorganisms in free space or close to solid boundaries. 
\nSince the first studies by~\\cite{taylor1951analysis}, the \nattention was principally focused on microorganisms whose \nflagellum was modeled as an infinite cylindrical filament\nin an unbounded fluid endowed with small amplitude wavy motion. \nIn fact, it was initially believed that the flagella were only moved by \nwave propagation. However~\\cite{berg1973bacteria} showed that bacteria could\nalso rotate their flagella in a corkscrew-like motion, moving the flagellar bundle as \na single filament.\nThe flagellum has a large aspect ratio, with length exceeding \nthickness by even more than two orders of magnitude~\\citep{lauga2009hydrodynamics}. \nThis particular geometry pushed the adoption of a \nSlender Body Theory (SBT) that has been exploited in several studies to evaluate \nthe translational velocity and\/or the torque applied by the \nswimmer~\\citep{hancock1953self}. \nSuccessively \\cite{higdon1979hydrodynamic,higdon1979hydrodynamics} \ntransformed the Stokes equations into a system of singular integral equations \naccounting for the swimmer translational and angular velocities. He added to the SBT \n a variable strength of the singularity along the flagellum centerline, \nthus modelling different centerline geometries. In this case both the planar \nsinusoidal motion and the rotation about the body axis were amenable to modelling.\nBy an appropriate system of images the SBT could also account for a spherical cell body and \nthe presence of a wall. However too many restrictions still confined the SBT application \nto extremely simplified configurations. \n\nIn recent years the increase of computational resources, led to a more \nextensive use of the BEM which overcomes many drawbacks of the SBT in dealing \nwith microswimmers both in free space and confined conditions. \\cite{phan1987boundary} \nused the BEM to study the motion of a microorganism in free space. \nSuccessively,~\\cite{ramia1993role} applied the BEM \nto the interaction between the swimmer and a solid wall, showing that, \nwhen swimming close to a solid wall, the swimmer exhibited a \ncircular motion. The BEM was also \nused to study the interactions between two neighboring flagellated microswimmers\nhighlighting the possibility of a coordination between their flagellar motion \nin order to maximize their velocities~\\citep{ramia1993role}. \n \nMore recently,~\\cite{lauga2006swimming} experimentally investigated the motion \nof an E. coli near a solid wall and found, as predicted by~\\cite{ramia1993role}, \na circular clockwise motion. In the paper the authors also\nprovided a simple theoretical model which was able to explain their experimental\nobservations. Interestingly, the same model suggested that the bacteria\nshould reverse its rotation when swimming in proximity of a free surface. \nThe same authors also showed that the swimmer is attracted by a solid wall.\nThis behavior was deeply investigated a few years later \nby~\\cite{giacche2010hydrodynamic} in a numerical work which highlighted \nhow the bacteria could move at a stable distance from a wall. At the same \ntime~\\cite{shum2010modelling} investigated the motion of a microswimmer \nclose to solid walls by considering many geometrical configurations\nand relating the shape of the body to the propulsion efficiency and to the\npossibility of achieving different motions, i.e. 
to be attracted by the wall, \nto escape from the wall or to reach stable circular orbits at a given \nwall-normal distance.\nThe same authors~\\citep{shum2012effects} also focused on the flexible hooks \nlinking each flagellum to the cell. They studied the modifications in the microswimmer trajectories\nwhen changing the hook rigidities. They found that, within an intermediate range of rigidities, the \nswimmer behavior doesn't change too much with respect to the simpler model of \nrigid hook. The same work highlighted how, for particular values of relative hook stiffness, \nthere is a transient phase of periodic motion with constant average distance from the wall, \nleading to boundary accumulation. \nThe tendency of swimming microorganisms to accumulate near solid \nwalls~\\citep{li2009accumulation}, with particular emphasis on collisions with the surface \nand rotational Brownian motion, has also been investigated and linked \nto the swimming speed and the cell size~\\citep{li2011accumulation} of the microswimmer.\nRecent works extended the investigation about the motion of microswimmers \nto more complex surfaces: a clean fluid-fluid interface, a slipping rigid wall, and a fluid interface covered \nby surfactants~\\citep{lopez2014dynamics}. \nThe authors used an asymptotic, far field approximation \nto represent the actual swimmer, \nretaining\ninformation about velocities and rotations.\nThe case of two fluid interfaces, in the limit of vanishing \nviscosity of one of them, allowed to describe a free surface. \nIn such conditions, the swimmer exhibited counter-clockwise motion.\nThis confirmed the results anticipated by~\\cite{lauga2006swimming}. \nAlso~\\cite{di2011swimming} supported by means of experimental \nobservations the theoretical prediction made through a simplified model based \non the method of images and the resistive force theory. Even if neglecting all the \ndynamics of the swimmer outside the interface plane, this work provided a simple \nexplanation for a counter-clockwise motion over a perfectly slipping surface, showing \na good agreement between the results of the simplified model and the experimental \nobservation. \nBriefly, ~\\cite{lauga2006swimming} explain that, when swimming above a no slip surface, a positive rotation rate of the swimmer head around its longitudinal axis produces a lateral force which is opposite to the one induced by the negative tail rotation rate. As a consequence, a net torque \nnormal to the wall is exerted on the body such that the swimmer follows a clockwise (CW) trajectory.\nOtherwise, when swimming close to a liquid-air interface,~\\cite{di2011swimming} devise a simplified model based on resistive force theory endowed with suitable symmetries to satisfy the free slip condition at the interface. \nIn this model, the swimmer moves under the effect of the velocity generated by its mirror image below the interface.\nThe counter-rotating image head produces on the swimmer head a lateral velocity which is opposite to the force that is exerted in the case of a solid wall.\nSuch relative velocity gives rise to a corresponding viscous force in the same direction. Since the same reasoning applies to the counter rotating tail, the overall torque on the microswimmer is also opposite to the one experienced on a no-slip wall, hence\n a net CCW motion is produced. \nIn literature a clockwise motion \nhas been observed~\\citep{lemelle2010counterclockwise} \nalso in presence of a free surface. 
\nBased on experimental observations,~\\cite{morse2013molecular} attributed such behavior to \nmolecular adsorption (due to the presence of biological material in the growth medium) \naltering the rheological properties of the air\/water interface, thus determining the swimming \npattern of nearby cells. \n\nIn principle, the mechanical causes that can affect the swimming direction are: \n$i)$ the rotation direction of the flagellum bundle;\n$ii)$ the effective boundary condition at the planar surface.\nThis study focuses on point $ii)$ through the numerical simulation of\nthe motion of an \\textit{E. coli}-like microswimmer \nclose to free-slip and no-slip surfaces, assuming a standard left-handed arrangement for \nthe flagellum bundle. \nThe aim is to investigate the behavior of the swimmer by \naddressing in full detail the complete three-dimensional nature of the hydrodynamical \ninteraction between the swimmer and the surface. \nIt is worth noting that free-slip and no-slip are the limits \nof the more general Navier boundary condition that \nconnects the velocity at the liquid boundary with the tangential \nstress~\\citep{bazant2008tensorial}.\nFor liquid water moving on a solid surface, the actual slip is \nnegligible at the micro-scale even for hydrophobic coatings \n\\citep{chinappi2010intrinsic,sega2013regularization}, and only the presence \nof vapor bubbles trapped in the surface asperities (superhydrophobic surfaces) \nleads to significant slippage~\\citep{gentili2014pressure,bolognesi2013novel}, potentially\naltering the motion of particles close to the surface\n~\\citep{pimponi2014mobility,nizkaya2014flows}. Hence, the no-slip boundary is, for\nthe present purposes, a reliable model of a rigid wall. \nOn the other hand, the proper boundary conditions at the liquid-air interface \nare impermeability and continuity of the tangential stress \ncomponents; the latter reduces to free-slip (zero tangential stress) since \nthe air viscosity is orders of magnitude smaller than that of the liquid.\n\nThe paper is organized as follows:\nthe geometrical model of the swimmer and the BEM are addressed in \n\\S~\\ref{sec:model}. \\S~\\ref{sec:FreeSurf} reports the salient results \nconcerning the motion of the swimmer in the presence of a free surface.\nFinally, we discuss the major findings and point out the main conclusions \n(\\S~\\ref{sec:conclusions}), together with a perspective on future work. \n\\section{Boundary integral formulation for the microswimmer}\\label{sec:model}\n\n\\subsection{Swimmer model}\\label{ssec:swim_mod}\n\nThis section concerns the modelling of a microswimmer \ninspired by \\textit{Escherichia coli}. This bacterium has \nbeen investigated in depth in the literature, and a great deal of information about \nits geometry and propulsion mechanism is available. \\textit{E. coli} has a \ncell length which varies between $1.6$ and $3.9 \\, \\mu m$ and \na cell width ranging between $0.9$ and $1.7 \\, \\mu m$, \nwith a resulting cell volume from \n$1.5$ to $4.4 \\, \\mu m^3$~\\citep{volkmer2011condition}, and an\naverage flagellar length of about $7 \\, \\mu m$~\\citep{lauga2006swimming}.\n\n\\begin{figure}\n \\centerline{\\includegraphics[height=6cm,width=12cm]{fig1.pdf}}\n \\caption{The model of the flagellated swimmer comprises an\n ellipsoidal head (cell) and a tubular, helical, rigid tail (flagellum).\n The tail rotates about its axis $\\vec{e}_T$ through an angle $\\phi(t)$. \n The dimensionless semi-axes of the head are $a_1$ and $a_2$. 
\n The head aspect ratio is kept constant, $AR=a_1\/a_2=2$, and the \n equivalent radius (i.e. the radius $\\bar{a}$ of the sphere with same \n volume) is the assumed reference length.\n The dimensionless tail length is $L=7$, with a cross-section \n radius $a_t=0.05$. The dimensionless helix amplitude and \n wavelength are $A$ and $\\lambda$, respectively. \n $\\{ \\vec{e_1},\\vec{e_2},\\vec{e_3} \\}$ are the orthonormal \n vectors of the frame attached to the swimmer head (body frame). \n $\\vec{e_1}$ is longitudinal and identifies the swimmer \n orientation with respect to the unit vectors of the \n fixed frame $\\{ \\vec{X},\\vec{Y},\\vec{Z} \\}$.\n }\n\\label{fig:geometry}\n\\end{figure}\nFigure~\\ref{fig:geometry} sketches the simplified geometry comprising the \nellipsoidal axisymmetric cell and the corkscrew tail. Hereafter the equivalent \nradius of the ellipsoidal cell $\\bar{a}$ (the radius of the sphere having the same \nvolume) will be used as reference length-scale, i.e. the dimensionless cell volume is \n$V = 3 V'\/ {4 \\pi \\bar a}^3 = 1$, where the prime identifies dimensional \nquantities. The cell aspect ratio is $AR=a_1\/a_2=2$, being $2 a_1$ and $2 a_2$ the \nlongitudinal and the transversal (dimensionless) axis. \nFollowing~\\cite{shum2010modelling}, the tail bundle is modelled \nas a single helix with radius $a_T=0.05$. The tail \nrotates around its axis $\\vec{e_T}$ (see figure~\\ref{fig:geometry}). \nThe dimensionless axial length of the helix is \n$L$, $A$ denotes the helix amplitude and $\\lambda$ its wavelength. \nIn the present work we selected typical values \nfor the tail length and cell axes, namely $L = 7$, $a_1 = 1.6$ and $a_2 = 0.8$, \nwhile the amplitude $A$ and the wavelength $\\lambda$ of the tail are systematically \nvaried. With the above choices, the\nswimmer is rescaled into an actual \\textit{E. coli} \nby assuming $\\bar{a}=1 \\,\\, \\mu m$.\n\\begin{figure}\n \\centerline{\\includegraphics[height=6cm,width=12cm]{fig2.pdf}}\n \\caption{Sketch of the discretized swimmer near a planar surface.\n The configuration is identified by three parameters,\n namely, the distance $h$ of the reference point $\\vec{x}_J$ \n from the plane and the pitch angle $\\Theta$. \n The third parameter is the tail rotation angle $\\phi$ defined in\n figure~\\ref{fig:geometry}. \n }\n\\label{fig:pitch_angle}\n\\end{figure}\n\nIt is useful to introduce a body reference frame with orthonormal base vectors \n$\\{ \\vec{e_1},\\vec{e_2},\\vec{e_3} \\}$ \nwhere $\\vec{e_1}=-\\vec{e}_T$ is longitudinal (see figure~\\ref{fig:geometry}).\nThe subscript $H$ identifies the head and the subscript $T$ refers \nto the tail, which can rotate with respect to the head about its axis $\\vec{e_T}$. \nDuring the tail rotation around $\\vec{e}_T$, each point of the rigid tail \ndescribes a circle in the plane normal to $\\vec{e}_T$, being $\\phi$ the \ncorresponding rotation angle (or flagellum phase).\nThe time derivative $\\dot{\\phi}(t)$ is the tail rotational velocity $\\Omega_T$. 
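\nFor definiteness, a possible parametrization of the tail centerline, consistent with the parameters introduced above, is\n\\[\n\\vec{x}(\\xi,t)=\\vec{x}_J+\\xi\\,\\vec{e}_T+A\\left[\\cos\\!\\left(\\frac{2\\pi\\xi}{\\lambda}+\\phi(t)\\right)\\vec{n}_1+\\sin\\!\\left(\\frac{2\\pi\\xi}{\\lambda}+\\phi(t)\\right)\\vec{n}_2\\right],\\qquad \\xi\\in[0,L],\n\\]\nwhere $\\xi$ is the coordinate along $\\vec{e}_T$ measured from the cell-to-tail junction $\\vec{x}_J$ and $\\{\\vec{n}_1,\\vec{n}_2\\}$ is an orthonormal pair normal to $\\vec{e}_T$. This expression is only meant to illustrate how the amplitude $A$, the wavelength $\\lambda$, the axial length $L$ and the phase $\\phi(t)$ enter the geometry; the handedness of the helix and the detailed treatment of the tail near the junction are those sketched in figure~\\ref{fig:geometry} and are not fixed by this formula.\n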
\nThe swimmer position is identified in a fixed reference frame \nwith base \n$\\{\\vec{X},\\vec{Y},\\vec{Z}\\}$ by the three coordinates of the cell-to-tail \njunction point $\\vec{x}_J$, see figure~\\ref{fig:pitch_angle}.\nThe swimmer translates with dimensionless velocity $\\vec{U}=\\vec{U'}\/v$\nand has angular velocity $\\vec{\\Omega}_H=\\vec{\\Omega'}_H \\bar{a}\/ v$,\nwhere $v \\simeq 20\\mu m \/s$ is a typical swimming velocity \ntaken as reference quantity.\nThe $\\{ \\vec{X},\\vec{Y} \\}$ coordinate plane \nis taken to coincide with the planar boundary, either\nthe solid wall or the free surface, see figure~\\ref{fig:pitch_angle}.\n\nThe kinematics of the swimmer is described by seven degrees of freedom, \nthe junction position $\\vec{x}_J(t)$, the flagellum phase $\\phi(t)$ and the \nthree parameters defining the rotation of \nthe head. We stress that the discretely evolved rotation matrix should\nbelong to the proper matrix subspace, namely the $\\rm SO(3)$ subgroup. \nThis is enforced through a description in terms \nof quaternions with unit norm, see e.g.~\\cite{diebel2006representing}. \nIt follows,\n\\begin{eqnarray}\n\\vec{\\dot{x}}_J &=& \\vec{U}(t) \\,\\, , \\label{eqn:kinematic1} \\\\\n\\dot{\\phi} &=& \\Omega_T (t) \\,\\, , \\label{eqn:kinematic2} \\\\\n\\vec{\\dot{q}} &=& \\frac{1}{2}\\vec{S}(\\vec{q}) \\, \\vec{\\Omega}_H(t)\\,\\, , \\label{eqn:kinematic3}\n\\end{eqnarray}\nwhere $\\vec{q}= (q_0 , q_1 , q_2 , q_3)$ is the quaternion vector\nwith $\\vert \\vec{q} \\vert =1$. When $\\vec{\\Omega}_H(t)$ \nis expressed in the \n$\\{\\vec{X},\\vec{Y},\\vec{Z}\\}$ base, $\\vec{S}$ reads \n\\begin{equation}\n \\vec{S}(\\vec{q})=\n \\left[ {\\begin{array}{rrr}\n -q_1 & -q_2 & -q_3 \\\\\n q_0 & -q_3 & q_2 \\\\\n q_3 & q_0 & -q_1 \\\\\n -q_2 & q_1 & q_0 \\\\\n \\end{array} } \\right] \\, .\n\\label{eqn:w_def}\n\\end{equation}\nGiven the quaternion $\\vec{q}(t)$, the components of the body frame unit vectors\ncan be reconstructed as\n\\begin{eqnarray}\n\\nonumber \n\\vec{e_1}= \\left( q_0^2+q_1^2-q_2^2-q_3^2, 2 (q_1 q_2+q_0 q_3), 2 (q_1 q_3 - q_0 q_2) \\right) \\,\\, , \\\\\n\\vec{e_2}= \\left( 2 (q_1 q_2 - q_0 q_3), q_0^2-q_1^2+q_2^2-q_3^2, 2(q_2 q_3 + q_0 q_1) \\right) \\,\\, , \\\\\n\\nonumber \n\\vec{e_3}= \\left( 2(q_1 q_3+q_0 q_2), 2(q_2 q_3 - q_0 q_1), q_0^2-q_1^2-q_2^2+q_3^2 \\right) \\,\\, . \n\\label{eqn:bodyunitvectors}\n\\end{eqnarray}\nTime integration of equations (\\ref{eqn:kinematic1}),(\\ref{eqn:kinematic2}),\n(\\ref{eqn:kinematic3}), allows to track the swimmer trajectory once \n$\\vec{U}$, $\\vec{\\Omega}_H$ and $\\Omega_T$ are determined by the solution of the \nhydrodynamical interactions between the swimmer and the fluid, \nas discussed in the next section. 
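\nAs an illustration of how the kinematic equations~(\\ref{eqn:kinematic1})--(\\ref{eqn:kinematic3}) can be advanced in time, a minimal sketch in Python is reported below. It is not the actual implementation: the first-order update and the function names are illustrative assumptions, while the production code uses a third-order low-storage Runge-Kutta scheme and obtains $\\vec{U}$, $\\vec{\\Omega}_H$ and $\\Omega_T$ from the boundary element solver described in the following subsections.\n\\begin{verbatim}\nimport numpy as np\n\ndef S(q):\n    # matrix S(q) defined in the text: maps the head angular velocity,\n    # expressed in the fixed frame, to the quaternion rate\n    q0, q1, q2, q3 = q\n    return np.array([[-q1, -q2, -q3],\n                     [ q0, -q3,  q2],\n                     [ q3,  q0, -q1],\n                     [-q2,  q1,  q0]])\n\ndef advance(xJ, phi, q, U, Omega_H, Omega_T, dt):\n    # explicit first-order step for the kinematic equations\n    xJ  = xJ + dt * U\n    phi = phi + dt * Omega_T\n    q   = q + dt * 0.5 * S(q) @ Omega_H\n    q   = q / np.linalg.norm(q)   # keep |q| = 1, see the remark below\n    return xJ, phi, q\n\\end{verbatim}\n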
Since the\nnumerical integration error in the equations for the quaternion may affect\nits norm, the quaternion is normalized to unit length at each time step.\nIt easily shown that the accuracy \nof the integration scheme is exactly preserved by this procedure.\n\n\\subsection{Hydrodynamic model}\\label{ssec:math_model}\n\nGiven the characteristic length scale $\\bar{a}$ and swimming velocity $v$ of \nthe microorganism, \nthe typical Reynolds number in water is \n$Re=\\rho \\bar{a} v \/ \\mu \\simeq 2*10^{-5}$ where $\\mu$ and $\\rho$ \nare the water viscosity and density, respectively.\nHence the inertial terms are negligible and, for Newtonian fluids, the\nflow in the domain ${\\cal D}$ is described by the Stokes equations, \n\\begin{subeqnarray}\n\\boldsymbol{\\nabla} \\cdot \\vec{u} &=& 0 \\,\\, , \\label{eqn:Continuity} \\\\\n\\nabla^2 \\vec{u}-\\boldsymbol{\\nabla} p &=& 0 \\,\\, ,\n\\label{eqn:Stokes}\n\\end{subeqnarray}\nwhere $\\vec{u}=\\vec{u'}\/v$ and $p=p' \\bar{a}\/(\\mu v)$.\n\nThe flow velocity $\\vec{u}$ is forced by the tension \n$\\vec{f}$ at the swimmer surface acting on the fluid.\nThe swimmer propulsion \nis due to the internal (constant) torque $\\tau_M = \\tau'_M\/(\\mu v \\bar{a}^2)$\nexchanged between head and tail.\nBeing a free body, the total force and torque on the swimmer (head and tail) \nis zero, \n\\begin{subeqnarray} \n\\int_{H \\cup T}{\\vec{f} dS} &=& 0 \\,\\, , \\\\\n\\int_{H \\cup T}{\\vec{r} \\wedge \\vec{f} dS}&=&0 \\,\\, .\n\\label{eqn:balances}\n\\end{subeqnarray}\nwhere $\\vec{r} = \\vec{x} - \\vec{x}_J$. \nThe torque $\\tau_M$ exerted on the tail is balanced by the torque \nproduced by the fluid stresses $\\vec{f}$ on the tail boundary, namely\n\\begin{equation}\n\\vec{e_T} \\cdot \\int_{T}{\\vec{r} \\wedge \\vec{f} dS}=-\\tau_M \\,\\, .\n\\label{eqn:Torque}\n\\end{equation}\n\nThe system ~(\\ref{eqn:Stokes}),\n~(\\ref{eqn:balances}) and ~(\\ref{eqn:Torque}) \nneeds proper boundary conditions \nat the swimmer surface and external boundaries, i.e. wall or free surface.\nOn the microswimmer surface the no-slip \ncondition yields\n\\begin{subeqnarray} \n\\vec{u}(\\vec{x}) &=& \\vec{U}+\\vec{\\Omega}_H \\wedge \\vec{r}, \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, (\\vec{x}\\in H) \\,\\,, \\\\\n\\vec{u}(\\vec{x}) &=& \\vec{U}+(\\vec{\\Omega}_H + \\Omega_T \\vec{e}_T) \\wedge \\vec{r}, \\,\\,\\,\\,\\, (\\vec{x}\\in T) \\,\\, .\n\\label{eqn:swim_velocity}\n\\end{subeqnarray}\nNo-slip boundary condition is also used to model the solid wall.\nFor the free surface vanishing \nnormal fluid velocity (impermeability) and zero tangential stresses \n(free slip condition) are the appropriate prescriptions,\n\\begin{subeqnarray}\n\\vec{u} \\cdot \\vec{n} &=& 0 \\,\\, , \\label{eqn:impermeab} \\\\\n(\\left( \\vec{\\nabla}\\vec{u} + (\\vec{\\nabla} \\vec{u})^\\mathrm{T} \\right) \\cdot \\vec{n})(\\vec{I}-\n\\vec{n}\\otimes\\vec{n}) &=& 0 \\,\\, .\n\\label{eqn:noshear}\n\\end{subeqnarray}\n\n\\subsection{Boundary Element Method}\\label{ssec:num_solution}\n\nThe solution of the Stokes equations~(\\ref{eqn:Stokes}) can be \nrecast in integral form\n\\begin{equation}\nE(\\vec{x}_0) \\, u_i(\\vec{x}_0)\n=\n\\int_{\\partial {\\cal D}}\n\\left[ u_j(\\vec{x}) T_{ikj}(\\vec{x},\\vec{x}_0) n_k(\\vec{x}) - G_{ij}(\\vec{x},\\vec{x}_0) \nf_j(\\vec{x}) \\right] dS\n\\label{eqn:BIM}\n\\end{equation}\nwhere $E(\\vec{x_0})=1$ for points belonging to the interior \nof the domain. 
\n$G_{ij}(\\vec{x},\\vec{x}_0)$ is the \nfree-space Green function, i.e. the $i-$th component of the\nvelocity at $\\vec{x}$ induced by a Dirac $\\delta$ like force\nat $\\vec{x}_0$ acting in direction $j$, and \n$T_{ikj}(\\vec{x},\\vec{x_0})$\nthe associated stress tensor~\\citep{happel1983low}.\nEquation (\\ref{eqn:BIM}) expresses the $i$-th velocity component at \nthe collocation point $\\vec{x}_0$ in the fluid domain ${\\cal D}$.\nIt requires the knowledge of the velocity $u_j(\\vec{x})$ and the stresses $f_j(\\vec{x})$ \nat the boundary $\\partial {\\cal D}$.\nThe integral representation (\\ref{eqn:BIM}) can be turned \ninto a boundary integral equation in the limit $\\vec{x}_0 \\to \\partial {\\cal D}$.\nThe resulting expression can be written in the same form (\\ref{eqn:BIM}) now with\n$E(\\vec{x}_0) = 1\/2$ and the integral understood in the Cauchy principal \nvalue sense.\nWe emphasize that the boundary $\\partial {\\cal D}$ \nconsists of two disjointed parts: the swimmer surface and the planar interface.\nThe integral can be restricted to \nthe swimmer's surface by exploiting the symmetries \nassociated with the boundary condition at the planar interface. \nThis is tantamount to \nthe use of the appropriate Green's function, see appendix~\\ref{appA}, for the free-slip case \nand \\cite{blake1971note} for the no-slip case.\n\n\nDue to the boundary condition~(\\ref{eqn:swim_velocity}) \nthe fluid velocity at the swimmer surface is \nexpressed in term of seven unknowns, namely, \nthe velocity of the junction point $\\vec{U}$, the angular velocity\n$\\vec{\\Omega}_H$, and the tail rotation velocity $\\Omega_T$,\nwhich together with the tension $\\vec{f}$ at \nthe swimmer surface complete the set of unknowns. \nThe system of equations consists of the vector boundary \nintegral equation (\\ref{eqn:BIM}), the global force and \ntorque balance (\\ref{eqn:balances}), and the \ntorque balance for the swimmer tail (\\ref{eqn:Torque}).\nIt can be shown that the solution exists and is unique\n\\citep{ladyzhenskaya1969mathematical}.\n \nThe above system is discretized by means of $N$ curved 6-point elements \n(typically $N = 3518$ total elements, with $N_H = 512$ panels on the swimmer head), \nas sketched in figure~\\ref{fig:pitch_angle}, with piecewise constant shape \nfunctions~\\citep{pozrikidis1992boundary}. By selecting the center of each element as \na collocation point $\\vec{x_0}$, the boundary integral equation~(\\ref{eqn:BIM}) with \n$E(\\vec{x_0})=1\/2$ is recast in a system of $3N$ scalar equations for the stresses.\nAs will be shown in \\S~\\ref{sec:FreeSurf} the swimmer may \napproach the interface. In this case some of the panels belonging to the swimmer and to \nits image may get very close. This potentially spoils the accuracy of the integrals\nproviding the influence coefficients appearing in the discrete equations.\nCare is taken in the simulations discussed below to have the diameter of \nthe typical panel smaller than the distance between the body and its image\nthereby preventing the undesired loss of accuracy.\nConcerning the solution of the algebraic system, inverting a matrix coming from \nthe discretization of boundary integral \nequations may require some care, since the algebraic system may be prone to severe \nill-conditioning. An example is provided by the piecewise constant approximation of \na first kind Fredholm operator \\citep{hsiao1973solution}. \nIn the context of the Stokes equations, such operator arises when solving for \nthe stresses given the velocity. 
\nHowever, the problem of the swimmer addressed in the present paper \ndoes not involve a pure first kind Fredholm operator, since the equations are augmented by \nthe free-body constrains (zero net forces and torques) and by the equation enforcing the internal torque, with \nunknowns the stresses on the body plus the rigid body velocities and the relative tail-body rotation rate. \nThe spectral properties of the ensuing matrix differ substantially from those of a pure first kind Fredholm equation. \nIn fact, the analysis based on the singular value decomposition, \\citep{golub2012matrix}, excludes matrix ill-conditioning.\nAs a consequence, the complete system of $3N+7$ algebraic equations for the $3N+7$ unknowns collectively denoted $\\boldsymbol{\\chi}$, \n$\\vec{A} \\cdot \\boldsymbol{\\chi} = \\vec{b}$, can be solved by standard techniques, like the Gauss-Jacobi method here implemented in \nan in-house MPI parallel code.\nOnce the solution $\\boldsymbol{\\chi}$ is obtained, the integration of the kinematic equations~(\\ref{eqn:kinematic1}),~(\\ref{eqn:kinematic2}),\n~(\\ref{eqn:kinematic3}), performed via a third order low-storage Runge-Kutta method, allows to track the swimmer trajectory.\n\n\\section{Swimming close to a free surface: results and discussion}\\label{sec:FreeSurf}\n\n \\begin{figure}\n \\subfigure[]{\\includegraphics[width=0.495\\textwidth]{fig3a.pdf}} \n \\subfigure[]{\\includegraphics[width=0.495\\textwidth]{fig3b.pdf}}\n \\subfigure[]{\\includegraphics[width=0.495\\textwidth]{fig3c.pdf}}\n \\subfigure[]{\\includegraphics[width=0.495\\textwidth]{fig3d.pdf}}\n \\caption{\n Panel~(\\textit{a}): phase plane $\\hat{\\Theta}-\\hat{h}$ for a microswimmer\n close to a rigid no-slip wall. The curvature radius of the trajectory,\n equation (\\ref{eqn:traj_radius}), is provided by the color map.\n Due to the presence of the wall the blanked regions are forbidden to\n the swimmer. Three typical trajectories $I$, $II$ and $III$ are\n shown by the solid black lines. Trajectories\n $I$ and $II$ converges to the stable point. The corresponding \n trajectory in physical space is shown in panel~(\\textit{b}) \n where the curvature radius in the $XY-$plane is colour coded.\n Results for the free-slip interface are shown in\n panels (\\textit{c}) and (\\textit{d}). \n All trajectories eventually intersect the planar interface \n and no stable orbit exists. \n Note the opposite sign of the curvature radius in comparison with the\n no-slip case.\n }\n \\label{fig:NS_maptraj}\n \\end{figure}\n\n\\subsection{Phase plane analysis and trajectories for a no-slip wall}\n\\label{ssec:PhField}\n\nThe interaction between the swimmer\nand a homogeneous planar surface is completely determined by three parameters: \nthe distance $h$ between the junction and the surface, the flagellum rotation \nphase angle $\\phi$ and the pitch angle $\\Theta$ between the swimmer longitudinal \naxis $\\vec{e_1}$ and the surface plane, see figures~\\ref{fig:geometry} \nand~\\ref{fig:pitch_angle}. \n$\\phi$ is a fast variable, thus, following~\\cite{shum2010modelling}, \nthe kinematics of the system can be described in term of \n$\\phi -$averaged quantities, in the following denoted by a circumflex. 
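\nAlthough not written out explicitly here, a natural definition of these averages, consistent with the procedure of~\\cite{shum2010modelling}, is\n\\[\n\\hat{g}(\\Theta,h)=\\frac{1}{2\\pi}\\int_0^{2\\pi} g(\\Theta,h,\\phi)\\,d\\phi,\n\\]\nwhere $g$ stands for any of the instantaneous velocity components entering the reduced dynamics below; the fast phase $\\phi$ is thus averaged over one revolution of the flagellum at frozen $(\\Theta,h)$.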
\nThe kinematics in the reduced $(\\hat{\\Theta},\\hat{h})$\nspace is ruled by \n\\begin{eqnarray}\n\\dot{\\hat{\\Theta}} &=& \\hat{\\Omega}_2 \\,\\, ,\\\\\n\\dot{\\hat{h}} &=& \\hat{U}_Z \\,\\, , \n\\label{eqn:Theta_h_punto}\n\\end{eqnarray}\nwhere $\\hat{\\Omega}_2 (\\hat{\\Theta},\\hat{h})\n= \\hat{\\vec \\Omega}_H \\cdot {\\vec{e}_2}$ \nand $\\hat{U}_Z(\\hat{\\Theta},\\hat{h})= \\hat{\\vec{U}} \\cdot {\\vec{Z}}$. \nFigure~\\ref{fig:NS_maptraj}(\\textit{a}) shows the phase plane for a \nmicroswimmer close to a solid no-slip surface.\nThree reduced trajectories, $I$, $II$ and $III$ \nare traced. The trajectory $III$ hits the wall while\ntrajectories $I$ and $II$, although starting from different initial \nconfigurations, both converge to the same attractor. \nThe attractor is an \nasymptotically stable equilibrium point~\\citep{cencini2009chaos}, \n($\\hat{\\Omega}_2$ and $\\hat{U}_Z$ are zero), i.e. \nnearby trajectories converge to the equilibrium point.\n\nThe $\\phi -$averaged trajectory in 3D space corresponding to \n$I$ is obtained by integrating \nthe $\\phi -$averaged version of the \nkinematic equations~(\\ref{eqn:kinematic1}), (\\ref{eqn:kinematic3}) \nand is reported in figure~\\ref{fig:NS_maptraj}(\\textit{b}). Apparently, \nafter a transient, the swimmer stabilizes on a circular clockwise (CW)\norbit which corresponds to the \nstable point in the reduced space $(\\hat{\\Theta},\\hat{h})$.\nFollowing~\\cite{shum2010modelling}, the curvature radius of the stable orbit \nis given by \n\\begin{equation}\nR = \\frac{|\\vec{\\hat{U}(\\hat{\\Theta},\\hat{h})}|}\n{\\hat{\\Omega}_Z -\\hat{\\Omega}_X \\tan{\\hat{\\Theta}}} \\,\\, ,\n\\label{eqn:traj_radius}\n\\end{equation}\nwhich yields $R=-22.1$. As expected this result \nmatches within numerical accuracy \nits direct measure $R=-21.9$ obtained from the 3D \ntrajectory, see figure~\\ref{fig:NS_maptraj}(\\textit{b}). \nNote that equation (\\ref{eqn:traj_radius}) differs in sign \nfrom the original equation given in~\\cite{shum2010modelling}\ndue to a different choice of the body reference frame. It is also worth \nnoting that equation (\\ref{eqn:traj_radius}) provides the exact curvature radius \nonly for the stable orbit. Nevertheless,\nit also gives a reasonable approximation\nin the other conditions here explored, as discussed below. \nIn figure~\\ref{fig:NS_maptraj}(\\textit{a}) \nthe color map refers to $R$ as estimated from eq.~(\\ref{eqn:traj_radius}). \nThe color code in figure~\\ref{fig:NS_maptraj}(\\textit{b}) \ncorresponds to the local curvature radius of the trajectory \nprojection in the $xy$-plane. This information is also reported along the curve \n$I$ in the phase plane, panel~\\ref{fig:NS_maptraj}(\\textit{a}), through the \ncolor of the open circles superimposed to the trajectory $I$. From the data it \nis apparent that the color differences between the circles\nand the color map in the background can hardly be appreciated. \n\n\\subsection{Swimming in presence of a free-slip interface}\\label{ssec:free_slip}\n\nThe phase plane analysis has been performed for a microswimmer moving \nclose to a free-slip interface, as illustrated in \nfigures~\\ref{fig:NS_maptraj}(\\textit{c}) and~\\ref{fig:NS_maptraj}(\\textit{d}).\nThe first significant result is the sign of the curvature \nradius, now positive, i.e. 
the swimmer exhibits a counter-clockwise (CCW) motion, \nin contrast to what is observed for the no-slip wall; compare panels (\\textit{b}) \nand (\\textit{d}) in figure~\\ref{fig:NS_maptraj}.\nThe curvature radius is smaller than in the no-slip case\nat corresponding $(\\hat{\\Theta}, \\hat{h})$. \nThe curvature radius measured in our simulations compares favorably\nwith the experimental observations of~\\cite{di2011swimming}, where \na CCW motion with a curvature radius of $\\simeq 10 \\mu m$ is described. \nIndeed, assuming $\\bar{a}=1 \\mu m$, appropriate for an E. coli, \nthe typical radius of curvature found from the present numerics, $R \\simeq 7$,\ngives $R' = R \\bar{a} \\simeq 7\\mu m$.\n\nThe second significant result \nis that no stable trajectory exists when the \nswimmer moves close to a free surface. In fact, a swimmer similar to an E. coli moving\nalmost parallel to the free surface (low $\\hat{\\Theta}$) is always attracted to it. \nThis result confirms the\nindication of the 2D model proposed by~\\cite{crowdy2011two}\nfor undeformable interfaces.\nThis conclusion also partially agrees with the results illustrated by~\\cite{lopez2014dynamics}, \nwhere the authors, using a multipole expansion technique, showed that the average surface-normal \nvelocity $\\hat{U}_Z$ of the swimmer is always negative for $\\hat{\\Theta} = 0$. Our work adds that the surface \nis no longer able to attract the swimmer if it swims above a certain height $h$, even at \nzero tilt angle. \n\n\\begin{figure}\n \\includegraphics[width=0.47\\textwidth]{fig4a.pdf}\n \\includegraphics[width=0.47\\textwidth]{fig4b.pdf}\n \\caption{A wider view of the phase plane is displayed for\n the microswimmer near the no-slip (left) and the free-slip (right)\n surface. The range explored in figure~\\ref{fig:NS_maptraj} \n corresponds to the small dashed yellow box.\n Three regions of the phase plane are identified by colours:\n a trajectory belonging to the\n light blue region ends by colliding with the wall, one in the green\n region escapes away, and trajectories in the orange region are \n attracted to the stable orbit. \n Note that the attraction basin is absent in the free-slip case (right).\n Unstable equilibrium points are marked by the red circles.\n }\n\\label{fig:large_angles}\n\\end{figure}\n\nIn figure~\\ref{fig:large_angles} we extend our analysis to a wider range \nof initial conditions, for both no-slip and free-slip surfaces. \nIn both cases, at high positive $\\hat{\\Theta}$ the microswimmer escapes from the \nsurface while, at high negative $\\hat{\\Theta}$, it eventually hits the boundary. \nOn the no-slip wall three cases are possible depending on the initial \nconditions, namely the swimmer achieves a stable orbit, escapes \nfrom the wall, or collides with the wall. \nIn contrast, for a free-slip surface only \ntwo possibilities exist, namely, \nescaping from the surface or being attracted towards the interface.\nThe maps also highlight the presence of a further equilibrium point \n$(\\hat{\\Theta}_u,\\hat{h}_u)$\n where the time derivatives \nof both ${\\hat{h}}$ and $\\hat{\\Theta}$ vanish. However, as shown \nby the behavior of the streamlines in the neighborhood of\n$(\\hat{\\Theta}_u,\\hat{h}_u)$, this \npoint is unstable for both the no-slip and free-slip cases, i.e. 
\nnearby trajectories escape from the equilibrium point.\n\nIn order to extend the analysis, the dynamics of swimmers with different geometrical \ncharacteristics was also examined, focusing in particular on the shape of the head and the tail (axial) length. \nFigure~\\ref{fig:many_geometries} shows the phase plane $\\hat{\\Theta}-\\hat{h}$\nwhen the tail length $L \\in (3,5,10,15)$ and the head aspect ratio $AR=a_1\/a_2 \\in (1,3,4,5)$. \nWhen changing the aspect ratio, the swimmer does not substantially modify its behavior. In particular, \nthe orientation as described by the pitch angle $\\hat \\Theta$ does not qualitatively change, and the regions of phase space where\nthe swimmer is attracted to the free surface or where it escapes away from it remain very similar. The general trend is an increase of the curvature of the trajectory for given $\\left({\\hat \\Theta}, \\, {\\hat h}\\right)$ as $AR$ increases.\nOn the other hand, when changing the relative tail-to-head length, the swimmer hits the wall with different pitch angles \n$\\hat \\Theta$. Decreasing the tail length, the region of the phase plane where the swimmer escapes becomes larger, while the region where it is attracted to\nthe free surface shrinks. The curvature of the trajectory, for given $\\left({\\hat \\Theta}, \\, {\\hat h}\\right)$, is found to increase with the tail length.\n\\begin{figure}\n \\includegraphics[width=\\textwidth]{fig5.pdf}\n \\caption{Phase plane $\\hat{\\Theta}-\\hat{h}$ for a microswimmer\n close to a free-slip interface for different geometries, namely \n head aspect ratio $AR=a_1\/a_2 \\in (1,3,4,5)$ and tail length $L \\in (3,5,10,15)$.\n The color map corresponds to the curvature radius along trajectories (e.g. solid black lines).\n }\n\\label{fig:many_geometries}\n\\end{figure}\n\n\\subsection{Drift along the trajectory}\\label{ssec:drifting}\n\nTypically the longitudinal axis of the microswimmer is misaligned with the local velocity evaluated at the junction point.\nThis behavior was experimentally observed for \\textit{E. coli}\nmoving close to a liquid-air interface by~\\cite{di2011swimming}. \nIn order to compare with experiments, we define the drift angle $\\alpha$ as \nthe angle between the ${X,Y}$ projections of the body frame unit vector \n$\\vec{e}_1$ and of the swimmer velocity $\\vec{U}$. \nThe sign of $\\alpha$ is taken to be positive when the swimmer points \noutward with respect to the trajectory, see figure~\\ref{fig:drift_traj}a.\n\nIn figures \\ref{fig:drift_traj}b and c, the $\\phi-$averaged \ndrift angle is reported for both the no-slip and free-slip cases.\nIn the free-slip case the head points outside the trajectory\nand $\\hat{\\alpha}$ increases as the swimmer approaches the interface.\nThe spanned range of values, $\\hat{\\alpha} \\in (10^\\circ,30^\\circ)$, is \nin good agreement with the experimental observations~\\citep{di2011swimming}.\nIn contrast, in the no-slip case, $\\hat{\\alpha}$ is slightly \nnegative, $\\hat{\\alpha} \\simeq -2^\\circ$, meaning that the swimmer is\nalmost aligned with the trajectory of the reference point $\\vec{x}_J$. \n\n\\begin{figure} \n\\centering\n\\subfigure[]{\\includegraphics[width=0.9\\textwidth]{fig6a.pdf}}\n\\subfigure[]{\\includegraphics[width=0.49\\textwidth]{fig6b.pdf}}\n\\subfigure[]{\\includegraphics[width=0.49\\textwidth]{fig6c.pdf}}\n\\caption{Panel (\\textit{a}): sketch illustrating the drift angle\n$\\alpha$. 
Panels (\\textit{b}) and (\\textit{c}) report \nthe $\\phi-$averaged drift angle $\\hat{\\alpha}$ in the phase-plane (colour code),\nfor the no-slip and free-slip interface, respectively.\nEach inset show a representative trajectory with superimposed the\nlongitudinal unit vector $\\vec{e}_1$ (the color coding corresponds to the\nlocal value of $\\hat{\\alpha}$).\n}\n\\label{fig:drift_traj}\n\\end{figure}\n\nHere, to complete the discussion, the effect of modifying the tail geometry is briefly addressed.\nIndeed, among the large number of different parameters\ndefining the flagellated geometry, probably the most \nuncertain ones concern the tail, specifically amplitude of the helix, $A$, and\nnumber of turns, $N_\\lambda$.\nFigure~\\ref{fig:imm_grande} reports the radius of curvature $R$ and the drift angle\n$\\hat\\alpha$ for three different tails. The comparison shows that the results discussed so far are generic. \n\n\\begin{figure} \n\\centering\n\\includegraphics[width=\\textwidth]{fig7.pdf}\n\\caption{Different tail geometries. This figure shows the radius of curvature $R$ and the \ndrift angle $\\hat\\alpha$ when modifying the tail amplitude $A$ or the \nnumber of turns $N_\\lambda$. Three cases are reported: \n$A=0.8$ and $N_\\lambda=1$ (left), \n$A=0.4$ and $N_\\lambda=3$ (center), \n$A=0.8$ and $N_\\lambda=5$ (right). \nThe hydrodynamic behavior is qualitative the same observed for the reference configuration\nin figures~\\ref{fig:NS_maptraj} and~\\ref{fig:drift_traj} with \nfew changes in the variables values.}\n\\label{fig:imm_grande}\n\\end{figure}\n\n\n\\section{Conclusions}\\label{sec:conclusions}\n\nThe motion of a flagellated microswimmer close to a boundary, either a solid wall or \na free surface, is relevant to several applications spanning from micro-robots to biology and medicine, as concerning in particular biofilm formation. Its dynamics can in principle be affected by several physical phenomena occurring at \nthe microscale and, for biological applications, by the behavior of the microorganism.\nThe chemical and physical nature of the interface may play a role in the interaction between the\nswimmer and the surface, e.g. surface charges and chemicals adsorbed at the interface may have a significant influence. \nConcerning the specific case of E. coli, taken as representative of most flagellated, recent experimental studies reported a characteristic motion\nof the swimmer in the two extreme cases of a solid wall and a free surface. In the first case, the experimental observation\nconsistently show that the microswimmer typically moves in circulatory orbits oriented in clockwise direction (CW) \\cite{lauga2006swimming}.\nIn contrast, there is evidence that the orientation of the trajectory is reversed (counterclockwise, CCW) when swimming occurs near a free surface \\cite{di2011swimming}.\nHowever, the effect of the free surface is less neatly defined and, sometimes, CW motion is reported, probably due to\nthe presence of contaminants adsorbed at the interface \\citep{lemelle2010counterclockwise}.\nTheoretical models proposed to explain this behavior typically exploit several form of approximation, e.g. 
multipole expansion, that may become\npartially inaccurate when the microswimmer gets very close to the interface.\n\nThe present paper provides a complete description of the motion, considering a reasonably realistic geometry of the flagellated in \npresence of a free-surface modeled as a rigid, free-slip plane.\nThe resulting model removes any concurrent effect, retaining a full hydrodynamics description.\nThe results for the free-slip boundary were compared with those already analyzed in~\\cite{shum2010modelling} for the no-slip case.\nThe data clearly indicate that the motion close to a liquid-air interface is CCW, in agreement with the experimental data~\\citep{di2011swimming}\nand with the theoretical results obtained by using multipole expansions~\\citep{lopez2014dynamics} and resistive force theory~\\citep{di2011swimming}.\n\nOther available experimental information, namely the orientation of the bacteria with respect to its trajectory~\\citep{di2011swimming},\nis satisfactorily reproduced by the present simulations, confirming that the head of the swimmer points outward.\nIn contrast, the bacteria is roughly aligned with its trajectory close to a no-slip wall.\nIn principle, this observable can be used together with the rotation direction to interpret experimental results on the interaction of a \nmicroswimmer with an interface.\nA characteristic aspect of the motion near a solid surface is the occurrence of stable orbits \\citep{giacche2010hydrodynamic,shum2010modelling}.\nTo the contrary, no stable orbit has been presently found on a free-slip interface. \n\nIn conclusion, the boundary conditions are confirmed to deeply influence the hydrodynamical behavior of the swimmer.\nThis consideration paves the way to suitably textured surfaces which, properly engineered to stably support a \nsuper-hydrophobic state, can be exploited to passively control the microswimmer motion. \nIndeed the ability of superhydrophobic surfaces to alter the flow has been recently used for passive particle separation\n\\citep{asmolov2015principles}. Under this respect fully resolved hydrodynamic simulations able to model complex physical surfaces, \nlike the BEM adopted here, could provide the required fundamental knowledge to extend passive control strategies to active suspensions.\n\\\\\n\nThe authors acknowledge the CINECA Iscra C Award (IscrC-BSM-LAI) for the availability of HPC resources\nand the ERC Grant No. [339446] {\\bf BIC}: {\\sl Bubbles from Inception to Collapse}.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nConsider a set $X$, a group $G$ and a positive integer $m$. An action $G\\times X\\to X$ is said to be $m$-transitive if it is transitive on ordered $m$-tuples of pairwise distinct points in $X$, and is infinitely transitive if it is $m$-transitive for all positive integers $m$.\n\nIt is easy to see that the symmetric group $S_n$ acts $n$-transitively on a set of order $n$, while the action of the alternating group $A_n$ is $(n-2)$-transitive. A generalization of a classical result of Jordan~\\cite{Jo} based on the classification of finite simple groups claims that there\nare no other $m$-transitive finite permutation groups with $m>5$.\n\nClearly, the group $S(X)$ of all permutations of an infinite set $X$ acts infinitely transitively on $X$. 
The first explicit example of an infinitely transitive and faithful action of the free group $F_n$ with the number of generators $n\\ge 2$ was constructed in~\\cite{McD}; see \\cite{FMS,HO} and references therein for recent results in this direction.\n\nInfinite transitivity on real algebraic varieties was studied in \\cite{HM,HM1,BM,KM}. For multiple transitive actions of real Lie groups on real manifolds, see~\\cite{Bo,Kram}.\n\nA classification of multiple transitive actions of algebraic groups on algebraic varieties over an algebraically closed field is obtained in~\\cite{Kn}. It is shown there that the only 3-transitive action is the action of $\\PGL(2)$ on the projective line $\\PP^1$. Moreover, for reductive groups the only 2-transitive action is the action of $\\PGL(m+1)$ on $\\PP^m$.\n\nIn this paper we consider highly transitive actions in the category of algebraic varieties over an algebraically closed field $\\KK$ of characteristic zero. By analogy with the full permutation group $S(X)$ it is natural to ask about transitivity properties for the full automorphism group $\\Aut(X)$ of an algebraic variety $X$. The phenomenon of infinite transitivity for $\\Aut(X)$ in affine and quasiaffine settings was studied in many works, see~\\cite{Re,KZ,AKZ,AFKKZ,AFKKZ1,FKZ,APS}. The key role here plays the special automorphism group $\\SAut(X)$.\n\nMore precisely, let $\\GG_a$ (resp. $\\GG_m$) be the additive (resp. multiplicative) group of the ground field $\\KK$. We let $\\SAut(X)$ denote the subgroup of $\\Aut(X)$ generated by all algebraic one-parameter unipotent subgroups of $\\Aut(X)$, that is, subgroups in $\\Aut(X)$ coming from all regular actions $\\GG_a\\times X\\to X$.\n\nLet $X$ be an irreducible affine variety of dimension at least 2 and assume that the group $\\SAut(X)$ acts transitively on the smooth locus $X_{\\text{reg}}$. Then \\cite[Theorem~0.1]{AFKKZ} claims that the action is infinitely transitive. This result can be extended to quasiaffine varieties; see \\cite[Theorem~2]{APS} and \\cite[Theorem~1.11]{FKZ}.\n\nWe address the question whether transitivity of $\\SAut(X)$ is the only possibility for the automorphism group $\\Aut(X)$ of an irreducible quasiaffine variety $X$ to act infinitely transitively on $X$. We show that 2-transitivity of the group $\\Aut(X)$ implies transitivity\nof the group $\\SAut(X)$ provided $X$ admits a nontrivial $\\GG_a$- or $\\GG_m$-action; (Theorem~\\ref{tmain} and Corollary~\\ref{ctrans}). We conjecture that the assumption on existence of a nontrivial $\\GG_a$- or $\\GG_m$-action on $X$ is not essential and 2-transitivity of $\\Aut(X)$ always implies transitivity of $\\SAut(X)$ and thus infinite transitivity of $\\Aut(X)$ (Conjecture~\\ref{conj}).\n\nThe quasiaffine case differs from the affine one at least by two properties: the algebra of regular functions $\\KK[X]$ need not be finitely generated and not every locally nilpotent derivation on $\\KK[X]$ gives rise to a $\\GG_a$-action on $X$. These circumstances require new ideas when transferring the proofs obtained in the affine case. Our interest in the quasiaffine case, especially when the algebra $\\KK[X]$ is not finitely generated, is motivated by several reasons.\nHomogeneous quasiaffine varieties appear naturally as homogeneous spaces $X=G\/H$ of an affine algebraic group $G$. By Grosshans' Theorem, the question whether the algebra $\\KK[G\/H]$ is finitely generated is crucial for the Hilbert's fourteenth problem, see~\\cite{Gr} and \\cite[Section~3.7]{PV}. 
The group $\\Aut(X)$ acts infinitely transitively on $X$ provided the group $G$ is semisimple \\cite[Proposition~5.4]{AFKKZ}. On the other hand, quasiaffine varieties, including the ones with not finitely generated algebra of regular functions, appear as universal torsors $\\widehat{X}\\to X$ over smooth rational varieties $X$ in the framework of the Cox ring theory, see e.g. \\cite[Propositions~1.6.1.6, 4.3.4.5]{ADHL}. By \\cite[Theorem~3]{APS}, for a wide class of varieties $\\widehat{X}$ arising in this construction, the special automorphism group $\\SAut(\\widehat{X})$ acts infinitely transitively on $\\widehat{X}$.\n\nLet us give a short overview of the content of the paper. In Section~\\ref{s1} we recall basic facts on the correspondence between $\\GG_a$-actions on an affine variety $X$ and locally nilpotent derivations of the algebra $\\KK[X]$. Proposition~\\ref{lndga} extends this correspondence to the case when $X$ is quasiaffine.\n\nIn Section~\\ref{s2} we generalize the result of \\cite{AG} on the automorphism group of a rigid affine variety to the quasiaffine case. Recall that an irreducible algebraic variety $X$ is called rigid if $X$ admits no nontrivial $\\GG_a$-action. Theorem~\\ref{trigid} states that the automorphism group of a rigid quasiaffine variety contains a unique maximal torus; the proof is an adaptation of the method of \\cite[Section~3]{FZ1} to our setting.\n\nAlso we describe all affine algebraic groups which can be realized as a full automorphism group of a quasiaffine variety (Proposition~\\ref{pdref}); the list of such groups turns out to be surprisingly short.\n\nSection~\\ref{s3} contains our main results, Theorem~\\ref{tmain} and Corollary~\\ref{ctrans}. In Corollary~\\ref{cunirat} we observe that if an irreducible quasiaffine variety $X$ admits a nontrivial $\\GG_a$- or $\\GG_m$-action, the group $\\Aut(X)$ acts on $X$ with an open orbit $\\OO$, and the action of $\\Aut(X)$ is 2-transitive on $\\OO$, then $X$ is unirational. This result follows also from~\\cite[Corollary~3]{Po}.\n\nIn the last section we discuss some questions related to Conjecture~\\ref{conj}. We pose a problem on transitivity properties for the automorphism group on a quasiaffine variety with few locally finite automorphisms (Problem~\\ref{p1}) and ask about classification of homogeneous algebraic varieties (Problem~\\ref{p2}).\n\nThe author would like to thank Sergey Gaifullin, Alexander Perepechko, Andriy Regeta and Mikhail Zaidenberg for helpful comments and remarks. Also he is grateful to the anonymous referee for valuable suggestions.\n\n\n\\section{Locally nilpotent derivations and $\\GG_a$-actions} \\label{s1}\n\nIn this section we discuss basic facts on locally nilpotent derivations and $\\GG_a$-actions on quasiaffine varieties; see~\\cite[Section~1.1]{FKZ}, \\cite[Section~2]{APS}, and \\cite{DL} for related results.\n\nLet $A$ be a $\\KK$-domain and $\\partial\\colon A\\to A$ a derivation, i.e., a linear map satisfying the Liebniz rule $\\partial(ab)=\\partial(a)b+a\\partial(b)$ for all\n$a,b\\in A$. The derivation $\\partial$ is called locally nilpotent if for any $a\\in A$ there exists a positive integer $m$ such that $\\partial^m(a)=0$. Let us denote the set of all locally nilpotent derivations of $A$ by $\\LND(A)$. 
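\nFor later use we recall the standard construction behind this notion (classical material, only summarized here): local nilpotency guarantees that the exponential series\n\\[\n\\exp(s\\partial)(a)=\\sum_{k\\ge 0}\\frac{s^k}{k!}\\,\\partial^k(a),\\qquad a\\in A,\\ s\\in\\KK,\n\\]\nhas only finitely many nonzero terms. For instance, for $A=\\KK[x_1,x_2]$ and $\\partial=x_1\\frac{\\partial}{\\partial x_2}$ one obtains $\\exp(s\\partial)(x_1)=x_1$ and $\\exp(s\\partial)(x_2)=x_2+sx_1$, i.e. the $\\GG_a$-action $(x_1,x_2)\\mapsto(x_1,x_2+sx_1)$ on $\\AA^2$ considered below.\n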
Clearly, if $\\partial\\in\\LND(A)$ and $f\\in\\Ker(\\partial)$, then $f\\partial\\in\\LND(A)$.\n\nEvery locally nilpotent derivation defines\na one-parameter subgroup $\\{\\exp(s\\partial),\\, s\\in\\KK\\}$ of automorphisms of the algebra $A$.\nThis subgroup gives rise to an algebraic action of the group $\\GG_a$ on the algebra $A$.\nThe latter means that every element $a\\in A$ is contained in a finite dimensional $\\GG_a$-invariant subspace $U$ of $A$, and the $\\GG_a$-module $U$ is rational. Conversely, the differential of an algebraic $\\GG_a$-action on $A$ is a locally nilpotent derivation; see \\cite[Section~1.5]{F} for details.\n\nAssume that the domain $A$ is finitely generated and $X=\\Spec(A)$ is the corresponding irreducible affine variety. The results mentioned above establish a bijection between locally nilpotent derivations on $A$ and algebraic actions $\\GG_a\\times X\\to X$. Moreover, the algebra of invariants $A^{\\GG_a}$ coincides with the kernel of the corresponding locally nilpotent derivation.\n\nIf $X$ is an irreducible quasiaffine variety, then again every action $\\GG_a\\times X\\to X$ defines a locally nilpotent derivation of $A:=\\KK[X]$. Since regular functions separate points on $X$,\nsuch a derivation determines a $\\GG_a$-action uniquely. At the same time, not every locally nilpotent derivation of $A$ corresponds to a $\\GG_a$-action on $X$. For example, the derivation\n$\\frac{\\partial}{\\partial x_2}$ of the polynomial algebra $\\KK[x_1,x_2]$ does not correspond to\na $\\GG_a$-action on $X:=\\AA^2\\setminus\\{(0,0)\\}$, while the derivation $x_1\\frac{\\partial}{\\partial x_2}$ does.\n\n\\smallskip\n\nThe following result seems to be known, but for lack of a precise reference we give it with a complete proof.\n\n\\begin{proposition} \\label{lndga}\nLet $X$ be an irreducible quasiaffine variety and $A=\\KK[X]$. Then\n\\begin{enumerate}\n\\item[(i)]\nfor every $\\partial\\in\\LND(A)$ there exists a nonzero $f\\in\\Ker(\\partial)$ such that the locally nilpotent derivation $f\\partial$ corresponds to a $\\GG_a$-action on $X$;\n\\item[(ii)]\nif $\\partial\\in\\LND(A)$ corresponds to a $\\GG_a$-action on $X$, then for every $f\\in\\Ker(\\partial)$ the derivation $f\\partial$ corresponds to a $\\GG_a$-action on $X$.\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\nWe begin with~(i). Fix a derivation $\\partial\\in\\LND(A)$ and the corresponding $\\GG_a$-action on~$A$. Consider an open embedding $X\\hookrightarrow Z$ into an irreducible affine variety $Z$. Fix a finite dimensional $\\GG_a$-invariant subspace $U$ in $A$ containing a set of generators of\n$\\KK[Z]$. Let $B$ be the subalgebra in $A$ generated by $U$ and $Y$ be the affine variety\n$\\Spec(B)$. Since $B$ is $\\GG_a$-invariant, we have the induced $\\GG_a$-action on $Y$. The inclusion $B\\subseteq A$ defines an open embedding $X\\hookrightarrow Y$.\n\n\\begin{claim}\nEvery divisor $D\\subseteq Y$ contained in $Y\\setminus X$ is $\\GG_a$-invariant.\n\\end{claim}\n\n\\begin{proof}\nAssume that the variety $Y$ is normal and take a function $f\\in\\KK(Y)$ which has a pole along\nthe divisor $D$. Multiplying $f$ by a suitable function from $B$ we may suppose that $f$ has\nno pole outside $D$. Then $f$ is contained in $A$. If the divisor $D$ is not $\\GG_a$-invariant,\nthere is an element $g\\in\\GG_a$ such that $g\\cdot D$ intersects $X$. 
It shows that the function $g\\cdot f$ has a pole on $X$ and thus is not in $A$, a contradiction.\n\nIf $Y$ is not normal, we lift the $\\GG_a$-action to the normalization of $Y$ and apply the same arguments to integral closures of $A$ and $B$.\n\\end{proof}\n\n\\begin{claim}\nThere is an open $\\GG_a$-invariant subset $W\\subseteq Y$ which is contained in $X$.\n\\end{claim}\n\n\\begin{proof}\nLet $F$ be the union of irreducible components of $Y\\setminus X$ of codimension at least $2$.\nThen the closure $\\overline{\\GG_a\\cdot F}$ is a proper closed $\\GG_a$-invariant subset whose complement intersected with $X$ is the desired subset $W$.\n\\end{proof}\n\nLet $Y_0:=Y\\setminus W$. This is a closed $\\GG_a$-invariant subvariety in $Y$ and its ideal\n$I(Y_0)$ in $B$ is a $\\GG_a$-invariant subspace. Applying the Lie-Kolchin Theorem, we find a nonzero $\\GG_a$-invariant function $f\\in I(Y_0)$. Then $f\\in\\Ker(\\partial)$ and the $\\GG_a$-action on $Y$ corresponding to the derivation $f\\partial$ fixes all points outside $W$. In particular,\nthis action induces a $\\GG_a$-action on $X$. This proves~(i).\n\nNow we come to~(ii). Consider the action $\\GG_a\\times X\\to X$ corresponding to $\\partial$.\nBy~\\cite[Theorem~1.6]{PV}, there is an open equivariant embedding $X\\hookrightarrow Y$ into an affine variety $Y$. For any $f\\in\\Ker(\\partial)$, the orbits of the $\\GG_a$-action on $Y$ corresponding to $f\\partial$ coincide with the orbits of the original actions on $Y\\setminus \\{f=0\\}$, while all points of the set $\\{f=0\\}$ become fixed. In particular, this action leaves the set $X$ invariant. This completes the proof of Proposition~\\ref{lndga}.\n\\end{proof}\n\n\\begin{corollary} \\label{corcc}\nLet $X$ be an irreducible quasiaffine variety and $A=\\KK[X]$. The variety $X$ admits a nontrivial $\\GG_a$-action if and only if there is a nonzero locally nilpotent derivation on $A$.\n\\end{corollary}\n\n\\section{Torus actions on rigid quasiaffine varieties} \\label{s2}\n\nIn this section we generalize the results of \\cite[Section~3]{FZ1} and \\cite[Theorem~1]{AG} to the case of a quasiaffine variety. Let us recall that an irreducible algebraic variety $X$ is called \\emph{rigid}, if it admits no nontrivial $\\GG_a$-action.\n\n\\begin{theorem} \\label{trigid}\nLet $X$ be a rigid quasiaffine variety. There is a subtorus $\\TT\\subseteq\\Aut(X)$ such that\nfor every torus action $T\\times X\\to X$ the image of $T$ in $\\Aut(X)$ is contained in $\\TT$. In other words, $\\TT$ is a unique maximal torus in $\\Aut(X)$.\n\\end{theorem}\n\nLet us begin with some preliminary results.\n\n\\begin{lemma} \\label{lemloc}\nLet $X$ be an irreducible quasiaffine variety and $T\\times X\\to X$ be an action of a torus. Then there is a $T$-semi-invariant $f\\in\\KK[X]$ such that the localization $\\KK[X]_f$ is finitely generated.\n\\end{lemma}\n\n\\begin{proof}\nBy \\cite[Theorem~1.6]{PV}, there exists an open equivariant embedding $X\\hookrightarrow Z$ into an irreducible affine $T$-variety $Z$. Let $I$ be the ideal of the subvariety $Z\\setminus X$ in $\\KK[Z]$. Since $I$ is $T$-invariant, there is a non-constant $T$-semi-invariant $f\\in I$.\nThe principal open subset $Z_f$ is contained in $X$. Since the algebra $\\KK[Z_f]$ is the localization $\\KK[Z]_f$ and $\\KK[X]$ is contained in $\\KK[Z_f]$, we conclude that the algebra\n$\\KK[X]_f=\\KK[Z]_f$ is finitely generated.\n\\end{proof}\n\nLet $A=\\oplus_{i\\in\\ZZ} A_i$ be a graded $\\KK$-algebra and $\\partial\\colon A\\to A$ a derivation. 
We define a linear map $\\partial_k\\colon A\\to A$ by setting $\\partial_k(a)$ to be\nthe homogeneous component $\\partial(a)_{\\deg(a)+k}$ of the element $\\partial(a)$ for every homogeneous element $a\\in A$. It is easy to check that $\\partial_k$ is a derivation for all $k\\in\\ZZ$. We call it the $k$th homogeneous component of the derivation $\\partial$.\n\n\\begin{proof}[Proof of Theorem~\\ref{trigid}]\nAssume that there are two torus actions $T_i\\times X\\to X$, $i=1,2$, such that the images of $T_i$ in $\\Aut(X)$ are not contained in some torus $\\TT$. The latter means that the actions do not commute. We may assume that $T_1$ and $T_2$ are one-dimensional. Let $A:=\\KK[X]$ and\n$$\nA=\\bigoplus_{u\\in\\ZZ} A_u \\quad \\text{and} \\quad A=\\bigoplus_{u\\in\\ZZ} A_u'\n$$\nbe gradings corresponding to the actions of $T_1$ and $T_2$, respectively. Consider semisimple derivations $\\partial$ and $\\partial'$ on $A$ defined by $\\partial(a)=ua$ for every $a\\in A_u$ and\n$\\partial'(b)=ub$ for every $b\\in A_u'$.\n\nLet $\\partial'_k$ be the $k$th homogeneous component of $\\partial'$ with respect to the first grading. We claim that there are only finitely many nonzero homogeneous components and thus\nthe sum\n$$\n\\partial'=\\sum_{k\\in\\ZZ} \\partial'_k\n$$\nhas only finite number of nonzero terms.\n\nConsider a localization $\\KK[X]_f$ from Lemma~\\ref{lemloc}, where $f$ is homogeneous with respect to the first grading. The algebra $\\KK[X]_f$ is generated by some elements $f_1,\\ldots,f_k\\in \\KK[X]$, which are homogeneous with respect to the first grading, and the element $\\frac{1}{f}$.\n\nSince $\\KK[X]$ is contained in $\\KK[X]_f$, every element $h\\in\\KK[X]$ is a linear combination of elements of the form\n$$\n\\frac{f_1^{a_1}\\ldots f_k^{a_k}}{f^a}\n$$\nand the image $\\partial'(h)$ is a linear combination of the elements\n$$\n\\sum_s\\frac{a_s\\partial'(f_s)f_1^{a_1}\\ldots f_s^{a_s-1}\\ldots f_k^{a_k}}{f^{a}}-\\frac{a\\partial'(f)f_1^{a_1}\\ldots f_k^{a_k}}{f^{a+1}}.\n$$\nIt shows that the shift of degree with respect to the first grading from $h$ to $\\partial'(h)$ does not exceed the maximal shift of degree for $f_1,\\ldots,f_k,f$. Hence the shift is bounded and we obtain the claim.\n\nLet $\\partial_m'$ be a nonzero homogeneous component of $\\partial'$ with maximal absolute value\nof the weight $m$. Since the derivations $\\partial$ and $\\partial'$ do not commute, we have $m\\ne 0$. Then $(\\partial_m')^r(a)$ is the highest (or the lowest) homogeneous component of the element $(\\partial')^r(a)$ for every homogeneous $a\\in A$. Since $a$ is contained in a finite dimensional $\\partial'$-invariant subspace in $A$, the elements $(\\partial')^r(a)$ cannot have nonzero projections to infinitely many components~$A_u$. Thus $(\\partial_m')^r(a)=0$ for $r\\gg 0$. We conclude that $\\partial_m'$ is a nonzero locally nilpotent derivation of the algebra $A$. By Corollary~\\ref{corcc}, we obtain a contradiction with the condition that $X$ is rigid.\n\\end{proof}\n\n\\begin{corollary}\nIn the setting of Theorem~\\ref{trigid}, the maximal torus $\\TT$ is a normal subgroup of $\\Aut(X)$.\n\\end{corollary}\n\nLet us finish this section with a description of affine algebraic groups which can be realized as automorphism groups of quasiaffine varieties. When this paper was already written, I found the same result in \\cite[Theorem~1.3]{Kr}, cf. also \\cite[Theorem~4.10~(a)]{LZ}.\n\n\\begin{proposition} \\label{pdref}\nLet $X$ be an irreducible quasiaffine variety. 
Assume that the automorphism group $\\Aut(X)$ admits a structure of an affine algebraic group such that the action $\\Aut(X)\\times X\\to X$ is a morphism of algebraic varieties. Then either $\\Aut(X)$ is finite, or isomorphic to a finite extension of a torus, or isomorphic to the linear group\n$$\nG=\\left\\{\n\\left(\n\\begin{array}{cc}\n1 & 0 \\\\\na & t\n\\end{array}\n\\right), \\ \\ a\\in\\KK, \\ t\\in\\KK^{\\times}\n\\right\\}.\n$$\n\\end{proposition}\n\n\\begin{proof} We assume first that $X$ is a rational curve. If $X=\\AA^1$ then $\\Aut(X)$ is isomorphic to the group $G$. If $X$ is $\\AA^1$ with one point removed, then $\\Aut(X)$ is an extension of a 1-torus. If we remove more than one point from $\\AA^1$, the group $\\Aut(X)$ becomes finite. For a singular rational curve $X$, the automorphism group $\\Aut(X)$ lifts to the normalization and preserves the preimage of the singular locus. Thus $\\Aut(X)$ is contained in an extension of a 1-torus.\n\nIt follows from the description of the automorphism group of an elliptic curve and from Hurwitz's Theorem that the automorphism group of an affine curve $X$ of positive genus is finite.\n\nNow let us assume that $\\dim X\\ge 2$. If $X$ is rigid then the affine algebraic group $\\Aut(X)$\ncontains no one-parameter unipotent subgroup. This means that the unipotent radical and the semisimple part of $\\Aut(X)$ are trivial. Hence $\\Aut(X)$ is either finite or a finite extension of a torus.\n\nFinally, let $\\GG_a\\times X\\to X$ be a non-trivial action and $\\partial\\in\\LND(\\KK[X])$ the corresponding locally nilpotent derivation. By~\\cite[Principle~11]{F}, the transcendence degree\nof the algebra $\\Ker(\\partial)$ equals $\\dim(X)-1\\ge 1$. Let $U$ be a finite-dimensional subspace in\n$\\Ker(\\partial)$. Proposition~\\ref{lndga},~(ii) implies that the automorphisms $\\exp(f\\partial)$, $f\\in U$, form a commutative unipotent subgroup in $\\Aut(X)$ of dimension $\\dim(U)$. Since $\\dim(U)$ may be arbitrarily large, the group $\\Aut(X)$ does not admit a structure of an affine algebraic group.\n\\end{proof}\n\n\\begin{remark}\nMany examples of affine algebraic varieties whose automorphism group is a finite extension of a torus are provided by trinomial hypersurfaces, see~\\cite[Theorem~3]{AG}.\n\\end{remark}\n\n\\begin{remark}\nThe class of affine algebraic groups which can be realized as the automorphism groups of complete\nvarieties is much wider. For example, the automorphism group of a complete toric variety is always an affine algebraic group of type A. A description of such groups is given in \\cite{De,Cox}. Some other affine algebraic groups appear as the automorphism groups of Mori Dream Spaces; see e.g.\n\\cite[Theorem~7.2]{AHHL}. It is shown in~\\cite[Theorem~1]{Br} that any connected algebraic group over a perfect field is the neutral component of the automorphism group scheme of some normal projective variety.\n\\end{remark}\n\n\\section{Main results} \\label{s3}\n\nWe come to a characterization of transitivity properties for the automorphism group $\\Aut(X)$ in terms of the special automorphism group $\\SAut(X)$.\n\n\\begin{theorem} \\label{tmain}\nLet $X$ be an irreducible quasiaffine variety of dimension at least $2$. Assume that $X$ admits a nontrivial $\\GG_a$- or $\\GG_m$-action and the group $\\Aut(X)$ acts on $X$ with an open orbit $\\OO$. 
Then the following conditions are equivalent.\n\\begin{enumerate}\n\\item\nThe group $\\Aut(X)$ acts 2-transitively on $\\OO$.\n\\item\nThe group $\\Aut(X)$ acts infinitely transitively on $\\OO$.\n\\item\nThe group $\\SAut(X)$ acts transitively on $\\OO$.\n\\item\nThe group $\\SAut(X)$ acts infinitely transitively on $\\OO$.\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{proof} Let us prove implications $(1)\\Rightarrow (3)\\Rightarrow (4) \\Rightarrow (2) \\Rightarrow (1)$. Implications ${(4)\\Rightarrow (2)\\Rightarrow (1)}$ are obvious. Implication $(3)\\Rightarrow (4)$ is proved in \\cite[Theorem~2.2]{AFKKZ} for $X$ affine and in \\cite[Theorem~2]{APS}, \\cite[Theorem~1.11]{FKZ} for $X$ quasiaffine.\n\nIt remains to prove $(1)\\Rightarrow (3)$\\footnote{This is the only implication where we use the condition on $\\GG_a$- or $\\GG_m$-action.}. Assume first that there is a nontrivial $\\GG_a$-action on~$X$. Let us take two distinct points $x_1$ and $x_2$ in $\\OO$ on one $\\GG_a$-orbit. By assumption, for every distinct points $y_1,y_2\\in\\OO$ there exists an automorphism $\\varphi\\in\\Aut(X)$ with\n$\\varphi(x_i)=y_i$, $i=1,2$. Then the points $y_1$ and $y_2$ lie in the same orbit for the $\\GG_a$-action obtained from the initial one by conjugation with $\\varphi$. It means that the group $\\SAut(X)$ acts transitively on $\\OO$.\n\nNow assume that $X$ is rigid and admits a nontrivial $\\GG_m$-action. If the maximal torus $\\TT$ from Theorem~\\ref{trigid} acts transitively on $\\OO$, then $\\OO$ is isomorphic to the torus $\\TT$ and $\\Aut(X)$ acts on $\\OO$ transitively, but not 2-transitively. Indeed, let us fix an isomorphism between $\\OO$ and $(\\KK^{\\times})^n$. The group $\\Aut(\\OO)$ is isomorphic to a semidirect product of $\\TT$ and the group $\\GL_n(\\ZZ)$. It shows that the stabilizer in $\\Aut(\\OO)$ of the unit in $(\\KK^{\\times})^n$ preserves the set of points with rational coordinates. Consequently, the group $\\Aut(\\OO)$, and thus the group $\\Aut(X)$, cannot act 2-transitively on $\\OO$.\n\nNow assume that the action of $\\TT$ is not transitive on $\\OO$. Let us take points\n$x_1,x_2,x_3\\in\\OO$ such that $x_1\\ne x_2$ lie in the same $\\TT$-orbit and $x_3$ belongs to other $\\TT$-orbit. By Corollary~\\ref{corcc}, every automorphism of $X$ permutes $\\TT$-orbits on $X$ and thus there is no automorphism preserving $x_1$ and sending $x_2$ to $x_3$, a contradiction with 2-transitivity.\n\nThis completes the proof of Theorem~\\ref{tmain}.\n\\end{proof}\n\n\\begin{remark}\nImplication $(1)\\Rightarrow (3)$ for an affine variety $X$ admitting a nontrivial $\\GG_a$-action was observed earlier in~\\cite{BGT}.\n\\end{remark}\n\n\\begin{corollary} \\label{ctrans}\nLet $X$ be an irreducible quasiaffine variety of dimension at least $2$. Assume that $X$ admits a nontrivial $\\GG_a$- or $\\GG_m$-action. Then the following conditions are equivalent.\n\\begin{enumerate}\n\\item\nThe group $\\Aut(X)$ acts 2-transitively on $X$.\n\\item\nThe group $\\Aut(X)$ acts infinitely transitively on $X$.\n\\item\nThe group $\\SAut(X)$ acts transitively on $X$.\n\\item\nThe group $\\SAut(X)$ acts infinitely transitively on $X$.\n\\end{enumerate}\n\\end{corollary}\n\nWe recall that the \\emph{Makar-Limanov invariant} $\\text{ML}(A)$ of an algebra $A$ is the intersection of kernels of all locally nilpotent derivations on $A$. 
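\nFor instance, $\\text{ML}(\\KK[x_1,\\ldots,x_n])=\\KK$: the inclusion $\\KK\\subseteq\\text{ML}(\\KK[x_1,\\ldots,x_n])$ holds since every derivation annihilates the constants, while the partial derivatives $\\frac{\\partial}{\\partial x_1},\\ldots,\\frac{\\partial}{\\partial x_n}$ are locally nilpotent and the intersection of their kernels is already $\\KK$.\n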
Using Proposition~\\ref{lndga}, one can easily show that the Makar-Limanov invariant $\\text{ML}(\\KK[X])$ of the algebra of regular functions on an irreducible quasiaffine variety $X$ coincides with the algebra of invariants $\\KK[X]^{\\SAut(X)}$ of the special automorphism group. We denote $\\text{ML}(\\KK[X])$ just by $\\text{ML}(X)$. Note that a~quasiaffine variety $X$ is rigid if and only if $\\text{ML}(X)=\\KK[X]$.\n\nIn \\cite{Lie}, a field version of the Makar-Limanov invariant is introduced. Namely,\nthe \\emph{field Makar-Limanov invariant} $\\text{FML}(X)$ of an irreducible quasiaffine variety $X$ is the subfield of $\\KK(X)$ consisting of all rational $\\SAut(X)$-invariants. The condition $\\text{FML}(X)=\\KK$ implies $\\text{ML}(X)=\\KK$, but the converse is not true in general. By \\cite[Corollary~1.14]{AFKKZ}, we have $\\text{FML}(X)=\\KK$ if and only if the group $\\SAut(X)$ acts on $X$ with an open orbit. In this case the variety $X$ is unirational \\cite[Proposition~5.1]{AFKKZ}. Together with Theorem~\\ref{tmain} this yields the following result.\n\n\\begin{corollary} \\label{cunirat}\nLet $X$ be an irreducible quasiaffine variety. Assume that $X$ admits a nontrivial $\\GG_a$- or $\\GG_m$-action and the group $\\Aut(X)$ acts on $X$ with an open orbit $\\OO$. If the group $\\Aut(X)$ is 2-transitive on $\\OO$, then $X$ is unirational.\n\\end{corollary}\n\n\\begin{remark}\nCorollary~\\ref{cunirat} is a particular case of \\cite[Theorem~5]{Po}. The latter theorem claims that if $X$ is an irreducible variety, the group $\\Aut(X)$ acts generically 2-transitive on $X$, and $\\Aut(X)$ contains a non-trivial connected algebraic subgroup, then $X$ is unirational. Moreover, if $X$ is irreducible, complete, and the group $\\Aut(X)$ acts generically 2-transitive on $X$, then $X$ is unirational \\cite[Corollary~3]{Po}.\n\\end{remark}\n\nLet us finish this section with the following conjecture.\n\n\\begin{conjecture} \\label{conj}\nConditions (1)-(4) of Theorem~\\ref{tmain} are equivalent for any irreducible quasiaffine variety $X$ of dimension at least $2$.\n\\end{conjecture}\n\n\\begin{remark}\nJelonek~\\cite{Je} has proved that every quasiaffine variety $X$ with an infinite automorphism\ngroup is uniruled, i.e., for a generic point in $X$ there exists a rational curve in $X$ through this point.\n\\end{remark}\n\n\\section{Concluding remarks and questions} \\label{s4}\n\nIn this section we discuss some results and questions related to Conjecture~\\ref{conj}.\nLet $\\phi$ be an automorphism of a quasiaffine variety $X$ and $\\phi^*$ be the induced\nautomorphism of the algebra $\\KK[X]$. We say that $\\phi$ is \\emph{locally finite} if every element\nof $\\KK[X]$ is contained in a finite dimensional $\\phi^*$-invariant subspace.\n\nThe following fact is well known to experts, but for the convenience of the reader we give it with a short proof.\n\n\\begin{proposition}\nLet $X$ be an irreducible quasiaffine variety and $\\phi$ an automorphism of~$X$. The following conditions are equivalent.\n\\begin{enumerate}\n\\item[(1)]\nThere exists a regular action $G\\times X\\to X$ of an affine algebraic group $G$ on $X$ such that $\\phi$ is contained in the image of $G$ in the group $\\Aut(X)$.\n\\item[(2)]\nThe automorphism $\\phi$ is locally finite.\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\nFor implication $(1)\\Rightarrow (2)$, see e.g. \\cite[Lemma~1.4]{PV}. 
Conversely, assume that\n$\\phi$ is locally finite and let $U$ be a finite-dimensional $\\phi^*$-invariant subspace in $\\KK[X]$ which generates a subalgebra $A$ in $\\KK[X]$ such that the morphism $X\\to Z:=\\Spec(A)$ is an open embedding. Let $G$ be the subgroup of all automorphisms of $X$ that preserve the subspace~$U$. Since $U$ generates the field $\\KK(X)$, the group $G$ is a subgroup of the general linear group~$\\GL(U)$. Moreover, every element of $G$ induces an automorphism of $Z$. The subgroup $G'$ of all elements of $\\GL(U)$ which induce an automorphism of $Z$ is closed in $\\GL(U)$. The subgroup $G$ of $G'$ consists of automorphisms of $Z$ which preserve the (closed) subvariety $Z\\setminus X$. This proves that $G$ is an affine algebraic group.\n\\end{proof}\n\n\\begin{remark}\nFor further characterizations of automorphisms belonging to algebraic subgroups of $\\Aut(X)$, see~\\cite{Ra}.\n\\end{remark}\n\nClearly, every automorphism of finite order is locally finite. The condition that a quasiaffine variety $X$ admits no nontrivial actions of the groups $\\GG_a$ and $\\GG_m$ means that every locally finite automorphism of $X$ has finite order.\n\n\\begin{problem} \\label{p1}\nLet $X$ be an irreducible quasiaffine variety such that every locally finite automorphism of $X$ has finite order. Can the group $\\Aut(X)$ act transitively (2-transitively, infinitely transitively) on $X$?\n\\end{problem}\n\nLet us give examples of automorphisms which are not locally finite. Let $X$ be a 2-torus with the algebra of regular functions\n$\\KK[X]=\\KK[T_1,T_1^{-1},T_2,T_2^{-1}]$. Then the map\n$$\n\\phi\\colon (t_1,t_2) \\mapsto (t_1t_2,t_2)\n$$\nis an automorphism of $X$ and the function $T_1$ is not contained in a finite dimensional $\\phi^*$-invariant subspace of $\\KK[X]$.\n\nAn automorphism of the affine plane $\\AA^2$ which is not locally finite may be given as\n$$\n(x,y)\\mapsto (x+y^2, x+y+y^2).\n$$\n\nMore examples of automorphisms which are not locally finite can be found in~\\cite{BD}. The authors describe a family of rational affine surfaces $S$ such that the normal subgroup $\\Aut(S)_{\\text{alg}}$ of $\\Aut(S)$ generated by all algebraic subgroups of $\\Aut(S)$ is not generated by any countable family of such subgroups, and the quotient $\\Aut(S)\/\\Aut(S)_{\\text{alg}}$ contains a free group over an uncountable set of generators. A description of automorphisms in~\\cite{BD} is given in a purely geometric terms. It seems to be an important problem to find more methods for constructing automorphisms of quasiaffine varieties which are not locally finite.\n\nWorking with Conjecture~\\ref{conj}, one may wish to replace an arbitrary quasiaffine variety by a quasiaffine variety admitting a nontrivial $\\GG_a$- or $\\GG_m$-action. For example, let $X$ be an irreducible quasiaffine variety such that the group $\\Aut(X)$ is 2-transitive on $X$. Is it true that the group $\\Aut(X\\times\\AA^1)$ is 2-transitive on $X\\times\\AA^1$? This question is related to algebraic families of automorphisms in the sense of~\\cite{Ra}.\n\n\\smallskip\n\nLet us finish this section with a general problem on transitivity for algebraic varieties. We say that an algebraic variety $X$ is \\emph{homogeneous} if the group $\\Aut(X)$ acts transitively on~$X$. A wide class of homogeneous varieties form homogeneous spaces of algebraic groups. 
At~the same time, not every homogeneous variety is homogeneous with respect to an algebraic group; an example of a homogeneous quasiaffine toric surface which is not a homogeneous space of an algebraic group is given in~\\cite[Example~2.2]{AKZ}. More generally, it follows from ~\\cite[Theorem~2.1]{AKZ} that every smooth quasiaffine toric variety is homogeneous. We plan to describe all homogeneous toric varieties in a forthcoming publication.\n\n\\begin{problem} \\label{p2}\nDescribe all homogeneous algebraic varieties.\n\\end{problem}\n\nConjecture~\\ref{conj} can be considered as a first step towards the solution of this problem.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{introduction}\nDatabase systems (DBs) are largely embracing ML. With data volumes reaching unprecedented levels, ML can provide highly-accurate methods to perform central data management tasks more efficiently. Applications abound: AQP engines are leveraging ML to answer queries much faster and more accurately than traditional DBs \\cite{ma2019dbest,hilprecht2019deepdb,thirumuruganathan2020approximate,ma2021learned}.\nCardinality\/selectivity estimation, has improved considerably leveraging ML \\cite{yang2019deep,yang2020neurocard,hasan2020deep,zhu2020flat,wang2020we}. Likewise for query optimization \n\\cite{marcus2019neo,kipf2018learned,marcus2021bao},\nindexes \\cite{kraska2018case,ding2020alex,nathan2020learning,ding2020tsunami}, cost estimation \\cite{zhi2021efficient, siddiqui2020cost}, workload forecasting \\cite{zhu2019novel}, DB tuning \\cite{van2017automatic,li2019qtune,zhang2019end}, synthetic data generation \\citep{xu2019modeling,choi2017generating,park2018data}, etc. \n\n\\subsection{Challenges}\\label{challenges}\nAs research in learned DB systems\nmatures, two key pitfalls are emerging. First, if the \"context\" (such as the data, the DB system, and\/or the workload) changes, previously trained models are no longer accurate. Second, training accurate ML models is costly. Hence, retraining from scratch when the context changes should be avoided whenever possible.\nEmerging ML paradigms, such as active learning, transfer learning, meta-learning, and zero\/few-shot learning are a good fit for such context changes and have been the focus of recent related works \\cite{ma2020active, hilprecht2021one, wu2021unified}, where the primary focus is to glean what is learned from existing ML models (trained for different learning tasks and\/or DBs and\/or workloads), \nand adapt them for new tasks and\/or DBs, and\/or workloads, while avoiding the need to retrain models from scratch.\n\n{\\bf OOD Data insertions.} In analytical DBs data updates primarily take the form of new data insertions. New data may be OOD (representing new knowledge -- distributional shifts), rendering previously-built ML models obsolete\/inaccurate.\nOr, new data may not be OOD. In the former case, the model must be updated and it must be decided how the new data could be efficiently reflected in the model to continue ensuring accuracy.\nIn the latter case, it is desirable to avoid updating the model, as that would waste time\/resources.\nTherefore, it is also crucial to check (efficiently) whether the new data render the previously built model inaccurate. \nHowever, related research has not yet tackled this problem setting, whereby\n\\textit{models for the same learning tasks (e.g., AQP, DG, CE, etc.) 
trained on old data, continue to provide high accuracy for the new data state} (on old and new data, as queries now may access both old data and new data, old data, or simply the new data).\nRelated work for learned DB systems have a limited (or sometimes completely lack the) capability of handling such data insertions (as is independently verified in \\cite{wang2020we} and will be shown in this paper as well).\n\n{\\bf Sources of Difficulty and Baselines.} \nIn the presence of OOD, a simple solution is adopted by some of the learned DB components like Naru \\cite{yang2019deep}, NeuroCard \\cite{yang2020neurocard}, DBest++ \\cite{ma2021learned}, and even the aforementioned transfer\/few-shot learning methods \\cite{wu2021unified, hilprecht2021one}. That is to \"fine-tune\" the original model $M$ on the new data. Alas, this is problematic. For instance, while a DBest++ model on the \"Forest\" dataset has a 95th percentile q-error of 2, updating it with an OOD sample using fine-tuning increases the 95th q-error to ~63. A similar accuracy drop occurs for other key models as well -- \\cite{wang2020we} showcases this for learned CE works.\nThis drastic drop of accuracy is due to the fundamental problem of \\textit{catastrophic forgetting}{} \\cite{mccloskey1989catastrophic}, where retraining a previously learned model on new tasks, i.e. new data, causes the model to lose the knowledge it had acquired about old data. To avoid \\textit{catastrophic forgetting}{}, Naru and DBest++ suggest using a smaller learning rate while fine-tuning with the new data. This, however, causes another fundamental problem, namely \\textit{intransigence}, \\cite{chaudhry2018riemannian} whereby the model resists fitting to new data, rendering queries on new data inaccurate.\n\nAnother simple solution to avoid these problems would be to aggregate the old data and new data and retrain the model from scratch. However, as mentioned, this is undesirable in our environment. As a concrete example, training Naru\/NeuroCard on the \"Forest\" dataset (with only 600k rows) on a 40-core CPU takes ca. 1.5 hours. Similarly high retraining overheads are typically observed for neural network models, for various tasks.\nAnd, retraining time progressively increases as the DB size increases. \n\nTherefore, more sophisticated approaches are needed, which can avoid \\textit{intransigence} and \\textit{catastrophic forgetting}{},\nupdate models only when needed and do so while ensuring much smaller training overheads than retraining from scratch and at the same time ensure high accuracy for queries on old and new data. While for some tasks, like CE, some researchers question whether achieving very high accuracy through learned models will actually help the end-task (query optimization) \\cite{marcus2021bao}, for tasks like AQP (which is itself the end-task) and for DG (with classification as the end-task) high accuracy is clearly needed, as shown here. Even for CE, with OOD data, accuracy can become horribly poor, as shown here, which is likely to affect query optimization.\n\n\\subsection{Contributions} \\label{contribution}\nTo the best of our knowledge, this work proposes the first updatability framework (DDUp) for learned DBs (in the face of new data insertions possibly carrying OOD data)\nthat can ensure high accuracy for queries on new and\/or old data. \nDDUp is also efficient and \nit can enjoy wide applicability, capable of being utilized for different NNs and\/or different learning tasks (such as AQP, DG, CE, etc.). 
DDUp consists of a novel OOD detection and a novel model-update module. More specifically, the contributions of DDUp are:\n\n\\begin{itemize}[leftmargin=10pt]\n \\item A general and principled two-sample test for OOD detection. Generality stems from it being based on the training loss function of the NNs. Compared to prior art, it introduces no extra costs and overheads, and could be used with different NNs, with different loss functions, in different applications. To further minimize detection time, it is divided into offline and online phases.\n \\item A novel and general formulation of transfer-learning based on sequential self-distillation for model updating. This formulation allows a higher degree of freedom in balancing tasks w.r.t new and old data, can adapt to different models and tasks, and maximizes performance via self-distillation.\n \\item Importantly, DDUp can be used by any pre-trained NN without introducing any assumptions on models or requiring additional components that might require to retrain models or incur more costs. Here, we instantiate it for three different tasks (namely, the CE task, using the Naru\/NeuroCard deep autoregressive network (DARN) models \\cite{yang2019deep, yang2020neurocard}, the AQP task, using the DBEst++ mixture density network (MDN) model \\cite{ma2021learned}, and for the DG task, using the Tabular Variational AutoEncoder (TVAE) model \\cite{xu2019modeling}) each of which employs a different NN type. These are representative learning tasks and networks with evident importance in DBs and beyond. These instantiations are also novel, showing how to distil-and-update MDNs, DARNs, and TVAEs.\n \\item Finally, DDUp is evaluated using six different datasets and the three instantiated learned DB components, for AQP, CE, and DG\n\\end{itemize}\n\n\n\\subsection{Limitations} \\label{limits}\nDDUp focuses only on data insertions, which are essential and dominant in analytical DBs, and not on updates in place and deletes, which are prevalent \nin transactional DBs.\nNonetheless, the latter touch upon an open problem in the ML literature, namely $\"unlearning\"$, \nwhere it typically concerns privacy (e.g., removing sensitive data from images in classification tasks) \n(e.g., \\citep{sekhari2021remember, golatkar2020eternal}).\nStudying unlearning for DB problem settings is a formidable task of its own and of high interest for future research.\n\n\nAlso, DDUp is designed for NN-based learned DB components. This is so as neural networks are a very rich family of models which have collectively received very large attention for learned DBs. Extending DDUp principles beyond NN models is also left for future research.\n\n\n\\section{The Problem and Solution Overview} \\label{problemdef}\n\\subsection{Problem Formulation} \\label{problemformulation}\nConsider a database relation \\(R\\) with attributes \\(\\{A_1, A_2, ..., A_m\\}\\). This can be a raw table or the result of a join query. Also consider a sequence of \\(N\\) insertion updates denoted by \\(I=\\{I_1,I_2,...I_N\\}\\). Each \\(I_t\\) is an insert operation which appends a data batch \\(D_t=\\{(A_1, A_2, ..., A_m)_t^{(i)}; i=1,..., n_t\\}\\) to \\(R\\), where \\(n_t\\) is the number of rows. Let \\(S_t\\) be a sufficient sample of \\(D_t\\) and \\(S^{\\leq}_{t-1}\\) be a sufficient sample from \\(\\cup_{j=0}^{t-1} D_j\\). We naturally assume that \\(|R|\\) is finite. 
\nAnd, due to the training restrictions of existing models, we also make the natural assumption:\n\\[\\forall A_i \\in R: supp(D_{t}(A_i)) \\subseteq supp(D_{t-1}(A_i)) \\]\nwhere \\(supp(D(A_i))\\) is the support of attribute \\(A_i\\) in dataset \\(D\\). This assumption satisfies the condition based on which the domain of each attribute is not violated in the upcoming update batches. \n\n\\textbf{Statistical test for data changes}. We define out-of-distribution detection as a two-sample hypothesis test between a sample of historical data and a sample of the new data. Let \\(S^{\\leq}_{t-1}\\) have a joint distribution of \\(P(A_1,\\dots, A1_m) \\equiv \\mathbb{P}\\) and \\(S_{t}\\) have a joint distribution of \\(Q(A_1,\\dots, A_m) \\equiv \\mathbb{Q}\\). We define the null hypothesis \\(H_0: \\mathbb{P}=\\mathbb{Q}\\) which asserts that \\(S_{t}\\) and \\(S^{\\leq}_{t-1}\\) are coming from a same distribution; and the alternative hypothesis \\(H_A: \\mathbb{P}\\neq \\mathbb{Q}\\) which declares that the two samples are generated by two different distributions. \n\n\\textbf{Incrementally updating the model}. Consider for \\(I_0\\) a model \\(M_{0}\\) is trained by minimizing a loss function \\(\\mathscr{L}(D_{0};\\Theta_0\\)). This model may be stale for \\(I_t; t>0\\). Ideally, the goal of incremental learning is: at time \\(t\\) train a model \\(M_{t}\\) that minimizes a function over \\(\\sum_{i=1}^{t} \\mathscr{L}(D_{i};\\Theta_i)\\). This new model should not forget \\(\\{I_{i}; i=0,1,...,t-1\\}\\) and also learn \\(I_t\\).\n\n\n\\subsection{A High Level View of DDUp}\\label{highlevel} \nThe overall architecture of DDUp is depicted in \\autoref{fig:Arch}.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.9\\linewidth, height=4cm]{figures\/Detailed-DDUp-Architecture.png}\n \\caption{The overall structure of DDUp. DDUp uses the latest model and previous data to build a sampling distribution for the two-sample test, and updates the learned component based on the shift in the data distribution.}\n \\label{fig:Arch}\n \\vspace{-0.3cm}\n\\end{figure*}\n\nDDUp process batches of tuples at a time. Such batched handling of insertions is typical in analytical DBs. Furthermore, this takes into account that \nthe effect of single tuples is usually negligible for the overall large space modelled by NNs. And, for most tasks like CE, AQP and DG, the effect of single tuples in the final result is very small, considering the large sizes of tables. And batching amortizes detect-and-update costs over many insertion operations.\n\nUpon a new batch insertion, DDUp takes the latest model \\(M_{t-1}\\), and performs a bootstrapping sampling from the previous data to build the sampling distribution for the average loss values. DDUp uses this distribution to calculate a significance level corresponding to a confidence interval (e.g a 95th confidence interval). The general idea is that if the new data is similar to the previous data (IND in \\autoref{fig:Arch}), the loss values of \\(M_{t-1}\\) for this new data should lie within the threshold. This means that the new data has the same distribution and therefore the model could be left intact (updating maybe just the hyper-parameters of the system, including possible frequency tables and other table statistics. Alternatively, a simple fine-tuning can be performed to adapt the model to the new data.\n\nIf the loss values exceeded the threshold, this implies that the data distribution has significantly changed. 
DDUp will deploy a teacher-student transfer learning method based on knowledge distillation to learn this new distribution without forgetting the knowledge of the old data. In this framework, while the student directly learns the distribution of the new data, the teacher act as a regularizer to make the student also learn about the old distribution.\n\\vspace{-0.3cm}\n\\section{Out-of-Distribution Detection} \\label{driftdetect}\n\\subsection{Background} \\label{oodback}\nIn ML, OOD is typically addressed from a classification perspective. Formally, assume \\(D\\) is a dataset of \\((x,y)\\) pairs which are drawn from a joint distribution, \\(p(x,y)\\), where \\(x \\in \\mathcal{X} := \\{x_1, x_2, \\dots, x_n\\}\\) is the input (independent variable) consisting of \\(n\\) features, and \\(y \\in \\mathcal{Y} := \\{1,2, \\dots, k\\}\\) is the label corresponding to one of the \\(k\\) in-distribution classes. A sample \\((x,y)\\), that probably is generated by a different distribution than \\(p(x,y)\\), is called OOD, if \\(y \\notin \\mathcal{Y}\\), i.e it does not belong to any previously seen classes. \n \nA similar problem has previously been addressed in statistics as {\\it concept drift} detection, where different types of shifts are distinguished by expanding \\(p(x,y)\\) using the Bayes rule:\n\\begin{equation}\\label{bayesrule}\n p(x,y)=p(x)p(y|x)\n\\end{equation}\nBased on Eq. \\ref{bayesrule}, changes in \\(P(y|x)\\) are usually referred to as \\textit{Real drift}, while changes in \\(P(x)\\) are called \\textit{virtual drift} \\cite{gama2014survey}. In \\(X\\rightarrow y\\) problems the latter mostly is known as \\textit{covariate shift}.\nDeciding which drift to detect is dependent on the underlying models. For example, deep autoregressive networks (e.g., used by \\cite{yang2019deep}) learn the full joint distribution of a table. Hence, they are sensitive to \\textit{covariate shift} upon insertions. \nOn the other hand, mixture density networks (e.g., used by \\cite{ma2021learned}), model the conditional probability between a set of independent attributes and a target attribute. Hence, for these models, one would be interested in detecting \\textit{real shift}.\n\\vspace{-0.2cm}\n\\subsection{Loss based OOD Detection} \\label{llfordrift}\nThere are several challenges that make it difficult to simply adopt one of the OOD detection algorithms in the ML or statistical learning literature.\nFirst, DB tables are multivariate in nature and learned models are usually trained on multiple attributes. As a result, uni-variate two-sample tests like Kolmogorov\u2013Smirnov (KS) test are not suitable for this purpose. Second, the test should introduce low overheads to the system as insertions may be frequent. Therefore, multivariate tests like kernel methods that require to learn densities and perform expensive inference computations are not desirable. Third, we aim to support different learning tasks for which different models might be used. Thus, most of OOD detection methods in ML that are based on probability scores (confidence) of classification tasks are not useful here. Moreover, the test should be able to adapt efficiently to the case where insertions occur within old data, that is, without having to recalculate baseline thresholds etc.\n\nAn efficient OOD detection method is now proposed that resolves all above issues by leveraging the underlying ML models themselves. 
Central to most learned data system components is the ability to derive from the underlying data tables a model for the joint or conditional data distribution like \\(p(x)\\) or \\(p(y|x)\\). A model usually achieves this by learning a set of parameters \\(\\Theta\\) that represent a function \\(f\\) by iteratively optimizing over a loss function as follows:\n\n\\begin{equation} \\label{generalopt}\n f_\\Theta = \\argmin_{f \\in \\mathcal{F}} \\frac{1}{n} \\sum_{i=1}^n \\mathscr{L}(f(x);\\Theta) + \\Omega(f)\n\\end{equation}\n\nwhere, \\(\\Omega\\) is a regularizer term, \\(n\\) is the number of samples, and \\(f\\) could be the outputs of the model in the last layer (called \\textit{logits}), or the probabilities assigned by a \"softmax\" function.\n\n\nWe will later discuss different loss functions in more details when instantiating different models. In general, loss functions are usually highly non-convex with many local mimina. However, a good learning strategy will find the global minimum. Because of the large data sizes, training is usually done by iterating over mini-batches and a gradient descent algorithm updates the parameters based on the average of loss of the samples in each mini-batch per iteration. For the rest of the paper, when we mention 'loss value' we mean average of losses of the samples in a batch. Once the model is trained, i.e. the loss values have converged, the model can serve as a transformer to map (high-dimensional) input data to the one-dimensional loss functions space around the global minimum. Accordingly, the previous data (seen by the model) are closer to the global minimum compared to the out of distribution data.\n\nThe above discussion explains the possibility to compare in- and out-of distribution data just by relying on the underlying models without any further assumptions\/components, in a low-dimensional space. With these in hand, we can perform a statistical testing to compare the loss values of old data and new data. In the following we will explain a two-sample test for this purpose. \n\n\\subsection{A Two-Sample Test Procedure}\nThe steps for a two-sample hypothesis test are: \n1. Define the null, \\(H_0\\), and alternative hypothesis, \\(H_A\\). \n2. Define a test statistic \\(d\\) that tests whether an observed value is extreme under \\(H_0\\). \n3. Determine a significance level \\(\\delta\\in[0,1]\\) that defines the \\(type\\mhyphen1\\ error\\) (false positives) of the test. \n4. Calculate \\(p\\mhyphen value\\) which equals the probability that a statistical measure, e.g. distance between two distributions, will be greater than or equal to the probability of observed results.\n5. If \\(p\\mhyphen value <= \\delta\\) then the \\(p\\mhyphen value\\) is statistically significant and shows strong evidence to reject \\(H_0\\) in favor of \\(H_A\\). Otherwise, the test failed to reject \\(H_0\\). \n\nThe main challenge herein is how to calculate the test significance of the test statistic, i.e the \\(p\\mhyphen value\\). As explained in Section \\ref{problemdef}, we aim to detect if a new data that is inserted to the system at time \\(t\\) has a different distribution than the previous data. Consider \\(S_{t-1}^{\\leq}\\) be a sample of the previous data and \\(S_{t}\\) be a sample of the newly inserted data. Let \\(d(S_{t-1}^{\\leq},\\ S_{t})\\) be a distance function that measures the distance between the two samples. 
If \\(P_d\\) is the distribution that explains the test statistic \\(d\\) under the null hypothesis, then the test significance can easily be computed as \\(p\\mhyphen value=P(P_d < d | H_0)\\). Note that since we assume that our test statistic is a distance function, we perform a one-sided left-tail test.\n\n\\textbf{Choosing the test statistic}. The test statistic should reflect the similarity of new data to old data. According to our discussion in Section \\ref{llfordrift}, we use\nthe loss function values after convergence of the models. We use a linear difference between the loss values of the two samples as our test statistic, as follows:\n\\begin{equation}\\label{teststatistic}\nd(S_{t-1}^{\\leq},S_{t}) = \\frac{1}{|S_{t-1}|}\\sum_{s\\in S_{t-1}}\\mathscr{L}(s;\\Theta) - \\frac{1}{|S_t|}\\sum_{s\\in S_{t}}\\mathscr{L}(s;\\Theta)\n\\end{equation}\n\nwhere \\(\\mathscr{L}\\) is a loss function achieved by training model \\(M\\) with parameters \\(\\Theta\\). From Eq. \\ref{teststatistic} it follows that if the loss function is Negative Log Likelihood, and the likelihoods are exact, the test statistic will be the logarithm of the well-known \\textit{likelihood-ratio} statistic. Eq. \\ref{teststatistic} also gives intuition about the effect size: the larger \\(d\\) is, the larger the difference between the two data distributions would be.\nAlthough many of the learned DB models are trained by maximizing likelihood, some other models (e.g., regressions) are trained using a \\textit{Mean-Squared-Error} objective. It has been shown \\cite{watkins1992maximum} that MSE optimization maximizes likelihood at the same time. Therefore, the form of the distance function in Eq. \\ref{teststatistic} still holds. \nThe important consequence of Eq. \\ref{teststatistic} is that, under i.i.d. assumptions for both samples, it can be shown that the central limit theorem holds for \\(P_d\\); hence, it has a normal limiting distribution with a mean of 0 and unknown standard deviation. The normality of \\(P_d\\) allows us to make inferences based on confidence intervals. To estimate the standard deviation (std), we perform a bootstrapping approach.\n\n\\subsection{Offline and Online Steps}\nThe main bottleneck of such OOD detection is bootstrapping. Fortunately, this part can be performed offline, before data insertion. In the offline phase, \nwe draw \\(n\\) bootstrap samples of size \\(|S^{\\leq}_{t-1}|\\) from \\(S^{\\leq}_{t-1}\\). (In practice, when we have access to the original data, we draw the $n$ bootstrap samples of size $|S_{t-1}^{\\leq}|$ from $D_{t-1}^{\\leq}$.) We use the model \\(M_{t-1}\\) to compute the likelihoods (or other losses) of each sample and create a sampling distribution from them. Then, we calculate the standard deviation of the sampling distribution, \\(std\\), and use it to find the significance level. In the online phase, we draw a sample of the new data, \\(S_{t}\\), and use the latest model, \\(M_{t-1}\\), to calculate the likelihood of \\(S_{t}\\). Finally, we compare the test statistic with the threshold. If \\(d > 2\\times std\\) (equivalently \\(p\\mhyphen value \\leq \\delta\\) where \\(\\delta=0.05\\)) we declare a significant shift in the data and reject the null hypothesis in favor of the alternative hypothesis. Otherwise the test fails to reject the null hypothesis and signals \"in-distribution\". 
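\n\nTo make the two phases concrete, the following minimal NumPy-style sketch illustrates one possible implementation of the procedure above. It is an illustration only, not DDUp's actual code: the helper \\texttt{avg\\_loss} (returning the average training loss of the current model on a sample), the array representation of the samples, and the constants are assumptions introduced here for concreteness, and the statistic is oriented so that larger values indicate a worse fit of the new data under the current model.\n\n\\begin{verbatim}
import numpy as np

def detect_ood(model, old_sample, new_sample, n_boot=1000, k=2.0):
    # Offline phase: bootstrap the sampling distribution of the
    # average loss of the current model on the previous data.
    boot = []
    for _ in range(n_boot):
        idx = np.random.randint(0, len(old_sample), len(old_sample))
        boot.append(model.avg_loss(old_sample[idx]))
    std = np.std(boot)

    # Online phase: compare the shift in average loss between the
    # new sample and the old sample against the 2 x std threshold.
    d = model.avg_loss(new_sample) - model.avg_loss(old_sample)
    return d > k * std   # True: reject H0 and signal OOD
\\end{verbatim}\n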
\n\n\\subsection{The Test Errors}\\label{testerrors}\nThere are two errors associated with a hypothesis test. A \\textit{type-1 error} is rejecting the null hypothesis when it should not be rejected. A \\textit{type-2 error} is accepting the null hypothesis when it should be rejected. The first one introduces false positives to the system and the second causes false negatives. \nFalse positives (FPs) are only a (rather small) performance concern, spending time to update the model while accuracy is preserved. False negatives (FNs), however, can cause a loss of accuracy. \nTherefore, the system can afford to be stricter with respect to the significance level, in order to reduce the risk of false negatives and accuracy loss.\n\nDDUp uses the loss of the trained NNs for OOD detection. Sometimes NNs can be over-confident \\cite{nguyen2015deep,ren2019likelihood,nalisnick2018deep}, which may introduce bias. \nHowever, we have not witnessed this for our tasks here on tabular data.\nIf there were bias, the FP and FN rates discussed above would signal it. \nWe have evaluated DDUp with respect to FPs\/FNs in Section \\ref{oodeval}, showing that this is not a concern.\n\n\\section{Model Update} \\label{KD}\nIn this section, we propose a transfer-learning based method that can retain the previous knowledge of the model while adapting it to the new insertions. The OOD detection module outputs either an 'in-distribution' or an 'out-of-distribution' signal.\n\n\\textbf{The in-distribution case}. When no drift occurs, the new data distribution is similar to that of the historical data and this distribution can be represented by a similar parameter space of the latest model, \\(M_{t}\\). \nHence, the learned component of the system can remain unchanged. More specifically, the framework can copy \\(M_{t}\\) to \\(M_{t+1}\\) and update the required meta-data associated with the system (such as the frequency tables in DBEst++, or table cardinalities in Naru\/NeuroCard). Even if there are slight perturbations in the data, fine-tuning the latest model's parameters on the new data will adjust it to the general representation of both old and new data. \nWe will show that, when data is known not to be OOD, \\textit{fine-tuning}{} with a relatively small learning rate can retain model performance. \nSpecifically, with an \\textbf{in-distribution} signal at time \\(t+1\\), \\(M_{t}\\) is retrained on \\(S_{t+1}\\) with a small learning rate, $lr$. This learning rate can be tuned as a hyper-parameter.\nWe intuitively set \\(lr_{t} = \\frac{|D_{t+1}|}{|D_{t}^\\leq|}\\ \\times \\ lr_{0}\\) and experimentally show that it is a good choice. \n\n\\textbf{The OOD case}. With a distributional shift, fine-tuning on new data alone would bias the model's parameters toward the new data distribution. Even smaller learning rates cause deviations from the previous parameter space which may yield large errors during inference. And, retraining using all the data from scratch is too time-consuming. Thus, we propose an updating approach grounded in the transfer-learning paradigm. The general idea is to use the learned model \\(M_{t}\\) and incorporate it in training \\(M_{t+1}\\). To this end, we utilize \\textit{knowledge distillation}{} principles, which help to transfer the previously learned knowledge to a new model. Our rationale for such a model updating approach is based on the following: \n\\begin{itemize}[leftmargin=*]\n \\item Distillation has several benefits, including faster optimization and better generalization, and the distilled model may even outperform the directly trained model 
\\cite{yim2017gift}.\n \\item It is accurate for queries on old as well as new data.\n \\item It allows us to control the weights for queries on new and old data with just a couple of parameters.\n \\item It is efficient memory-wise as well as computationally-wise, compared to methods like Gradient Episodic Memory, or Elastic Weight Consolidation and PathInt (cf. Section \\ref{litraturere})\n \\item It does not make any assumptions about the training of the underlying models. This property, is especially desirable since: a) we can use it to update different neural networks; b) it prevents the high costs of rebuilding base models; c) different pre-processings could be left intact. For instance, Naru, DBEst++ and TVAE all use completely different types of embedding\/encoding. DDUp can update the model regardless of these differences.\n\\end{itemize}\n\n\\subsection{General Knowledge Distillation (KD)}\nKD was first introduced in \\cite{hinton2015distilling} for $model \\ compression$ by transferring knowledge from an accurate and \"cumbersome\" model, called \\textit{teacher}, to a smaller model called \\textit{student}. In its basic form, instead of fitting the student model directly to the actual data \\textit{labels}, one would use the class probability distribution learned by the teacher to fit the student model. Hinton et al. \\cite{hinton2015distilling} argued that small probabilities in \"wrong\" label logits, known as \"soft labels\", include extra information called \"dark knowledge\" that result in better learning than actual \"hard labels\". Distillation has since been extensively studied. \\autoref{fig:kdfig} shows a general view of the principles of a distillation process. A small dataset referred to as \\textit{transfer-set} is fed into a pre-trained model (teacher) and a new model (student) to be trained. A $distillation \\ loss$ is calculated using the predictions of the pre-trained model instead of the actual labels. This loss and a typical loss using actual labels will be used to train the new model. \n\n\\begin{figure}[hb]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/distillation-diagram.png}\n \\vspace{-0.2cm}\n \\caption{The knowledge distillation process.}\n \\label{fig:kdfig}\n \\vspace{-0.35cm}\n\\end{figure}\n\nTo formulate \\textit{knowledge distillation}{}, consider a model with parameters \\(\\Theta\\), representing a function \\(f_t\\) (\\(t\\) for teacher) which has been trained via Eq. \\ref{generalopt}. We would like to transfer knowledge from this teacher model to a student model with parameter \\(\\Theta'\\), representing a function \\(f_s\\). This new model could be trained as follows:\n\n\\begin{equation} \\label{distillopt}\n f_{s\\Theta'} = \\argmin_{f \\in \\mathcal{F}} \\frac{1}{|tr|} \\sum_{i\\in tr} \\left[\\lambda\\mathscr{L}_d(f_s(i);f_t(i);\\Theta;\\Theta') + (1-\\lambda)\\mathscr{L}(f_s(i);\\Theta')\\right]\n\\end{equation}\n\\\\\nfor weight \\(\\lambda\\), distillation loss \\(\\mathscr{L}_d\\), and transfer-set \\(tr\\). \n\n\\subsection{DDUp: Updating By Knowledge Distillation}\\label{upbykd}\n\n\\cite{furlanello2018born,seq-self-distill} showed that, for classification tasks, if instead of having a compact student model, one uses the same architecture of the teacher, and repeat distillation sequentially for several generations, the student models in the later generations could outperform the teacher model. 
This approach is called {\\it sequential self-distillation}.\nInspired by this, and anticipating that it will also be valid for our learning tasks, DDUp employs a sequential self-distillation approach.\n\nTo update a model using KD,\na copy of the previously trained model becomes the new student. Then, the student is updated using a distillation loss (to be defined soon). After updating, the previous teacher is replaced with the new updated model. This cycle repeats with every new insertion batch.\n\\begin{comment}\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/Sequential Distillation.png}\n \\caption{Sequential updating in a self-distillation scheme}\n \\label{fig:sequpdate}\n\\end{figure}\n\\end{comment}\n\nTo formulate our training loss function, we consider two aspects that we would like to have in our updating scheme. First, to have control over the new data\/queries versus the old data\/queries. Second, to make it general so that different learned DB systems could adopt it. As such, we first write down the general form of the total loss function and then use cross-entropy and mean-squared-error as the loss functions to instantiate different models. Training in each update step is as follows:\n\\\\\n\\begin{equation} \\label{totalloss}\n\\begin{split}\n f_{s\\Theta'} = \\argmin_{f \\in \\mathcal{F}} & \\bigg(\\alpha \\times \\frac{1}{|tr|} \\sum_{x\\in tr} \\big[\\lambda\\mathscr{L}_d(f_s(x),f_t(x);\\Theta') \\\\ \n & + (1-\\lambda)\\mathscr{L}(f_s(x);\\Theta')\\big] \\\\\n & + (1-\\alpha) \\times \\frac{1}{|up|}\\sum_{x\\in up}\\mathscr{L}(f_s(x);\\Theta') \\bigg)\n\\end{split}\n\\end{equation}\n\\\\\nHere, \\(\\alpha\\) balances the transfer-set term against the update-batch term, and \\(\\lambda\\) is the distillation weight. Also, \\(tr\\) and \\(up\\) are the transfer-set and the update batch. \nIn summary, the rationale for proposing this novel loss function is as follows. \nThe transfer-set term acts as a regularizer to avoid overfitting on new data.\nThe same goal is also helped by self-distillation (when copying the teacher to the student). Additionally, as mentioned, sequential self-distillation \\cite{seq-self-distill} may attain increasingly higher accuracy, even outperforming \"retrain from scratch\"\n(cf. Section 5.3).\n\nFor models that provide a conditional probability in the last layer of the network (e.g. using a Softmax function), an annealed cross-entropy loss will be employed. Otherwise, we utilize mean-squared-error using the logits from the last layer of the network. Eq. \\ref{cedistillloss} and Eq. \\ref{mseloss} show these two loss functions. \n\\\\\n\\begin{equation}\\label{cedistillloss}\n \\mathscr{L}_{ce}(D_{tr};z_t,z_s) = - \\sum_{i\\in [k]} \\frac{exp(z_{t_i}\/T)}{\\sum_{j\\in [k]}exp(z_{t_j}\/T)} \\log \\frac{exp(z_{s_i}\/T)}{\\sum_{j\\in [k]}exp(z_{s_j}\/T)}\n\\end{equation}\n\\\\\n\\begin{equation} \\label{mseloss}\n \\mathscr{L}_{mse}(D_{tr};z_t,z_s) = \\sum_{i\\in[|z_t|]}(z_{t_i} - z_{s_i})^2\n\\end{equation}\n\\\\\nwhere \\(D_{tr}\\) is the \\textit{transfer-set}, \\(T\\) is a temperature scalar to smooth the probabilities so that it produces \"softer\" targets, \\([k]\\) indexes the \\(k\\) classes, and \\([|z_t|]\\) indexes the logits of the network.
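\n\nTo make the total loss concrete, the following PyTorch-style sketch shows how one update step can assemble the loss of Eq. \\ref{totalloss} for a model whose last layer produces logits. It is a minimal illustration under stated assumptions rather than DDUp's actual implementation: the callable \\texttt{task\\_loss} (the model's own training loss), the assumption that \\texttt{student} and \\texttt{teacher} map a batch to logits, and the default values of the weights and the temperature are introduced here only for concreteness; the distillation term is instantiated per model type as described next.\n\n\\begin{verbatim}
import torch
import torch.nn.functional as F

def distill_ce(student_logits, teacher_logits, T=2.0):
    # Annealed cross-entropy between the softened teacher and
    # student distributions, in the spirit of the loss above.
    p_t = F.softmax(teacher_logits / T, dim=-1)
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    return -(p_t * log_p_s).sum(dim=-1).mean()

def ddup_update_loss(student, teacher, task_loss, x_tr, x_up,
                     alpha=0.5, lam=0.5):
    # Distillation plus task loss on the transfer-set; the teacher
    # is frozen and only provides soft targets.
    with torch.no_grad():
        t_logits = teacher(x_tr)
    s_logits = student(x_tr)
    l_tr = (lam * distill_ce(s_logits, t_logits)
            + (1 - lam) * task_loss(student, x_tr))

    # Plain task loss on the newly inserted update batch.
    l_up = task_loss(student, x_up)

    # Weighted combination of the two terms, as in the total loss.
    return alpha * l_tr + (1 - alpha) * l_up
\\end{verbatim}\n\nThe returned value is back-propagated through the student only; repeating this step over mini-batches, and then promoting the student to be the next teacher, yields the sequential self-distillation cycle described above.\n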
\n\n\\subsection{Instantiating the Approach}\n\n\\textbf{Mixture Density Networks}. MDNs consist of an NN to learn feature vectors and a mixture model to learn the \\textit{probability density function} (pdf) of data. Ma et al. \\cite{ma2021learned} use MDNs with Gaussian nodes to perform AQP. For the Gaussian Mixture, the last layer of the MDN consists of three sets of nodes \\(\\{\\omega_i, \\mu_i, \\sigma_i\\}_{i=1}^m\\) that form the pdf according to Eq. \\ref{mdneq}. \n\n\\begin{equation}\\label{mdneq}\n\\hat{P}(y|x_1, ..., x_n) = \\sum_{i=1}^{m}\\omega_i\\cdot\\mathscr{N}(\\mu_i, \\sigma_i) \n\\end{equation}\n\nwhere \\(m\\) is the number of Gaussian components, \\(y\\) is the dependent variable, \\((x_1, ..., x_n)\\) is a set of independent variables, and \\(\\omega_i\\) is the weight of the \\(i^{th}\\) Gaussian with a mean of \\(\\mu_i\\) and a standard deviation of \\(\\sigma_i\\).\nFor MDNs, we define the distillation loss as follows:\n\n\\begin{equation} \\label{mdnkdloss}\n\\mathscr{L}_d = \\mathscr{L}_{ce}(D_{tr}, \\omega_{t}, \\omega_{s}) + \\mathscr{L}_{mse}(D_{tr}, \\mu_{t}, \\mu_{s}) + \\mathscr{L}_{mse}(D_{tr}, \\sigma_{t}, \\sigma_{s})\n\\end{equation}\n\nThis summation of terms helps us retain both the shape of the data distribution and the intensity levels. \n\n\\textbf{Deep Autoregressive Networks}. The Naru and NeuroCard cardinality estimators \\cite{yang2019deep, yang2020neurocard} use deep autoregressive networks (DARNs) to approximate a fully factorized data density. DARNs are generative models capable of learning full conditional probabilities of a sequence using a masked autoencoder via Maximum Likelihood. Once the conditionals are available, the joint data distribution can be represented by the product rule as follows:\n\\[\n\\hat{P}(A_1, A_2, \\dots, A_n) = \\hat{P}(A_1)\\hat{P}(A_2|A_1)\\dots \\hat{P}(A_n|A_1,\\dots ,A_{n-1})\n\\]\n\nwhere \\(A_i\\) is an attribute in a relation \\(R\\). Naru and NeuroCard use cross-entropy between the input and the conditionals as the loss function. This allows us to formulate the distillation loss function using the conditionals of the teacher and the student networks. Also, in Naru and NeuroCard, each conditional is calculated using a set of logits, hence we average over all of them as follows:\n\\begin{equation} \\label{narukdloss}\n\\mathscr{L}_d = \\frac{1}{|A|}\\sum_{i=1}^{|A|}\\mathscr{L}_{ce}(D_{tr}, z_{s_i}, z_{t_i})\n\\end{equation}\n\nwhere \\(|A|\\) is the number of attributes, corresponding to the number of conditionals.\n\n\\textbf{Variational Autoencoders}. VAEs have been used for a number of DB components: \\cite{thirumuruganathan2020approximate} for AQP, \\cite{hasan2020deep} for CE, and \\cite{xu2019modeling} for synthetic tabular data generation. \nThey are a type of autoencoder that, instead of learning a deterministic encoder, decoder, and compressed vector (known as the bottleneck), learns a probabilistic encoder and decoder together with a latent random variable in place of the compressed vector. (For more details, see the seminal paper \\cite{kingma2013auto}.) \nInterestingly, a VAE is trained using a different loss function, known as the Evidence-Lower-Bound (ELBO) loss (which amounts to a lower-bound estimate of the likelihood).\nHere we shall use TVAE for learned synthetic tabular data generation (of particular importance in privacy-sensitive environments, or when data is scarce for data augmentation purposes, or when wishing to train models over tables and accessing raw data is expensive in terms of time or money).\n\nTo distill a VAE, one must cope with the random noise added to the input of the decoder by the latent variable. For that, the latent variable in the teacher network is removed, and we use the same noise generated by the student in the teacher. 
The reason for doing this is that distillation tries to teach the student to behave like the teacher for a specific observation or action. If there is randomness, the student might mimic the teacher's behaviour for a completely different observation. After this change, the corresponding logits of the encoder\/decoder of the student and the teacher are compared using MSE. Finally, the loss function is:\n\\\\\n\\begin{equation}\n \\mathscr{L}_d = \\frac{1}{2}( \\mathscr{L}_{mse}(D_{tr}, z_t^{(e)}, z_s^{(e)}) + \\mathscr{L}_{mse}(D_{tr}, z_t^{(d)}, z_s^{(d)}) )\n\\end{equation}\n\\\\\nwhere \\(e\\) and \\(d\\) correspond to the encoder and the decoder networks. \n\n\n\n\\subsection{An Example}\nWe create a simple synthetic dataset consisting of a categorical attribute, \\(x\\), with 10 \\(distinct\\mhyphen values=\\{1,2,3,\\dots,9,10\\}\\), and with each category having 1000 real values. The dataset is balanced and the real values for each category are generated by a \\textit{Mixture of Gaussians} (MoG) with five peaks. \\autoref{fig:toyexam}.a is the dataset corresponding to \\(x=1\\). We fit a \\textit{Mixture Density Network} with ten components on this dataset. \\autoref{fig:toyexam}.b shows a sample generated by this MDN, which indicates that the model has learnt the data distribution well. Next, we introduce an update batch generated by an MoG with two different means. \\autoref{fig:toyexam}.c shows the update batches in red, compared to the previous data in blue. We update the previously learned MDN with the proposed loss function in Eq. \\ref{mdnkdloss}. We repeat the updates for 50 batches generated with the new MoG. \\autoref{fig:toyexam}.d shows the final distribution learnt by the MDN.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/toy_example.png}\n \\caption{An example to show how DDUp learns new data without forgetting. 'a' is the histogram of the synthetic data corresponding to $x=1$. 'b' is the sample generated by the learned MDN for $x=1$. 'c' shows a sample of an update batch coming from different Gaussians. 'd' is the sample generated by the MDN after being updated by the DDUp loss function. We have performed the update 50 times to see the effect of high-frequency updates (this explains the higher frequencies around the last two peaks in 'd').}\n \\vspace{-0.4cm}\n \\label{fig:toyexam}\n\\end{figure}\n\n\n\\subsection{Handling Join Operations}\\label{joins}\nDDUp can operate either on raw tables or on tables resulting from joins.\nIf the old data $R$ is the result of a join, the new data batch needs to be computed, due to new tuples being inserted in any of the joined tables in $R$.\nConsider that, at time \\(t-1\\), a model \\(M_{t-1}\\) has been trained on \\(R=\\bigcup_{j=0}^{t-1}{T_1^j} \\bowtie \\bigcup_{j=0}^{t-1}{T_2^j} \\dots \\bowtie \\bigcup_{j=0}^{t-1}{T_n^j}\\), where $T_r^j$ denotes the new batch for table $T_r$ at time $j$.\nWithout loss of generality, suppose a new insertion operation $I_t$ at time \\(t\\) adds new data to table \\(T_i\\), denoted \\(T_i^t\\). The new data for DDUp in this setting is \\( D_t = (R\\ \\setminus \\ \\bigcup_{j=0}^{t-1}T_i^{j}) \\bowtie T_i^{t} \\), where $\\setminus$ denotes a (multi)set-difference operator. Therefore, for the detection module, \\(S^{\\leq}_{t-1}\\) is a sample of $R$, and \\(S_t\\) a sample from $D_t$. Furthermore, during updating, the transfer-set is a sample from $R$ and the new data is $D_t$. 
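To make the above concrete, the following is a minimal, hypothetical sketch (in pandas) of one way to materialize \\(D_t\\) when a new batch is inserted into a single joined table; the function, table, and key names are illustrative assumptions and not taken from any specific system or benchmark.\n\\begin{lstlisting}[language=Python,\n basicstyle=\\footnotesize,\n]\nimport pandas as pd\n\n# Hypothetical sketch: materialize the new data D_t when the old data R is\n# a join and a batch of tuples has been inserted into one table T_i.\n# Conceptually, the newly inserted tuples of T_i are joined with the\n# current contents of the other joined tables.\ndef delta_join(new_batch_Ti, other_tables, join_keys):\n    delta = new_batch_Ti\n    for table, key in zip(other_tables, join_keys):\n        delta = delta.merge(table, on=key, how='inner')\n    return delta  # D_t; sampling it yields S_t for the detection module\n\\end{lstlisting}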
\nPlease note that all this data preparation and how each model deals with joins is orthogonal to DDUp.\nTherefore, it can be done by either computing the actual joins above or using join samplers like \\cite{zhao2018random,shanghooshabad2021pgmjoins}, as is done in NeuroCard and compared against in Section \\ref{joinexp}.\n\n\\begin{comment}\n\\subsection{Putting it All Together}\nAt time \\(t=0\\) we have data \\(D_{0}\\). We create two samples from \\(D_{0}\\): \n\\( S^{\\leq}_{0}\\) for OOD detection and a sample as the \\textit{transfer-set}. \nWe use \\(D_{0}\\) to train a model \\(M_{0}\\). \nNext, at time \\(t=1\\), a new data batch, \\(D_{1}\\) arrives. \nDDUp follows the following steps:\n\\begin{enumerate}\\setlength\\itemsep{0.5em}\n \\item The OOD detection module uses \\(S^{\\leq}_{0}\\) and a sample of \\(D_{1}\\) to test for a significant change in data distribution and issues an 'in-distribution' or 'out-of-distribution' signal.\n \\item If 'in-distribution', go to step 3, otherwise, go to step 4.\n \\item The Model Update module deploys a baseline method to update \\(M_{0}\\) to \\(M_{1}\\). This baseline method is usually fine-tuning with a smaller learning rate. Finally, go to step 5. \n \\item Start the distillation procedure:\n \\begin{itemize}\\setlength\\itemsep{0.1em}\n \\item Make an exact copy of \\(M_{0}\\)\n \\item Feed the transfer-set into \\(M_{0}\\) and \\(M_{1}\\) \n \\item Use Eq. \\ref{totalloss} to update the copied model \\(M_{1}\\)\n \\end{itemize}\n \\item Set the updated model as \\(M_{1}\\), Drop \\(M_{0}\\), Update \\(S^{\\leq}_{0}\\) and \\textit{transfer-set}.\n \\item End.\n\\end{enumerate}\n\\end{comment}\n\n\\section{Experimental Evaluation} \\label{eval}\nWe evaluate DDUp for three different models for learned DB components: (i) Naru\/NeuroCard \\cite{yang2019deep,yang2020neurocard} which use DARN models for CE; (ii) DBest++ \\cite{ma2021learned} that uses MDNs for AQP; and (iii) TVAE \\cite{xu2019modeling}, that uses variational autoencoders for DG.\nWe evaluate in terms of model accuracy and update time.\nWe use as reference points the baseline update approach provided for AQP and CE (TVAE provides no update approach).\nWe also add as reference points the accuracy when retraining from scratch and when leaving models stale. With respect to \\textit{OOD detection}, we investigate whether it can detect significant data shifts successfully and how this will contribute to the final performance of the underlying models in their specific application, CE, AQP, DG. \nUltimately, the experiments are to address the following questions:\n\n\\begin{itemize}[leftmargin=*]\n \\item How to best evaluate DDUp? (Section \\ref{setup})\n \\item Can DDUp accurately detect a distributional shift? (Section \\ref{oodeval})\n \\item Is DDUp accurate under in- $and$ out-of- distribution settings? (Section \\ref{perfeval})\n \\item How does DDUp compare to the baseline approaches in accuracy and update time? (Section \\ref{perfeval})\n \\item What is the effect of distillation? (Section \\ref{distilleval})\n \\item Is DDUp efficient? (Section \\ref{overheads})\n\\end{itemize}\n\n\\vspace{-0.3cm}\n\\subsection{Experimental Setup} \\label{setup}\nTo establish a dynamic setup, we make a copy of the base table and randomly sample 20\\% of its rows as new data. In this setting, new data follows the previous data distribution which we denote as \\textit{in-distribution}. 
We introduce distributional drift as is typically done for tabular data settings, say in \\cite{wang2020we}. As such, after making the copy, we sort every column of the copied table individually in-place to permute the joint distribution of attributes. Next, we shuffle the rows and randomly select \\(20\\%\\) of the rows - this now becomes the new data.\nWith these new data, we perform two types of experiments. First, we consider the whole 20\\% sample as a new data batch and update the model with it. Second, to show the updatability in incremental steps, we split the 20\\% data into 5 batches. \nIn general, the size of the transfer-set is a tunable parameter \\cite{hinton2015distilling}, influenced by the dataset complexity, the underlying model generalization ability, and the downstream tasks. \nAfter tuning, we used a 10\\% transfer-set for MDN and DARN and a 5\\% for TVAE, which could be further tuned with methods like Grid search.\n\nDDUp does not impose any further constraints to those of the underlying models. For DBest++ we use a query template with a range and an equality attribute. Also, we use one-hot encoding to encode categorical attributes and normalize the range attribute to \\([-1,1]\\). For Naru\/NeuroCard and TVAE, we use the same settings as explained in their code documentation. We use the learned hyper-parameters of the base model, i.e the model we build at time zero, for all subsequent updates. Furthermore, we intuitively set \\(\\alpha\\) parameter in Eq. \\ref{totalloss} to the fraction of update batch size to the original data size and tune \\(\\lambda\\) for values in \\([9\/10, 5\/6, 1\/4, 1\/2]\\). \n\n\\subsubsection{Datasets} \\label{datasets}\nWe have mainly used three real-world datasets (census, forest, DMV) \n(see \\autoref{tab:Datasets}). These datasets \nhave been widely used in the learned DB literature. \nFor CE, \\cite{wang2020we} uses also forest, census and DMV, while NeuroCard\/Naru use JOB\/DMV. For AQP DBEst++ uses TPCDS. For DG, \\cite{xu2019modeling} uses census and forest. Thus, we have also used census, forest, DMV, and TPCDS (\\texttt{store sales} table, scaling factor of 1). Finally, for join queries, we have used JOB (on IMDB data) and TPCH benchmarks, which are also used in \\cite{yang2020neurocard, yang2019deep}.\n\n\\begin{table}[hb]\n \\caption{Characteristics of datasets.}\n \\vspace{-0.3cm}\n \\label{tab:Datasets}\n \\begin{tabular}{c c c c} \n \\toprule\n Dataset&Rows&Columns&Joint Domain\\\\\n \\midrule\n Census & 49K & 13 & $10^{16}$ \\\\\n Forest & 581K & 10 & $10^{27}$ \\\\\n DMV & 11.6M & 11 & $10^{15}$ \\\\\n TPCDS & 1M & 7 & $10^{30}$ \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{-0.5cm}\n\\end{table}\n\n\\subsubsection{Workload} \\label{workload}Each model is evaluated using 2,000 randomly generated queries. These queries are generated at time zero for each model and are used throughout the subsequent updates. When an update batch is performed, the ground truth of the queries will be updated. For Naru\/NeuroCard, we use their generator to synthesize queries: It randomly selects the number of filters per query (forest:[3,8], census: [5,12], TPCDS: [2,6], dmv: [5,12]). Then, it uniformly selects a row of the table and randomly assigns operators \\([=,>=,<=]\\) to the columns corresponding to the selected filters. Columns with a domain less than 10 are considered categorical and only equality filters are used for them. 
For DBest++, we select a \\(lower\\mhyphen bound\\) and a \\(higher\\mhyphen bound\\) for the range filter and uniformly select a category from the categorical column for the equality filter. Throughout the experiments, we discard queries whose actual answer is zero. The structure of a typical query in our experiments is:\n\n\\begin{lstlisting}[mathescape=true,\n basicstyle=\\footnotesize,\n]\nSELECT AGG(y) FROM $T_1 \\bowtie T_2 \\dots \\bowtie T_n$ WHERE $F_1$ AND ... AND $F_{d}$\n\\end{lstlisting}\n\nwhere \\(F_i\\) is a filter in one of these forms: \\([att_i = val, att_i >= val, att_i <= val]\\). Also, \\texttt{AGG} is an aggregation function like \\texttt{COUNT}, \\texttt{SUM}, or \\texttt{AVG}. For DBest++, the query template contains one categorical attribute and one range attribute. As such, we select the following columns from each dataset: census:[\\texttt{age, country}]; forest:[\\texttt{slope, elevation}]; dmv:[\\texttt{body type, max gross weight}]; TPCDS:[\\texttt{ss quantity,ss sales price}]; IMDB:[\\texttt{info type id,production year}]; TPCH:[\\texttt{order date,total price}], where the first\/second attribute is categorical\/numeric. Furthermore, Naru could not train on the full TPCDS dataset as the encodings were too large to fit in memory. Hence, we selected the following columns [\\texttt{ss sold date sk}, \\texttt{ss item sk}, \\texttt{ss customer sk},\\texttt{ss store sk}, \\texttt{ss quantity}, \\texttt{ss net profit}], and made a 500k sample.\n\n\\subsubsection{Metrics}\nFor \\textit{count} queries, we use the \\textit{q-error}, defined as follows:\n\n\\begin{equation}\n error = \\frac{\\max(pred(q), real(q))}{\\min(pred(q), real(q))}\n\\end{equation} \n\nFor \\textit{sum} and \\textit{avg} aggregates, we use the \\textit{relative-error}, defined as follows:\n\\begin{equation}\n error = \\frac{|pred(q) - real(q)|}{real(q)}\\times 100\n\\end{equation} \n\nAdditionally, Lopez et al. \\cite{lopez2017gradient} introduce the notions of Backward Transfer (BWT) and Forward Transfer (FWT) as metrics in class-incremental learning tasks. BWT is the average accuracy of the model on old tasks, and FWT is the average accuracy of the model on new tasks. Here, we re-frame BWT and FWT as follows.\nWe generate the queries at time \\(0\\) and use them for all update steps. At each step \\(t\\), we calculate \\(diff = real_t(q) - real_{t-1}(q)\\) for each query \\(q\\), which gives us three sets of queries: \\(G_{fix}\\) with \\(diff=0\\), \\(G_{changed}\\) with \\(diff>0\\), and \\(G_{all} = G_{fix} \\cup G_{changed}\\). With these groups, we define three measures. \\(AT\\): average q-error over \\(G_{all}\\). \\(FWT\\): average q-error over \\(G_{changed}\\). \\(BWT\\): average q-error over \\(G_{fix}\\).\n\n\\subsubsection{Evaluating Variational Autoencoders}\nDG is an interesting learned application, recently supported using TVAE. Thus, we evaluate DDUp for TVAE. In TVAE, once training is done, only the decoder network is kept and used, as this is the generator. Hence, we apply our distillation-update method to the decoder network. We evaluate TVAE via the accuracy of an XGBoost classifier trained on the synthetic samples, as in \\cite{xu2019modeling}. \nWe hold out 30\\% of the table as the test set, train two classifiers (one on the original data and one on the synthetic data), and then predict the classes of the held-out data. We report the \\textit{micro f1-score} for both classifiers. 
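For concreteness, a minimal sketch of this classifier-based evaluation is given below; the helper names and default XGBoost settings are our own illustrative assumptions (categorical columns are assumed to be numerically encoded beforehand).\n\\begin{lstlisting}[language=Python,\n basicstyle=\\footnotesize,\n]\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import f1_score\nfrom xgboost import XGBClassifier\n\n# Hypothetical sketch: compare classifiers trained on real vs. synthetic\n# data using the same held-out 30 percent of the real table.\ndef classifier_fidelity(real_df, synthetic_df, target):\n    train_real, test = train_test_split(real_df, test_size=0.3, random_state=0)\n    scores = {}\n    for name, train_df in [('real', train_real), ('synthetic', synthetic_df)]:\n        clf = XGBClassifier()\n        clf.fit(train_df.drop(columns=[target]), train_df[target])\n        pred = clf.predict(test.drop(columns=[target]))\n        scores[name] = f1_score(test[target], pred, average='micro')\n    return scores  # micro f1-scores for the two classifiers\n\\end{lstlisting}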
For census, forest and DMV, we use: \\textit{income}, \\textit{cover-type}, and \\textit{fuel-type}, as the target class, respectively.\nFor TVAE, we created a smaller DMV with 1m records, as training TVAE on the whole DMV is very time\/resource consuming (proving indirectly the need to avoid retraining).\n\n\\subsection{OOD Detection} \\label{oodeval}\n\n\\subsubsection{Loss Functions as Signals}\nWe first show the results of loss\/log-likelihoods when the detector receives samples from the same distributions or from different distributions. The results are shown in \\autoref{tab:avgll}. For Naru\/NeuroCard and DBEst++ we report the actual log-likelihood values (not negatives, so higher is better). For TVAE, we report the ELBO loss values (hence lower is better). \n\n\\begin{table}[hb]\n \\centering\n \\caption{Average log-likelihood and ELBO loss values of data samples on a trained model. $S_{old}$ is a sample of the previous training data. \"IND\", is a 20\\% sample from a straight copy of the original table; \"OOD\", is a 20\\% sample from a permuted copy of the original table.}\n \\vspace{-0.2cm}\n \\label{tab:avgll}\n \\resizebox{\\linewidth}{!}{%\n \\begin{tabular}{c c c c | c c c | c c c } \n \\toprule\n\\multirow{2}{*}{Dataset} & \\multicolumn{3}{c|}{DBEst++} & \\multicolumn{3}{c|}{Naru\/NeuroCard} & \\multicolumn{3}{c}{TVAE} \\\\\n& $S_{old}$ & IND & OOD & $S_{old}$ & IND & OOD & $S_{old}$ & IND & OOD \\\\\n \\midrule\n Census & -0.362 & -0.361 & -0.366 & -20.99 & -20.87 & -36.95 & -15.21\t& -15.22 & 81.47 \\\\\n Forest & -0.0194 & -0.0202 & -0.052 & -43.16 & -43.9 & -141.10 & -19.96 & -20.09 & 142.38 \\\\\n DMV & 2.520 & 2.532 & 2.444 & -13.74 & -13.16 & -18.67 & 9.114 & 9.28 & 34.95 \\\\\n \\bottomrule\n \\end{tabular}}\n \\vspace{-0.35cm}\n\\end{table}\n\n\\autoref{tab:avgll} shows that the loss function (log likelihood and ELBO in our cases) can reliably signal OOD data.\nInterestingly, this corroborates similar findings in \\cite{detectOOD-iclr17} for classification tasks in various vision and NLP tasks, where the NN outputs can be used to signal OOD. Here we show it for tabular data and for NNs developed for AQP, CE, and DG tasks. \n\nIn Naru\/NeuroCard and TVAE, when permuting, all columns are sorted individually, hence the large difference in likelihoods. \nFor DBEst++, only the selected columns for a query template have been permuted, yielding a small difference in likelihoods.\n\n\\begin{comment}\n\\begin{table}[hb]\n\\centering\n \\caption{Change of log-likelihood with the number of permuted columns for a trained autoregressive model. 0 means no columns has been sorted individually therefore the data sample is following the distribution of the training data}\n \\label{tab:permlevel}\n \\begin{tabular}{c c c c} \n \\toprule\n \\#columns & census & forest & DMV\\\\\n \\midrule\n0&-20.992&-43.16048&-13.745\\\\\n1&-28.687&-44.673&-14.616\\\\\n2&-31.103&-115.736&-17.935\\\\\n3&-31.201&-127.560&-18.151\\\\\n4&-31.591&-129.549&-18.308\\\\\n5&-34.793&-127.916&-18.955\\\\\n6&-34.359&-127.054&-18.838\\\\\n7&-35.626&-140.589&-18.858\\\\\n8&-35.938&-143.223&-18.836\\\\\n9&-36.969&-141.106&-18.670\\\\\n10&-37.029&&\\\\\n11&-37.243&&\\\\\n12&-36.953&&\\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\\end{comment}\n\n\\subsubsection{The two-sample test results} \n\\autoref{tab:driftdetect} shows results for two-sample testing for OOD detection. 
The significance level of the test (threshold) is \\(2\\times variance\\) of the bootstrapping distribution, which was obtained over $>$1000 iterations. \nIn each iteration, we use a 1\\% sample with replacement from the previous data and a 10\\% sample without replacement from the new data to calculate the test statistic. The results show that when the data is permuted, the test statistic lands far beyond the threshold, i.e., deep in the tails of the bootstrapping distribution. \nAnd since the critical value of the OOD test is found by bootstrapping over \\(S_{old}\\), i.e., \\(S^{\\leq}_{t}\\), it adjusts even to small differences when faced with OOD. \nA case in point is the DBEst++ OOD likelihood value for census (which is similar to IND\/$S_{old}$ in \\autoref{tab:avgll}) versus the corresponding test-statistic value in \\autoref{tab:driftdetect}.\n\n\\begin{table*}[t]\n \\caption{The test-statistic values. The threshold is $2\\times variance$ and bs-mean is the mean of the bootstrapping distribution. }\n \\label{tab:driftdetect}\n \\resizebox{\\textwidth}{!}{\n \\begin{tabular}{c | c c c c | c c c c | c c c c } \n \\toprule\n\\multirow{2}{*}{Dataset} & \\multicolumn{4}{c|}{DBEst++} & \\multicolumn{4}{c}{Naru\/NeuroCard} & \\multicolumn{4}{c}{TVAE} \\\\\n& bs-mean & threshold & IND & OOD & bs-mean & threshold & IND & OOD & bs-mean & threshold & IND & OOD \\\\\n \\midrule\nCensus&-0.3524 & 0.007 & 0.001 & 0.05 & -21.0076 & 0.0529 & 0.032 & 16.0052 & -15.1834 & 0.6041 & 0.0419 & 100.5126 \\\\\nForest&-0.0228 & 0.0122 & 0.007 & 0.2315 & -41.35 & 0.0141 & 0.0084 & 72.5473 & -19.99 & 0.0868 & 0.0417 & 167.0502 \\\\\nDMV&2.52 & 0.1287 & 0.0145 & 4.5745 & -13.7674 & 0.0012& 0.0007& 5.1145 & 9.1209 & 0.0177 & 0.0015 & 25.1398 \\\\\n \\bottomrule\n \\end{tabular}}\n\\end{table*}\n\n\n\n\\subsubsection{FP and FN rates in OOD detection}\\label{fpfnrates}\n\nTo evaluate OOD detection, we measure FP and FN rates (FPR, FNR). \nWe created an OOD test-set and an IND test-set, each equaling half the original size of the table. The latter is just a random sample from the original table. The former is constructed as follows. The perturbed data is obtained by perturbing one or more of five columns of the table, say $C_1, \\dots, C_5$. First we perturb $C_1$, take a sample of the resulting table of size $10\\%$, and append it to the OOD test-set. Then we perturb $C_1$ and $C_2$ and similarly sample and append it to the OOD test-set. We repeat this for perturbations on $C_1, C_2, C_3$, on $C_1, C_2, C_3, C_4$, and on $C_1, C_2, C_3, C_4, C_5$, ending up with an OOD test-set of size 50\\% of the original table. Note that this setup creates a more challenging case, as the degree of perturbation (for OOD data) is finer-grained.\nThen, for each batch, we fed a random sample from the OOD test-set and a random sample from the IND test-set to the DDUp detector. For each batch, the detector signalled IND or OOD, and we recorded the outcomes to calculate FPR and FNR. The batch size was 2,000 and we repeated the experiment for 1,000 batches.\n\nWe used the same parameters for all datasets and models: the bootstrapping size is 32 and the threshold is \\(2 \\times std\\). For DBEst++, the results are reported in \\autoref{tab:fprfnr}. FPR and FNR for Naru\/NeuroCard and TVAE were always zero. 
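For reference, the detection procedure used throughout these experiments can be summarized by the following minimal, hypothetical sketch; here, avg_loss stands for the underlying model's average training loss over a batch of rows (e.g., negative log-likelihood for DBEst++ and Naru\/NeuroCard, ELBO for TVAE), and all names and defaults are our own illustrative assumptions.\n\\begin{lstlisting}[language=Python,\n basicstyle=\\footnotesize,\n]\nimport numpy as np\n\n# avg_loss(model, df) is assumed to return the model's average training\n# loss (e.g., negative log-likelihood or ELBO) over the rows of df.\n\ndef offline_phase(model, old_sample, iters=1000, frac=0.01, k=2.0):\n    # bootstrap the sampling distribution of the average loss on old data\n    stats = [avg_loss(model, old_sample.sample(frac=frac, replace=True))\n             for _ in range(iters)]\n    return np.mean(stats), k * np.std(stats)  # centre and threshold\n\ndef online_phase(model, new_batch, centre, threshold, frac=0.1):\n    # signal OOD if the new batch's average loss lies beyond the threshold\n    stat = avg_loss(model, new_batch.sample(frac=frac))\n    return abs(stat - centre) > threshold  # True => out-of-distribution\n\\end{lstlisting}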
These FPR\/FNR results further confirm that the OOD detection algorithm is not biased.\n\n\\begin{table}[hb]\n \\vspace{-0.3cm}\n \\centering\n \\caption{FPR and FNR for DBEst++.}\n \\vspace{-0.3cm}\n \\label{tab:fprfnr}\n \\begin{tabular}{c c c } \n \\toprule\nDataset & FPR & FNR \\\\\n \\midrule\n Census & 0.15 & 0.01 \\\\\n Forest & 0.10 & 0 \\\\\n DMV & 0.01 & 0 \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{-0.35cm}\n\\end{table}\n\nFurthermore, we studied the sensitivity to the batch size, varying it from 1 to 2,000. Results are shown in \\autoref{fig:oodsens}, which clearly shows that beyond a modest batch size, FPR and FNR tend to zero. The same results hold for the other models and datasets, and are omitted here for space reasons.\n\n\\begin{figure}\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:a}\\includegraphics[scale=.29]{figures\/dbest-forest-fprfnr.png}}\n\\end{minipage}%\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:b}\\includegraphics[scale=.29]{figures\/tvae-dmv-fpfn.png}}\n\\end{minipage} %\n\\vspace{-0.4cm}\n\\caption{Sensitivity of OOD detection vs batch size.}\n\\label{fig:oodsens}\n\\vspace{-0.55cm}\n\\end{figure}\n\n\\subsection{Accuracy Results} \\label{perfeval}\n\\subsubsection{When there is OOD data} \\label{whenood}\n\nFor Naru\/NeuroCard, DBEst++, and TVAE, and for each dataset, we compare 4 updating approaches against each other and against the base model before any new data is inserted. The 4 approaches are as follows: \n\"\\texttt{Retrain}\" retrains the model from scratch using both old and new data. \"\\texttt{Baseline}\" is the baseline approach in Naru\/NeuroCard and DBest++ where a trained model is updated with new data by performing \\textit{SGD} with a smaller learning rate. \"\\texttt{DDUp}\" is the proposed method.\nFinally, in \"\\texttt{stale}\", the model is not updated -- this is a do-nothing approach.\nFor reference, we also include the numbers for $M_0$, i.e., the original model accuracy before any new data arrived.\n\\autoref{tab:qerror} and \\autoref{tab:aqpacc} show the accuracy results for CE and AQP (SUM and AVG operations), respectively.\nFor TVAE, the classification f1-scores are reported in \\autoref{tab:tvaef1}. The results in these three tables correspond to the case where the update sample is permuted. \nDDUp always performs better than the baseline approach. \nIn most cases, the performance of DBEst++ on the DMV dataset is not as good as on the other datasets. This is probably due to the complexity of the data (large scale and highly correlated attributes). Nevertheless, DDUp sits on top of the underlying model and, regardless of the model's performance, ensures that its accuracy is retained.\nPlease note the DMV results in \\autoref{tab:qerror} and \\autoref{tab:aqpacc}, and the census and forest results in \\autoref{tab:tvaef1}, where DDUp even outperforms retraining from scratch. \nInterestingly, this corroborates similar evidence for sequential self-distillation (for boosting embeddings) in classification tasks \\cite{seq-self-distill}. This was one of the reasons we adopted a self-distillation-based approach.\nFinally, the baseline methods perform poorly at the 95th and 99th percentiles. \n\n\\begin{table*}[t]\n \\caption{Results of updating a base model with a 20\\% permuted sample in terms of q-error. 
$M_{0}$ denotes the base model.}\n \\label{tab:qerror}\n \\centering\n \\begin{tabular}{c c | c | c | c | c | c | c | c | c | c | c} \n \\toprule\n\\multirow{2}{*}{Dataset} & \\multirow{2}{*}{metric} & \\multicolumn{5}{c|}{DBEst++} & \\multicolumn{5}{c}{Naru\/NeuroCard} \\\\\n&&$M_{0}$&DDUp&baseline&stale&retrain&$M_{0}$&DDUp&baseline&stale&retrain \\\\\n \\midrule\n\\multirow{4}{*}{census}&median&1.05&1.11&1.17&1.16&1.07&1.08&1.09&4&1.14&1.07\\\\\n&95th&2&2&2.20&2&2&2&2&471.80&2&2\\\\\n&99th&3&3&4&3&3&3&3&1534.69&3.16&3\\\\\n&max&5&7&11&10.50&5&5.25&7&8385&21.88&6\\\\\n\\midrule\n\\multirow{4}{*}{forest}&median&1.026&1.046&2&1.18&1.02&1.04&1.07&1.54&1.10&1.05\\\\\n&95th&2&2&63.40&2&1.64&2.48&3&41&2.50&2.75\\\\\n&99th&2&2.583&503.12&5.60&2&4&6&157.16&5.48&5\\\\\n&max&4&5.33&3470&90.85&5.33&27&65.66&1691&484&34.66\\\\\n\\midrule\n\n\\multirow{4}{*}{DMV}&median&1.20&1.143&3.48&1.88&1.34&1.02&1.04&2.57&1.16&1.02\\\\\n&95th&4.91&5.07&234.88&7.00&5.50&1.20&1.41&468.68&1.50&1.25\\\\\n&99th&9.65&10&3897.87&12.50&8&1.83&2.31&4734.62&2.84&2\\\\\n&max&18.83&19&65875&39&17&8&9.81&343761&9.49&5\\\\\n\n\\midrule\n\\multirow{4}{*}{TPCDS}&median&1.02&1.04&57&1.27&1.02&1.01&1.07&1.15&1.10&1.05\\\\\n&95th&1.16&1.26&269&1.58&1.18&2&2&29&2&2\\\\\n&99th&1.5&1.61&1266&2.72&1.5&3.01&3.01&239&4&3\\\\\n&max&3&3&4534&10.66&5.64&5&28&5100&28&24\\\\\n\n \\bottomrule\n \\end{tabular}\n\\end{table*}\n\n\\begin{table}[t]\n \\caption{mean-relative-error for SUM and AVG aggregation functions for DBEst++.}\n \\label{tab:aqpacc}\n \\centering\n \\resizebox{\\linewidth}{!}{%\n \\begin{tabular}{c c | c c c c c} \n \\toprule\nDataset&function&$M_{0}$&DDUp&baseline&stale&retrain\\\\\n \\midrule\n\\multirow{2}{*}{census}&SUM&13.05&17.30&65.88&21.36&13.60\\\\\n&AVG&1.89&2.36&8.15&2.37&1.97\\\\\n\\midrule\n\\multirow{2}{*}{forest}&SUM&10.11&15.51&88.73&24.59&10.14\\\\\n&AVG&0.76&1.04&3.90&1.35&0.79\\\\\n\\midrule\n\\multirow{2}{*}{TPCDS}&SUM&4.53&6.37&61.40&22.64&5.12\\\\\n&AVG&0.88&1.47&12&3.50&1.21\\\\\n\\midrule\n\\multirow{2}{*}{DMV}&SUM&76.73&85.29&423&97.00&110\\\\\n&AVG&6.4&6.9&15.9&8.6&7.3\\\\\n\n\\bottomrule\n \\end{tabular}}\n\\end{table}\n\n\\begin{table}[t]\n \\caption{Classification results for TVAE in terms of micro f1. 'r' stands for real data, 's' stands for synthetic data.}\n \\label{tab:tvaef1}\n \\centering\n \\resizebox{\\linewidth}{!}{%\n \\begin{tabular}{c | c c | c c | c c | c c | c c } \n \\toprule\n\\multirow{2}{*}{Dataset}\n&\\multicolumn{2}{c}{$M_{0}$}&\\multicolumn{2}{c}{DDUp}&\\multicolumn{2}{c}{baseline}&\\multicolumn{2}{c}{stale}&\\multicolumn{2}{c}{retrain}\\\\\n&r&s&r&s&r&s&r&s&r&s\\\\\n \\midrule\ncensus&0.67&0.63&0.77&0.73&0.77&0.55&0.77&0.56&0.77&0.72\\\\\nforest&0.84&0.69&0.89&0.78&0.89&0.63&0.89&0.60&0.89&0.74\\\\\nDMV&0.97&0.97&0.98&0.97&0.98&0.92&0.98&0.93&0.98&0.98\\\\\n\n\\bottomrule\n \\end{tabular}}\n\\end{table}\n\n\\subsubsection*{Performance on old and new queries} To better illustrate the effects of \\textit{catastrophic forgetting}{} and \\textit{intransigence} we elaborate on performance on FWT and BWT. (As \\texttt{retrain} avoids be definition \\textit{catastrophic forgetting}{} and \\textit{intransigence}, it is omitted).\nThe results are shown in \\autoref{tab:mdntranfers}. 
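For clarity, the following hypothetical sketch recaps how the re-framed measures are computed from the per-query results (the record fields and helper names are illustrative; the tables report the median and tail percentiles of the same per-group q-errors).\n\\begin{lstlisting}[language=Python,\n basicstyle=\\footnotesize,\n]\nimport numpy as np\n\ndef q_error(pred, real):\n    return max(pred, real) \/ min(pred, real)\n\n# each query record holds the model's prediction after the update and the\n# true answers before (real_prev) and after (real_now) the insertion\ndef at_fwt_bwt(queries):\n    err_all, err_changed, err_fix = [], [], []\n    for q in queries:\n        e = q_error(q['pred'], q['real_now'])\n        err_all.append(e)\n        if q['real_now'] - q['real_prev'] > 0:\n            err_changed.append(e)  # G_changed\n        else:\n            err_fix.append(e)      # G_fix\n    # AT over G_all, FWT over G_changed, BWT over G_fix\n    return np.mean(err_all), np.mean(err_changed), np.mean(err_fix)\n\\end{lstlisting}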
\nNote that any insertion affects only a percentage of queries, shown in \n\\autoref{tab:querypercents}.\nComparing AT, FWT, and BWT in \\autoref{tab:qerror} and \\autoref{tab:mdntranfers} first note that fine-tuning always performs much better in terms of FWT compared to BWT (due to catastrophic forgetting).\nSecond, conversely, a stale model shows better BWT compared to FWT. \nFor DDUp, FWT and BWT remain close to each other, especially in terms of median q-error, showing that DDUP can ensure accuracy for queries on old and new data.\nOverall, DDUp enjoys high accuracy.\n\n\\subsubsection*{Incremental Steps} To show the updates in incremental steps, we have split the \\(20\\%\\) data into 5 equal-sized chunks and have performed an update incrementally for each batch. \\autoref{fig:incupdates2} compares the trend of accuracy during updates. As it is clear from the figures, DDUp remains very close to \\texttt{retrain}, while there is a drastic drop in accuracy using \\texttt{baseline}. Starting point \\(0\\) is where the base model \\(M_{0}\\) is built from scratch. (The same results hold for 95th, 99th percentiles and maximum q-error). \n\n\n\\begin{table*}[t]\n \\caption{Comparing q-error of different updating approaches in terms of FWT and BWT.}\n \\vspace{-0.2cm}\n \\label{tab:mdntranfers}\n \\begin{tabular}{c c | c | c c | c c | c c | c | c c | c c | c c } \n \\toprule\n\\multirow{3}{*}{Dataset} & \\multirow{3}{*}{metric} & \\multicolumn{7}{c|}{DBEst++} & \\multicolumn{7}{c}{Naru\/NeuroCard} \\\\\n&&\\multicolumn{1}{c}{$M_{0}$}&\\multicolumn{2}{c}{DDUp}&\\multicolumn{2}{c}{baseline}&\\multicolumn{2}{c|}{stale}&\\multicolumn{1}{c}{$M_{0}$}&\\multicolumn{2}{c}{DDUp}&\\multicolumn{2}{c}{baseline}&\\multicolumn{2}{c}{stale} \\\\\n\n&&&FWT&BWT&FWT&BWT&FWT&BWT& &FWT&BWT&FWT&BWT&FWT&BWT\\\\\n\n\\midrule\n\\multirow{3}{*}{census}&median&1.05&1.06&1.12&1.06&1.20&1.05&1.16&1.08&1.11&1.09&1.83&6&1.20&1.13 \\\\\n&95th&2&1.66&2&1.56&2.33&3.30&2&2&1.64&2&4.63&530.80&3.18&2 \\\\\n&99th&3&4.94&3&4.10&4&8.90&2.75&3&3.08&3&9.98&1598.53&8.49&3\\\\\n\n\\midrule\n\\multirow{3}{*}{forest}&median &1.02&1.01&1.08&1.23&2.66&1.05&1.20&1.04&1.07&1.07&1.39&1.65&1.18&1.08\\\\\n&95th&2&1.181&2&2.87&146.38&2.85&2&2.489&1.88&3&3.13&43.02&7.55&2.33\\\\\n&99th&2&1.52&3&3.72&590.57&18.33&2.24&4&4.89&6&5.27&163.80&191.53&4.86\\\\\n\n\\midrule\n\\multirow{3}{*}{DMV}&median&1.20&1.28&1.13&2.20&4.36&1.66&1.54&1.02&1.02&1.07&1.06&12.85&1.26&1.19\\\\\n&95th&4.910&4.30&5.87&3.34&484.46&9.50&6.87&1.20&1.16&1.55&1.65&1015.81&3.30&1.40\\\\\n&99th&9.65&9&11.65&10.50&5894.21&12.12&10.80&1.83&1.47&3&3.35&8183.34&11.93&2.49\\\\\n\n\\midrule\n\\multirow{3}{*}{TPCDS}&median&1.02&1.03&1.04&1.20&1.51&1.16&1.21&1.01&1.06&1.08&1.19&1.11&1.10&1.10\\\\\n\n&95th&1.16&1.21&1.29&2.37&339&2.26&1.35&2&2&2&2.60&54&2&2\\\\\n&99th&1.5&1.37&1.66&4.27&1536&4.48&1.66&3.01&9.77&3&9.47&434&9.64&3.77\\\\\n\n\\bottomrule\n \\end{tabular}\n\\end{table*}\n\n\\begin{table}[t]\n \\caption{The percentage of the queries (out of 2k queries) with changed actual results after inserting 20\\% new data.}\n \\vspace{-0.2cm}\n \\label{tab:querypercents}\n \\begin{tabular}{c c c } \n \\toprule\n dataset & DBEst++ & Naru \\\\\n \\midrule\n census&14\\%&12\\% \\\\\n forest&32\\%&9\\% \\\\\n TPCDS&36\\%&36\\% \\\\\n dmv&52\\%&45\\%\\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{-0.5cm}\n\\end{table}\n\nWe also have evaluated the models with respect to the \\textit{log-likelihood goodness-of-fit}. 
Log-likelihood is widely used to evaluate NN models.\nUsing log-likelihood allows evaluation to be independent\nof underlying applications. \\autoref{fig:incll} shows changes in log-likelihood in consecutive update steps. At each step, we calculate the average of log-likelihoods over a sample of new data and a sample from historical data. In these figures we again see that updating with DDUp is fitting to the old and the new data very similarly to the \\texttt{retrain} case. In general, when keep using \\texttt{stale}, the log-likelihood drops after the first update and then remains low. The reason is that all update batches have similar permutation and since we calculate unweighted averages, the log-likelihood stays fixed. While, for \\texttt{baseline}, i.e fine-tuning, we can see a gradual decrease of likelihood which means that the network is increasingly forgetting about previous data in each step. \n\n\\begin{figure}\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:a}\\includegraphics[scale=.29]{figures\/dbest_census.png}}\n\\end{minipage}%\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:b}\\includegraphics[scale=.29]{figures\/naru_census.png}}\n\\end{minipage} %\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:c}\\includegraphics[scale=.29]{figures\/dbest_forest.png}}\n\\end{minipage}%\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:d}\\includegraphics[scale=.29]{figures\/naru_forest.png}}\n\\end{minipage}\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:e}\\includegraphics[scale=.29]{figures\/dbest_dmv.png}}\n\\end{minipage}%\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:f}\\includegraphics[scale=.29]{figures\/naru_dmv.png}}\n\\end{minipage}\n\\vspace{-0.3cm}\n\\caption{Updating results over 5 consecutive updates.}\n\\label{fig:incupdates2}\n\\vspace{-0.4cm}\n\\end{figure}\n\n\\begin{figure}\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:c}\\includegraphics[scale=.29]{figures\/naru_census_loglikelihood.png}}\n\\end{minipage}%\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:d}\\includegraphics[scale=.29]{figures\/naru_dmv_loglikelihood.png}}\n\\end{minipage}\n\\vspace{-0.3cm}\n\\caption{log-likelihood results over 5 consecutive updates.} \n\\label{fig:incll}\n\\vspace{-0.3cm}\n\\end{figure}\n\n\\subsubsection{When data is not OOD}\nIn this case, simple fine-tuning update algorithms, such as \\texttt{baseline}, will likely avoid \\textit{catastrophic forgetting}{}. \nTo illustrate this, we have repeated the 5 batched incremental updates with data without permutation. The results are reported in \\autoref{fig:incupdatenodrift}. For space reasons, we only show the results for census. The results indicate that for in-distribution data, simple baselines can have a performance close to \\texttt{retrain}.\n\n\\begin{figure}\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:a}\\includegraphics[scale=.29]{figures\/dbest_census_nodrift.png}}\n\\end{minipage}%\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:b}\\includegraphics[scale=.29]{figures\/naru_census_nodrift.png}}\n\\end{minipage} %\n\\vspace{-0.3cm}\n\\caption{Updating results over 5 consecutive updates when data follows the same distribution as the historical data.}\n\\label{fig:incupdatenodrift}\n\\vspace{-0.3cm}\n\\end{figure}\n\n\\begin{table}[hb]\n\\vspace{-0.4cm}\n \\caption{DDUp's speed up over \\texttt{retrain}, for two update sizes. For census, forest, and dmv, sp1: 20\\% of the original table. 
sp2: 5\\% of the original table. For IMDB and TPCH, sp1: updating the first partition and sp2: updating the last partition.}\n \\label{tab:times}\n\\vspace{-0.25cm}\n \\begin{tabular}{c | c c | c c | c c } \n \\toprule\n\\multirow{2}{*}{Dataset} & \\multicolumn{2}{c|}{DBEst++} & \\multicolumn{2}{c|}{Naru} & \\multicolumn{2}{c}{TVAE} \\\\\n&sp1&sp2&sp1&sp2&sp1&sp2 \\\\\n \\midrule\ncensus&5&5.5&3.5&4&3.4&5.7 \\\\\nforest&1.6&4&5&9.2&3.6&7 \\\\\nDMV&4&6.5&2.3&9.6&3.4&6.8 \\\\\nIMDB&4.5&18&3.5&5&NA&NA \\\\\ntpch&6.5&16&2&4&NA&NA \\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\n\\subsection{Evaluating DDUp for Join Queries} \\label{joinexp}\nAs mentioned, DDUp is unconcerned whether, at a time $t$, \n\\(S^{\\leq}_{t-1}\\) (a sample of \\(\\cup_{j=0}^{t-1} D_j\\)) and $D_t$ come from a raw table or from a join.\nFor this experiment, we have evaluated DDUp running 2,000 queries over two 3-table joins from the JOB and TPCH datasets.\nFor each benchmark, the 2,000 queries involve a join of the fact table with two dimension tables: \nspecifically, the join of tables [\\texttt{title}, \\texttt{movie info idx}, \\texttt{movie companies}] for IMDB, and [\\texttt{orders}, \\texttt{customer}, \\texttt{nation}] for TPCH. For the update dynamics, we have split the fact table into 5 time-ordered, equally-sized partitions. We have built \\(M_0\\) on the join (of the fact table's first partition with the 2 dimension tables) and updated it with one subsequent partition at a time. This is similar to the update setting in NeuroCard.\nResults for both CE and AQP are in \\autoref{fig:joins}.\n\nNeuroCard, unlike the other models, natively supports joins, using\na \"fast-retrain\", i.e., a light retraining where the model is retrained using a 1\\% sample of the full join result. We have included this policy here as \"fast-retrain\". \nDDUp always signalled OOD for the new update batches, except for the TPCH data on DBest++, where no update was triggered. Therefore, in \\autoref{fig:joins}.d the accuracy of the stale model and of fine-tuning is close to retrain. This further confirms the significance of OOD detection.\n\n\\begin{figure}[htbp]\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:a}\\includegraphics[scale=.29]{figures\/Naru-imdb-95th.png}}\n\\end{minipage}%\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:b}\\includegraphics[scale=.29]{figures\/Naru-tpch-95th.png}}\n\\end{minipage} \\\\\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:c}\\includegraphics[scale=.29]{figures\/DBest-imdb-sum-rel-error.png}}\n\\end{minipage}%\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:d}\\includegraphics[scale=.29]{figures\/DBest-tpch-sum-rel-error.png}}\n\\end{minipage}\n\\vspace{-0.3cm}\n\\caption{DDUp's performance on joined tables.}\n\\label{fig:joins}\n\\vspace{-0.3cm}\n\\end{figure}\n\n\\subsection{Effect of Transfer Learning} \\label{distilleval}\nWe now delve into the effects of transfer-learning in DDUp: how much does DDUp's transfer-learning via knowledge distillation contribute to better accuracy? \nWe perform experiments where we remove the transfer-learning (distillation) term of Eq. \\ref{totalloss}. That is, we combine the sample from the previous data (the transfer-set) with the new update batch and train a model\nwith the same configuration as the base model. \\autoref{fig:tleffect} shows the results.\nThe results show that DDUp's performance is not merely due to the previous data sample; distillation has a substantial effect on the accuracy of the updated models. 
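To make this ablation concrete, the following PyTorch-style pseudocode sketches the full update objective of Eq. \\ref{totalloss} next to the distillation-free variant (AggTrain); loss_task and loss_distill stand for the model-specific terms \\(\\mathscr{L}\\) and \\(\\mathscr{L}_d\\), and all names are our own illustrative assumptions.\n\\begin{lstlisting}[language=Python,\n basicstyle=\\footnotesize,\n]\n# Hypothetical sketch; loss_task and loss_distill are assumed to return\n# batch-averaged values of the model-specific terms L and L_d.\ndef ddup_loss(student, teacher, transfer_batch, update_batch, alpha, lam):\n    s_tr, t_tr = student(transfer_batch), teacher(transfer_batch)\n    transfer_term = (lam * loss_distill(s_tr, t_tr)\n                     + (1 - lam) * loss_task(s_tr, transfer_batch))\n    new_term = loss_task(student(update_batch), update_batch)\n    return alpha * transfer_term + (1 - alpha) * new_term\n\ndef aggtrain_loss(student, transfer_batch, update_batch):\n    # ablation (AggTrain): no teacher; simply fit old sample plus new batch\n    return (loss_task(student(transfer_batch), transfer_batch)\n            + loss_task(student(update_batch), update_batch))\n\\end{lstlisting}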
\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/transfer_learning_effect.png}\n \\caption{Effect of transfer-learning on q-error. AggTrain is the case where we aggregate the transfer-set with the new data and train a model similar to the base model.}\n \\label{fig:tleffect}\n\\end{figure}\n\\vspace{-0.2cm}\n\\subsection{Overheads} \\label{overheads}\nWe report on the costs of each DDUp module separately. All code is written and executed in Python 3.8, on an Ubuntu 20 machine with 40 CPU cores, two Nvidia GTX 2080 GPUs, and 64GB of memory. With respect to memory usage, DDUp performs regular feed-forward steps as in regular NN training. Therefore, DDUp does not increase the memory footprint.\nIn terms of time, DDUp has two computational costs, namely \\textit{OOD detection} and \\textit{model update}. OOD detection is split into an offline and an online phase. \\autoref{tab:offontime} shows these two times. The largest detection time is for the forest dataset on a Naru model, which takes around 3 minutes. However, please note that the online phase in this case only takes about 1 second to detect a change in the data. \n\n\\begin{table}[hb]\n \\caption{Online and offline times (in seconds) during OOD detection.}\n \\label{tab:offontime}\n\\vspace{-0.25cm}\n \\begin{tabular}{c | c c | c c | c c } \n \\toprule\n\\multirow{2}{*}{Dataset} & \\multicolumn{2}{c|}{DBEst++} & \\multicolumn{2}{c|}{Naru} & \\multicolumn{2}{c}{TVAE} \\\\\n&off&on&off&on&off&on \\\\\n \\midrule\ncensus&2.44&0.02&111&1.8&310&5.5\\\\\nforest&28&0.04&174&0.92&433&8.8\\\\\nDMV&86&2&144&10&99&0.44\\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\n\\autoref{tab:times} shows DDUp's speedup over \\texttt{retrain} for OOD data, for different update sizes. When data is OOD, DDUp can be over 9$\\times$ faster than \\texttt{retrain}. Obviously, speedups will be higher for incremental steps. This is reflected in the IMDB and TPCH datasets, where, after inserting the last partition, DDUp is 18$\\times$ faster than \\texttt{retrain}. Note that the updating time depends on a few parameters, including the update size, the transfer-set size, the training batch size, etc. During updates, we have used smaller training batch sizes. If one tunes the model for bigger batches and smaller transfer-sets, the speedup would be higher.\n\n\\vspace{-0.2cm}\n\\subsection{Non-Neural-Network Models}\\label{nonnn}\nFor the sake of completeness and as an additional reference point, we include results for updating a state-of-the-art non-NN model that natively supports data insertions, DeepDB \\cite{hilprecht2019deepdb}, used for CE. \nWhen an update happens, DeepDB traverses its sum-product-network graph and updates the weights of the intermediate nodes and the histograms at the leaves. We have repeated the same experiment as in \\autoref{tab:qerror} for DeepDB. The results are reported in \\autoref{tab:deepdb}.\n\n\\begin{table}[t]\n \\caption{Performance of DeepDB updating vs. 
DDUp for Naru, for a CE task in terms of q-error.}\n \\vspace{-0.35cm}\n \\label{tab:deepdb}\n \\centering\n \\begin{tabular}{c c | c | c | c | c | c } \n \\toprule\n\\multirow{2}{*}{Dataset} & \\multirow{2}{*}{ metric} & \\multicolumn{3}{c|}{DeepDB} & \\multicolumn{2}{c}{Naru} \\\\\n&&$M_{0}$&update&retrain&$M_{0}$&DDUp \\\\\n \\midrule\n\\multirow{3}{*}{census}&median&1.05&1.2&1.05&1.08&1.09\\\\\n&95th&3&4.18&3&2&2\\\\\n&99th&5.11&8&5&3&3\\\\\n\\midrule\n\\multirow{3}{*}{forest}&median&1.02&1.2&1.02&1.04&1.07\\\\\n&95th&7.5&10.5&7&2.48&3\\\\\n&99th&31&52&31&4&6\\\\\n\\midrule\n\\multirow{3}{*}{DMV}&median&1.06&1.25&1.1&1.02&1.04\\\\\n&95th&2.5&3.5&2.5&1.20&1.41\\\\\n&99th&22&37&21&1.83&2.31\\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{-0.3cm}\n\\end{table}\n\nFrom \\autoref{tab:deepdb} it can be observed that DeepDB's updating policy is under-performing, as was independently verified in \\cite{wang2020we}. \nDDUp (coupled in this experiment with Naru\/NeuroCard for CE) always performs better. Nonetheless, we wish to emphasize that the saving grace for DeepDB based on our experiments is that retraining from scratch is very efficient -- significantly faster compared to NNs. \n\n\\section{Related Work} \\label{litraturere}\n\\subsection{Learned Database Systems}\\label{ldbliterature}\nNN-based components to be used by DBs are emerging rapidly. Different works exploit different neural network models.\n\\cite{yang2019deep, yang2020neurocard, hasan2020deep} used generative neural networks to build learned selectivity estimators. Thirumuruganathan et al. \\cite{thirumuruganathan2020approximate} used VAEs for AQP. Ma et al. \\cite{ma2021learned} used mixture density networks for AQP. Database indexing research\nrecently has adopted neural networks to approximate cumulative density functions \\cite{kraska2018case,ding2020alex,nathan2020learning,ding2020tsunami}. Query optimization and join ordering are also benefiting from neural networks \\cite{marcus2019neo, kipf2018learned}. Other applications include auto-tuning databases \\cite{van2017automatic,li2019qtune,zhang2019end}, cost estimation \\cite{zhi2021efficient, siddiqui2020cost}, and workload forecasting \\cite{zhu2019novel}.\n\nAmong these, this work provides a solution for handling NN model maintenance in the face of insertion-updates with OOD data, when the models need to continue ensuring high accuracy on new and old data and on tasks for which models were originally trained (such as AQP, CE, DG, etc.).\nWhile there has been related research on transfer learning for learned DBs such as \\cite{hilprecht2021one, wu2021unified} these target a different problem setting:\nThey study how to transfer knowledge from a model trained for one task, and\/or a DB, and\/or a system, and\/or a workload to a new task and\/or DB, and\/or system, and\/or workload. They do not study how to keep performing the original task(s) on evolving datasets with insertions carrying OOD data with high accuracy for queries on both old and new data. Simply using these methods by fine-tuning on new data will incur catastrophic forgetting. Nevertheless, since these models employ some sorts of knowledge transfer, they might be useful to support updates. 
However, it remains open whether and how the models in \\cite{wu2021unified, hilprecht2021one} can be utilized to efficiently solve the problems tackled in this paper.\nWhile some non-neural-network models (e.g., DeepDB) can retrain from scratch very efficiently,\nNN-based models for the above problem setting either do not support insertion-updates or suffer from poor accuracy when facing OOD data, unless they pay the high costs of retraining from scratch.\n\n\n\\subsection{OOD Detection}\nOOD detection has recently attracted a lot of attention; it has long been studied in statistics as concept drift (CD) detection or novelty detection. In general, CD and OOD detection methods can be divided into two broad categories \\cite{gama2014survey,lu2018learning,wang2020few}. \nThe first is prediction-based methods, which use the predictions of the underlying models to test for a change. Recent ML models usually use the predictive probabilities of the classifiers as a confidence score to identify changes \\cite{jiang2018trust,ovadia2019can, wilson2020bayesian,ruff2021unifying}. Others monitor the error of the underlying models and trigger an OOD signal when a significant change is captured \\cite{gama2006learning, baena2006early,savva2019aggregate,nehme2009self,lopez2016revisiting}. While these approaches are very efficient in time, they typically come with limiting assumptions depending on the underlying model or application. For example, most of them are only applicable to, and only studied for, classification (supervised) tasks.\nThe second broad family is distribution-based methods. Some of these methods try to find a distance measure that best exposes the discrepancy between the new and old data distributions, using tests like Kolmogorov-Smirnov (KS) \\cite{kolmogorov1933sulla}, Wilcoxon \\cite{pereira2009machine}, and their multi-variate variants \\cite{fasano1987multidimensional, baringhaus2004new}. Others try to learn the density of the underlying data distribution and test for a significant change, as in kernel-density-based approaches \\cite{kifer2004detecting,dasu2006information,gu2016concept,lu2014concept,bu2016pdf,song2007statistical}. More recent works utilize the estimated likelihoods of generative models \\cite{ren2019likelihood, morningstar2021density, xiao2020likelihood}. Other approaches rely on the inner representations of the networks \\cite{li2021cutpaste,hendrycks2019using,lee2018simple}. Nonetheless, this second family of OOD detection methods is usually expensive (especially for multi-dimensional data) and involves fitting a separate density estimator. Hence, the main problem is that, in an insertion scenario, the density estimators also need to be updated (typically by training from scratch upon each insertion).\n\n\\vspace{-0.3cm}\n\\subsection{Incremental Learning (IL)}\nMost IL methods regularize the model so that it acquires knowledge from the new task while retaining the knowledge of old tasks. For example, \\textit{Elastic Weight Consolidation (EWC)} \\cite{kirkpatrick2017overcoming} adds a regularizer to control the learning speed around weights of the network that are important for old tasks while learning a new task. Similar works developed around this idea include \\cite{liu2018rotate, lee2020continual,titsias2019functional}, \\textit{Path Integral (PathInt)} \\cite{zenke2017continual}, and \\textit{Riemannian Walk (RWalk)} \\cite{chaudhry2018riemannian}. 
Other approaches exploit knowledge distillation to retain the knowledge of previous tasks \\cite{li2017learning}.\nAnother group of IL methods saves exemplars from past data \\cite{wu2019large, castro2018end, rebuffi2017icarl} or generates samples\/features using generative models \\cite{ostapenko2019learning, kemker2017fearnet} and involves them in learning new tasks. Lopez et al. \\cite{lopez2017gradient} proposed \\textit{Gradient Episodic Memory}, which consists of \\textit{M} blocks of memory to store examples from \\textit{T} tasks and uses the model's predictions on these examples as a constraining loss that inhibits the model from biasing toward the new task and forgetting past tasks. Lastly, some works keep previous models intact and create new models (or parts of a model, like a single layer) for each new task. Aljundi et al. \\cite{aljundi2017expert} introduce \\textit{Expert Gate}, with different models for each task and an autoencoder that learns the representations of each task to assign test-time tasks to the proper model. Instead of learning a whole new model, Rusu et al. \\cite{rusu2016progressive} introduce \\textit{Progressive Neural Networks}, which add new columns to the previous network architecture and learn lateral connections between them. Most of the above methods do not account for in- and out-of-distribution updates and are not easily extendable to different learning tasks. \n\n\\vspace{-0.2cm}\n\\section{Conclusion} \\label{conclusion}\nLearned DB components can become highly inaccurate when faced with new OOD data, if they are to ensure high accuracy for queries on both old and new data for their original learning tasks.\nThis work proposes, to our knowledge, the first solution to this problem, coined DDUp.\nDDUp entails two novel components, for OOD detection and model updating.\nTo make detection widely applicable, OOD detection in DDUp exploits the output of the neural network (be it based on log-likelihood, cross-entropy, ELBO loss, etc.), and utilizes a principled two-sample test and a bootstrapping method to efficiently derive and use thresholds to signal OOD data.\nDDUp also offers a general solution for model updating based on sequential self-distillation and a new loss function which carefully accounts for \\textit{catastrophic forgetting} and \\textit{intransigence}.\nThis work showcases the wide applicability of DDUp model updating by instantiating the general approach for three important learned functions for data management, namely AQP, CE, and DG, whereby a different type of NN (MDNs, DARNs, VAEs) is used for each. In fact, to our knowledge, no prior work has shown how to \"distill-and-update\" MDNs, VAEs, and DARNs.\nComprehensive experimentation shows that DDUp detects OOD data accurately and that its updated models ensure high accuracy with very low overheads.\n\n\n\\section{Acknowledgement}\nThis work is partially sponsored by Huawei IRC and by EPSRC while doing a PhD at the University of Warwick.\n\n\\balance\n\n\\bibliographystyle{ACM-Reference-Format}\n\n\\section{Introduction} \\label{introduction}\nDatabase systems (DBs) are largely embracing ML. With data volumes reaching unprecedented levels, ML can provide highly-accurate methods to perform central data management tasks more efficiently. 
Applications abound: AQP engines are leveraging ML to answer queries much faster and more accurately than traditional DBs \\cite{ma2019dbest,hilprecht2019deepdb,thirumuruganathan2020approximate,ma2021learned}.\nCardinality\/selectivity estimation, has improved considerably leveraging ML \\cite{yang2019deep,yang2020neurocard,hasan2020deep,zhu2020flat,wang2020we}. Likewise for query optimization \n\\cite{marcus2019neo,kipf2018learned,marcus2021bao},\nindexes \\cite{kraska2018case,ding2020alex,nathan2020learning,ding2020tsunami}, cost estimation \\cite{zhi2021efficient, siddiqui2020cost}, workload forecasting \\cite{zhu2019novel}, DB tuning \\cite{van2017automatic,li2019qtune,zhang2019end}, synthetic data generation \\citep{xu2019modeling,choi2017generating,park2018data}, etc. \n\n\\subsection{Challenges}\\label{challenges}\nAs research in learned DB systems\nmatures, two key pitfalls are emerging. First, if the \"context\" (such as the data, the DB system, and\/or the workload) changes, previously trained models are no longer accurate. Second, training accurate ML models is costly. Hence, retraining from scratch when the context changes should be avoided whenever possible.\nEmerging ML paradigms, such as active learning, transfer learning, meta-learning, and zero\/few-shot learning are a good fit for such context changes and have been the focus of recent related works \\cite{ma2020active, hilprecht2021one, wu2021unified}, where the primary focus is to glean what is learned from existing ML models (trained for different learning tasks and\/or DBs and\/or workloads), \nand adapt them for new tasks and\/or DBs, and\/or workloads, while avoiding the need to retrain models from scratch.\n\n{\\bf OOD Data insertions.} In analytical DBs data updates primarily take the form of new data insertions. New data may be OOD (representing new knowledge -- distributional shifts), rendering previously-built ML models obsolete\/inaccurate.\nOr, new data may not be OOD. In the former case, the model must be updated and it must be decided how the new data could be efficiently reflected in the model to continue ensuring accuracy.\nIn the latter case, it is desirable to avoid updating the model, as that would waste time\/resources.\nTherefore, it is also crucial to check (efficiently) whether the new data render the previously built model inaccurate. \nHowever, related research has not yet tackled this problem setting, whereby\n\\textit{models for the same learning tasks (e.g., AQP, DG, CE, etc.) trained on old data, continue to provide high accuracy for the new data state} (on old and new data, as queries now may access both old data and new data, old data, or simply the new data).\nRelated work for learned DB systems have a limited (or sometimes completely lack the) capability of handling such data insertions (as is independently verified in \\cite{wang2020we} and will be shown in this paper as well).\n\n{\\bf Sources of Difficulty and Baselines.} \nIn the presence of OOD, a simple solution is adopted by some of the learned DB components like Naru \\cite{yang2019deep}, NeuroCard \\cite{yang2020neurocard}, DBest++ \\cite{ma2021learned}, and even the aforementioned transfer\/few-shot learning methods \\cite{wu2021unified, hilprecht2021one}. That is to \"fine-tune\" the original model $M$ on the new data. Alas, this is problematic. For instance, while a DBest++ model on the \"Forest\" dataset has a 95th percentile q-error of 2, updating it with an OOD sample using fine-tuning increases the 95th q-error to ~63. 
A similar accuracy drop occurs for other key models as well -- \\cite{wang2020we} showcases this for learned CE works.\nThis drastic drop of accuracy is due to the fundamental problem of \\textit{catastrophic forgetting}{} \\cite{mccloskey1989catastrophic}, where retraining a previously learned model on new tasks, i.e. new data, causes the model to lose the knowledge it had acquired about old data. To avoid \\textit{catastrophic forgetting}{}, Naru and DBest++ suggest using a smaller learning rate while fine-tuning with the new data. This, however, causes another fundamental problem, namely \\textit{intransigence}, \\cite{chaudhry2018riemannian} whereby the model resists fitting to new data, rendering queries on new data inaccurate.\n\nAnother simple solution to avoid these problems would be to aggregate the old data and new data and retrain the model from scratch. However, as mentioned, this is undesirable in our environment. As a concrete example, training Naru\/NeuroCard on the \"Forest\" dataset (with only 600k rows) on a 40-core CPU takes ca. 1.5 hours. Similarly high retraining overheads are typically observed for neural network models, for various tasks.\nAnd, retraining time progressively increases as the DB size increases. \n\nTherefore, more sophisticated approaches are needed, which can avoid \\textit{intransigence} and \\textit{catastrophic forgetting}{},\nupdate models only when needed and do so while ensuring much smaller training overheads than retraining from scratch and at the same time ensure high accuracy for queries on old and new data. While for some tasks, like CE, some researchers question whether achieving very high accuracy through learned models will actually help the end-task (query optimization) \\cite{marcus2021bao}, for tasks like AQP (which is itself the end-task) and for DG (with classification as the end-task) high accuracy is clearly needed, as shown here. Even for CE, with OOD data, accuracy can become horribly poor, as shown here, which is likely to affect query optimization.\n\n\\subsection{Contributions} \\label{contribution}\nTo the best of our knowledge, this work proposes the first updatability framework (DDUp) for learned DBs (in the face of new data insertions possibly carrying OOD data)\nthat can ensure high accuracy for queries on new and\/or old data. \nDDUp is also efficient and \nit can enjoy wide applicability, capable of being utilized for different NNs and\/or different learning tasks (such as AQP, DG, CE, etc.). DDUp consists of a novel OOD detection and a novel model-update module. More specifically, the contributions of DDUp are:\n\n\\begin{itemize}[leftmargin=10pt]\n \\item A general and principled two-sample test for OOD detection. Generality stems from it being based on the training loss function of the NNs. Compared to prior art, it introduces no extra costs and overheads, and could be used with different NNs, with different loss functions, in different applications. To further minimize detection time, it is divided into offline and online phases.\n \\item A novel and general formulation of transfer-learning based on sequential self-distillation for model updating. 
This formulation allows a higher degree of freedom in balancing tasks w.r.t.\ new and old data, can adapt to different models and tasks, and maximizes performance via self-distillation.
 \item Importantly, DDUp can be used by any pre-trained NN without introducing any assumptions on models or requiring additional components that might require retraining models or incur more costs. Here, we instantiate it for three different tasks (namely, the CE task, using the Naru/NeuroCard deep autoregressive network (DARN) models \cite{yang2019deep, yang2020neurocard}, the AQP task, using the DBEst++ mixture density network (MDN) model \cite{ma2021learned}, and the DG task, using the Tabular Variational AutoEncoder (TVAE) model \cite{xu2019modeling}), each of which employs a different NN type. These are representative learning tasks and networks with evident importance in DBs and beyond. These instantiations are also novel, showing how to distill-and-update MDNs, DARNs, and TVAEs.
 \item Finally, DDUp is evaluated using six different datasets and the three instantiated learned DB components, for AQP, CE, and DG.
\end{itemize}

\subsection{Limitations} \label{limits}
DDUp focuses only on data insertions, which are essential and dominant in analytical DBs, and not on in-place updates and deletes, which are prevalent in transactional DBs.
Nonetheless, the latter touch upon an open problem in the ML literature, namely \textit{unlearning}, which typically concerns privacy (e.g., removing sensitive data from images in classification tasks \citep{sekhari2021remember, golatkar2020eternal}).
Studying unlearning for DB problem settings is a formidable task of its own and of high interest for future research.

Also, DDUp is designed for NN-based learned DB components. This is because neural networks are a very rich family of models which have collectively received very large attention for learned DBs. Extending DDUp principles beyond NN models is also left for future research.

\section{The Problem and Solution Overview} \label{problemdef}
\subsection{Problem Formulation} \label{problemformulation}
Consider a database relation \(R\) with attributes \(\{A_1, A_2, ..., A_m\}\). This can be a raw table or the result of a join query. Also consider a sequence of \(N\) insertion updates denoted by \(I=\{I_1,I_2,...,I_N\}\). Each \(I_t\) is an insert operation which appends a data batch \(D_t=\{(A_1, A_2, ..., A_m)_t^{(i)}; i=1,..., n_t\}\) to \(R\), where \(n_t\) is the number of rows. Let \(S_t\) be a sufficient sample of \(D_t\) and \(S^{\leq}_{t-1}\) be a sufficient sample from \(\cup_{j=0}^{t-1} D_j\). We naturally assume that \(|R|\) is finite.
And, due to the training restrictions of existing models, we also make the natural assumption:
\[\forall A_i \in R: supp(D_{t}(A_i)) \subseteq supp(D_{t-1}(A_i)) \]
where \(supp(D(A_i))\) is the support of attribute \(A_i\) in dataset \(D\). This assumption ensures that the domain of each attribute is not extended by the upcoming update batches.

\textbf{Statistical test for data changes}. We define out-of-distribution detection as a two-sample hypothesis test between a sample of historical data and a sample of the new data. Let \(S^{\leq}_{t-1}\) have a joint distribution of \(P(A_1,\dots, A_m) \equiv \mathbb{P}\) and \(S_{t}\) have a joint distribution of \(Q(A_1,\dots, A_m) \equiv \mathbb{Q}\).
We define the null hypothesis \(H_0: \mathbb{P}=\mathbb{Q}\), which asserts that \(S_{t}\) and \(S^{\leq}_{t-1}\) come from the same distribution, and the alternative hypothesis \(H_A: \mathbb{P}\neq \mathbb{Q}\), which declares that the two samples are generated by two different distributions.

\textbf{Incrementally updating the model}. Consider that for \(I_0\) a model \(M_{0}\) is trained by minimizing a loss function \(\mathscr{L}(D_{0};\Theta_0)\). This model may be stale for \(I_t; t>0\). Ideally, the goal of incremental learning is: at time \(t\), train a model \(M_{t}\) that minimizes \(\sum_{i=1}^{t} \mathscr{L}(D_{i};\Theta_i)\). This new model should not forget \(\{I_{i}; i=0,1,...,t-1\}\) and should also learn \(I_t\).

\subsection{A High Level View of DDUp}\label{highlevel} 
The overall architecture of DDUp is depicted in \autoref{fig:Arch}.

\begin{figure*}
    \centering
    \includegraphics[width=0.9\linewidth, height=4cm]{figures/Detailed-DDUp-Architecture.png}
    \caption{The overall structure of DDUp. DDUp uses the latest model and previous data to build a sampling distribution for the two-sample test, and updates the learned component based on the shift in the data distribution.}
    \label{fig:Arch}
    \vspace{-0.3cm}
\end{figure*}

DDUp processes batches of tuples at a time. Such batched handling of insertions is typical in analytical DBs. Furthermore, it takes into account that the effect of single tuples is usually negligible for the overall large space modelled by NNs; for most tasks like CE, AQP and DG, the effect of a single tuple on the final result is very small, considering the large sizes of tables. Batching also amortizes detect-and-update costs over many insertion operations.

Upon a new batch insertion, DDUp takes the latest model \(M_{t-1}\) and performs bootstrapping sampling from the previous data to build the sampling distribution for the average loss values. DDUp uses this distribution to calculate a significance level corresponding to a confidence level (e.g., 95\%). The general idea is that if the new data is similar to the previous data (IND in \autoref{fig:Arch}), the loss values of \(M_{t-1}\) for this new data should lie within the threshold. This means that the new data has the same distribution and therefore the model can be left intact (updating perhaps just the hyper-parameters of the system, such as frequency tables and other table statistics). Alternatively, a simple fine-tuning can be performed to adapt the model to the new data.

If the loss values exceed the threshold, this implies that the data distribution has significantly changed. DDUp then deploys a teacher-student transfer learning method based on knowledge distillation to learn this new distribution without forgetting the knowledge of the old data. In this framework, while the student directly learns the distribution of the new data, the teacher acts as a regularizer to make the student also learn about the old distribution.
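For concreteness, the following minimal Python sketch summarizes one DDUp insertion step. The function names (\texttt{ood\_detect}, \texttt{finetune}, \texttt{distill\_update}) are illustrative placeholders for the modules developed in the next two sections, and the learning-rate scaling anticipates Section~\ref{KD}; this is a sketch of the control flow only, not part of any of the underlying systems.

\begin{lstlisting}[language=Python, basicstyle=\footnotesize]
from typing import Any, Callable, Sequence

def ddup_step(model: Any, old_sample: Sequence, new_batch: Sequence,
              transfer_set: Sequence, base_lr: float, n_old: int, n_new: int,
              ood_detect: Callable, finetune: Callable,
              distill_update: Callable) -> Any:
    """One DDUp insertion step: detect a shift, then fine-tune or distill."""
    if not ood_detect(model, old_sample, new_batch):
        # In-distribution: light fine-tuning with a scaled-down learning rate
        return finetune(model, new_batch, lr=base_lr * n_new / n_old)
    # Out-of-distribution: the current model becomes the teacher; a student
    # copy is trained on the transfer-set (old data) and the new batch
    return distill_update(teacher=model, transfer_set=transfer_set,
                          update_batch=new_batch)
\end{lstlisting}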
\vspace{-0.3cm}
\section{Out-of-Distribution Detection} \label{driftdetect}
\subsection{Background} \label{oodback}
In ML, OOD is typically addressed from a classification perspective. Formally, assume \(D\) is a dataset of \((x,y)\) pairs which are drawn from a joint distribution, \(p(x,y)\), where \(x \in \mathcal{X} := \{x_1, x_2, \dots, x_n\}\) is the input (independent variable) consisting of \(n\) features, and \(y \in \mathcal{Y} := \{1,2, \dots, k\}\) is the label corresponding to one of the \(k\) in-distribution classes. A sample \((x,y)\) that is likely generated by a distribution other than \(p(x,y)\) is called OOD if \(y \notin \mathcal{Y}\), i.e., it does not belong to any previously seen class.

A similar problem has previously been addressed in statistics as {\it concept drift} detection, where different types of shifts are distinguished by expanding \(p(x,y)\) using the Bayes rule:
\begin{equation}\label{bayesrule}
    p(x,y)=p(x)p(y|x)
\end{equation}
Based on Eq. \ref{bayesrule}, changes in \(P(y|x)\) are usually referred to as \textit{real drift}, while changes in \(P(x)\) are called \textit{virtual drift} \cite{gama2014survey}. In \(X\rightarrow y\) problems the latter is mostly known as \textit{covariate shift}.
Deciding which drift to detect depends on the underlying models. For example, deep autoregressive networks (e.g., used by \cite{yang2019deep}) learn the full joint distribution of a table. Hence, they are sensitive to \textit{covariate shift} upon insertions.
On the other hand, mixture density networks (e.g., used by \cite{ma2021learned}) model the conditional probability between a set of independent attributes and a target attribute. Hence, for these models, one would be interested in detecting \textit{real drift}.
\vspace{-0.2cm}
\subsection{Loss based OOD Detection} \label{llfordrift}
There are several challenges that make it difficult to simply adopt one of the OOD detection algorithms from the ML or statistical learning literature.
First, DB tables are multivariate in nature and learned models are usually trained on multiple attributes. As a result, univariate two-sample tests like the Kolmogorov--Smirnov (KS) test are not suitable for this purpose. Second, the test should introduce low overheads to the system, as insertions may be frequent. Therefore, multivariate tests like kernel methods that require learning densities and performing expensive inference computations are not desirable. Third, we aim to support different learning tasks for which different models might be used. Thus, most OOD detection methods in ML that are based on probability scores (confidence) of classification tasks are not useful here. Moreover, the test should be able to adapt efficiently to the case where insertions occur within old data, that is, without having to recalculate baseline thresholds, etc.

An efficient OOD detection method is now proposed that resolves all of the above issues by leveraging the underlying ML models themselves. Central to most learned data system components is the ability to derive from the underlying data tables a model for the joint or conditional data distribution, like \(p(x)\) or \(p(y|x)\).
A model usually achieves this by learning a set of parameters \(\Theta\) that represent a function \(f\), by iteratively optimizing a loss function as follows:

\begin{equation} \label{generalopt}
    f_\Theta = \argmin_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^n \mathscr{L}(f(x_i);\Theta) + \Omega(f)
\end{equation}

where \(\Omega\) is a regularization term, \(n\) is the number of samples, and \(f\) could be the outputs of the model in the last layer (called \textit{logits}) or the probabilities assigned by a "softmax" function.

We will later discuss different loss functions in more detail when instantiating different models. In general, loss functions are usually highly non-convex with many local minima, and a good learning strategy will find a (near-)global minimum. Because of the large data sizes, training is usually done by iterating over mini-batches, and a gradient descent algorithm updates the parameters based on the average loss of the samples in each mini-batch per iteration. For the rest of the paper, by 'loss value' we mean the average of the losses of the samples in a batch. Once the model is trained, i.e., the loss values have converged, the model can serve as a transformation that maps (high-dimensional) input data to the one-dimensional loss space around the global minimum. Accordingly, the previous data (seen by the model) lie closer to the global minimum than out-of-distribution data.

The above discussion explains why in- and out-of-distribution data can be compared in a low-dimensional space just by relying on the underlying models, without any further assumptions or components. With this in hand, we can perform a statistical test to compare the loss values of old data and new data. In the following we explain a two-sample test for this purpose.

\subsection{A Two-Sample Test Procedure}
The steps for a two-sample hypothesis test are:
1. Define the null hypothesis, \(H_0\), and the alternative hypothesis, \(H_A\).
2. Define a test statistic \(d\) that measures whether an observed value is extreme under \(H_0\).
3. Determine a significance level \(\delta\in[0,1]\) that defines the \textit{type-1 error} (false positives) of the test.
4. Calculate the \(p\mhyphen value\), i.e., the probability of observing a test statistic (e.g., a distance between two distributions) at least as extreme as the one actually observed, assuming \(H_0\) holds.
5. If \(p\mhyphen value \leq \delta\), the result is statistically significant and there is strong evidence to reject \(H_0\) in favor of \(H_A\). Otherwise, the test fails to reject \(H_0\).

The main challenge herein is how to calculate the significance of the test statistic, i.e., the \(p\mhyphen value\). As explained in Section \ref{problemdef}, we aim to detect whether new data inserted into the system at time \(t\) has a different distribution than the previous data. Let \(S_{t-1}^{\leq}\) be a sample of the previous data and \(S_{t}\) be a sample of the newly inserted data. Let \(d(S_{t-1}^{\leq},\ S_{t})\) be a distance function that measures the distance between the two samples. If \(P_d\) is the distribution of the test statistic \(d\) under the null hypothesis, then the test significance can be computed as \(p\mhyphen value=P(P_d < d \mid H_0)\).
Note that since our test statistic is a distance function, we perform a one-sided (rather than a two-sided) test.

\textbf{Choosing the test statistic}. The test statistic should reflect the similarity of the new data to the old data. Following the discussion in Section \ref{llfordrift}, we use the loss function values after convergence of the models. We use the difference between the average loss values of the two samples as our test statistic:
\begin{equation}\label{teststatistic}
d(S_{t-1}^{\leq},S_{t}) = \frac{1}{|S_{t-1}^{\leq}|}\sum_{s\in S_{t-1}^{\leq}}\mathscr{L}(s;\Theta) - \frac{1}{|S_t|}\sum_{s\in S_{t}}\mathscr{L}(s;\Theta)
\end{equation}

where \(\mathscr{L}\) is the loss function of the trained model \(M\) with parameters \(\Theta\). It follows from Eq. \ref{teststatistic} that if the loss function is the negative log-likelihood, and the likelihoods are exact, the test statistic is the logarithm of the well-known \textit{likelihood-ratio} test statistic. Eq. \ref{teststatistic} also gives intuition about the effect size: the larger \(d\) is, the larger the difference between the two data distributions.
Although many of the learned DB models are trained by maximizing likelihood, some other models (e.g., regressions) are trained using a \textit{mean-squared-error} (MSE) objective. It has been shown \cite{watkins1992maximum} that MSE optimization also maximizes likelihood. Therefore, the form of the distance function in Eq. \ref{teststatistic} still holds.
The important consequence of Eq. \ref{teststatistic} is that, under i.i.d.\ assumptions for both samples, the central limit theorem holds for \(P_d\); hence, it has a normal limiting distribution with mean 0 (under \(H_0\)) and unknown standard deviation. The normality of \(P_d\) allows us to make inferences based on confidence intervals. To estimate the standard deviation (std), we use a bootstrapping approach.

\subsection{Offline and Online Steps}
The main bottleneck of this OOD detection procedure is the bootstrapping. Fortunately, it can be performed offline, before data insertion. In the offline phase, we draw \(n\) bootstrap samples of size \(|S^{\leq}_{t-1}|\) from \(S^{\leq}_{t-1}\). (In practice, when we have access to the original data, we draw the \(n\) bootstrap samples of size \(|S_{t-1}^{\leq}|\) from \(D_{t-1}^{\leq}\).) We use the model \(M_{t-1}\) to compute the likelihoods (or other losses) of each sample and create a sampling distribution from them. Then, we calculate the standard deviation of the sampling distribution, \(std\), and use it to set the significance threshold. In the online phase, we take a sample of the new data, \(S_{t}\), and use the latest model, \(M_{t-1}\), to calculate the likelihood of \(S_{t}\). Finally, we compare the test statistic with the threshold. If \(d > 2\times std\) (equivalently, \(p\mhyphen value \leq \delta\) with \(\delta=0.05\)), we declare a significant shift in the data and reject the null hypothesis in favor of the alternative hypothesis. Otherwise the test fails to reject the null hypothesis and signals "in-distribution".
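To make the offline and online phases concrete, a minimal NumPy sketch is given below; \texttt{losses\_old} and \texttt{losses\_new} stand for the per-tuple loss (or log-likelihood) values produced by \(M_{t-1}\) on \(S^{\leq}_{t-1}\) and \(S_t\), and the function names are illustrative rather than part of any of the systems we build on.

\begin{lstlisting}[language=Python, basicstyle=\footnotesize]
import numpy as np

def offline_threshold(losses_old, n_boot=1000, seed=0):
    """Bootstrap the sampling distribution of the mean loss on old data
    and return the 2*std threshold used by the online test."""
    rng = np.random.default_rng(seed)
    boot_means = np.array([
        rng.choice(losses_old, size=len(losses_old), replace=True).mean()
        for _ in range(n_boot)])
    return 2.0 * boot_means.std()

def is_ood(losses_old, losses_new, threshold):
    """Online step: compare the test statistic d against the threshold."""
    d = np.mean(losses_old) - np.mean(losses_new)
    return d > threshold  # True: reject H0, signal 'out-of-distribution'
\end{lstlisting}

The offline part runs once per trained model, so at insertion time only the losses of the new sample need to be computed.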
\subsection{The Test Errors}\label{testerrors}
There are two errors associated with hypothesis testing. A \textit{type-1 error} is rejecting the null hypothesis when it should not be rejected. A \textit{type-2 error} is accepting the null hypothesis when it should be rejected. The first introduces false positives to the system and the second causes false negatives.
False positives (FPs) are only a (rather small) performance concern: time is spent updating the model although accuracy is preserved. False negatives (FNs), however, can cause a loss of accuracy.
Therefore, the system can afford to be stricter with respect to the significance level, in order to reduce the risk of false negatives and accuracy loss.

DDUp uses the loss of the trained NNs for OOD detection. NNs can sometimes be over-confident \cite{nguyen2015deep,ren2019likelihood,nalisnick2018deep}, which may introduce bias.
However, we have not witnessed this for the tasks on tabular data studied here.
If there were bias, the FP and FN rates discussed above would signal it.
We have evaluated DDUp with respect to FPs/FNs in Section \ref{oodeval}, showing that this is not a concern.

\section{Model Update} \label{KD}
In this section, we propose a transfer-learning based method that can retain the previous knowledge of the model while adapting it to the new insertions. The OOD detection module outputs either an 'in-distribution' or an 'out-of-distribution' signal.

\textbf{The in-distribution case}. When no drift occurs, the new data distribution is similar to that of the historical data, and this distribution can be represented by a similar parameter space to that of the latest model, \(M_{t}\).
Hence, the learned component of the system can remain unchanged. More specifically, the framework can copy \(M_{t}\) to \(M_{t+1}\) and update the required meta-data associated with the system (such as the frequency tables in DBEst++, or table cardinalities in Naru/NeuroCard). Even if the data varies slightly, fine-tuning the latest model's parameters on the new data will adjust it to a general representation of both old and new data.
We will show that, when data is known not to be OOD, \textit{fine-tuning}{} with a relatively small learning rate can retain model performance.
Specifically, with an \textbf{in-distribution} signal at time \(t+1\), \(M_{t}\) is retrained on \(S_{t+1}\) with a small learning rate, $lr$. This learning rate can be tuned as a hyper-parameter.
We intuitively set \(lr_{t} = \frac{|D_{t+1}|}{|D_{t}^\leq|}\ \times \ lr_{0}\) and experimentally show that it is a good choice.

\textbf{The OOD case}. Under a distributional shift, fine-tuning on new data biases the model's parameters toward the new data distribution. Even with smaller learning rates, the resulting (small) deviations from the previous parameter space may yield large errors during inference. And retraining from scratch using all the data is too time consuming. Thus, we propose an updating approach grounded in the transfer-learning paradigm. The general idea is to use the learned model \(M_{t}\) and incorporate it in training \(M_{t+1}\). To this end, we utilize \textit{knowledge distillation}{} principles, which help transfer the previously learned knowledge to a new model. Our rationale for such a model updating approach is based on the following:
\begin{itemize}[leftmargin=*]
 \item Distillation has several benefits, including faster optimization and better generalization, and it may even outperform directly trained models
\\cite{yim2017gift}.\n \\item It is accurate for queries on old as well as new data.\n \\item It allows us to control the weights for queries on new and old data with just a couple of parameters.\n \\item It is efficient memory-wise as well as computationally-wise, compared to methods like Gradient Episodic Memory, or Elastic Weight Consolidation and PathInt (cf. Section \\ref{litraturere})\n \\item It does not make any assumptions about the training of the underlying models. This property, is especially desirable since: a) we can use it to update different neural networks; b) it prevents the high costs of rebuilding base models; c) different pre-processings could be left intact. For instance, Naru, DBEst++ and TVAE all use completely different types of embedding\/encoding. DDUp can update the model regardless of these differences.\n\\end{itemize}\n\n\\subsection{General Knowledge Distillation (KD)}\nKD was first introduced in \\cite{hinton2015distilling} for $model \\ compression$ by transferring knowledge from an accurate and \"cumbersome\" model, called \\textit{teacher}, to a smaller model called \\textit{student}. In its basic form, instead of fitting the student model directly to the actual data \\textit{labels}, one would use the class probability distribution learned by the teacher to fit the student model. Hinton et al. \\cite{hinton2015distilling} argued that small probabilities in \"wrong\" label logits, known as \"soft labels\", include extra information called \"dark knowledge\" that result in better learning than actual \"hard labels\". Distillation has since been extensively studied. \\autoref{fig:kdfig} shows a general view of the principles of a distillation process. A small dataset referred to as \\textit{transfer-set} is fed into a pre-trained model (teacher) and a new model (student) to be trained. A $distillation \\ loss$ is calculated using the predictions of the pre-trained model instead of the actual labels. This loss and a typical loss using actual labels will be used to train the new model. \n\n\\begin{figure}[hb]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/distillation-diagram.png}\n \\vspace{-0.2cm}\n \\caption{The knowledge distillation process.}\n \\label{fig:kdfig}\n \\vspace{-0.35cm}\n\\end{figure}\n\nTo formulate \\textit{knowledge distillation}{}, consider a model with parameters \\(\\Theta\\), representing a function \\(f_t\\) (\\(t\\) for teacher) which has been trained via Eq. \\ref{generalopt}. We would like to transfer knowledge from this teacher model to a student model with parameter \\(\\Theta'\\), representing a function \\(f_s\\). This new model could be trained as follows:\n\n\\begin{equation} \\label{distillopt}\n f_{s\\Theta'} = \\argmin_{f \\in \\mathcal{F}} \\frac{1}{|tr|} \\sum_{i\\in tr} \\left[\\lambda\\mathscr{L}_d(f_s(i);f_t(i);\\Theta;\\Theta') + (1-\\lambda)\\mathscr{L}(f_s(i);\\Theta')\\right]\n\\end{equation}\n\\\\\nfor weight \\(\\lambda\\), distillation loss \\(\\mathscr{L}_d\\), and transfer-set \\(tr\\). \n\n\\subsection{DDUp: Updating By Knowledge Distillation}\\label{upbykd}\n\n\\cite{furlanello2018born,seq-self-distill} showed that, for classification tasks, if instead of having a compact student model, one uses the same architecture of the teacher, and repeat distillation sequentially for several generations, the student models in the later generations could outperform the teacher model. 
This approach is called {\it sequential self-distillation}.
Inspired by this, and anticipating that it will also be valid for our learning tasks, DDUp employs a sequential self-distillation approach.

To update a model using KD, a copy of the previously trained model becomes the new student. Then, the student is updated using a distillation loss (defined below). After updating, the previous teacher is replaced with the newly updated model. This cycle repeats with every new insertion batch.
\begin{comment}
\begin{figure}
    \centering
    \includegraphics[width=\linewidth]{figures/Sequential Distillation.png}
    \caption{Sequential updating in a self-distillation scheme}
    \label{fig:sequpdate}
\end{figure}
\end{comment}

To formulate our training loss function, we consider two aspects that we would like our updating scheme to have. First, to have control over the new data/queries versus the old data/queries. Second, to be general, so that different learned DB systems can adopt it. As such, we first write down the general form of the total loss function and then use cross-entropy and mean-squared-error as the loss functions to instantiate different models. Training in each update step is as follows:
\\
\begin{equation} \label{totalloss}
\begin{split}
    f_{s\Theta'} = \argmin_{f \in \mathcal{F}} & \bigg(\alpha \times \frac{1}{|tr|} \sum_{x\in tr} \big[\lambda\mathscr{L}_d(f_s(x),f_t(x);\Theta') \\ 
    & + (1-\lambda)\mathscr{L}(f_s(x);\Theta')\big] \\
    & + (1-\alpha) \times \frac{1}{|up|}\sum_{x\in up}\mathscr{L}(f_s(x);\Theta') \bigg)
\end{split}
\end{equation}
\\
Here, \(\alpha\) and \(\lambda\) are the new data and the distillation weights, respectively. Also, \(tr\) and \(up\) are the transfer-set and the update batch.
In summary, the rationale for proposing this novel loss function is the following:
the transfer-set term acts as a regularizer to avoid overfitting on new data;
the same goal is also helped by self-distillation (when copying the teacher to the student); additionally, as mentioned, sequential self-distillation \cite{seq-self-distill} may attain increasingly higher accuracy, even outperforming "retrain from scratch"
(cf. Section 5.3).

For models that provide a conditional probability in the last layer of the network (e.g. using a Softmax function), an annealed cross-entropy loss is employed. Otherwise, we utilize the mean-squared-error of the logits from the last layer of the network. Eq. \ref{cedistillloss} and Eq. \ref{mseloss} show these two loss functions.
\\
\begin{equation}\label{cedistillloss}
    \mathscr{L}_{ce}(D_{tr};z_t,z_s) = - \sum_{i\in [k]} \frac{exp(z_{t_i}/T)}{\sum_{j\in [k]}exp(z_{t_j}/T)} \log \frac{exp(z_{s_i}/T)}{\sum_{j\in [k]}exp(z_{s_j}/T)}
\end{equation}
\\
\begin{equation} \label{mseloss}
    \mathscr{L}_{mse}(D_{tr};z_t,z_s) = \sum_{i\in[|z_t|]}(z_{t_i} - z_{s_i})^2
\end{equation}
\\
where \(D_{tr}\) is the \textit{transfer-set}, \(T\) is a temperature scalar that smooths the probabilities so as to produce "softer" targets, \([k]\) denotes the index set \(\{1,\dots,k\}\) of the classes over which the class probabilities are defined, and \([|z_t|]\) indexes the logits of the network.
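As an illustration, a minimal PyTorch-style sketch of one evaluation of Eq. \ref{totalloss} is shown below. It assumes that the teacher and student expose class logits and that \texttt{hard\_loss} stands for the model's ordinary training loss; for networks without a softmax output, the MSE variant of Eq. \ref{mseloss} would take the place of \texttt{distill\_ce}. The names are ours and purely illustrative.

\begin{lstlisting}[language=Python, basicstyle=\footnotesize]
import torch
import torch.nn.functional as F

def distill_ce(student_logits, teacher_logits, T=2.0):
    """Annealed cross-entropy between teacher and student distributions."""
    p_t = F.softmax(teacher_logits / T, dim=-1)
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    return -(p_t * log_p_s).sum(dim=-1).mean()

def ddup_loss(student, teacher, hard_loss, transfer_batch, update_batch,
              alpha, lam, T=2.0):
    """Distillation plus hard loss on the transfer-set, weighted by lambda,
    combined with the ordinary loss on the update batch, weighted by alpha."""
    with torch.no_grad():
        t_logits = teacher(transfer_batch)   # soft targets from the teacher
    s_logits = student(transfer_batch)
    transfer_term = (lam * distill_ce(s_logits, t_logits, T)
                     + (1 - lam) * hard_loss(student, transfer_batch))
    update_term = hard_loss(student, update_batch)
    return alpha * transfer_term + (1 - alpha) * update_term
\end{lstlisting}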
\subsection{Instantiating the Approach}

\textbf{Mixture Density Networks}. MDNs consist of an NN to learn feature vectors and a mixture model to learn the \textit{probability density function} (pdf) of the data. Ma et al. \cite{ma2021learned} use MDNs with Gaussian nodes to perform AQP. For the Gaussian mixture, the last layer of the MDN consists of three sets of nodes \(\{\omega_i, \mu_i, \sigma_i\}_{i=1}^m\) that form the pdf according to Eq. \ref{mdneq}.

\begin{equation}\label{mdneq}
\hat{P}(y|x_1, ..., x_n) = \sum_{i=1}^{m}\omega_i\,\mathscr{N}(\mu_i, \sigma_i) 
\end{equation}

where \(m\) is the number of Gaussian components, \(y\) is the dependent variable, \((x_1, ..., x_n)\) is the set of independent variables, and \(\omega_i\) is the weight of the \(i^{th}\) Gaussian, which has mean \(\mu_i\) and standard deviation \(\sigma_i\).
For MDNs, we define the distillation loss as follows:

\begin{equation} \label{mdnkdloss}
\mathscr{L}_d = \mathscr{L}_{ce}(D_{tr}, \omega_{t}, \omega_{s}) + \mathscr{L}_{mse}(D_{tr}, \mu_{t}, \mu_{s}) + \mathscr{L}_{mse}(D_{tr}, \sigma_{t}, \sigma_{s})
\end{equation}

This summation of terms helps retain both the shape of the data distribution and the intensity levels.

\textbf{Deep Autoregressive Networks}. The Naru and NeuroCard cardinality estimators \cite{yang2019deep, yang2020neurocard} use deep autoregressive networks (DARNs) to approximate a fully factorized data density. DARNs are generative models capable of learning the full conditional probabilities of a sequence using a masked autoencoder trained via maximum likelihood. Once the conditionals are available, the joint data distribution can be represented by the product rule as follows:
\[
\hat{P}(A_1, A_2, \dots, A_n) = \hat{P}(A_1)\hat{P}(A_2|A_1)\dots \hat{P}(A_n|A_1,\dots ,A_{n-1})
\]

where \(A_i\) is an attribute in a relation \(R\). Naru and NeuroCard use the cross-entropy between the input and the conditionals as the loss function. This allows us to formulate the distillation loss function using the conditionals of the teacher and the student networks. Also, in Naru and NeuroCard, each conditional is calculated using a set of logits, hence we average over all of them as follows:
\begin{equation} \label{narukdloss}
\mathscr{L}_d = \frac{1}{|A|}\sum_{i=1}^{|A|}\mathscr{L}_{ce}(D_{tr}, z_{s_i}, z_{t_i})
\end{equation}

where \(|A|\) is the number of attributes, corresponding to the number of conditionals.

\textbf{Variational Autoencoders}. VAEs have been used for a number of DB components: \cite{thirumuruganathan2020approximate} for AQP, \cite{hasan2020deep} for CE, and \cite{xu2019modeling} for synthetic tabular data generation.
They are a type of autoencoder that, instead of learning a deterministic encoder, decoder, and compressed vector (known as the bottleneck), learns a probabilistic encoder, a probabilistic decoder, and a latent random variable in place of the compressed vector (for more details, see the seminal paper \cite{kingma2013auto}).
Interestingly, a VAE is trained using a different loss function, known as the Evidence Lower Bound (ELBO) loss (which amounts to a lower-bound estimate of the likelihood).
Here we use TVAE for learned synthetic tabular data generation, which is of particular importance in privacy-sensitive environments, when data is scarce and augmentation is needed, or when one wishes to train models over tables for which accessing the raw data is expensive in terms of time or money.

To distill a VAE, one must cope with the random noise added to the input of the decoder by the latent variable. For that, the latent variable in the teacher network is removed, and we feed the teacher the same noise that is generated by the student.
The reason for doing this is that distillation tries to teach the student to behave like the teacher for a specific observation or action. If there is randomness, the student might mimic the teacher's behaviour for a completely different observation. After this change, the corresponding logits of the encoder\/encoder of the student and the teacher are compared using MSE. Finally, the loss function is:\n\\\\\n\\begin{equation}\n \\mathscr{L}_d = \\frac{1}{2}( \\mathscr{L}_{mse}(D_{tr}, z_t^{(e)}, z_s^{(e)}) + \\mathscr{L}_{mse}(D_{tr}, z_t^{(d)}, z_s^{(d)}) )\n\\end{equation}\n\\\\\nwhere, \\(e\\) and \\(d\\) correspond to the encoder and the decoder networks. \n\n\n\n\\subsection{An Example}\nWe create a simple synthetic dataset consisting of a categorical attribute, \\(x\\), with 10 \\(distinct\\mhyphen values=\\{1,2,3,\\dots,9,10\\}\\), and with each category having 1000 real values. The dataset is balanced and the real values for each category are generated by a \\textit{Mixture of Gaussians} (MoG) with five peaks. \\autoref{fig:toyexam}.a is the dataset corresponding to \\(x=1\\). We fit a \\textit{Mixture Density Network} with ten components on this dataset. \\autoref{fig:toyexam}.b shows a sample generated by this MDN which asserts that the model has perfectly learnt the data distribution. Next, we introduce an update batch generated by a MoG with two different means. \\autoref{fig:toyexam}.c shows the update batches in red color compared to the previous data in blue. We update the previously learned MDN with the proposed loss function in Eq. \\ref{mdnkdloss}. We repeat updates for 50 batches generated with the new MoG. \\autoref{fig:toyexam}.d shows the final distribution learnt by the MDN.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/toy_example.png}\n \\caption{An example to show how DDUp learns new data without forgetting. 'a' is the histogram of synthetic data corresponding to $x=1$. 'b' is the sample generated by the learned MDN for $x=1$. 'c' shows a sample of an update batch coming from different Gaussians. 'd' is the sample generated by the MDN after being updated by the DDUp loss function. We have performed the update 50 times to see the effect of high frequency updates (This explains the higher frequencies around the last two peaks for 'd').}\n \\vspace{-0.4cm}\n \\label{fig:toyexam}\n\\end{figure}\n\n\n\\subsection{Handling Join Operations}\\label{joins}\nDDUp can operate either on raw tables or tables from join results.\nIf the old data $R$ is the result of a join, the new data batch needs to be computed, due to new tuples being inserted in any of the joined tables in $R$.\nConsider at time \\(t-1\\) a model \\(M_{t-1}\\) has been trained on \\(R=\\bigcup_{j=0}^{t-1}{T_1^j} \\bowtie \\bigcup_{j=0}^{t-1}{T_2^j} \\dots \\bowtie \\bigcup_{j=0}^{t-1}{T_n^j}\\), where $T_r^j$ denotes the new batch for table $T_r$ at time $j$.\nWithout loss of generality, suppose a new insertion operation $I_t$ at time \\(t\\) adds new data to table \\(T_i\\), denoted \\(T_i^t\\). The new data for DDUp in this setting is \\( D_t = (R\\ \\setminus \\ \\bigcup_{j=0}^{t-1}T_i^{j}) \\bowtie T_i^{t} \\), where $\\setminus$ denotes a (multi)set-difference operator. Therefore, for the detection module, \\(S^{\\leq}_{t-1}\\) is a sample of R, and \\(S_t\\) a sample from $D_t$. Furthermore, during updating the transfer-set is a sample from $R$ and the new data is $D_t$. 
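As a simple illustration, for a two-table schema \(T_1 \bowtie T_2\) in which the new batch is inserted into \(T_1\), the new data \(D_t\) can be materialized with a pandas sketch such as the following (table and key names are hypothetical, and an actual system may instead rely on the join samplers mentioned next):

\begin{lstlisting}[language=Python, basicstyle=\footnotesize]
import pandas as pd

def delta_join(t1_new: pd.DataFrame, t2_all: pd.DataFrame,
               key: str = "join_key") -> pd.DataFrame:
    """D_t for a two-table join: only the newly inserted T1 tuples,
    joined with the cumulative contents of T2."""
    return t1_new.merge(t2_all, on=key, how="inner")
\end{lstlisting}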
\nPlease note that all this data preparation and how each model deals with joins is orthogonal to DDUp.\nTherefore, it can be done by either computing the actual joins above or using join samplers like \\cite{zhao2018random,shanghooshabad2021pgmjoins}, as is done in NeuroCard and compared against in Section \\ref{joinexp}.\n\n\\begin{comment}\n\\subsection{Putting it All Together}\nAt time \\(t=0\\) we have data \\(D_{0}\\). We create two samples from \\(D_{0}\\): \n\\( S^{\\leq}_{0}\\) for OOD detection and a sample as the \\textit{transfer-set}. \nWe use \\(D_{0}\\) to train a model \\(M_{0}\\). \nNext, at time \\(t=1\\), a new data batch, \\(D_{1}\\) arrives. \nDDUp follows the following steps:\n\\begin{enumerate}\\setlength\\itemsep{0.5em}\n \\item The OOD detection module uses \\(S^{\\leq}_{0}\\) and a sample of \\(D_{1}\\) to test for a significant change in data distribution and issues an 'in-distribution' or 'out-of-distribution' signal.\n \\item If 'in-distribution', go to step 3, otherwise, go to step 4.\n \\item The Model Update module deploys a baseline method to update \\(M_{0}\\) to \\(M_{1}\\). This baseline method is usually fine-tuning with a smaller learning rate. Finally, go to step 5. \n \\item Start the distillation procedure:\n \\begin{itemize}\\setlength\\itemsep{0.1em}\n \\item Make an exact copy of \\(M_{0}\\)\n \\item Feed the transfer-set into \\(M_{0}\\) and \\(M_{1}\\) \n \\item Use Eq. \\ref{totalloss} to update the copied model \\(M_{1}\\)\n \\end{itemize}\n \\item Set the updated model as \\(M_{1}\\), Drop \\(M_{0}\\), Update \\(S^{\\leq}_{0}\\) and \\textit{transfer-set}.\n \\item End.\n\\end{enumerate}\n\\end{comment}\n\n\\section{Experimental Evaluation} \\label{eval}\nWe evaluate DDUp for three different models for learned DB components: (i) Naru\/NeuroCard \\cite{yang2019deep,yang2020neurocard} which use DARN models for CE; (ii) DBest++ \\cite{ma2021learned} that uses MDNs for AQP; and (iii) TVAE \\cite{xu2019modeling}, that uses variational autoencoders for DG.\nWe evaluate in terms of model accuracy and update time.\nWe use as reference points the baseline update approach provided for AQP and CE (TVAE provides no update approach).\nWe also add as reference points the accuracy when retraining from scratch and when leaving models stale. With respect to \\textit{OOD detection}, we investigate whether it can detect significant data shifts successfully and how this will contribute to the final performance of the underlying models in their specific application, CE, AQP, DG. \nUltimately, the experiments are to address the following questions:\n\n\\begin{itemize}[leftmargin=*]\n \\item How to best evaluate DDUp? (Section \\ref{setup})\n \\item Can DDUp accurately detect a distributional shift? (Section \\ref{oodeval})\n \\item Is DDUp accurate under in- $and$ out-of- distribution settings? (Section \\ref{perfeval})\n \\item How does DDUp compare to the baseline approaches in accuracy and update time? (Section \\ref{perfeval})\n \\item What is the effect of distillation? (Section \\ref{distilleval})\n \\item Is DDUp efficient? (Section \\ref{overheads})\n\\end{itemize}\n\n\\vspace{-0.3cm}\n\\subsection{Experimental Setup} \\label{setup}\nTo establish a dynamic setup, we make a copy of the base table and randomly sample 20\\% of its rows as new data. In this setting, new data follows the previous data distribution which we denote as \\textit{in-distribution}. 
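To make this setup concrete, a small pandas sketch of how we mean the two kinds of update batches to be generated is given below; \texttt{df} stands for the base table, and the drifted (OOD) variant follows the column-permutation procedure described in the next few sentences.

\begin{lstlisting}[language=Python, basicstyle=\footnotesize]
import pandas as pd

def make_update_batches(df: pd.DataFrame, frac: float = 0.2, seed: int = 0):
    """Return an in-distribution batch and a drifted (OOD) batch."""
    ind = df.sample(frac=frac, random_state=seed)        # same joint distribution
    permuted = df.apply(lambda c: c.sort_values(ignore_index=True))
    ood = permuted.sample(frac=frac, random_state=seed)  # permuted joint distribution
    return ind, ood
\end{lstlisting}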
We introduce distributional drift as is typically done for tabular data settings, say in \\cite{wang2020we}. As such, after making the copy, we sort every column of the copied table individually in-place to permute the joint distribution of attributes. Next, we shuffle the rows and randomly select \\(20\\%\\) of the rows - this now becomes the new data.\nWith these new data, we perform two types of experiments. First, we consider the whole 20\\% sample as a new data batch and update the model with it. Second, to show the updatability in incremental steps, we split the 20\\% data into 5 batches. \nIn general, the size of the transfer-set is a tunable parameter \\cite{hinton2015distilling}, influenced by the dataset complexity, the underlying model generalization ability, and the downstream tasks. \nAfter tuning, we used a 10\\% transfer-set for MDN and DARN and a 5\\% for TVAE, which could be further tuned with methods like Grid search.\n\nDDUp does not impose any further constraints to those of the underlying models. For DBest++ we use a query template with a range and an equality attribute. Also, we use one-hot encoding to encode categorical attributes and normalize the range attribute to \\([-1,1]\\). For Naru\/NeuroCard and TVAE, we use the same settings as explained in their code documentation. We use the learned hyper-parameters of the base model, i.e the model we build at time zero, for all subsequent updates. Furthermore, we intuitively set \\(\\alpha\\) parameter in Eq. \\ref{totalloss} to the fraction of update batch size to the original data size and tune \\(\\lambda\\) for values in \\([9\/10, 5\/6, 1\/4, 1\/2]\\). \n\n\\subsubsection{Datasets} \\label{datasets}\nWe have mainly used three real-world datasets (census, forest, DMV) \n(see \\autoref{tab:Datasets}). These datasets \nhave been widely used in the learned DB literature. \nFor CE, \\cite{wang2020we} uses also forest, census and DMV, while NeuroCard\/Naru use JOB\/DMV. For AQP DBEst++ uses TPCDS. For DG, \\cite{xu2019modeling} uses census and forest. Thus, we have also used census, forest, DMV, and TPCDS (\\texttt{store sales} table, scaling factor of 1). Finally, for join queries, we have used JOB (on IMDB data) and TPCH benchmarks, which are also used in \\cite{yang2020neurocard, yang2019deep}.\n\n\\begin{table}[hb]\n \\caption{Characteristics of datasets.}\n \\vspace{-0.3cm}\n \\label{tab:Datasets}\n \\begin{tabular}{c c c c} \n \\toprule\n Dataset&Rows&Columns&Joint Domain\\\\\n \\midrule\n Census & 49K & 13 & $10^{16}$ \\\\\n Forest & 581K & 10 & $10^{27}$ \\\\\n DMV & 11.6M & 11 & $10^{15}$ \\\\\n TPCDS & 1M & 7 & $10^{30}$ \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{-0.5cm}\n\\end{table}\n\n\\subsubsection{Workload} \\label{workload}Each model is evaluated using 2,000 randomly generated queries. These queries are generated at time zero for each model and are used throughout the subsequent updates. When an update batch is performed, the ground truth of the queries will be updated. For Naru\/NeuroCard, we use their generator to synthesize queries: It randomly selects the number of filters per query (forest:[3,8], census: [5,12], TPCDS: [2,6], dmv: [5,12]). Then, it uniformly selects a row of the table and randomly assigns operators \\([=,>=,<=]\\) to the columns corresponding to the selected filters. Columns with a domain less than 10 are considered categorical and only equality filters are used for them. 
For DBest++, we select a \\(lower\\mhyphen bound\\) and a \\(higher\\mhyphen bound\\) for the range filter and uniformly select a category from the categorical column for the equality filter. Throughout the experiments, we discard queries with actual zero answer. The structure of a typical query in our experiments is:\n\n\\begin{lstlisting}[mathescape=true,\n basicstyle=\\footnotesize,\n]\nSELECT AGG(y) FROM $T_1 \\bowtie T_2 \\dots \\bowtie T_n$ WHERE $F_1$ AND ... AND $F_{d}$\n\\end{lstlisting}\n\nwhere, \\(F_i\\) is a filter in one of these forms: \\([att_i = val, att_i >= val, att_i <= val]\\). Also, \\texttt{AGG} is an aggregation function like \\texttt{COUNT}, \\texttt{SUM}, \\texttt{AVG}. For DBest++, the query template contains one categorical attribute and one range attribute. As such, we select the following columns from each dataset: census:[\\texttt{age, country}]; forest:[\\texttt{slope, elevation}]; dmv:[\\texttt{body type, max gross weight}]; TPCDS:[\\texttt{ss quantity,ss sales price}]; IMDB:[\\texttt{info type id,production year}]; TPCH:[\\texttt{order date,total price}] where the first\/second attribute is categorical\/numeric. Furthermore, Naru could not train on the full TPCDS dataset as the encodings were too large to fit to memory. Hence, we selected the following columns [\\texttt{ss sold date sk}, \\texttt{ss item sk}, \\texttt{ss customer sk},\\texttt{ss store sk}, \\texttt{ss quantity}, \\texttt{ss net profit}], and made a 500k sample.\n\n\\subsubsection{Metrics}\nFor \\textit{count} queries, we use \\textit{q-error} as follows:\n\n\\begin{equation}\n error = \\frac{max(pred(q), real(q))}{min(pred(q), real(q))}\n\\end{equation} \n\nFor \\textit{sum} and \\textit{avg} aggregates, we use \\textit{relative-error} as follows:\n\\begin{equation}\n error = \\frac{|pred(q) - real(q)|}{real(q)}\\times100\n\\end{equation} \n\nAdditionally, Lopez et al. \\cite{lopez2017gradient} introduce the notions of Backward Transfer (BWT) and Forward Transfer (FWT) as new metrics in class incremental learning tasks. BWT is the average accuracy of the model on old tasks, and FWT is the average accuracy of the model on new tasks. Here, we re-frame BWT and FWT.\nWe generate the queries at time \\(0\\) and use them for all update steps. At each step \\(t\\), we calculate \\(diff = real_t(q) - real_{t-1}(q)\\) for each query, \\(q\\), which gives us three set of queries; \\(G_{fix}\\) with \\(diff=0\\), \\(G_{changed}\\) with \\(diff>0\\), and \\(G_{all} = G_{fix} \\cup G_{changed}\\). With these groups, we define three measures. \\(AT\\): average q-error over \\(G_{all}\\). \\(FWT\\): average q-error over \\(G_{changed}\\). \\(BWT\\): average q-error over \\(G_{fix}\\).\n\n\\subsubsection{Evaluating Variational Autoencoders}\nDG is an interesting learned application which is recently supported using TVAE. Thus, we evaluate DDUp for TVAE. In TVAE, once the training is done, only the decoder network is kept and used, as this is the generator. Hence, we apply our distillation-update method to the decoder network. We evaluate TVAE via the accuracy of an XGboost classifier trained by the synthetic samples, as in \\cite{xu2019modeling}. \nWe hold-out 30\\% of table as the test set, and train two classifiers with original and synthetic data, then predict the classes of the held-out data. We report \\textit{micro f1-score} for classifiers. 
For census, forest and DMV, we use: \\textit{income}, \\textit{cover-type}, and \\textit{fuel-type}, as the target class, respectively.\nFor TVAE, we created a smaller DMV with 1m records, as training TVAE on the whole DMV is very time\/resource consuming (proving indirectly the need to avoid retraining).\n\n\\subsection{OOD Detection} \\label{oodeval}\n\n\\subsubsection{Loss Functions as Signals}\nWe first show the results of loss\/log-likelihoods when the detector receives samples from the same distributions or from different distributions. The results are shown in \\autoref{tab:avgll}. For Naru\/NeuroCard and DBEst++ we report the actual log-likelihood values (not negatives, so higher is better). For TVAE, we report the ELBO loss values (hence lower is better). \n\n\\begin{table}[hb]\n \\centering\n \\caption{Average log-likelihood and ELBO loss values of data samples on a trained model. $S_{old}$ is a sample of the previous training data. \"IND\", is a 20\\% sample from a straight copy of the original table; \"OOD\", is a 20\\% sample from a permuted copy of the original table.}\n \\vspace{-0.2cm}\n \\label{tab:avgll}\n \\resizebox{\\linewidth}{!}{%\n \\begin{tabular}{c c c c | c c c | c c c } \n \\toprule\n\\multirow{2}{*}{Dataset} & \\multicolumn{3}{c|}{DBEst++} & \\multicolumn{3}{c|}{Naru\/NeuroCard} & \\multicolumn{3}{c}{TVAE} \\\\\n& $S_{old}$ & IND & OOD & $S_{old}$ & IND & OOD & $S_{old}$ & IND & OOD \\\\\n \\midrule\n Census & -0.362 & -0.361 & -0.366 & -20.99 & -20.87 & -36.95 & -15.21\t& -15.22 & 81.47 \\\\\n Forest & -0.0194 & -0.0202 & -0.052 & -43.16 & -43.9 & -141.10 & -19.96 & -20.09 & 142.38 \\\\\n DMV & 2.520 & 2.532 & 2.444 & -13.74 & -13.16 & -18.67 & 9.114 & 9.28 & 34.95 \\\\\n \\bottomrule\n \\end{tabular}}\n \\vspace{-0.35cm}\n\\end{table}\n\n\\autoref{tab:avgll} shows that the loss function (log likelihood and ELBO in our cases) can reliably signal OOD data.\nInterestingly, this corroborates similar findings in \\cite{detectOOD-iclr17} for classification tasks in various vision and NLP tasks, where the NN outputs can be used to signal OOD. Here we show it for tabular data and for NNs developed for AQP, CE, and DG tasks. \n\nIn Naru\/NeuroCard and TVAE, when permuting, all columns are sorted individually, hence the large difference in likelihoods. \nFor DBEst++, only the selected columns for a query template have been permuted, yielding a small difference in likelihoods.\n\n\\begin{comment}\n\\begin{table}[hb]\n\\centering\n \\caption{Change of log-likelihood with the number of permuted columns for a trained autoregressive model. 0 means no columns has been sorted individually therefore the data sample is following the distribution of the training data}\n \\label{tab:permlevel}\n \\begin{tabular}{c c c c} \n \\toprule\n \\#columns & census & forest & DMV\\\\\n \\midrule\n0&-20.992&-43.16048&-13.745\\\\\n1&-28.687&-44.673&-14.616\\\\\n2&-31.103&-115.736&-17.935\\\\\n3&-31.201&-127.560&-18.151\\\\\n4&-31.591&-129.549&-18.308\\\\\n5&-34.793&-127.916&-18.955\\\\\n6&-34.359&-127.054&-18.838\\\\\n7&-35.626&-140.589&-18.858\\\\\n8&-35.938&-143.223&-18.836\\\\\n9&-36.969&-141.106&-18.670\\\\\n10&-37.029&&\\\\\n11&-37.243&&\\\\\n12&-36.953&&\\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\\end{comment}\n\n\\subsubsection{The two-sample test results} \n\\autoref{tab:driftdetect} shows results for two-sample testing for OOD detection. 
The significance level of the test (threshold) is \\(2\\times variance\\) of the bootstrapping distribution, which was obtained by $>$1000 iterations. \nIn each iteration, we use a 1\\% sample with replacement from previous data and a 10\\% sample without replacement from new data to calculate the test statistic. The results show that when data is permuted, the test statistic is far away from the threshold. This means it appears at a great dissonance in the tails of the bootstrapping distribution. \nAnd since the critical value to test for OOD is found by bootstrapping over \\(S_{old}\\), i.e., \\(S^{\\leq}_{t}\\), it will adjust even to small differences when faced with OOD. \nCase in point, the DBEst++ OOD likelihood value for census (which is similar to IND\/$S_{old}$ in \\autoref{tab:avgll}) vs the corresponding test-statistic value in \\autoref{tab:driftdetect}.\n\n\\begin{table*}[t]\n \\caption{The test-statistic values. Threshold is $2\\times variance$ and bs-mean is the mean of bootstrapping distribution. }\n \\label{tab:driftdetect}\n \\resizebox{\\textwidth}{!}{\n \\begin{tabular}{c | c c c c | c c c c | c c c c } \n \\toprule\n\\multirow{2}{*}{Dataset} & \\multicolumn{4}{c|}{DBEst++} & \\multicolumn{4}{c}{Naru\/NeuroCard} & \\multicolumn{4}{c}{TVAE} \\\\\n& bs-mean & threshold & IND & OOD & bs-mean & threshold & IND & OOD & bs-mean & threshold & IND & OOD \\\\\n \\midrule\nCensus&-0.3524 & 0.007 & 0.001 & 0.05 & -21.0076 & 0.0529 & 0.032 & 16.0052 & -15.1834 & 0.6041 & 0.0419 & 100.5126 \\\\\nForest&-0.0228 & 0.0122 & 0.007 & 0.2315 & -41.35 & 0.0141 & 0.0084 & 72.5473 & -19.99 & 0.0868 & 0.0417 & 167.0502 \\\\\nDMV&2.52 & 0.1287 & 0.0145 & 4.5745 & -13.7674 & 0.0012& 0.0007& 5.1145 & 9.1209 & 0.0177 & 0.0015 & 25.1398 \\\\\n \\bottomrule\n \\end{tabular}}\n\\end{table*}\n\n\n\n\\subsubsection{FP and FN rates in OOD detection}\\label{fpfnrates}\n\nTo evaluate OOD detection, we measure FP and FN rates (FPR, FNR). \nWe created an OOD test-set and an IND test-set, each equaling half the original size of the table. The latter is just a random sample from the original table. The former is constructed as follows. The perturbed data is obtained by perturbing one or more of five columns of the table, say $C1, \\ ... \\ C5$. First we perturb $C1$ and take a sample of the resulting table of size $10\\%$ and append it to the OOD test-set. Then we perturb $C1$ and $C2$ and similarly sample and append it to the OOD test-set. We repeat this for perturbations on $C1, C2, C3$, on $C1, C2, C3, C4$, and on $C1, C2, C3, C4, C5$, ending up with an OOD test-set of size 50\\% of the original table. Note that this setup creates a more-challenging case, as the degree of perturbations (for OOD data) is finer-grained.\nThen, at each batch, we fed a random sample from the OOD test-set and of the IND test-set to the DDUp detector. For each batch, the detector would signal IND or OOD and we recorded and calculated FPR and FNR. The batch size was 2,000 and we repeated the experiment for 1,000 batches.\n\nWe used the same parameters for all datasets and models: the bootstrapping size is 32 and the threshold is \\(2 \\times std\\). For DBEst++, the results are reported in \\autoref{tab:fprfnr}. FPR and FNR for Naru\/NeuroCard and TVAE were always zero. 
These results further confirm that the OOD detection algorithm is not biased.\n\n\\begin{table}[hb]\n \\vspace{-0.3cm}\n \\centering\n \\caption{FPR and FNR for DBEst++.}\n \\vspace{-0.3cm}\n \\label{tab:fprfnr}\n \\begin{tabular}{c c c } \n \\toprule\nDataset & FPR & FNR \\\\\n \\midrule\n Census & 0.15 & 0.01 \\\\\n Forest & 0.10 & 0 \\\\\n DMV & 0.01 & 0 \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{-0.35cm}\n\\end{table}\n\nFurthermore, we studied the sensitivity on the batch size and varied it from a size of 1 to 2,000. Results are shown in \\autoref{fig:oodsens}, which clearly show that after a low-threshold batch size, FPR and FPN tend to zero. The same results hold for other models and datasets, and are omitted here for space reasons.\n\n\\begin{figure}\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:a}\\includegraphics[scale=.29]{figures\/dbest-forest-fprfnr.png}}\n\\end{minipage}%\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:b}\\includegraphics[scale=.29]{figures\/tvae-dmv-fpfn.png}}\n\\end{minipage} %\n\\vspace{-0.4cm}\n\\caption{Sensitivity of OOD detection vs batch size.}\n\\label{fig:oodsens}\n\\vspace{-0.55cm}\n\\end{figure}\n\n\\subsection{Accuracy Results} \\label{perfeval}\n\\subsubsection{When there is OOD data} \\label{whenood}\n\nFor Naru\/NeuroCard, DBEst++, and TVAE, and for each dataset, we compare 4 updating approaches against each other and against the base model before any new data is inserted. The 4 approaches are as follows: \n\"\\texttt{Retrain}\", retrains the model from scratch using both old and new data. \"\\texttt{Baseline}\" is the baseline approach in Naru\/NeuroCard and DBest++ where a trained model is updated with new data by performing \\textit{SGD} with a smaller learning rate. \"\\texttt{DDUp}\" is the proposed method.\nFinally, in \"\\texttt{stale}\", the model is not updated -- this is a do-nothing approach.\nFor reference, we also include the numbers for $M_0$, i.e., the original model accuracy before any new data came.\n\\autoref{tab:qerror} and \\autoref{tab:aqpacc} show the accuracy results for CE and AQP (SUM and AVG operations), respectively.\nFor TVAE, the classification f1-scores are reported in \\autoref{tab:tvaef1}. Results of these three tables correspond to the case where the update sample is permuted. \nDDUp always performs better than the baseline approach. \nMost of the times, the performance of DBEst++ on DMV dataset is not as well as for the other datasets. This probably is due to the complexity of data (large scale and highly correlated attributes). Nevertheless, DDUp stands on the top of the underlying models and regardless of the model's performance, DDUp ensures that it will retain the accuracy.\nPlease note the DMV dataset results in \\autoref{tab:qerror} and \\autoref{tab:aqpacc} and, census and forest datasets in \\autoref{tab:tvaef1}, where, DDUp even outperforms retraining from scratch. \nInterestingly, this corroborates similar evidence for sequential self-distillation (for boosting embeddings for) classification tasks \\cite{seq-self-distill}. This was one of the reasons we adapted a self-distillation based approach.\nFinally, baseline methods have poor performance for 95th and 99th percentiles. \n\n\\begin{table*}[t]\n \\caption{Results of updating a base model with a 20\\% permuted sample in terms of q-error. 
$M_{0}$ denotes the base model.}\n \\label{tab:qerror}\n \\centering\n \\begin{tabular}{c c | c | c | c | c | c | c | c | c | c | c} \n \\toprule\n\\multirow{2}{*}{Dataset} & \\multirow{2}{*}{metric} & \\multicolumn{5}{c|}{DBEst++} & \\multicolumn{5}{c}{Naru\/NeuroCard} \\\\\n&&$M_{0}$&DDUp&baseline&stale&retrain&$M_{0}$&DDUp&baseline&stale&retrain \\\\\n \\midrule\n\\multirow{4}{*}{census}&median&1.05&1.11&1.17&1.16&1.07&1.08&1.09&4&1.14&1.07\\\\\n&95th&2&2&2.20&2&2&2&2&471.80&2&2\\\\\n&99th&3&3&4&3&3&3&3&1534.69&3.16&3\\\\\n&max&5&7&11&10.50&5&5.25&7&8385&21.88&6\\\\\n\\midrule\n\\multirow{4}{*}{forest}&median&1.026&1.046&2&1.18&1.02&1.04&1.07&1.54&1.10&1.05\\\\\n&95th&2&2&63.40&2&1.64&2.48&3&41&2.50&2.75\\\\\n&99th&2&2.583&503.12&5.60&2&4&6&157.16&5.48&5\\\\\n&max&4&5.33&3470&90.85&5.33&27&65.66&1691&484&34.66\\\\\n\\midrule\n\n\\multirow{4}{*}{DMV}&median&1.20&1.143&3.48&1.88&1.34&1.02&1.04&2.57&1.16&1.02\\\\\n&95th&4.91&5.07&234.88&7.00&5.50&1.20&1.41&468.68&1.50&1.25\\\\\n&99th&9.65&10&3897.87&12.50&8&1.83&2.31&4734.62&2.84&2\\\\\n&max&18.83&19&65875&39&17&8&9.81&343761&9.49&5\\\\\n\n\\midrule\n\\multirow{4}{*}{TPCDS}&median&1.02&1.04&57&1.27&1.02&1.01&1.07&1.15&1.10&1.05\\\\\n&95th&1.16&1.26&269&1.58&1.18&2&2&29&2&2\\\\\n&99th&1.5&1.61&1266&2.72&1.5&3.01&3.01&239&4&3\\\\\n&max&3&3&4534&10.66&5.64&5&28&5100&28&24\\\\\n\n \\bottomrule\n \\end{tabular}\n\\end{table*}\n\n\\begin{table}[t]\n \\caption{mean-relative-error for SUM and AVG aggregation functions for DBEst++.}\n \\label{tab:aqpacc}\n \\centering\n \\resizebox{\\linewidth}{!}{%\n \\begin{tabular}{c c | c c c c c} \n \\toprule\nDataset&function&$M_{0}$&DDUp&baseline&stale&retrain\\\\\n \\midrule\n\\multirow{2}{*}{census}&SUM&13.05&17.30&65.88&21.36&13.60\\\\\n&AVG&1.89&2.36&8.15&2.37&1.97\\\\\n\\midrule\n\\multirow{2}{*}{forest}&SUM&10.11&15.51&88.73&24.59&10.14\\\\\n&AVG&0.76&1.04&3.90&1.35&0.79\\\\\n\\midrule\n\\multirow{2}{*}{TPCDS}&SUM&4.53&6.37&61.40&22.64&5.12\\\\\n&AVG&0.88&1.47&12&3.50&1.21\\\\\n\\midrule\n\\multirow{2}{*}{DMV}&SUM&76.73&85.29&423&97.00&110\\\\\n&AVG&6.4&6.9&15.9&8.6&7.3\\\\\n\n\\bottomrule\n \\end{tabular}}\n\\end{table}\n\n\\begin{table}[t]\n \\caption{Classification results for TVAE in terms of micro f1. 'r' stands for real data, 's' stands for synthetic data.}\n \\label{tab:tvaef1}\n \\centering\n \\resizebox{\\linewidth}{!}{%\n \\begin{tabular}{c | c c | c c | c c | c c | c c } \n \\toprule\n\\multirow{2}{*}{Dataset}\n&\\multicolumn{2}{c}{$M_{0}$}&\\multicolumn{2}{c}{DDUp}&\\multicolumn{2}{c}{baseline}&\\multicolumn{2}{c}{stale}&\\multicolumn{2}{c}{retrain}\\\\\n&r&s&r&s&r&s&r&s&r&s\\\\\n \\midrule\ncensus&0.67&0.63&0.77&0.73&0.77&0.55&0.77&0.56&0.77&0.72\\\\\nforest&0.84&0.69&0.89&0.78&0.89&0.63&0.89&0.60&0.89&0.74\\\\\nDMV&0.97&0.97&0.98&0.97&0.98&0.92&0.98&0.93&0.98&0.98\\\\\n\n\\bottomrule\n \\end{tabular}}\n\\end{table}\n\n\\subsubsection*{Performance on old and new queries} To better illustrate the effects of \\textit{catastrophic forgetting}{} and \\textit{intransigence} we elaborate on performance on FWT and BWT. (As \\texttt{retrain} avoids be definition \\textit{catastrophic forgetting}{} and \\textit{intransigence}, it is omitted).\nThe results are shown in \\autoref{tab:mdntranfers}. 
\nNote that any insertion affects only a percentage of queries, shown in \n\\autoref{tab:querypercents}.\nComparing AT, FWT, and BWT in \\autoref{tab:qerror} and \\autoref{tab:mdntranfers} first note that fine-tuning always performs much better in terms of FWT compared to BWT (due to catastrophic forgetting).\nSecond, conversely, a stale model shows better BWT compared to FWT. \nFor DDUp, FWT and BWT remain close to each other, especially in terms of median q-error, showing that DDUP can ensure accuracy for queries on old and new data.\nOverall, DDUp enjoys high accuracy.\n\n\\subsubsection*{Incremental Steps} To show the updates in incremental steps, we have split the \\(20\\%\\) data into 5 equal-sized chunks and have performed an update incrementally for each batch. \\autoref{fig:incupdates2} compares the trend of accuracy during updates. As it is clear from the figures, DDUp remains very close to \\texttt{retrain}, while there is a drastic drop in accuracy using \\texttt{baseline}. Starting point \\(0\\) is where the base model \\(M_{0}\\) is built from scratch. (The same results hold for 95th, 99th percentiles and maximum q-error). \n\n\n\\begin{table*}[t]\n \\caption{Comparing q-error of different updating approaches in terms of FWT and BWT.}\n \\vspace{-0.2cm}\n \\label{tab:mdntranfers}\n \\begin{tabular}{c c | c | c c | c c | c c | c | c c | c c | c c } \n \\toprule\n\\multirow{3}{*}{Dataset} & \\multirow{3}{*}{metric} & \\multicolumn{7}{c|}{DBEst++} & \\multicolumn{7}{c}{Naru\/NeuroCard} \\\\\n&&\\multicolumn{1}{c}{$M_{0}$}&\\multicolumn{2}{c}{DDUp}&\\multicolumn{2}{c}{baseline}&\\multicolumn{2}{c|}{stale}&\\multicolumn{1}{c}{$M_{0}$}&\\multicolumn{2}{c}{DDUp}&\\multicolumn{2}{c}{baseline}&\\multicolumn{2}{c}{stale} \\\\\n\n&&&FWT&BWT&FWT&BWT&FWT&BWT& &FWT&BWT&FWT&BWT&FWT&BWT\\\\\n\n\\midrule\n\\multirow{3}{*}{census}&median&1.05&1.06&1.12&1.06&1.20&1.05&1.16&1.08&1.11&1.09&1.83&6&1.20&1.13 \\\\\n&95th&2&1.66&2&1.56&2.33&3.30&2&2&1.64&2&4.63&530.80&3.18&2 \\\\\n&99th&3&4.94&3&4.10&4&8.90&2.75&3&3.08&3&9.98&1598.53&8.49&3\\\\\n\n\\midrule\n\\multirow{3}{*}{forest}&median &1.02&1.01&1.08&1.23&2.66&1.05&1.20&1.04&1.07&1.07&1.39&1.65&1.18&1.08\\\\\n&95th&2&1.181&2&2.87&146.38&2.85&2&2.489&1.88&3&3.13&43.02&7.55&2.33\\\\\n&99th&2&1.52&3&3.72&590.57&18.33&2.24&4&4.89&6&5.27&163.80&191.53&4.86\\\\\n\n\\midrule\n\\multirow{3}{*}{DMV}&median&1.20&1.28&1.13&2.20&4.36&1.66&1.54&1.02&1.02&1.07&1.06&12.85&1.26&1.19\\\\\n&95th&4.910&4.30&5.87&3.34&484.46&9.50&6.87&1.20&1.16&1.55&1.65&1015.81&3.30&1.40\\\\\n&99th&9.65&9&11.65&10.50&5894.21&12.12&10.80&1.83&1.47&3&3.35&8183.34&11.93&2.49\\\\\n\n\\midrule\n\\multirow{3}{*}{TPCDS}&median&1.02&1.03&1.04&1.20&1.51&1.16&1.21&1.01&1.06&1.08&1.19&1.11&1.10&1.10\\\\\n\n&95th&1.16&1.21&1.29&2.37&339&2.26&1.35&2&2&2&2.60&54&2&2\\\\\n&99th&1.5&1.37&1.66&4.27&1536&4.48&1.66&3.01&9.77&3&9.47&434&9.64&3.77\\\\\n\n\\bottomrule\n \\end{tabular}\n\\end{table*}\n\n\\begin{table}[t]\n \\caption{The percentage of the queries (out of 2k queries) with changed actual results after inserting 20\\% new data.}\n \\vspace{-0.2cm}\n \\label{tab:querypercents}\n \\begin{tabular}{c c c } \n \\toprule\n dataset & DBEst++ & Naru \\\\\n \\midrule\n census&14\\%&12\\% \\\\\n forest&32\\%&9\\% \\\\\n TPCDS&36\\%&36\\% \\\\\n dmv&52\\%&45\\%\\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{-0.5cm}\n\\end{table}\n\nWe also have evaluated the models with respect to the \\textit{log-likelihood goodness-of-fit}. 
Log-likelihood is widely used to evaluate NN models.\nUsing log-likelihood allows evaluation to be independent\nof underlying applications. \\autoref{fig:incll} shows changes in log-likelihood in consecutive update steps. At each step, we calculate the average of log-likelihoods over a sample of new data and a sample from historical data. In these figures we again see that updating with DDUp is fitting to the old and the new data very similarly to the \\texttt{retrain} case. In general, when keep using \\texttt{stale}, the log-likelihood drops after the first update and then remains low. The reason is that all update batches have similar permutation and since we calculate unweighted averages, the log-likelihood stays fixed. While, for \\texttt{baseline}, i.e fine-tuning, we can see a gradual decrease of likelihood which means that the network is increasingly forgetting about previous data in each step. \n\n\\begin{figure}\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:a}\\includegraphics[scale=.29]{figures\/dbest_census.png}}\n\\end{minipage}%\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:b}\\includegraphics[scale=.29]{figures\/naru_census.png}}\n\\end{minipage} %\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:c}\\includegraphics[scale=.29]{figures\/dbest_forest.png}}\n\\end{minipage}%\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:d}\\includegraphics[scale=.29]{figures\/naru_forest.png}}\n\\end{minipage}\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:e}\\includegraphics[scale=.29]{figures\/dbest_dmv.png}}\n\\end{minipage}%\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:f}\\includegraphics[scale=.29]{figures\/naru_dmv.png}}\n\\end{minipage}\n\\vspace{-0.3cm}\n\\caption{Updating results over 5 consecutive updates.}\n\\label{fig:incupdates2}\n\\vspace{-0.4cm}\n\\end{figure}\n\n\\begin{figure}\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:c}\\includegraphics[scale=.29]{figures\/naru_census_loglikelihood.png}}\n\\end{minipage}%\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:d}\\includegraphics[scale=.29]{figures\/naru_dmv_loglikelihood.png}}\n\\end{minipage}\n\\vspace{-0.3cm}\n\\caption{log-likelihood results over 5 consecutive updates.} \n\\label{fig:incll}\n\\vspace{-0.3cm}\n\\end{figure}\n\n\\subsubsection{When data is not OOD}\nIn this case, simple fine-tuning update algorithms, such as \\texttt{baseline}, will likely avoid \\textit{catastrophic forgetting}{}. \nTo illustrate this, we have repeated the 5 batched incremental updates with data without permutation. The results are reported in \\autoref{fig:incupdatenodrift}. For space reasons, we only show the results for census. The results indicate that for in-distribution data, simple baselines can have a performance close to \\texttt{retrain}.\n\n\\begin{figure}\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:a}\\includegraphics[scale=.29]{figures\/dbest_census_nodrift.png}}\n\\end{minipage}%\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:b}\\includegraphics[scale=.29]{figures\/naru_census_nodrift.png}}\n\\end{minipage} %\n\\vspace{-0.3cm}\n\\caption{Updating results over 5 consecutive updates when data follows the same distribution as the historical data.}\n\\label{fig:incupdatenodrift}\n\\vspace{-0.3cm}\n\\end{figure}\n\n\\begin{table}[hb]\n\\vspace{-0.4cm}\n \\caption{DDUp's speed up over \\texttt{retrain}, for two update sizes. For census, forest, and dmv, sp1: 20\\% of the original table. 
sp2, 5\\% of the original table. for IMDB and TPCH sp1: updating the first partition and sp2: updating the last partition.}\n \\label{tab:times}\n\\vspace{-0.25cm}\n \\begin{tabular}{c | c c | c c | c c } \n \\toprule\n\\multirow{2}{*}{Dataset} & \\multicolumn{2}{c|}{DBEst++} & \\multicolumn{2}{c|}{Naru} & \\multicolumn{2}{c}{TVAE} \\\\\n&sp1&sp2&sp1&sp2&sp1&sp2 \\\\\n \\midrule\ncensus&5&5.5&3.5&4&3.4&5.7 \\\\\nforest&1.6&4&5&9.2&3.6&7 \\\\\nDMV&4&6.5&2.3&9.6&3.4&6.8 \\\\\nIMDB&4.5&18&3.5&5&NA&NA \\\\\ntpch&6.5&16&2&4&NA&NA \\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\n\\subsection{Evaluating DDUp for Join Queries} \\label{joinexp}\nAs mentioned, DDUp is unconcerned whether at a time $t$, \n\\(S^{\\leq}_{t-1}\\) (a sample of \\(\\cup_{j=0}^{t-1} D_j\\)) and $D_t$ come from a raw table or from a join.\nFor this experiment, we have evaluated DDUp running 2,000 queries over two 3-table joins from the JOB and TPCH datasets.\nFor each, the 2,000 queries involve a join of the fact table with two dimension tables: \nSpecifically, the join of tables [\\texttt{title}, \\texttt{movie info idx}, \\texttt{movie companies}] for IMDB, and [\\texttt{orders}, \\texttt{customer}, \\texttt{nation}] for TPCH. For the update dynamics, we have split the fact table into 5 time-ordered equally-sized partitions. We have built \\(M_0\\) on the join (of the fact table's first partition with the 2 dimension tables) and updated it with each subsequent partition at a time. This is similar to the update setting in NeuroCard.\nResults for both CE and AQP are in \\autoref{fig:joins}.\n\nNeuroCard, unlike other models, natively supports joins, using\na \"fast-retrain\" - i.e., a light retraining where the model is retrained using a 1 percent sample of the full join result. We have included this policy here as \"fast-retrain\". \nDDUp always signalled OOD for the new update batches, except for TPCH data on DBest++, where update was not triggered. Therefore, in \\autoref{fig:joins}.d the accuracy of the stale model and fine-tuning is close to retrain. This further confirms the significance of OOD detection.\n\n\\begin{figure}[htbp]\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:a}\\includegraphics[scale=.29]{figures\/Naru-imdb-95th.png}}\n\\end{minipage}%\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:b}\\includegraphics[scale=.29]{figures\/Naru-tpch-95th.png}}\n\\end{minipage} \\\\\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:c}\\includegraphics[scale=.29]{figures\/DBest-imdb-sum-rel-error.png}}\n\\end{minipage}%\n\\begin{minipage}{.5\\linewidth}\n\\subfloat[]{\\label{main:d}\\includegraphics[scale=.29]{figures\/DBest-tpch-sum-rel-error.png}}\n\\end{minipage}\n\\vspace{-0.3cm}\n\\caption{DDUp's performance on joined tables.}\n\\label{fig:joins}\n\\vspace{-0.3cm}\n\\end{figure}\n\n\\subsection{Effect of Transfer Learning} \\label{distilleval}\nWe now delve into the effects of transfer-learning in DDUp. How much DDUp's transfer-learning via knowledge distillation contributes to better accuracy? \nWe perform experiments where we remove the transfer-learning term of Eq \\ref{totalloss}. Therefore, we combine the sample from previous data known as the transfer-set with the new update batch and create a model\nwith the same configurations as the base model. \\autoref{fig:tleffect} shows the results.\nThe results assert that the performance of DDUp is not only related to the previous data sample, and in fact, distillation has a big effect on the improvement of the new models. 
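\nTo make the role of the transfer-learning term more tangible, the following is a minimal sketch of a distillation-regularized update step. It is not DDUp's actual implementation: the names (\\texttt{teacher}, \\texttt{student}, \\texttt{lambda\\_kd}) and the use of a simple MSE distillation loss are illustrative assumptions; the loss actually used by DDUp is the one in Eq.~\\ref{totalloss}.\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef distilled_update_step(student, teacher, task_loss_fn, transfer_batch,\n                          update_batch, optimizer, lambda_kd=1.0):\n    # One SGD step mixing (i) the task loss on the newly inserted data with\n    # (ii) a distillation loss that ties the student to the frozen teacher\n    # (the pre-update model) on a sample of historical data (transfer set).\n    teacher.eval()\n    optimizer.zero_grad()\n    new_loss = task_loss_fn(student, update_batch)\n    with torch.no_grad():\n        teacher_out = teacher(transfer_batch)\n    kd_loss = F.mse_loss(student(transfer_batch), teacher_out)\n    # The 'AggTrain' ablation instead folds transfer_batch into the task\n    # loss and drops kd_loss altogether.\n    loss = new_loss + lambda_kd * kd_loss\n    loss.backward()\n    optimizer.step()\n    return float(loss)\n\\end{verbatim}\n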
\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/transfer_learning_effect.png}\n \\caption{Effect of transfer-learning on q-error. AggTrain is the case where we aggregate the transfer-set with the new data and train a model similar to the base model.}\n \\label{fig:tleffect}\n\\end{figure}\n\\vspace{-0.2cm}\n\\subsection{Overheads} \\label{overheads}\nWe report on the costs of each DDUp module separately. All code is written and executed in Python 3.8, on an Ubuntu 20 machine with 40 CPU cores, two Nvidia GTX 2080 GPUs and 64GB memory. With respect to memory usage, DDUp performs regular feed-forward steps as in regular NN training; therefore, DDUp does not increase the memory footprint.\nIn terms of time, DDUp has two computation costs, namely \\textit{OOD detection} and \\textit{model update}. OOD detection is split into offline and online phases. \\autoref{tab:offontime} shows these two times. The largest detection time is for the forest dataset on a Naru model, which takes around 3 minutes; however, even in this case the online phase only takes about 1 second to detect a change in the data. \n\n\\begin{table}[hb]\n \\caption{Online and offline times (in seconds) during OOD detection.}\n \\label{tab:offontime}\n\\vspace{-0.25cm}\n \\begin{tabular}{c | c c | c c | c c } \n \\toprule\n\\multirow{2}{*}{Dataset} & \\multicolumn{2}{c|}{DBEst++} & \\multicolumn{2}{c|}{Naru} & \\multicolumn{2}{c}{TVAE} \\\\\n&off&on&off&on&off&on \\\\\n \\midrule\ncensus&2.44&0.02&111&1.8&310&5.5\\\\\nforest&28&0.04&174&0.92&433&8.8\\\\\nDMV&86&2&144&10&99&0.44\\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\n\\autoref{tab:times} shows DDUp's speed up over \\texttt{retrain} for OOD data, for different update sizes. When data is OOD, DDUp can be over 9$\\times$ faster than \\texttt{retrain}. Obviously, speedups will be higher for incremental steps. This is reflected in the IMDB and TPCH datasets, where, after inserting the last partition, DDUp is 18$\\times$ faster than \\texttt{retrain}. Note that the updating time depends on a few parameters, including the update size, the transfer-set size, and the training batch size. During updates, we have used smaller training batch sizes; if one tunes the model for bigger batches and smaller transfer-set sizes, the speed up would be higher.\n\n\\vspace{-0.2cm}\n\\subsection{Non-neural-network models}\\label{nonnn}\nFor the sake of completeness and as an additional reference point, we include results for updating a state-of-the-art non-NN model that natively supports data insertions (DeepDB \\cite{hilprecht2019deepdb}), used for CE. \nWhen an update happens, DeepDB traverses its sum-product-network graph and updates the weights of the intermediate nodes and the histograms at the leaves. We have repeated the same experiment as in \\autoref{tab:qerror} for DeepDB. The results are reported in \\autoref{tab:deepdb}.\n\n\\begin{table}[t]\n \\caption{Performance of DeepDB updating vs. 
DDUp for Naru, for a CE task in terms of q-error.}\n \\vspace{-0.35cm}\n \\label{tab:deepdb}\n \\centering\n \\begin{tabular}{c c | c | c | c | c | c } \n \\toprule\n\\multirow{2}{*}{Dataset} & \\multirow{2}{*}{ metric} & \\multicolumn{3}{c|}{DeepDB} & \\multicolumn{2}{c}{Naru} \\\\\n&&$M_{0}$&update&retrain&$M_{0}$&DDUp \\\\\n \\midrule\n\\multirow{3}{*}{census}&median&1.05&1.2&1.05&1.08&1.09\\\\\n&95th&3&4.18&3&2&2\\\\\n&99th&5.11&8&5&3&3\\\\\n\\midrule\n\\multirow{3}{*}{forest}&median&1.02&1.2&1.02&1.04&1.07\\\\\n&95th&7.5&10.5&7&2.48&3\\\\\n&99th&31&52&31&4&6\\\\\n\\midrule\n\\multirow{3}{*}{DMV}&median&1.06&1.25&1.1&1.02&1.04\\\\\n&95th&2.5&3.5&2.5&1.20&1.41\\\\\n&99th&22&37&21&1.83&2.31\\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{-0.3cm}\n\\end{table}\n\nFrom \\autoref{tab:deepdb} it can be observed that DeepDB's updating policy is under-performing, as was independently verified in \\cite{wang2020we}. \nDDUp (coupled in this experiment with Naru\/NeuroCard for CE) always performs better. Nonetheless, we wish to emphasize that the saving grace for DeepDB based on our experiments is that retraining from scratch is very efficient -- significantly faster compared to NNs. \n\n\\section{Related Work} \\label{litraturere}\n\\subsection{Learned Database Systems}\\label{ldbliterature}\nNN-based components to be used by DBs are emerging rapidly. Different works exploit different neural network models.\n\\cite{yang2019deep, yang2020neurocard, hasan2020deep} used generative neural networks to build learned selectivity estimators. Thirumuruganathan et al. \\cite{thirumuruganathan2020approximate} used VAEs for AQP. Ma et al. \\cite{ma2021learned} used mixture density networks for AQP. Database indexing research\nrecently has adopted neural networks to approximate cumulative density functions \\cite{kraska2018case,ding2020alex,nathan2020learning,ding2020tsunami}. Query optimization and join ordering are also benefiting from neural networks \\cite{marcus2019neo, kipf2018learned}. Other applications include auto-tuning databases \\cite{van2017automatic,li2019qtune,zhang2019end}, cost estimation \\cite{zhi2021efficient, siddiqui2020cost}, and workload forecasting \\cite{zhu2019novel}.\n\nAmong these, this work provides a solution for handling NN model maintenance in the face of insertion-updates with OOD data, when the models need to continue ensuring high accuracy on new and old data and on tasks for which models were originally trained (such as AQP, CE, DG, etc.).\nWhile there has been related research on transfer learning for learned DBs such as \\cite{hilprecht2021one, wu2021unified} these target a different problem setting:\nThey study how to transfer knowledge from a model trained for one task, and\/or a DB, and\/or a system, and\/or a workload to a new task and\/or DB, and\/or system, and\/or workload. They do not study how to keep performing the original task(s) on evolving datasets with insertions carrying OOD data with high accuracy for queries on both old and new data. Simply using these methods by fine-tuning on new data will incur catastrophic forgetting. Nevertheless, since these models employ some sorts of knowledge transfer, they might be useful to support updates. 
However, it remains open whether and how the models in \\cite{wu2021unified, hilprecht2021one} can be utilized to solve efficiently the problems tackled in this paper.\nWhile some of non-neural-network models (e.g., DeepDB) can very efficiently retrain from scratch,\nNN-based models for the above problem setting either do not support insertion-updates or suffer from poor accuracy when facing OOD data, unless paying the high costs of retraining from scratch.\n\n\n\\subsection{OOD Detection}\nOOD detection has recently attracted a lot of attention and it has long been studied in statistics as concept drift (CD) detection, or novelty detection. In general, CD and OOD detection methods could be divided into two broad categories \\cite{gama2014survey,lu2018learning,wang2020few}: \nFirst, prediction-based methods, which use the predictions of the underlying models to test for a change. Recent ML models usually use the predictive probabilities of the classifiers as a confidence score to identify changes \\cite{jiang2018trust,ovadia2019can, wilson2020bayesian,ruff2021unifying}. Others may monitor the error of the underlying models and trigger an OOD signal when a significant change is captured \\cite{gama2006learning, baena2006early,savva2019aggregate,nehme2009self,lopez2016revisiting}. While these approaches are very efficient in time, they typically come with limiting assumptions depending on the underlying model or application. For example, most of them can only be utilized and are only studied for classification (supervised) tasks.\nThe second broad family of methods is distribution-based methods. Some of these methods try to find a distance measure that can best show the discrepancy between new data and old data distributions, using tests like Kolmogorov-Smirnov (KS), \\cite{kolmogorov1933sulla}, Wilcoxon \\cite{pereira2009machine}, and their multi-variate variants \\cite{fasano1987multidimensional, baringhaus2004new}. Others try to learn the density of the underlying data distribution test for a significant change, like kernel-density-based approaches \\cite{kifer2004detecting,dasu2006information,gu2016concept,lu2014concept,bu2016pdf,song2007statistical}. More recent works utilize the estimated likelihoods of generative models \\cite{ren2019likelihood, morningstar2021density, xiao2020likelihood}. Other approaches rely on the inner representations of the networks \\cite{li2021cutpaste,hendrycks2019using,lee2018simple}. Nonetheless, this second family of OOD detection methods are usually expensive (esp. for multi-dimensional data) and involve fitting a separate density estimator. Hence, the main problem is that in an insertion scenario, the density estimators also need to be updated (typically via training from scratch, upon each insertion).\n\n\\vspace{-0.3cm}\n\\subsection{Incremental Learning (IL)}\nMost IL methods regularize the model in a way that it acquires knowledge from the new task while retaining the knowledge of old tasks. For example, \\textit{Elastic Weight Consolidation (EWC)} \\cite{kirkpatrick2017overcoming} adds a regularizer to control the learning speed around important weights of the network for old tasks while learning a new task. Similar works are developed around this idea \\cite{liu2018rotate, lee2020continual,titsias2019functional}, \\textit{Path Integral (PathInt)} \\cite{zenke2017continual} ,\\textit{Riemanian Walk (RWalk)} \\cite{chaudhry2018riemannian}. 
Other approaches exploit knowledge distillation to retain the knowledge of previous tasks \\cite{li2017learning}.\nAnother group of IL methods, save exemplars from past data \\cite{wu2019large, castro2018end, rebuffi2017icarl} or generate samples\/features using generative models \\cite{ostapenko2019learning, kemker2017fearnet} and involve them in learning new tasks. Lopez et al. \\cite{lopez2017gradient} has proposed \\textit{Gradient Episodic Memory} that consists of \\textit{M} blocks of memory to store examples from \\textit{T} tasks and uses the model's prediction on these examples as a constraining loss that inhibits the model to bias toward new task and forget past tasks. Lastly, some works try to completely keep previous models and create new models (or part of a model like a single layer) for each new task. Aljundi et al. \\cite{aljundi2017expert} introduce \\textit{Expert Gate} with different models for each task and an autoencoder which learns the representations of each task to assign test-time tasks to the proper model. Instead of learning a whole new model, Rusu et al. \\cite{rusu2016progressive} introduce \\textit{Progressing Neural Networks} which add new columns to the previous network architecture and learns lateral connections between them. Most of the above methods, do not account for in- and out- of distribution updates and are not easily extendable to different learning tasks. \n\n\\vspace{-0.2cm}\n\\section{Conclusion} \\label{conclusion}\nLearned DB components can become highly inaccurate when faced with new OOD data when aiming to ensure high accuracy for queries on old and new data for their original learning tasks.\nThis work proposes, to our knowledge, the first solution to this problem, coined DDUp.\nDDUp entails two novel components, for OOD detection and model updating.\nTo make detection widely applicable, OOD detection in DDUp exploits the output of the neural network (be it based on log-likelihood, cross-entropy, ELBO loss, etc.), and utilizes a principled two-sample test and a bootstrapping method to efficiently derive and use thresholds to signal OOD data.\nDDUp also offers a general solution for model updating based on sequential self-distillation and a new loss function which carefully accounts for \\textit{catastrophic forgetting} and \\textit{intransigence}.\nThis work showcases the wide applicability of DDUp model updating by instantiating the general approach to three important learned functions for data management, namely AQP, CE, and DG, whereby a different type of NN (MDNs, DARNs, VAEs) is used for each. In fact, to our knowledge, no prior work has shown how to \"distill-and-update\" MDNs, VAEs, and DARNs.\nComprehensive experimentation showcases that DDUp detects OOD accurately and ensures high accuracy with its updated models with very low overheads.\n\n\n\\section{Acknowledgement}\nThis work is partially sponsored by Huawei IRC and by EPSRC while doing a PhD at the University of Warwick.\n\n\\balance\n\n\\bibliographystyle{ACM-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Conclusions}\n\nIn this paper, we introduced the notion of statistically significant lexical co-occurrences. We\ndetected skews in span distributions of bigrams to assess significance and showed how our method\nallows classification of co-occurrences into different types. 
We performed experiments to assess the\nperformance of various frequency-based measures for detecting lexically significant co-occurrences.\nWe believe lexical co-occurrence can play a critical role in several applications, including sense disambiguation, multi-word spotting, etc. We will address some of these in our future work.\n\n\\section{Discussion}\n\nThe choice of the threshold span needs some discussion. Essentially, we take the view that all\noccurrences with span less than the threshold value can plausibly be motivated, while all\noccurrences with span greater than the threshold span are probably arbitrary. This notion applies to\nall co-occurrence based measures, not just CSR.\n\n\n\n\\subsection{Performance of different co-occurrence measures}\n\nWe now compare the performance of various frequency-based measures in the context of lexical\nsignificance. Given the large number of measures proposed in the literature~\\cite{pecina06}, we need to identify a subset of measures to compare.\nInspired by \\cite{ecologyMeasures} and \\cite{dataMiningTan}, we identify three properties\nof co-occurrence measures which may be useful for language processing applications. First is {\\em Symmetry} - does the measure yield the same association score for (x,y) and (y,x)? Second is {\\em Null Addition} - does addition of data containing neither x nor y affect the association\nscore for (x,y)? And, finally, {\\em Homogeneity} - if we replicate the corpus several times and merge them to construct a\nlarger corpus, does the association score for (x,y) remain unchanged? Note that the concept of homogeneity conflicts\nwith the notion of statistical support, as support increases in direct proportion with the absolute amount of evidence.\nDifferent applications may need co-occurrence measures having different combinations of these properties. \n\n\\begin{table}\n\\scriptsize\n\\begin{tabular}{|p{1.8cm}| p{2.95cm}|l|l|l|} \n \\hline\nMethod & Formula & \\begin{sideways}Symm. \\end{sideways} & \\begin{sideways}Null Add. \\end{sideways}& \\begin{sideways}Homo. 
\\end{sideways}\\\\\n\\hline\nCSR (this work) & $Z \/ (E(Z) + Kt)$ & Y & Y & N\\\\ \\hline \nCSA (this work) & $\\frac{\\hat{f}(x,y)}{\\sqrt{K}} $ & Y & N & Y\\\\ \\hline\nLLR~\\cite{llr} & ${\\displaystyle{\\sum_{x', y' }}}p(x',y')log\\frac{p(x',y')}{p(x')p(y')}$ & Y & Y & Y \\\\ \\hline\nPMI~\\cite{churchHanks89} & $log\\frac{p(x,y)}{p(x)p(y)}$ & Y & N & Y \\\\ \\hline\nSCI~\\cite{cwcd} & $\\frac{p(x,y)}{p(x)\\sqrt{p(y)}}$ & N & N & Y\\\\ \\hline\nCWCD~\\cite{cwcd} & $\\frac{\\hat{f}(x,y)}{p(x)}\\frac{1\/max\\left(p(x),p(y)\\right)}{M}$ & N & N & Y\\\\ \\hline\nPearson's $\\chi^2$ test & ${\\displaystyle\\sum_{x',y'}} \\frac{\\left(\\hat{f}(x',y')-E\\hat{f}(x',y')\\right)^2}{E\\hat{f}(x',y')}$ & Y & Y & Y\\\\ \\hline\nT-test & $\\frac{\\hat{f}(x,y)-E\\hat{f}(x,y)}{\\sqrt{\\hat{f}(x,y)\\left(1-\\frac{\\hat{f}(x,y)}{N}\\right)}}$ & Y & N & Y\\\\ \\hline\nDice~\\cite{dice} & $\\frac{2\\hat{f}(x,y)}{f(x)+f(y)}$ & Y & N & Y\\\\ \\hline\nOchiai~\\cite{ecologyMeasures} & $\\frac{\\hat{f}(x,y)}{\\sqrt{f(x)f(y)}}$ & Y & N & Y\\\\ \\hline\nJaccard~\\cite{jaccard} & $\\frac{\\hat{f}(x,y)}{f(x)+f(y)-\\hat{f}(x,y)}$ & Y & N & Y \\\\ \\hline\n\\end{tabular}\n{\\scriptsize\nTerminology: ($x' \\in \\{x,\\neg x\\}$ and\n$y' \\in\\{y,\\neg y\\}$) \\\\\n\\begin{tabular}{l l }\n$N$ & Total number of tokens in the corpus \\\\\n$f(x),f(y)$ & unigram frequencies of $x,y$ in the corpus \\\\\n$p(x),p(y)$ & $f(x)\/N,f(y)\/N $\\\\\n$\\hat{f}(x,y)$ & Span-constrained ($x,y$) bigram frequency\\\\\n$\\hat{p}(x,y)$ & $\\hat{f}(x,y)\/N $\\\\\n$M$ & Harmonic mean of the spans of $\\hat{f}(x,y)$ occurrences\\\\\n$E\\hat{f}(x,y)$ & Expected value of f(x,y) \\\\\n\\hline\n\\end{tabular}\n}\n\\caption{ \\small Properties of selected co-occurrence measures }\n\\label{tab:methods}\n\\end{table}\n\n\n\nTable~\\ref{tab:methods} shows the characteristics of our chosen co-occurrence measures, which were selected from several domains like ecology,\npsychology, medicine, and language processing. Except Ochiai~\\cite{Ochiai}, \\cite{ecologyMeasures}, and the recently introduced measure CWCD~\\cite{cwcd}\\footnote{From various so-called windowless measures introduced in~\\cite{cwcd}, we chose the best-performing variant Cue-Weighted Co-Dispersion (CWCD) and implemented a window based version of it with harmonic mean. We note that any of windowless (or spanless) measure can easily be thought of as a special case of a window-based measure where the windowless formulation corresponds to a very large window (or span in our terminology).}, all other selected measures are well-known in the NLP community~\\cite{pecina06}.\n Based on our extensive study of theoretical and empirical properties of CSR, we also introduce a new bigram frequency based measure called CSA ({\\em Co-occurrence Significance Approximated}), which approximates the behaviour of CSR over a wide range of parameter settings.\n\nIn our experiments, we found that Ochiai and Chi-Square have almost identical performance, differing only in 3rd decimal digits.\nThis can be be explained easily. In our context, for any word $x$, as defined in Table~\\ref{tab:methods},\n$f(x) << N$ and therefore $p(x) << 1$. With this, Chi-Square reduces to square of Ochiai. Similarly Jaccard and Dice coincide,\nsince $f(x,y) << f(x)$ and $f(x,y) << f(y)$. 
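\nMore precisely, a back-of-the-envelope derivation (ours, in the notation of Table~\\ref{tab:methods}, writing the $2\\times 2$ contingency table of a bigram as $a=\\hat{f}(x,y)$, $b=f(x)-a$, $c=f(y)-a$, $d=N-f(x)-f(y)+a$) shows that when $f(x),f(y)\\ll N$ and the chance-expected count $f(x)f(y)\/N$ is negligible compared to $\\hat{f}(x,y)$,\n\\begin{equation*}\n\\chi^2 = \\frac{N(ad-bc)^2}{f(x)\\,f(y)\\,(N-f(x))(N-f(y))} \\approx \\frac{N\\,\\hat{f}(x,y)^2}{f(x)f(y)} = N\\cdot\\mathrm{Ochiai}^2,\n\\end{equation*}\nwhile $\\mathrm{Jaccard}=\\hat{f}(x,y)\/\\left(f(x)+f(y)-\\hat{f}(x,y)\\right) \\approx \\hat{f}(x,y)\/\\left(f(x)+f(y)\\right) = \\mathrm{Dice}\/2$. In both cases the two measures differ only by a monotone, rank-preserving transformation, and hence produce identical rankings.\n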
Hence we do not\nreport further results for Chi-Square and Jaccard.\n\n\n\nIn our first set of experiments, we compared the performance of various frequency-based\nmeasures in terms of their suitability for detecting lexically significant co-occurrences \n(cf.~{\\em Definition~\\ref{def:test1}}). \nA high Spearman correlation coefficient between the ranked list produced by a given measure and the list produced by CSR with respect to some choice of\n$\\epsilon$ and $\\delta$ would imply that the measure is effective in detecting the corresponding {\\em type} of\nlexically significant co-occurrences.\n\n\\begin{table}\n\\centering\n\\scriptsize\n\\begin{tabular}{|l|l|l|l|l|}\n\\hline\n\t\t\t\t\t\t& \t\t\t& \\multicolumn{3}{|c|}{Span Threshold} \\\\ \\hline\n\tMeasure\t\t\t\t&\tData\t&\t 5w\t\t&\t25w\t\t&\t50w\t\\\\ \\hline\n\\multirow{3}{*}{PMI}\t& sim\t\t& C\t\t\t& -\t\t\t& - \\\\\n\t\t\t\t\t\t& rel\t\t& -\t\t\t& -\t\t\t& -\t\\\\\n\t\t\t\t\t\t& essli\t\t& -\t\t\t& -\t\t\t& -\t\\\\ \\hline\n\\multirow{3}{*}{CWCD}\t& sim\t\t& -\t\t\t& -\t\t\t& - \\\\\n\t\t\t\t\t\t& rel\t\t& -\t\t\t& -\t\t\t& -\t\\\\\n\t\t\t\t\t\t& essli\t\t& -\t\t\t& -\t\t\t& -\t\\\\ \\hline\n\\multirow{3}{*}{CSA}\t& sim\t\t&A, B, C, D\t& A, B, C& A, B, C \\\\\n\t\t\t\t\t\t& rel\t\t&A, B, C, D\t& A, B, C\t& A, C\t\\\\\n\t\t\t\t\t\t& essli\t\t&A, B, C, D\t& A, B, C\t\t& A, C\t\\\\ \\hline\n\\multirow{3}{*}{Dice}\t& sim\t\t&A, B, C, D\t& A, B, C\t& A, B \\\\\n\t\t\t\t\t\t& rel\t\t&A, B, C, D\t& -\t\t\t& -\t\\\\\n\t\t\t\t\t\t& essli\t\t& -\t\t\t& -\t\t\t& -\t\\\\ \\hline\n\\multirow{3}{*}{Ochiai}\t& sim\t\t&A, B, C, D\t& A, B, C, D& A, B, C \\\\\n\t\t\t\t\t\t& rel\t\t&A, B, C, D\t& A, B, C\t& A, B, C\t\\\\\n\t\t\t\t\t\t& essli\t\t&A, B, C, D\t& A, B\t\t& A\t\\\\ \\hline\n\\multirow{3}{*}{LLR}\t& sim\t\t&A, B, C, D\t& A, B\t\t& A \\\\\n\t\t\t\t\t\t& rel\t\t&A, B, C, D\t& A\t\t\t& A\t\\\\\n\t\t\t\t\t\t& essli\t\t&A, B, C\t& A\t\t\t& A\t\\\\ \\hline\n\\multirow{3}{*}{TTest}\t& sim\t\t&A, B, C\t& A\t\t\t& - \\\\\n\t\t\t\t\t\t& rel\t\t&A, B, C\t& -\t\t\t& -\t\\\\\n\t\t\t\t\t\t& essli\t\t& -\t\t\t& -\t\t\t& -\t\\\\ \\hline\n\\multirow{3}{*}{SCI}\t& sim\t\t& -\t\t\t& -\t\t\t& - \\\\\n\t\t\t\t\t\t& rel\t\t& -\t\t\t& -\t\t\t& -\t\\\\\n\t\t\t\t\t\t& essli\t\t& -\t\t\t& -\t\t\t& -\t\\\\ \\hline\n\\end{tabular}\n\\caption{Types of lexical co-occurrences detected by different measures}\n\\label{tab:ABCDsummary}\n\\end{table}\n\n\\begin{figure*}\n \\begin{center}\n\\resizebox{160mm}{!}\n{\\includegraphics{sim_all_0.pdf}} \\\\\n \\caption{\\small Maximum correlation of various measures with various types of CSR for sim dataset}\n \\label{fig:maxCorSim}\n \\end{center}\n\\end{figure*}\n\nThe Table~\\ref{tab:ABCDsummary} lists\nfor each measure and for each data set, the different types of lexically significant co-occurrences that the\nmeasure is able to detect effectively -- if the corresponding Spearman\ncorrelation coefficient exceeds 0.90, we consider the measure to be effective for the given\ntype. 
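\nOperationally, this evaluation is just a rank correlation between two scored lists of word pairs; a minimal sketch (assuming each measure is exposed as a list of scores over the same, fixed list of word pairs) is:\n\\begin{verbatim}\nfrom scipy.stats import spearmanr\n\ndef agreement_with_csr(measure_scores, csr_scores):\n    # Spearman rank correlation between the scores a frequency-based\n    # measure assigns to the word pairs and the CSR scores obtained for\n    # a fixed choice of (epsilon, delta) and span threshold.\n    rho, _ = spearmanr(measure_scores, csr_scores)\n    return rho\n\\end{verbatim}\n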
Results are shown for three different span constraints --\nsmall span of 5 words (or 5w), medium span of 25 words (or 25w) and large span of 50 words (or 50w).\nFor example, the CSA and Ochiai measures are effective in detecting all 4 types of lexically significant\nco-occurrences (A, B, C and D) in all three data sets, when the span constraint is set to 5 words.\nFigure~\\ref{fig:maxCorSim} presents a detailed quantitative comparison of the best performance of each\nmeasure with respect to each type of co-occurrence for a range of different span constraints on the\nsim data set (Similar results were obtained on other data sets). The inferences we can draw are\nconsistent with the results of Table~\\ref{tab:ABCDsummary}.\n\n\\begin{table}\n\\scriptsize\n\\centering\n\n\\begin{tabular}{|l|l|l|l|l|l|}\n\\hline\n\t\t\t\t\t\t& \t\t\t& \\multicolumn{4}{|c|}{Parameters for best correlation} \\\\ \\hline\n\tMeasure\t\t\t\t&\tSpan\t&\t$\\epsilon$\t&\t$\\delta$\t& Type\t& Correlation\t\t\\\\ \\hline\n\\multirow{3}{*}{PMI}\t& 5w\t\t& 0.05\t\t& 1\t\t\t& C\t\t& 91.3 \\\\\n\t\t\t\t\t\t& 25w\t\t& 0.40\t\t& 1\t\t\t& D\t\t& 85.3\t\\\\\n\t\t\t\t\t\t& 50w\t\t& 0.50\t\t& 1\t\t\t& D\t\t& 82.0 \\\\ \\hline\n\\multirow{3}{*}{CWCD}\t& 5w\t\t& 0.99\t\t& 0.9\t\t& D \t& 83.6 \\\\\n\t\t\t\t\t\t& 25w\t\t& 0.50\t\t& 0.9\t\t& D\t\t& 76.0\t\\\\\n\t\t\t\t\t\t& 50w\t\t& 0.50\t\t& 0.9\t\t& D\t\t& 74.4\t\\\\ \\hline\n\\multirow{3}{*}{CSA}\t& 5w\t\t& 0.1\t\t& 0.0005\t\t& A \t& 98.9 \\\\\n\t\t\t\t\t\t& 25w\t\t& 0.05\t\t& 0.0005\t\t& A\t\t& 96.7\t\\\\\n\t\t\t\t\t\t& 50w\t\t& 0.1\t\t& 0.0005\t\t& A\t\t& 94.9\t\\\\ \\hline\n\\multirow{3}{*}{Dice}\t& 5w\t\t& 0.1\t\t& 0.005\t\t& A\t\t& 96.1 \\\\\n\t\t\t\t\t\t& 25w\t\t& 0.05\t\t& 0.005\t\t& A\t\t& 93.0\t\\\\\n\t\t\t\t\t\t& 50w\t\t& 0.1\t\t& 0.0005\t& A\t\t& 91.3 \\\\ \\hline\n\\multirow{3}{*}{Ochiai}\t& 5w\t\t& 0.1\t\t& 0.1\t\t& A\t\t& 97.4 \\\\\n\t\t\t\t\t\t& 25w\t\t& 0.1\t\t& 0.01\t\t& A\t\t& 95.5\t\\\\\n\t\t\t\t\t\t& 50w\t\t& 0.1\t\t& 0.005\t\t& A\t\t& 94.5 \\\\ \\hline\n\\multirow{3}{*}{LLR}\t& 5w\t\t& 0.05\t\t& 0.0005\t& A \t& 97.3 \\\\\n\t\t\t\t\t\t& 25w\t\t& 0.05\t\t& 0.0005\t& A\t\t& 94.8 \\\\\n\t\t\t\t\t\t& 50w\t\t& 0.1\t\t& 0.0005\t& A\t\t& 92.6 \\\\ \\hline\n\\multirow{3}{*}{TTest}\t& 5w\t\t& 0.05\t\t& 0.0005\t& A \t& 94.2 \\\\\n\t\t\t\t\t\t& 25w\t\t& 0.05\t\t& 0.0005\t& A\t\t& 90.9 \\\\\n\t\t\t\t\t\t& 50w\t\t& 0.1\t\t& 0.0005\t& A\t\t& 88.8 \\\\ \\hline\n\\multirow{3}{*}{SCI}\t& 5w\t\t& 0.05\t\t& 0.0005\t& A \t& 82.7 \\\\\n\t\t\t\t\t\t& 25w\t\t& 0.05\t\t& 0.0005\t& A\t\t& 75.9 \\\\\n\t\t\t\t\t\t& 50w\t\t& 0.1\t\t& 0.0005\t& A\t\t& 73.1 \\\\ \\hline\n\\end{tabular}\n\\caption{Best performing $(\\epsilon,\\delta)$-pairs for different measures on {\\em sim} data}\n\\label{tab:topsummarysim}\n\\end{table}\n\nIn our next experiment, we examine which of the four types of co-occurrences are best captured by each measure.\nResults for the sim data set are listed in Table~\\ref{tab:topsummarysim} (Similar results were obtained on the other data sets). For each\nmeasure and for each span constraint, the table describes the best performing parameters ($\\epsilon$\nand $\\delta$), the corresponding co-occurrence Type and the associated `best' correlation achieved with respect to\nthe test of {\\em Definition~\\ref{def:test1}} .\nThe results show that, irrespective of the span\nconstraint, most measures perform best on Type A co-occurrences. 
This is reasonable because\nType A essentially represents the strongest correlations in the data and one would expect the\nmeasures to capture the strong correlations better than weaker ones. There are however, two\nexceptions to this rule, namely PMI and CWCD, which instead peak at Types C or D. The best correlations for\nthese two measures are also typically lower than the other measures. We now summarize the main\nfindings from our study:\n\\begin{itemize}\n\n\\item The relatively obscure Ochiai, and the newly introduce CSA are the best performing measure, in terms of detecting all types\nof lexical co-occurrences in all data sets and for a wide range of span constraints.\n\n\\item Dice, LLR and TTest are the other measures that effectively track lexically significant\nco-occurrences (although, all three are less effective as the span constraints become larger).\n\n\\item SCI, CWCD, and the popular PMI measure\nare ineffective at capturing {\\em any} notion of lexically significant co-occurrences, even for small\nspan constraints. In fact, the best result for PMI is the detection of Type C co-occurrences in the\nsim data set. The low $\\epsilon$ and high $\\delta$ setting of Type C suggests that PMI does a poor\njob of detecting the strongest co-occurrences in the data, overlooking both strong document-level as\nwell as corpus-level cues for lexical significance. \n\n\n\\end{itemize}\n\n\\begin{table*}\n\\begin{center}\n \\scriptsize \\addtolength{\\tabcolsep}{-5pt}\n\t\\begin{tabular}{|c | c || c | c |l| c | c || c | c |l| c | c || c | c |} \\cline{1-4} \\cline{6-9} \\cline{11-14}\n \\multicolumn{4}{|c|}{sim} && \\multicolumn{4}{|c|}{rel} && \\multicolumn{4}{|c|}{esslli} \\\\ \\cline{1-4} \\cline{6-9} \\cline{11-14}\nPMI top 10 & R & Ochiai top 10 & R && PMI top 10 & R & Ochiai top 10 & R && PMI top 10 & R & Ochiai top 10 & R\\\\ \\cline{1-4} \\cline{6-9} \\cline{11-14}\nvodka-gin & 42 & football-soccer & 3 & & money-laundering & 2 & soap-opera & 1 & & nook-cranny & 91 & floyd-pink & 4 \\\\\nseafood-lobster & 59 & street-avenue & 5 & & soap-opera & 1 & money-laundering & 2 & & hither-thither & 104 & either-or & 1 \\\\ \nbread-butter & 13 & physics-chemistry & 2 & & opec-oil & 8 & computer-software & 18 & & sprain-ankle & 60 & election-general & 7 \\\\ \nvodka-brandy & 99 & television-radio & 6 & & weather-forecast & 5 & television-film & 7 & & blimey-cor & 147 & nook-cranny & 91 \\\\ \nmidday-noon & 79 & championship-tournament & 10 & & psychology-cognition & 77 & jerusalem-israel & 16 & & margarine-butter & 77 & twentieth-century & 2 \\\\ \nmurder-manslaughter & 19 & man-woman & 16 & & decoration-valor & 73 & weather-forecast & 5 & & tinker-tailor & 65 & bride-groom & 16 \\\\ \ncucumber-potato & 130 & vodka-gin & 42 & & gender-equality & 11 & drug-abuse & 4 & & ding-dong & 26 & you-me & 14 \\\\ \ndividend-payment & 61 & king-queen & 9 & & tennis-racket & 20 & credit-card & 3 & & bride-groom & 16 & north-south & 19 \\\\ \nphysics-chemistry & 2 & car-automobile & 43 & & liability-insurance & 25 & game-series & 12 & & jigsaw-puzzle & 30 & question-answer & 11 \\\\ \npsychology-psychiatry & 27 & harvard-yale & 11 & & fbi-investigation & 10 & stock-market & 9 & & bidder-auction & 76 & atlantic-ocean & 10 \\\\ \\cline{1-4} \\cline{6-9} \\cline{11-14}\n\\end{tabular}\n\\caption{Top 10 bigrams according to PMI and Ochiai rankings on \\emph{sim}, \\emph{rel}, and \\emph{esslli} datasets. 'R' denotes the bigrams rankings according to type-A CSR measure($\\epsilon=0.1, \\delta=0.1$). 
Span of 25 words is used for all the three measures. }\n\\label{tab:top10PMIOchiai}\n\\end{center}\n\\end{table*}\n\nNote that our results do not contradict the utility of PMI, SCI, or, CWCD as word-association\nmeasures. We only observe their poor performance in context of detecting lexical co-occurrences. Also, our notion of lexical co-occurrence is symmetric.\nIt is possible that asymmetric SCI may have competitive performance for certain asymmetric tasks compared to the better performing symmetric measures.\nFinally, to give a qualitative feel about the differences in the correlations preferred by different methods, \nin Table~\\ref{tab:top10PMIOchiai}, we show the top 10 bigrams picked by PMI and Ochiai for all three datasets.\n\n\n\n\n\n\\section{Relation between lexical co-occurrence and human judgements}\n\n\\begin{table*}\n\\scriptsize \\addtolength{\\tabcolsep}{-5pt}\n\\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|}\n\\hline\nMethod & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10\\tabularnewline\n\\hline\nHuman & environment & maradona & opec & computer & money & jerusalem &\nlaw & weather & network & fbi\\tabularnewline\nJudgement & ecology (84)& football (53)& oil (8)& software (18)& bank (28)& israel (16)&\nlawyer (42)& forecast (5)& hardware (107)& investigation (10)\\tabularnewline\n\\hline\n\\multirow{2}{*}{CSR} & soap & money & credit & drug & weather & cup & television &\nopec & stock & fbi\\tabularnewline\n & opera (24) & laundering (129)& card (20)& abuse (69)& forecast (8)& coffee (82)& film (31)& oil (3)\n& market (19)& investigation (10)\\tabularnewline\n\\hline\n\\end{tabular}\n\\caption{ Top 10 word associations picked in rel dataset. The numbers in the brackets are the cross rankings: CSR rankings in the human row and human rankings in the CSR row. CSR parameters are same as that for Table~\\ref{tab:top10PMIOchiai}. }\n\\label{top10humanCSR}\n\\end{table*}\n\nWhile the focus of our work is on characterizing the statistically significant lexical co-occurrence, as illustrated in in Table~\\ref{top10humanCSR}, human judgement of word association is governed by many factors in addition to lexical co-occurrence considerations, and many non co-occurrence based measures have been designed to capture semantic word association. Notable among them are distributional similarity based measures \\cite{simRelDatasets,bollegalaMI07,chenLW06}\nand knowledge-based measures \\cite{wikiLinkMeasure,hughes_lexical_2007,Gabrilovich07computingsemantic,wikiwalk09,wikirelate,wordsim353,lsaEsslli08}. Since our focus is on frequency based measures alone, we do not discuss these other measures.\n\nThe lexical co-occurrence phenomenon and the human judgement of semantic association are related but different dimensions of relationships between words and different applications may\nprefer one over the other. For example, suppose, given one word (say {\\em dirt}), the task is to choose from among a number of\nalternatives for the second(say {\\em grime} and {\\em filth}). Human judgment scores for {\\em (dirt, grime)} and {\\em (dirt, filth)}\nare 5.4 and 6.1 respectively. However, their lexical co-occurrence scores (CSR) are 1.49 and 0.84 respectively. This is because {\\em filth} is often used in a moral context as well. {\\em Grime} is usually used\nonly in a physical sense. {\\em Dirt} is used mostly in a physical sense,\nbut is a bit more generic and may be used in a moral sense\noccasionally. Hence {\\em (dirt, grime)} is more correlated in corpus than {\\em (dirt, filth)}. 
This shows that human judgement is fallible and annotators may ignore the subtleties of meanings that may be picked up by a statistical techniques like ours.\n\nIn general, for association with a given word, all synonyms of a second word will be given similar semantic\nrelatedness score by human judges but they may have very different lexical association scores. \n\nFor applications where the notion of statistical lexical\nco-occurrence is potentially more relevant than semantic relatedness, our method can be used to generate a gold-standard of lexical association (against which other association measures can be evaluated). In this context, it is interesting to note that contrary to the human judgement, each one of the co-occurrence measures studied by us finds {\\em (dirt, grime)} more associated than {\\em (dirt, filth)}.\n\n\nHaving explained that significant lexical co-occurrence is a fundamentally different notion than human judgement of word association, we also want to emphasize that the two are not completely different notions either and they correlate reasonably well with each-other.\nFor {\\em sim, rel}, and {\\em essli} datasets, CSR's best correlations with human judgment are 0.74, 0.65, and 0.46 respectively. Note that CSR is a symmetric notion and hence correlates far more with human judgement for symmetric {\\em sim} and {\\em rel} datasets than for the asymmetric {\\em essli} dataset.\nAlso, at first glance, it is little counter-intuitive that the notion\nof lexical co-occurrence yields better correlations with the sim (based on {\\em similarity}) data set when compared to\nthe rel(based on {\\em relatedness}) data set. This can essentially be explained by our observation that\nsimilar words tend to co-occur less frequently by-chance than the related words.\n\n\\begin{comment}\n\n\\subsection{Comparison with Previous Work}\n\nAs discussed earlier, any given measure of word association models certain dimension of the relationship\nbetween words, and an application-neutral comparison of correlation with human judgment may not be very meaningful. It is still instructive to compare select co-occurrence based measures with other knowledge-based and distributional similarity based measures of word association proposed in the literature. Since our work is focused on co-occurrence based measures, we do not give any details of these other measures but simply present their correlations in Table~\\ref{prevWorkTable}. \nOther researchers using wordsim353 dataset have not publicized actual association score for each word pair or even the relative rankings. Hence we cannot compute the Spearmen's correlation coefficient for other measures on sim203 and rel252 dataset. The table includes all the publicly available data.\n\nWe see that co-occurrence based measures compare favorably with other resource heavy measures across the range of datasets. Of course, as mentioned several times, one should not use these correlation numbers to claim that one method is superior to other. 
That can only be done in application specific settings where the needs of an application are well-understood.\n\n\\begin{table}\n\\tiny \\addtolength{\\tabcolsep}{-3pt}\n\\begin{tabular}{|p{3cm}| p{1.2cm}| l l p{.7cm}| l|} \n \\hline\nMethod & Resource & sim & rel & wordsim & esslli\\\\\n\\hline\nPMI & Wikipedia corpus & 0.74 & 0.69 & 0.70 & 0.34 \\\\\nOchiai & Wikipedia corpus & 0.69 & 0.64 & 0.63 & 0.46 \\\\\nSignificance Ratio (CSR) & Wikipedia corpus & 0.70 & 0.65 & 0.65 & 0.46 \\\\ \\hline\nLatent Semantic Analysis~\\cite{lsaEsslli08} & Newspaper corpus & - & - & - & 0.38 \\\\\nGraph Traversal (WN30g) ~\\cite{simRelDatasets}) & Wordnet & 0.72 & 0.56 & 0.66 & - \\\\\nBag of Words based Distributional Similarity (BoW)~\\cite{simRelDatasets}) & Web corpus & 0.70 & 0.62 & 0.65 & - \\\\ \nContext Window based Distributional Similarity (CW)~\\cite{simRelDatasets}) & Web corpus & 0.77 & 0.46 & 0.60 & - \\\\ \nHyperlink Graph~\\cite{wikiLinkMeasure} & Wikipedia hyperlinks graph & -& -& 0.69 & - \\\\\nRandom Graph Walk~\\cite{hughes_lexical_2007} & WordNet & -& -& 0.55 & - \\\\\nExplicit Semantic Analysis~\\cite{Gabrilovich07computingsemantic} (reimplemented in~\\cite{wikiwalk09}) & Wikipedia concepts & -& -& 0.75 (0.71) & - \\\\\nNormalized Path-length (lch)~\\cite{wikirelate} & Wikipedia category tree & -& -& 0.55 & - \\\\\nThesarus based~\\cite{jarmasz03} & Roget's & - & - & 0.55 & - \\\\\nLatent Semantic Analysis~\\cite{wordsim353} & Web corpus, & - & - & 0.56 & - \\\\\n\\hline\n\\end{tabular}\n\\caption{ \\small Comparison with previous work. Data for missing entries is not available.}\n\\label{prevWorkTable}\n\\end{table}\n\\end{comment}\n\n\n\\section{Introduction}\n\\label{sec:intro}\n\nThe notion of {\\em word association} is important for\nnumerous NLP applications, like, word sense disambiguation,\noptical character recognition, speech\nrecognition, parsing, lexicography,\nnatural language generation, and machine\ntranslation. Lexical co-occurrence is an important indicator of word association and this has\nmotivated several frequency-based measures for word association \\cite{churchHanks89,llr,dice,cwcd}.\nIn this paper, we present a theoretical basis for detection and classification of lexical\nco-occurrences\\footnote{Note that we are interested in co-occurrence, not collocation, i.e.,\npairs of words that co-occur in a document with an arbitrary number of intervening words. Also, we use the term bigram to mean\nbigram at-a-distance or spanned-bigram -- again, other words can occur in-between the constituents of\na bigram.}. In general, a lexical co-occurrence\ncould refer to a pair of words that occur in a large number of documents; or it could refer\nto a pair of words that, although appear only in a small number of documents, occur frequently very\nclose to each other within each document. We formalize these ideas and construct a significance\ntest for co-occurrences that will allow us to detect different kinds of co-occurrences within a\nsingle unified framework (a feature which is absent in current measures for co-occurrence). 
As a\nby-product, our framework also leads to a better understanding of existing measures for word\nco-occurrence.\n\n\n\n\n\n\nAs pointed out in ~\\cite{kilgariff05language}, language is never random -\nwhich brings us to the question of what model of\nrandom chance can give us a good statistical test\nfor lexical co-occurrences.\nWe need a null hypothesis that can account for an\nobserved co-occurrence as a pure chance event and\nthis in-turn requires a corpus generation model.\nIt is often reasonable to assume that documents\nin the corpus are generated independent of each\nother. Existing frequecy-based association\nmeasures like PMI~\\cite{churchHanks89},\nLLR~\\cite{llr} etc. further assume that each document\nis drawn from a multinomial distribution\nbased on global unigram frequencies. The main\nconcern with such a null model is the overbearing influence of unigram\nfrequencies on the detection of word associations. For example, the association between {\\em anomochilidae} (dwarf pipe\nsnakes) and {\\em snake} would go undetected in our wikepedia corpus, since less than\n$0.1\\%$ of the pages containing {\\em snake} also contained\n{\\em anomochilidae}. Similarly, under current models, the expected {\\em span} (inter-word distance)\nof a bigram is also very sensitive to the associated unigram frequencies:\nthe expected span of a bigram composed of low frequency\nunigrams is much larger than that with\nhigh frequency unigrams. This is contrary to\nhow word associations appear in language, where\nsemantic relationships manifest with small inter-word\ndistances irrespective of the underlying unigram\ndistributions.\n\nThese considerations motivate our search for a\nmore direct relationship between words,\none that can potentially be detected using careful\nstatistical characterization of inter-word distances, while minimizing the influence of the\nassociated unigram frequencies. We focus on only the documents containing both the terms (of a\ncandidate bigram) since in NLP applications, we often have\nto chose from a set of alternatives for a given word. Hence, rather than ask the abstract\nquestion of whether words $x$ and $y$ are related, our approach is to ask, given that $y$ is a candidate for pairing with $x$,\nhow likely is it that $x$ and $y$ are lexically correlated. For example, probability that {\\em\nanomochilidae} is found in the vicinity of {\\em snake} is higher if we knew that\n{\\em anomochilidae} and {\\em snake} appear in the same context.\n\n\n\nWe consider a null model that represents each document as a bag of words \\footnote{There can be many ways to\nassociate a bag of words with a document. Details of this association are not important for us,\nexcept that the bag of words provides some kind of quantitative summary of the words within the document.}.\nThen, a random permutation of\nthe associated bag of words gives a linear\nrepresentation for the document. An arbitrary relation between a pair\nof words will result in the locations\nof these words to be randomly distributed\nin the documents in which they co-occur.\nIf the observed span distribution of a bigram resembles that under\nthe (random permutation) null model, then the relation between the words is not strong enough\nfor one word to influence the placement of the other. 
However, if the words are\nfound to occur closer together than explainable by our\nnull model, then we hypothesize existence of a more direct association\nbetween these words.\n\n\n\n\nIn this paper, we formalize the notion of statistically significant lexical co-occurrences by introducing a\nnull model that can detect biases in span distributions of word associations, while being\nagnostic to variations in global unigram frequencies. Our framework has the fidelity to\ndistinguish different classes of lexical co-occurrences, based on strengths of the document\nand corpus-level cues of co-occurrence in the data.\nWe perform extensive experiments on benchmark data sets to study the performance of various co-occurrence\nmeasures that are currently known in literature. We find that a relatively obscure measure called\nOchiai, and a newly introduced measure CSA, capture the notion of lexical co-occurrence best, followed next by LLR, Dice, and TTest, while\nanother popular measure, PMI, suprisingly, performs poorly in the context of lexical co-occurrence.\n\n\n\\begin{comment}\nWe perform extensive experiments on benchmark data sets to study how several well-known co-occurrence\nmeasures correlate with our notion of significant lexical co-occurrence.\nWe also compare performance of these measures against human produced\ngold-standards. To the best of our knowledge, a comparison study\nof different co-occurrence measures on a large English dataset\nfor the word association problem has not been reported before. \nOur significance test competes well with knowledge and computation\nintensive distributional similarity and knowledge-based \nmeasures reported in \\cite{simRelDatasets}, \\cite{wikiwalk09},\n\\cite{wikiLinkMeasure}, \\cite{Gabrilovich07computingsemantic}, \\cite{wikirelate} and \\cite{jarmasz03}.\n\\end{comment}\n\n\n\n\n\n\n\\section{Experimental Results}\n\n\\subsection{Datasets and Text Corpus}\n\nSince similarity and relatedness are\ndifferent kinds of word associations \\cite{budanitskyHirst}, in ~\\cite{simRelDatasets}\ntwo different data sets, namely 203 words {\\em sim} (the\nunion of similar and unrelated pairs) and 252 words {\\em rel} (the union of related\nand unrelated pairs) datasets are derived from {\\em wordsim}~\\cite{wordsim353}.\nWe use these two data sets in our experiments. These datasets are symmetric in that the order of words in a pair is not expected to matter. As some of our chosen co-occurrence measures are asymmetric, we also report results on the asymmetric 272-words {\\em esslli} dataset\nfor the `free association' task at~\\cite{esslli08}. \n\n\nWe use the Wikipedia~\\cite{wikipedia} corpus in our experiments. It contains 2.7 million articles\nfor a total size of 1.24 Gigawords. We did not pre-process the corpus - no lemmatization,\nno function-word removal. When counting document size in words, punctuation symbols were ignored.\nDocuments larger than 1500 words were partitioned keeping the size of each part to no greater\nthan 1500 words.\n\n\nIn Table~\\ref{tab:typeExamples}, we present some examples of\ndifferent types of co-occurrences observed in the data. 
\n\n\n\\begin{table*}\n\\centering\n{\\scriptsize\n\\begin{tabular}{|l|l|l|l|l|}\n\\hline\nDataset & Type A bigrams & Type B bigrams & Type C bigrams & Type D bigrams \\\\ \\hline\n\\multirow{2}{*}{sim} & announcement-news & forest-graveyard & lobster-wine & stock-egg \\\\\n & bread-butter & tiger-carnivore & lad-brother & cup-object \\\\ \\hline\n\\multirow{2}{*}{rel} & baby-mother & alcohol-chemistry & victim-emergency & money-withdrawal \\\\\n & country-citizen & physics-proton & territory-kilometer & minority-peace \\\\ \\hline\n\\multirow{2}{*}{esslli} & arrest-police & pamphlet-read & meditate-think & fairground-roundabout \\\\\n & arson-fire & spindly-thin & ramble-walk & \\\\\n\\hline\n\\end{tabular}\n\\label{tab:typeExamples}\n \\caption{Examples of Type A, B, C and D co-occurrences under a span constraint of 20 words.}\n}\n\\end{table*}\n\n\\begin{comment}\n\n\\begin{table*}\n\\scriptsize\n\\centering\n\\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|}\n\\hline\n& & & \\multicolumn{4}{|c|}{$\\epsilon=0.4$; $\\delta=0.1$} & \\multicolumn{4}{|c|}{$\\epsilon=0.1$; $\\delta=0.1$} \\\\ \\cline{2-11}\nWord pair & Data set & $K$ & $Z$ & $E(Z)$ & $Kt$ & CSR & $Z$ & $E(Z)$ & $Kt$ & CSR \\\\ \\hline\nforest-graveyard & sim & 265 & 45 & 16.6 & 17.5 & 1.3 & 19 & 8.4 & 17.5 & 0.7 \\\\ \ntiger-carnivore & sim & 50 & 13 & 2.6 & 7.6 & 1.3 & 8 & 1.6 & 7.6 & 0.9 \\\\ \nalcohol-chemistry & rel & 702 & 77 & 47.3 & 28.4 & 1.0 & 37 & 23.1 & 28.4 & 0.7 \\\\ \nphysics-proton & rel & 547 & 76 & 46.6 & 25.1 & 1.1 & 31 & 17.0 & 25.1 & 0.7 \\\\ \npamphlet-read & esslli & 389 & 40 & 17.6 & 21.2 & 1.0 & 29 & 12.5 & 21.2 & 0.9 \\\\ \nspindly-thin & esslli & 25 & 13 & 3.0 & 5.4 & 1.6 & 4 & 0.4 & 5.4 & 0.7 \\\\ \n\\hline\n\\end{tabular}\n\\caption{Examples of Type B co-occurrences found in data under a span\nconstraint of 20 words.}\n\\label{tab:typeBillustrations}\n\\end{table*}\n\n\n\\begin{table*}\n\\scriptsize\n\\centering\n\\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|}\n\\hline\n & & & \\multicolumn{4}{|c|}{$\\epsilon=0.1$; $\\delta=0.4$} &\n\\multicolumn{4}{|c|}{$\\epsilon=0.1$; $\\delta=0.1$} \\\\ \\cline{2-11}\nWord pair & Data set & $K$ & $Z$ & $E(Z)$ & $Kt$ & CSR & $Z$ &\n$E(Z)$ & $Kt$ & CSR \\\\ \\hline\nlobster-wine & sim & 87 & 9 & 2.6 & 6.3 & 1.0 & 9 & 2.6 & 10.0 & 0.7 \\\\ \nlad-brother & sim & 306 & 22 & 10.1 & 11.8 & 1.0 & 22 & 10.1 & 18.8 & 0.8 \\\\ \nvictim-emergency & rel & 633 & 37 & 18.6 & 17.0 & 1.0 & 37 & 18.6 & 27.0 & 0.8 \\\\ \nterritory-kilometer & rel & 206 & 18 & 6.8 & 9.7 & 1.1 & 18 & 6.8 & 15.4 & 0.8 \\\\ \nmeditate-think & esslli & 70 & 8 & 2.2 & 5.7 & 1.0 & 8 & 2.2 & 9.0 & 0.7 \\\\ \nramble-walk & esslli & 40 & 6 & 1.7 & 4.3 & 1.0 & 6 & 1.7 & 6.8 & 0.7 \\\\ \n\\hline\n\\end{tabular}\n\\caption{Examples of Type C co-occurrences found in data under a span\nconstraint of 20 words.}\n\\label{tab:typeCillustrations}\n\\end{table*}\n\nRecall that our test\nstatistic for lexical significance is $[Z \\geq E(Z)+Kt]$. When $\\epsilon$ is relaxed keeping\n$\\delta$ fixed, $E(Z)$ increases while $Kt$ remains constant. This can be observed in\nTables~\\ref{tab:typeAillustrations} \\& \\ref{tab:typeBillustrations}. Similarly, when $\\delta$ is\nrelaxed keeping $\\epsilon$ fixed $E(Z)$ is constant and $Kt$ changes (See\nTable~\\ref{tab:typeCillustrations}). 
Notice how the patterns are significant (indicated by CSR $>1$) for both sets of\n$\\epsilon$ and $\\delta$ in Table~\\ref{tab:typeAillustrations}; this is unlike in Tables\n\\ref{tab:typeBillustrations} \\& \\ref{tab:typeCillustrations} where we have reported exclusively Type\nB and Type C patterns respectively.\n\n\\end{comment}\n\n\n\\section{Lexically significant co-occurrences}\n\\label{sec:significance-test}\n\nConsider a bigram $\\alpha$. Let $\\ensuremath{\\mathcal{D}}=\\{D_1,\\ldots,D_K\\}$ denote the set of $K$ documents (from the entire corpus) that contain at least one occurrence of $\\alpha$. The {\\em frequency} of $\\alpha$ in document $D_i$, $f_i$, is the maximum number of {\\em non-overlapped occurrences} of $\\alpha$ in $D_i$. A set of occurrences of a bigram is called non-overlapping if the words corresponding to one occurrence from the set do not appear in-between the words corresponding to any other occurrence from the set.\n\n\nThe {\\em span} of an occurrence of $\\alpha$ is the `unsigned distance' between the first and last textual units of interest associated with that occurrence. We mostly use words as the unit of distance, but in general, distance can be measured in words, sentences, or even paragraphs (e.g.~an occurrence comprising two adjacent words in a sentence has a word-span of one and a sentence-span of zero). Likewise, the size of a document $D_i$, denoted as $\\ell_i$, is correspondingly measured in units of words, sentences, or paragraphs. Finally, let $\\ensuremath{\\widehat{f}}_i$ denote the maximum number of non-overlapped occurrences of $\\alpha$ in $D_i$ with span less than a given threshold $x$. We refer to $\\ensuremath{\\widehat{f}}_i$ as the {\\em span-constrained frequency} of $\\alpha$ in $D_i$. Note that $\\ensuremath{\\widehat{f}}_i$ cannot exceed $f_i$.\n\n\nTo assess the statistical significance of the bigram $\\alpha$, we ask whether the span-constrained frequency $\\ensuremath{\\widehat{f}}_i$ (of $\\alpha$) is larger than what we would expect in a document of size $\\ell_i$ containing $f_i$ `random' occurrences of $\\alpha$. Our intuition is that if two words are semantically related, they will often appear close to each other in the document, and so the distribution of the spans will typically exhibit a prominent bias toward values less than a small $x$.\n\nConsider the null hypothesis that a document is generated as a random permutation of the bag of words associated with the document. Let $\\pi_x(\\ensuremath{\\widehat{f}},f,\\ell)$ denote the probability of observing a span-constrained frequency (for $\\alpha$) of {\\em at least} $\\ensuremath{\\widehat{f}}$ in a document of length $\\ell$ that contains a maximum of $f$ non-overlapped occurrences of $\\alpha$. Observe that $\\pi_x(0,f,\\ell)=1$ for any $x>0$; also, for $x\\geq \\ell$ we have $\\pi_x(f,f,\\ell)=1$ (i.e.~all $f$ occurrences always have span less than $x$ when $x\\geq \\ell$). However, for typical values of $x$ (i.e.~for $x \\ll \\ell$) the probability $\\pi_x(\\ensuremath{\\widehat{f}},f,\\ell)$ decreases with increasing $\\ensuremath{\\widehat{f}}$. For example, consider a document of length 400 with 4 non-overlapped occurrences of $\\alpha$. The probabilities of observing at least 4, 3, 2, 1 and 0 occurrences of $\\alpha$ within a span of 20 words are 0.007, 0.09, 0.41, 0.83, and 1.0 respectively. 
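\n\nThese values can be approximated numerically by direct simulation. The sketch below is illustrative only: it models an embedding of $f$ non-overlapped occurrences as $f$ consecutive pairs formed from $2f$ uniformly chosen word positions, which is consistent with the counting scheme we use later to compute $\\pi_x$ exactly.\n\\begin{verbatim}\nimport random\n\ndef pi_x_mc(fhat, f, ell, x, trials=100000, seed=0):\n    # Monte Carlo estimate of pi_x(fhat, f, ell): the probability that\n    # at least fhat of f non-overlapped occurrences have span < x.\n    rng = random.Random(seed)\n    hits = 0\n    for _ in range(trials):\n        pos = sorted(rng.sample(range(ell), 2 * f))\n        close = sum(1 for k in range(f)\n                    if pos[2 * k + 1] - pos[2 * k] < x)\n        if close >= fhat:\n            hits += 1\n    return hits \/ trials\n\n# pi_x_mc(3, 4, 400, 20) should come out close to the 0.09 quoted above.\n\\end{verbatim}\n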
\nSince $\\pi_{20}(3,4,400)=0.09$, even if 3 of the 4 occurrences of $\\alpha$ (in the example document) have span less than 20 words, there is a 9\\% chance that the occurrences were a consequence of a random event (under our null model). As a result, if we desired a confidence level of at least 95\\%, we would have to declare $\\alpha$ as {\\em insignificant}.\n\nGiven an $\\epsilon$ ($0< \\epsilon < 1$) and a span upper-bound $x$ ($\\geq 0$), the document $D_i$ is said to {\\em support} the hypothesis ``$\\alpha$ is an $\\epsilon$-significant bigram'' if $\\pi_x(\\ensuremath{\\widehat{f}}_i,f_i,\\ell_i) < \\epsilon$. We refer to $\\epsilon$ as the {\\em document-level} lexical co-occurrence threshold for $\\alpha$. Define indicator variables $z_i$, $i=1,\\ldots,K$ as:\n\\begin{equation}\nz_i = \\left\\{\\begin{array}{ll}\n1 & \\mbox{if\\ } \\pi_x(\\ensuremath{\\widehat{f}}_i,f_i,\\ell_i) < \\epsilon \\\\\n0 & \\mbox{otherwise}\\end{array} \\right.\n\\label{eq:zi}\n\\end{equation}\\vspace{-0.2in}\n\nLet $Z = \\sum_{i=1}^K z_i$; $Z$ models the number of documents (out of $K$) that support the hypothesis ``$\\alpha$ is an $\\epsilon$-significant bigram.''\nThe expected value of $Z$ is given by\n\\begin{eqnarray}\nE(Z) &=& \\sum_{i=1}^K E(z_i) \\label{eq:ez1}\\\\\n     &=& \\sum_{i=1}^K \\pi_x(g_\\epsilon(f_i,\\ell_i), f_i,\\ell_i) \\label{eq:ez2}\n\\end{eqnarray}\nwhere $g_\\epsilon(f_i,\\ell_i)$ denotes the smallest $\\ensuremath{\\widehat{f}}$ for which $\\pi_x(\\ensuremath{\\widehat{f}},f_i,\\ell_i)<\\epsilon$ (this quantity is well-defined since $\\pi_x(\\ensuremath{\\widehat{f}},f_i,\\ell_i)$ is non-increasing in $\\ensuremath{\\widehat{f}}$). For the example given earlier, $g_{0.2}(4,400)=3$ and $g_{0.05}(4,400)=4$.\n\nUsing Hoeffding's Inequality, for $t>0$,\n\\begin{equation}\nP[ Z \\geq E(Z) + Kt ] \\leq \\exp(-2Kt^2)\n\\label{eq:hoeffding}\n\\end{equation}\nTherefore, we can bound the deviation of the observed value of $Z$ from its expectation by choosing $t$ appropriately. For example, in our corpus, the bigram ({\\em canyon,\\ landscape}) occurs in $K=416$ documents. For $\\epsilon = 0.1$, we find that $Z=33$ documents (out of 416) have $\\epsilon$-significant occurrences, while $E(Z)$ is 14.34. Let $\\delta = 0.01$. By setting $t = \\sqrt{\\ln{\\delta}\/(-2K)}=0.07$, we get $E(Z) + Kt=43.46$, which is greater than the observed value of $Z$ (=33). Thus, we cannot be 99\\% sure that the occurrences of ({\\em canyon,\\ landscape}) in the 33 documents were a consequence of non-random phenomena. Hence, our test declares ({\\em canyon,\\ landscape}) as {\\em insignificant} at $\\epsilon=0.1$, $\\delta=0.01$. We formally state the significance test for lexical co-occurrences next:\n\\begin{definition}\n[Significant lexical co-occurrence] \nConsider a bigram $\\alpha$ and a set of $K$ documents containing at least one occurrence of $\\alpha$.\nLet $Z$ denote the number of documents (out of $K$) that support the hypothesis ``$\\alpha$ is\nan $\\epsilon$-significant bigram (for a given $\\epsilon>0$, $x>0$)''. \nThe $K$ occurrences of the bigram $\\alpha$ are regarded as $\\epsilon$-significant with\nconfidence $(1-\\delta)$ (for some user-defined $\\delta>0$) if we have $[Z \\geq E(Z) + Kt]$, where $t=\\sqrt{\\ln{\\delta}\/\n(-2K)}$ and $E(Z)$ is given by Eq.~(\\ref{eq:ez2}). 
The ratio $[Z \/ (E(Z) + Kt)]$ is called the\nCo-occurrence Significance Ratio (CSR) for $\\alpha$.\n\\label{def:test1}\n\\end{definition}\n\n\nWe now describe how to compute $\\pi_x(\\ensuremath{\\widehat{f}}_i,f_i,\\ell_i)$ for $\\alpha$ in $D_i$. Let $N(f_i,\\ell_i)$\ndenote the number of ways of embedding $f_i$ non-overlapped occurrences of $\\alpha$ in a document of\nlength $\\ell_i$. Similarly, let $N_x(\\ensuremath{\\widehat{f}}_i,f_i,\\ell_i)$ denote the number of ways of embedding $f_i$\nnon-overlapped occurrences of $\\alpha$ in a document of length $\\ell_i$, in such a way that, at\nleast $\\ensuremath{\\widehat{f}}_i$ of the $f_i$ occurrences have span less than $x$. Recall that\n$\\pi_x(\\ensuremath{\\widehat{f}}_i,f_i,\\ell_i)$ denotes the probability of observing a span-constrained frequency\n(for $\\alpha$) of at least $\\ensuremath{\\widehat{f}}_i$ in a document of length $\\ell_i$ that contains\na maximum of $f_i$ non-overlapped occurrences of $\\alpha$. Thus, we can assign the probability \n$\\pi_x(\\ensuremath{\\widehat{f}}_i,f_i,\\ell_i)$ in terms of $N(f_i,\\ell_i)$ and $N_x(\\ensuremath{\\widehat{f}}_i,f_i,\\ell_i)$ as follows:\n\\begin{equation}\n\\pi_x(\\ensuremath{\\widehat{f}}_i,f_i,\\ell_i) = \\left( \\frac{N_x(\\ensuremath{\\widehat{f}}_i,f_i,\\ell_i)}{N(f_i,\\ell_i)} \\right)\n\\label{eq:pi}\n\\end{equation}\n\nTo compute $N(f_i,\\ell_i)$ and $N_x(\\ensuremath{\\widehat{f}}_i,f_i,\\ell_i)$, we essentially need the histogram for $\\ensuremath{\\widehat{f}}$\ngiven $f$ and $\\ell$. Let $hist_{f,\\ell}[\\ensuremath{\\widehat{f}}]$ denote the number of ways to embed $f$ non-overlapped\noccurrences of a bigram in a document of length $\\ell$ in such a way that exactly $\\ensuremath{\\widehat{f}}$ of the $f$\noccurrences satisfy the span constraint $x$. We can obtain $N(f_i,\\ell_i)$ and\n$N_x(\\ensuremath{\\widehat{f}}_i,f_i,\\ell_i)$ from $hist_{f_i,\\ell_i}$ using\n\\begin{eqnarray}\nN_x(\\ensuremath{\\widehat{f}}_i,f_i,\\ell_i) &=& \\sum_{k=\\ensuremath{\\widehat{f}}_i}^{f_i} hist_{f_i,\\ell_i}[k] \\\\\nN(f_i,\\ell_i) &=&\\sum_{k=0}^{f_i} hist_{f_i,\\ell_i}[k]\n\\end{eqnarray}\n\n\\begin{algorithm}\n\\small\n\\caption{$ComputeHist(f,\\ell)$}\n\\label{algo-wf}\n\\begin{algorithmic}[1]\n\n\\REQUIRE $\\ell$ - length of document; $f$ - number of non-overlapped occurrences to be embedded; \n$x$ - span constraint for occurrences\n\n\\ENSURE $hist_{f,\\ell}[\\cdot]$ - histogram of $\\ensuremath{\\widehat{f}}$ when $f$ occurrences are embedded in a document\nof length $\\ell$\n\n\\STATE Initialize $hist_{f,\\ell}[\\ensuremath{\\widehat{f}}] \\leftarrow 0$ for $\\ensuremath{\\widehat{f}}=0,\\ldots,f$\n\n\\IF{$f>\\ell$}\n\t\\STATE return $hist_{f,\\ell}$\n\\ENDIF\n\n\\IF{$f=0$}\n\t\\STATE $hist_{f,\\ell}[0] \\leftarrow 1$;\n\t\\STATE return $hist_{f,\\ell}$\n\\ENDIF\n\n\\FOR{$i \\leftarrow 1$ to $(\\ell-1)$}\n\t\\FOR{$j \\leftarrow (i+1)$ to $\\ell$}\n\t\t\\STATE $hist_{f-1,\\ell-j} \\leftarrow ComputeHist(f-1, \\ell-j)$\n\t\t\\FOR{$k \\leftarrow 0$ to $f-1$}\n\t\t\t\\IF{$(j-i) < x$}\n\t\t\t\t\\STATE $hist_{f,\\ell}[k+1] \\leftarrow hist_{f,\\ell}[k+1] + hist_{f-1,\\ell-j}[k]$\n\t\t\t\\ELSE\n\t\t\t\t\\STATE $hist_{f,\\ell}[k] \\leftarrow hist_{f,\\ell}[k] + hist_{f-1,\\ell-j}[k]$\n\t\t\t\\ENDIF\n\t\t\\ENDFOR\n\t\\ENDFOR\n\\ENDFOR\n\\STATE return $hist_{f,\\ell}$\n\n\\end{algorithmic}\n\\end{algorithm}\n\n{\\em Algorithm~\\ref{algo-wf}} lists the pseudocode for computing the histogram $h_{f,\\ell}$. 
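\n\nA direct transcription of the procedure, with memoization added, may help make the computation concrete; the function names below are illustrative, and for the document lengths arising in practice the tables are meant to be populated off-line, as discussed next.\n\\begin{verbatim}\nfrom functools import lru_cache\n\ndef compute_hist(f, ell, x):\n    # hist[k] = number of ways to embed f non-overlapped occurrences\n    # in a document of length ell such that exactly k of them have\n    # span (j - i) < x.  Mirrors the recursive ComputeHist procedure.\n    @lru_cache(maxsize=None)\n    def rec(f, ell):\n        hist = [0] * (f + 1)\n        if f > ell:\n            return tuple(hist)\n        if f == 0:\n            hist[0] = 1\n            return tuple(hist)\n        for i in range(1, ell):              # start of first occurrence\n            for j in range(i + 1, ell + 1):  # end of first occurrence\n                sub = rec(f - 1, ell - j)\n                for k in range(f):\n                    if (j - i) < x:\n                        hist[k + 1] += sub[k]\n                    else:\n                        hist[k] += sub[k]\n        return tuple(hist)\n    return list(rec(f, ell))\n\ndef pi_x(fhat, f, ell, x):\n    # The ratio N_x(fhat, f, ell) \/ N(f, ell) defined above.\n    hist = compute_hist(f, ell, x)\n    total = sum(hist)\n    return sum(hist[fhat:]) \/ total if total else 0.0\n\\end{verbatim}\n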
\nThe algorithm enumerates all possible ways of embedding $f$ non-overlapped occurrences of a bigram in a document of length $\\ell$. The main steps involve selecting a start and end position for embedding the very first occurrence (lines 7-8) and then recursively calling $ComputeHist(\\cdot,\\cdot)$ (line 9). The $i$-loop selects a start position for the first occurrence of the bigram, and the $j$-loop selects the end position. The task in the recursion step is now to compute the number of ways to embed the remaining $(f-1)$ non-overlapped occurrences in the remaining $(\\ell-j)$ positions. Once we have $hist_{f-1,\\ell-j}$, we need to check whether the occurrence introduced at positions $(i,j)$ contributes to the $\\ensuremath{\\widehat{f}}$ count. If $(j-i) < x$, the $(i,j)$ occurrence contributes to the span-constrained frequency and we increment $hist_{f,\\ell}[k+1]$ by $hist_{f-1,\\ell-j}[k]$; if $(j-i) \\geq x$, there is no contribution to the span-constrained frequency from the $(i,j)$ occurrence, and so we increment $hist_{f,\\ell}[k]$ by the quantity $hist_{f-1,\\ell-j}[k]$ (lines 10-11, 13-14).\n\nThis algorithm is exponential in $f$ and $\\ell$, but it does not depend explicitly on the data. This allows us to populate the histogram off-line, and publish the $\\pi_x(\\ensuremath{\\widehat{f}},f,\\ell)$ tables for various $x$, $\\ensuremath{\\widehat{f}}$, $f$ and $\\ell$. (If the paper is accepted, we will make an interface to this table publicly available.)\n\n\n\\begin{comment}\nIn light of the above, $\\epsilon$ can either be chosen by the user based on the application needs,\nor it can be left unspecified, in which case, we can derive a sound, traditional\ntwo-parameter test involving just the span constraint $x$ and the corpus-level confidence $\\delta$:\n\\begin{definition}\n[Significant lexical co-occurrences] \nOccurrences of a bigram $\\alpha$ are regarded as\n{\\em significant lexical co-occurrences} with span less than $x$ and confidence $(1-\\delta)$\n(for some user-defined $x>0$, $\\delta>0$) if there exists an $\\epsilon$ ($0<\\epsilon<1$)\nsuch that we have $[Z \\geq E(Z) + Kt]$, where $t=\\sqrt{\\log{\\delta}\/ (-2K)}$.\nThe Co-occurrence Significance Ratio (CSR) can be defined as the maximum value attained by\n$[Z \/ (E(Z) + Kt)]$ as $\\epsilon$ is varied between 0 to 1.\n\\label{def:test2}\n\\end{definition}\nWe can also show that for a given co-occurrence pattern, only finitely many $\\epsilon$ values need\nto be tried. Due to space constraints, we omit the theoretical details of this variant of our\nsignificance test. In practice, we found that the test is robust to small fluctuations in $\\epsilon$\n(which is important since, otherwise, significance results may yield unusable, fragile results).\nThroughout the rest of the paper, we stick to the notion of significance as described in Definition~\\ref{def:test1}.\n\\end{comment}\n\n\n\n\n\\section{Utility of CSR test}\\label{sec:discussion}\n\n\n\nEvidence for significant lexical co-occurrences can be gathered at two levels in the data -- document level and corpus level. First, at the document level, we may find that a surprisingly high proportion of occurrences {\\em within} a document (of a pair of words) have smaller spans than they would by random chance. Second, at the corpus level, we may find a pair of words appearing closer-than-random in an unusually high number of documents in the corpus. The significance test of {\\em Definition~\\ref{def:test1}} is capable of gathering both kinds of evidence from data in carefully calibrated amounts. 
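\n\nTo make the decision rule of Definition~\\ref{def:test1} concrete, the corpus-level part of the test can be sketched as follows. The snippet is illustrative and reuses the $\\pi_x$ helper from the earlier sketch; the helper names are ours.\n\\begin{verbatim}\nimport math\n\ndef g_eps(f, ell, x, eps):\n    # Smallest span-constrained frequency fhat with\n    # pi_x(fhat, f, ell) < eps; f + 1 if no such fhat exists.\n    for fhat in range(f + 1):\n        if pi_x(fhat, f, ell, x) < eps:\n            return fhat\n    return f + 1\n\ndef csr_test(docs, x, eps, delta):\n    # docs: list of (fhat_i, f_i, ell_i), one tuple per document\n    # containing the bigram.  Returns (is_significant, CSR).\n    K = len(docs)\n    Z = sum(1 for fhat, f, ell in docs\n            if pi_x(fhat, f, ell, x) < eps)\n    EZ = sum(pi_x(g_eps(f, ell, x, eps), f, ell, x)\n             for _, f, ell in docs)\n    t = math.sqrt(math.log(1.0 \/ delta) \/ (2.0 * K))  # exp(-2Kt^2) = delta\n    threshold = EZ + K * t\n    return Z >= threshold, Z \/ threshold\n\\end{verbatim}\nFed with the per-document statistics behind the ({\\em canyon,\\ landscape}) example above, this routine would reproduce the threshold computation shown there (up to the rounding of $t$).\n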
\nPrescribing $\\epsilon$ essentially fixes the strength of the document-level hypothesis in our test. A small $\\epsilon$ corresponds to a strong document-level hypothesis and vice versa. The second parameter in our test, $\\delta$, controls the confidence of our decision given all the documents in the data corpus. A small $\\delta$ represents a high-confidence test (in the sense that a surprisingly large number of documents in the corpus individually show some evidence of relatedness for the pair of words). By running the significance test with different values of $\\epsilon$ and $\\delta$, we can detect different types of lexically significant co-occurrences. We illustrate the utility of our significance test by considering four types of lexically significant co-occurrences:\n\n{\\em Type A}: These correspond to the strongest lexical co-occurrences in the data, with strong document-level hypotheses (low $\\epsilon$) as well as high corpus-level confidence (low $\\delta$). Intuitively, if a pair of words appears close together several times within a document, and if this pattern is observed in a large number of documents, then the co-occurrence is of {\\em Type A}.\n\n\n{\\em Type B}: These are co-occurrences based on weak document-level hypotheses (high $\\epsilon$), but because of repeated observation in a substantial number of documents in the corpus, we can still detect them with high confidence (low $\\delta$). We expect many interesting lexical co-occurrences in text corpora to be of Type B: pairs of words that appear close to each other only a small number of times within a document, but that appear together in a large number of documents.\n\n\n\\begin{comment}\nTo detect\nType B co-occurrences, we need to run our significance test (cf.~{\\em Definition~\\ref{def:test1}})\nwith high $\\epsilon$ and low $\\delta$. This essentially amounts to {\\em relaxing} the document-level hypothesis\nof Type A (while keeping $\\delta$ at the same level). Thus, the test will\nreturn {\\em all} Type A co-occurrences, plus some more. To detect Type B co-occurrences, we simply\nremove those that also belong to Type A and return only those that uniquely correspond to a\nhigh $\\epsilon$ and low $\\delta$. In our experiments we classify all the unique co-occurrences\ndetected by the test for $\\epsilon \\geq 0.4$ and $\\delta \\leq 0.1$ as Type B.\n\\end{comment}\n\n{\\em Type C}: Sometimes we may be interested in words that are strongly correlated within a document, even if we observe the strong correlation only in a relatively small number of documents in the corpus. These correspond to Type C co-occurrences. Although they are statistically weaker inferences than those of Type A and Type B (since the confidence $(1-\\delta)$ is lower), Type C co-occurrences represent an important class of relationships between words. If the document corpus contains a very small number of documents on some topic, then strong co-occurrences (i.e.~those found with low $\\epsilon$) that are unique to that topic may not be detected at low values of $\\delta$. By relaxing the confidence parameter $\\delta$, we may be able to detect such occurrences (possibly at the cost of some extra false positives).\n\n\n{\\em Type D}: These co-occurrences represent the weakest correlations found in the data, since they neither employ a strong document-level hypothesis nor enforce a high corpus-level confidence. 
In most applications, we expect Type D co-occurrences to be of little use, with their best case\nutility being to provide a baseline for disambiguating Type C co-occurrences. \n\n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{|l|l|l|} \n \\hline\nType\t&\t$\\epsilon$\t&\t$\\delta$ \\\\ \\hline\nA\t\t&\t$\\leq 0.1$\t& $\\leq 0.1$ \\\\\nB\t\t&\t$\\geq 0.4$\t& $\\leq 0.1$ \\\\\nC\t\t&\t$\\leq 0.1$\t& $\\geq 0.4$ \\\\\nD\t\t&\t$\\geq 0.4$\t& $\\geq 0.4$ \\\\ \\hline\n\\end{tabular}\n\\caption{4 types of lexical co-occurrences.}\n\\label{tab:edpairs}\n\\end{table}\n\nIn the experiments we describe later, we fix the $\\epsilon$ and $\\delta$ for the different Types as\nper Table~\\ref{tab:edpairs}. Finally, we note that Types B and C subsume Type A; similarly, Type D\nsubsumes all three other types. Thus, to detect co-occurrences that are exclusively of (say) Type B,\nwe would have to run the test with a high $\\epsilon$ and low $\\delta$ and then remove from the\noutput, those co-occurrences that are also part of Type A.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe presence of charged quantum vacuum fluctuations induces self-interactions\nof the electromagnetic field~\\cite{Heisenberg:1936qt}. In particular, light\npassing through a strong external magnetic field is expected to\ntravel at reduced velocity compared to the propagation through plain\nvacuum~\\cite{Baier,Dittrich:2000zu}.\n\n\nAs we argue in the following, the combination of ground-based\ngravitational-wave interferometers and strong pulsed magnetic fields forms an\ninstrument which is sensitive enough to demonstrate nonlinearities in the\npropagation of light and thereby contribute to the research of\nstrong-field QED \\cite{Dobrich:2009}. At the same time, it facilitates a search for light particles beyond our\ncurrent standard model of particle physics.\n\n\n\\section{Alternative goals for gravitational-wave interferometers}\n\nIn order to detect gravitational-waves by means of interferometry, two evacuated tubes of equal length $L$ are installed orthogonally with respect to each other. The respective tubes have a mirror installed at their ends and thus form a cavity for a laser beam which is directed through both tubes by means of a beam splitter. An incoming gravitational-wave will \ninduce a relative change $\\Delta L(t)$ among the lengths of the two arms as a function of time. Alternatively, an \\textit{apparent} change of optical path length $L$ can be caused by applying an external magnetic field $B(t)$ over a distance $x$ in one of the interferometer arms, as the light traveling through the magnetic field region will propagate at reduced velocity. Using natural units $\\hbar=c=1$, this implies a so-called \\textit{strain} in the interferometer\n\\begin{equation}\nh(t) = \\frac{\\Delta L}{L}(t) = \\frac{x}{L} (1-v(t)) \\ , \\label{eq:strain}\n\\end{equation}\nas first suggested by \\cite{Boer:2002zw}, cf. also \\cite{Denisov_Zavattini}. \n\nSince the sensitivity of the interferometer to the strain $h(t)$ is limited by diverse sources of noise, the temporal variation of $h(t)$ should be adapted to the region of highest sensitivity. Generically, gravitational-wave interferometers are most sensitive to variations at frequencies of about $\\mathcal{O}(100\\mathrm{Hz})$. More precisely, the specific sensitivity of each interferometer can be read off its spectral noise density function $S_h(f)$, see e.g.~\\cite{Blair:1991wd}. 
In conclusion, for the detection of nonlinear light propagation with the help of gravitational-wave interferometers, one needs magnetic fields varying on the millisecond scale.\n\nIn fact, such pulsed fields are provided by several magnetic field laboratories around the world. Focussing on the ongoing research at the Dresden High-Magnetic-Field-Laboratory (HDL)~\\cite{wosnitza}, we consider the specifications of a technically feasible Helmholtz-coil setup with a coil diameter of $x=0.2\\mathrm{m}$. The need for a Helmholtz setup arises from the fact that no nonlinearities are induced for light traveling along the direction of the magnetic field lines. By contrast, for light traveling orthogonally to the magnetic field lines, the effect is maximized\\footnote{For this reason, the drop-off in field strength perpendicular to the field lines, which is generic for Helmholtz coils, must also be minimized.}, depending on the beam polarization.\n\nA feasible model for $N$ subsequent field pulses is a damped sinusoidal oscillation:\n\\begin{equation}\nB(t)=B_{0}\\sum_{i=0}^{N-1}\\theta(t-t_{i})\n\\sin(2 \\pi \\nu_{B}(t-t_{i}))\\exp(-\\gamma(t-t_{i}))\\ ,\n\\label{eq:model_pulse}\n\\end{equation} \nwith pulse frequency $\\nu_{B}$ and a damping constant $\\gamma$. For the following estimates, we assume $B_{\\mathrm{max}}=60\\mathrm{T}$ and $B_{\\mathrm{min}}=-6\\mathrm{T}$, which fixes the amplitude $B_{0}\\approx148\\mathrm{T}$ and relates the remaining parameters via $\\gamma=2 \\nu_{B} \\ln\\left|B_{\\mathrm{max}}\/B_{\\mathrm{min}}\\right|$.\n\nA meaningful measure for the visibility of the strain $h(t)$ is the signal-to-noise ratio (SNR) $d$. Its value quantifies the likelihood that the strain is induced by the external magnetic field rather than by random noise fluctuations. Applying a matched filter (or ``Wiener filter'')~\\cite{Blair:1991wd}, the square of the SNR is given by\n\\begin{equation}\nd^{2}=2\\int_{0}^{\\infty}\\frac{|\\tilde{h}(f)|^{2}}{S_{h}(f)}\\mathrm{d}f\\\n,\\quad \\tilde{h}(f)=\\int_{-\\infty}^{\\infty}h(t)e^{-2\\pi ift}\\mathrm{d}t,\n\\label{eq:SNR_def}\n\\end{equation}\nwhere $\\tilde{h}(f)$ is the Fourier transform of the induced strain. A lever arm for enhancing this observable is provided by the fact that the field-pulse setup is non-destructive, so that the pulse can be repeated after the magnet system has been re-cooled. Depending on the details of the setup, the re-cooling time of the magnet system is on the order of several minutes. To good accuracy, $N$ subsequent pulses can enhance the SNR by a factor of $\\sqrt{N}$:\n\\begin{equation}\n d^{2}|_{N}\\approx N\\ d^{2}|_{1}. \\label{eq:SNR_N}\n\\end{equation}\n\n\n\\section{Discovery potential at GEO600 and advanced LIGO}\n\n\n\\begin{figure}\n\\begin{minipage}{0.49 \\linewidth}\n\\includegraphics[scale=0.285]{doebrich_babette.fig1.eps}\n\\end{minipage}\n\\begin{minipage}{0.49 \\linewidth}\n\\includegraphics[scale=0.285]{doebrich_babette.fig2.eps} \n\\end{minipage}\n\\caption{The left-hand panel shows the discovery potential for spin-$\\frac{1}{2}$ minicharged particles (MCP), while the right-hand panel applies to axion-like particles (ALP). 
Already a single-pulse measurement at advanced LIGO can improve the best current laboratory bounds~\\cite{Zavattini:2005tm,Chou:2007zzc} in the respective coupling-mass planes.} \n\\label{fig:figure1}\n\\end{figure}\n\nWe start by computing the number of pulses required to achieve a total SNR of $\\mathcal{O}(1)$ for the strain induced by nonlinear QED. To maximize the effect, the laser beam should be polarized in parallel to the external magnetic field lines. The velocity shift then reads~\\cite{Baier,Dittrich:2000zu} $1-v=14B^{2}\\alpha^{2}\/(45 m^{4})$, where $\\alpha\\approx1\/137$ denotes the fine-structure constant and $m$ the electron mass. Together with the parameterization of the field pulse, see Eq.~\\eqref{eq:model_pulse}, the velocity shift can be translated into the SNR through Eqs.~\\eqref{eq:SNR_def} and~\\eqref{eq:strain}, while the number of required pulses $N$ enters through Eq.~\\eqref{eq:SNR_N}. We perform the calculation for the noise densities $S_h(f)$ of advanced LIGO~\\cite{ligocurves}, which has interferometer arms of length $L=4000\\mathrm{m}$, and GEO600~\\cite{geocurves}, where $L=600\\mathrm{m}$. By varying the SNR with respect to the pulse frequency $\\nu_B$, we find that for advanced LIGO $\\nu_B \\approx 47 \\mathrm{Hz}$ yields the largest SNR, while for GEO600 $\\nu_B\\approx 273 \\mathrm{Hz}$ is optimal. In terms of the number of required pulses, this would imply $N \\approx 2763$ at advanced LIGO, demanding continuous operation over a few days, which appears reasonable. (The operation time at GEO600, however, would be several years, since $N \\approx 2.3 \\times 10^6$ pulses would be needed for an SNR of $\\mathcal{O}(1)$ from the QED-induced strain.)\n\nIn analogy to the vacuum polarization induced by electron fluctuations, hypothetical particles with a weak coupling to photons can also induce a velocity shift in the interferometer~\\cite{Gies:2008wv}. In the following, we therefore deduce the accessible parameter space with respect to coupling and mass for axion-like particles (ALPs) and minicharged particles (MCPs).\n\nThe velocity shift induced by fluctuating MCPs~\\cite{Gies:2006ca,Ahlers:2006iz} with fractional charge $Q=\\epsilon e$ depends strongly on their mass $m_{\\epsilon}$. While for large masses the scaling is analogous to the electromagnetic situation, $(1-v)\\sim \\epsilon^4 B^2\/m_\\epsilon^4$, for low MCP masses the asymptotic limit reads $(1-v)\\sim-\\epsilon^{8\/3} B^{2\/3}\/\\omega^{4\/3}$, where $\\omega=1.2 \\mathrm{eV}$ is the laser frequency of the interferometers. We consider only MCP masses with a Compton wavelength smaller than the separation of the Helmholtz coils $\\sim\\mathcal{O}(1\\mathrm{cm})$, implying $m_{\\epsilon}\\gtrsim 2\\times10^{-5}\\mathrm{eV}$ (indeed, with $\\hbar c \\approx 197\\,\\mathrm{eV\\,nm}$, one has $\\hbar c\/(1\\,\\mathrm{cm}) \\approx 2\\times 10^{-5}\\,\\mathrm{eV}$). For smaller masses, the homogeneous-field assumption underlying the prediction for the velocity shift is no longer valid.\n\nUncharged scalar (S) and pseudo-scalar (P) ALPs couple to the $\\bot$ and the $\\parallel$ mode of the laser beam in the magnetic field, respectively. 
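\n\nFor illustration, the chain from the pulse model of Eq.~\\eqref{eq:model_pulse} to the SNR of Eq.~\\eqref{eq:SNR_def} for the QED-induced strain can be sketched as follows. The flat noise density used here is only a placeholder rather than the published advanced LIGO curve~\\cite{ligocurves}, so the snippet reproduces the structure of the estimate, not the quoted pulse numbers.\n\\begin{verbatim}\nimport numpy as np\n\nalpha, B_cr = 1.0 \/ 137.0, 4.4e9   # fine-structure constant; critical field in T\nx_B, L = 0.2, 4000.0               # field extent and arm length in m\nB0, nu_B = 148.0, 47.0             # pulse amplitude (T) and pulse frequency (Hz)\ngamma = 2.0 * nu_B * np.log(60.0 \/ 6.0)   # gamma = 2 nu_B ln|B_max \/ B_min|\n\nt = np.arange(0.0, 1.0, 1.0e-4)    # a single pulse in a 1 s window\nB = B0 * np.sin(2 * np.pi * nu_B * t) * np.exp(-gamma * t)\n# 14 alpha^2 B^2 \/ (45 m^4), rewritten via the critical field B_cr = m^2\/e:\none_minus_v = (14.0 \/ 45.0) * (alpha \/ (4.0 * np.pi)) * (B \/ B_cr) ** 2\nh = (x_B \/ L) * one_minus_v        # strain h(t) = (x\/L)(1 - v(t))\n\nf = np.fft.rfftfreq(t.size, d=1.0e-4)\nh_tilde = np.fft.rfft(h) * 1.0e-4  # approximates the Fourier transform of h(t)\nS_h = np.full_like(f, 1.6e-47)     # placeholder noise density in 1\/Hz\nd2_single = 2.0 * np.trapz(np.abs(h_tilde) ** 2 \/ S_h, f)\nN_required = 1.0 \/ d2_single       # pulses needed for an SNR of order one\n\\end{verbatim}\nWith the measured noise curves of advanced LIGO and GEO600 in place of the placeholder, this is the computation behind the pulse numbers quoted above.\n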
\nFor ALPs, the corresponding velocity shifts read~\\cite{Maiani:1986md} $1-v_\\parallel^{\\text{P}}=1-v_\\bot^{\\text{S}} =B^{2}g^{2}\/\\left[2 m_{\\phi}^{2}\\left(1-\\sin(2y)\/(2y) \\right)\\right]$, where $y=xm_{\\phi}^{2}\/(4\\omega)$ with ALP mass $m_{\\phi}$ and coupling $g$.\n\n\nAs displayed in Fig.~\\ref{fig:figure1}, already a single-pulse measurement at advanced LIGO can improve the best current laboratory bounds for MCPs~\\cite{Zavattini:2005tm,Chou:2007zzc} and ALPs~\\cite{Zavattini:2005tm,Ahlers:2006iz} in the upper mass ranges (comparable to the results for $\\mathcal{O}(10^3)$ pulses at GEO600). Taking $N=2763$ pulses at advanced LIGO, as needed for the QED effect, current laboratory bounds can be improved over almost the entire mass range.\n\n\n\n\n\\section{Conclusions}\n\nPulsed magnetic fields such as those provided by the Dresden High-Magnetic-Field-Laboratory can contribute to research in the strong-field domain of QED for two reasons. First, although they generically have a reduced field extent $x$ in comparison to dipole magnets, they can provide extremely high field strengths $B$. Since the velocity shifts induced by nonlinear QED, by ALPs, and by MCPs in the large-mass regime scale with $x B^2$, the reduced field extent can be well compensated for; see also~\\cite{battesti}. Second, their pulse frequency can be well matched to the region of highest sensitivity of gravitational-wave interferometers. For these reasons, combining strong pulsed magnetic fields with the interferometric techniques of modern gravitational-wave interferometers can give access to an unexplored parameter regime of strong-field QED and at the same time allow a search for particles of a hidden sector.\n\n\n\n\n\\section*{Acknowledgments}\nB.D. would like to thank the organizers of the 5th Patras Workshop in Durham for the opportunity to contribute to the workshop on the one hand and even more profit from it on the other. The authors acknowledge support from the DFG under GRK1523, SFB\/TR18, and Gi328\/5-1.\n \n\n\\begin{footnotesize}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}