{"text":"\\section{Introduction\\label{sec:intro}}\n\nMultiwavelength sky surveys provide a powerful way to study how galaxies and structure form in the early universe and subsequently evolve through cosmic time. These surveys grow both in area and depth with the combined efforts of large consortia and the advent of observational facilities delivering a significant increase in sensitivity.\nIn this context, the XXL Survey represents the largest XMM-Newton project to date (6.9 Ms; \\citealt{pierre16}, hereafter XXL Paper I). It encompasses two areas, each covering 25 square degrees with an X-ray (0.5-2 keV) point-source sensitivity of $\\sim5\\times10^{-15}{\\rm \\ erg\\, s^{-1}cm^{-2}}$. A wealth of multiwavelength data (X-ray to radio) is available in both fields. Photometric redshifts are computed for all sources, and\nover $15\\,000$ optical spectroscopic redshifts are already available. The main goals of the project are to constrain the dark energy equation of state using clusters of galaxies, and to provide long-lasting legacy data for studies of galaxy clusters and AGN (see XXL Paper I for an overview). \n\nIn the context of AGN and their cosmic evolution, the radio wavelength window offers an important complementary view to X-ray, optical, and infrared observations (e.g. \\citealt{padovani17}). Only via radio observations can AGN hosted by otherwise passive galaxies be revealed (\\citealt[e.g.][]{sadler07,smo09,smo17c}), presumably tracing a mode of radiatively inefficient accretion onto the central supermassive black holes, occurring at low Eddington ratios and through puffed-up, geometrically thick but optically thin accretion disks (see \\citealt{heckman14} for a review). Furthermore, radio continuum observations directly trace AGN deemed responsible for radio-mode AGN feedback, a key process in semi-analytic models that limits the formation of overly massive galaxies \\citep[e.g.][]{croton06,croton16}, a process that still needs to be verified observationally \\citep[e.g.][]{smo09a,smo17c,best14}. \n\nRadio continuum surveys, combined with multiwavelength data are necessary to study the properties of radio AGN at intermediate and high redshifts, their environments , and their cosmic evolution. Optimally, the sky area surveyed and the sensitivity reached are simultaneously maximised. In practice this is usually achieved through a 'wedding-cake approach' where deep, small area surveys are combined with larger area, but shallow surveys (see e.g. Fig.~1 in \\citealt{smo17}). The newly obtained radio continuum coverage of the XXL-North and -Sout fields (XXL-N and XXL-S respectively) is important in this respect, as it covers an area as large as 50 square degrees down to intermediate radio continuum sensitivities (rms~$\\sim40-200~\\mu$Jy\/beam), expected to predominantly probe radio AGN through cosmic time.\n\nThe XXL-S has been covered at 843 MHz by the Sydney University Molonglo Sky Survey (SUMSS) down to a sensitivity of 6 mJy\/beam \\citep{bock99}. To achieve a higher sensitivity, it was observed by the XXL consortium with the Australia Telescope Compact Array (ATCA) at 2.1 GHz frequency down to a $1\\sigma$ sensitivity of $\\sim40~\\mu$Jy\/beam (\\citealt{smolcic16}, hereafter XXL Paper XI; \\citealt{butler17}, hereafter XXL Paper XVIII). 
\n\nThe XXL-N has been covered by the 1.4 GHz (20 cm) NRAO VLA Sky Survey (NVSS; \\citealt{condon98}) and Faint Images of the Radio Sky at Twenty Centimeters (FIRST; \\citealt{becker95}) surveys down to sensitivities of 0.45 and 0.15 mJy\/beam, respectively. Subareas were also covered at 74, 240, 325 and 610 MHz within the XMM-LSS Project (12.66 deg$^2$; \\citealt{tasse06, tasse07}), and at 610 MHz and 1.4 GHz within the VVDS survey (1 deg$^2$; \\citealt{bondi03}). Here we present new GMRT 610 MHz data collected towards the remainder of the XXL-N field. We combine these data with the newly processed 610 MHz data from the XMM-LSS Project, and present a validated source catalogue extracted from the total area observed. This sets the basis necessary for exploring the physical properties, environments and cosmic evolution of radio AGN in the XXL-N field (see Horellou et al., XXL Paper XXXIV, in prep.). Combined with the ATCA-XXL-S 2.1 GHz survey (XXL Papers XI and XVIII) it provides a unique radio data set that will allow studies of radio AGN and of their cosmic evolution over the full 50 square degree XXL area. An area of this size is particularly sensitive to probing the rare, intermediate- to high-radio-luminosity AGN at various cosmic epochs \\citep[e.g.][]{willott01,sadler07,smo09a,donoso09,pracy16}.\n\n\nThe paper is outlined as follows. In \\s{sec:data} \\ we describe the observations, data reduction, and imaging. The mosaicing procedure and source catalogue extraction are presented in \\s{sec:mosaic} , and \\s{sec:catalog} , respectively. We test the reliability and completeness of the catalogue in \\s{sec:tests} , and summarise our results in \\s{sec:summary} . The radio spectral index, $\\alpha$, is defined via $S_\\nu\\propto\\nu^\\alpha$, where $S_\\nu$ is the flux density at frequency $\\nu$. \n\n\n\\section{Observations, data reduction and imaging}\n\\label{sec:data}\n\nWe describe the GMRT 610 MHz observations towards the XXL-N field, and briefly outline the data reduction and imaging performed on this data set using the Source Peeling and Atmospheric Modeling ({\\sc SPAM}) pipeline\\footnote{\\url{http:\/\/www.intema.nl\/doku.php?id=huibintemaspam}}.\n\n\\subsection{Observations}\n\\label{sec:obs}\n\nObservations with the GMRT at 610 MHz were conducted towards an area of 12.66 deg$^2$ within the XXL-N field (which includes the XMM-LSS area), using a hexagonal grid of 36 pointings (see \\citealt{tasse07} for details). A total of $\\sim18$ hours of observations were taken in the period from July to August 2004 \\citep{tasse07}\\footnote{Project ID 06HRA01}. The full available bandwidth of 32 MHz was used. The band was split into two intermediate frequencies (IFs), each sampled by\n128 channels, and covering the ranges of 594-610 MHz and 610-626 MHz, respectively. The source 3C 48 was observed for 30 minutes at the beginning and end of an observing run for flux and bandpass calibration, while the source 0116-208 was observed for 8 min every 30 min as the secondary calibrator. To optimise the $uv$-coverage, \\citet{tasse07} split the\n30 min observation of each individual pointing into three 10-minute\nscans, separated by about 1.3 hours. 
\n\nThe remaining areas of the XXL-N field, not previously covered at 610 MHz, were observed with the GMRT through Cycles 23\\footnote{Project ID 23$\\_$022; 30 hours allocated in the period of October 2012 - March 2013}, \n24\\footnote{Project ID 24$\\_$043; 45 hours allocated in the period of April 2013 - September 2013}, \n27\\footnote{Project ID 27$\\_$009; 70 hours allocated in the period of October 2014 - March 2015}, and \n30\\footnote{Project ID 30$\\_$005; 29 hours allocated in the period of April - September 2016} for a total of 174 hours, in a combination of rectangular and hexagonal pointing patterns. The observations were conducted under good weather conditions. Using the GMRT software backend (GSB), a total bandwidth of 32 MHz was covered, at a central frequency of 608~MHz, and sampled by 256 channels in total. To maximise the $uv$-coverage, individual pointing scans were spaced out and iterated throughout the observing run whenever possible.\nPrimary calibrators (3C~48, 3C~147, 3C~286) were observed for an on-source integration time of $10-15$ minutes at the beginning and end of each observing run. Secondary calibrators were also observed multiple times during the observations. We note, however, that phase\/amplitude calibration was not performed using the secondary calibrators, but via self-calibration against background models, which has been shown to improve the final output (see \\citealt{intema16} for further details; see also below).\n\n\n\n\\subsection{Data reduction and imaging}\n\\label{sec:spam}\n\nThe data reduction and imaging were performed using the {\\sc SPAM} pipeline, described in detail by \\citet[see also \\citealt{intema09, intema14}]{intema16}. The pipeline includes direction-dependent calibration, modelling, and imaging, mainly to correct for the ionospheric dispersive delay. It consists of two parts. In the first (pre-processing) step, the raw data from individual observing runs are calibrated using good-quality instrumental calibration solutions obtained per observing run, and then converted into visibility data sets for each observed pointing. Flagging, gain, and bandpass calibrations are performed in an iterative process, applying increasingly strict radio frequency interference flagging to optimise the calibration results. \nIn the second step, the main pipeline converts the individual pointing visibility data sets into Stokes I continuum images, performing several steps of direction-independent and direction-dependent calibration, self-calibration, flagging and image construction. Imaging is performed via a single CLEAN deconvolution, automatically setting boxes around sources, and cleaning down to 3 times the central background noise. We refer to \\citet{intema16} for further details about the pipeline.\n\nThe {\\sc SPAM} pipeline successfully processed all XXL-N GMRT 610 MHz observing runs.\nA visual verification of the image quality found satisfactory results for every pointing.\n\n\\section{Mosaicing}\n\\label{sec:mosaic}\n\nIn this section we describe the astrometric corrections (\\s{sec:astrocorr} ) and flux density corrections (i.e., primary beam and pointing; \\s{sec:fluxcorr} ) performed prior to combining the individual pointings into the final mosaic (\\s{sec:finalmosaic} ). 
We constrain and\/or verify these corrections using compact sources (total signal-to-noise ratio > 10) \nin overlapping pointings, lying within the inner part of each individual pointing, and extracted with the {\\sc PyBDSF}\\footnote{\\url{http:\/\/www.astron.nl\/citt\/pybdsf\/}} software \\citep{mohan15}, in the same way as described in \\s{sec:pybdsm} .\n A sample assembled in this way assures that the errors on individual flux density\/position measurements by Gaussian fitting are minimised so that the noise-independent calibration errors can be determined.\n\n\\subsection{Astrometric corrections}\n\\label{sec:astrocorr}\n\n\nTo account for possible residual systematic astrometric shifts across individual pointings, initially caused by the ionosphere and not fully accounted for by the direction-dependent calibration, the source positions in each pointing were matched to those in the FIRST survey catalogue (\\citealt{becker95}; see also Sect.~\\ref{sec:mosaicastrom}), and the systematics corrected accordingly. In total there were 1\\,286 sources used for this comparison.\n\nIn \\f{fig:astr} \\ we show the resulting relative positional offsets of bright, compact sources in overlapping pointings. We find a $1\\sigma$ scatter of $0.67\\arcsec$ in RA and Dec, with a median offset of $0.02\\arcsec$, and $-0.03\\arcsec$ in RA and Dec, respectively, affirming that systematics have been corrected for.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{XXL_relative_astrometry_v2.pdf}\n\\protect\\protect\\caption{\nPositional offsets of bright, compact sources in overlapping pointings. The median offsets in RA and Dec are indicated by the vertical and horizontal lines, respectively, while the dotted circle represents the $1\\sigma$ deviation (also indicated at the top).\n\\label{fig:astr}\n}\n\\end{figure}\n\n\\subsection{Flux density corrections}\n\\label{sec:fluxcorr}\n\nTo correct the individual pointing maps for primary beam attenuation, \n we adopt a standard, parameterised axisymmetric model, with coefficients given in the GMRT Observer Manual. Given the small fractional bandwidth covered (5.25\\%), we use the central frequency (610~MHz) beam model for all frequency channels.\nIn \\f{fig:pb} \\ we show the ratio of the flux densities of the sources in overlapping pointings, but at various distances from the phase centres, not corrected for primary beam attenuation (hereafter apparent flux densities) as a function of the ratio of the primary beam model attenuations\nfor the same sample of compact sources in overlapping pointings. For a perfect primary beam attenuation model the ratio of the apparent flux densities should be in one-to-one correspondence with the given ratio of the primary beam model attenuations. From \\f{fig:pb} \\ it is apparent that this is the case within a few percent on average, thus verifying the primary beam attenuation model used. \n\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{XXL_primary_beam.pdf}\n\\protect\\protect\\caption{\nRatio of apparent flux density (i.e., not corrected for primary beam attenuation; dots) vs. ratio of the primary beam model attenuations for a sample of bright, compact sources in overlapping pointings. The median and $\\pm1\\sigma$ offsets are indicated by the full and dashed lines, respectively. The one-to-one line is represented by the dotted line. \\label{fig:pb}\n}\n\\end{figure}\n\n\nFollowing \\citet{intema16} we also quantify and apply antenna pointing corrections. 
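As a minimal, purely illustrative sketch of the position cross-matching used above (and again for the mosaic-level check described later), the following Python snippet matches compact GMRT sources to the FIRST catalogue and measures the median offsets and their scatter. It is not the script actually used for the data processing; the input coordinate arrays and the use of astropy are assumptions made for the example.

\begin{verbatim}
# Illustrative sketch only: cross-match GMRT positions against FIRST and
# measure median RA/Dec offsets and their scatter (coordinates in degrees).
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def astrometry_check(gmrt_ra, gmrt_dec, first_ra, first_dec, radius_arcsec=5.0):
    gmrt = SkyCoord(ra=gmrt_ra * u.deg, dec=gmrt_dec * u.deg)
    first = SkyCoord(ra=first_ra * u.deg, dec=first_dec * u.deg)
    idx, sep2d, _ = gmrt.match_to_catalog_sky(first)   # nearest neighbours
    ok = sep2d < radius_arcsec * u.arcsec               # accept close matches
    # On-sky offsets (GMRT minus FIRST), including the cos(Dec) factor in RA.
    dra, ddec = first[idx[ok]].spherical_offsets_to(gmrt[ok])
    dra = dra.to(u.arcsec).value
    ddec = ddec.to(u.arcsec).value
    return {"n_matches": int(ok.sum()),
            "median_dra": np.median(dra), "median_ddec": np.median(ddec),
            "sigma_dra": np.std(dra), "sigma_ddec": np.std(ddec)}
\end{verbatim}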
In the top panel of \\f{fig:ptcorr} \\ we show the ratio of flux densities of our sources in overlapping pointings as a function of the local azimuth of the source position in the first pointing. We account for the deviation from unity (changing sign at about 170 degrees), and in the bottom panel of \\f{fig:ptcorr} \\ we show the corrected flux density ratios, now consistent with unity. \nWe note that the $1\\sigma$ scatter of the flux density ratios is $\\sim20\\%$. As shown in \\f{fig:fluxratio} \\ this value remains constant as a function of flux density. As the sources used for this analysis have been drawn from overlapping parts of various pointings, i.e. from areas further away from the pointing phase centres, where the $\\rm{rms}$ noise is higher and the primary beam corrections less certain (see \\f{fig:pb} ), this value should be taken as an upper limit on the relative uncertainty of the source flux densities. \n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{XXL_without_pointing.pdf}\n\\includegraphics[width=\\columnwidth]{XXL_with_pointing.pdf}\n\\protect\\protect\\caption{\nRatio of flux densities (dots) of bright, compact sources in overlapping pointings as a function of local azimuth of the source position in the first pointing before (top) and after (bottom) applying the pointing correction (see text for details). In both panels the median and $\\pm1\\sigma$ deviations are indicated by the full, and dashed lines, respectively. \\label{fig:ptcorr}\n}\n\n\\end{figure}\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{XXL_internal_flux_ratios.pdf}\n\\protect\\protect\\caption{\nRatio of flux densities (dots) of bright, compact sources in overlapping pointings as a function of their mean apparent flux density (i.e. uncorrected for primary beam attenuation). The median and $\\pm1\\sigma$ is indicated by the full and dotted lines, respectively. \\label{fig:fluxratio}\n}\n\\end{figure}\n\n\n\\subsection{Final mosaic}\n\\label{sec:finalmosaic}\n\nAfter applying the per-pointing astrometric and flux density corrections described above to the individual pointings, each pointing (including its clean component and residual maps) is convolved to a common circular resolution of FWHM $6.5\\arcsec\\times6.5\\arcsec$, prior to mosaicing. This corresponds to a clean beam size larger than or equal to that intrinsically retrieved for the majority of the imaged pointings. The maps are then regridded to $1.9\\arcsec\\times1.9\\arcsec$ pixels, and combined into a mosaic in such a way that each pixel is weighted as the inverse square of the local rms, estimated using a circular sliding box with a 91-pixel diameter, chosen as a trade-off between minimising false detections at sharp boundaries between high noise and low noise, and separating extended emission regions from high rms regions.\nThe final mosaic, shown in \\f{fig:mosaic} , containing $16,177\\times11,493$ pixels, encompasses a total area of $30.4$ square degrees. As can be seen in \\f{fig:mosaic} , the rms within the mosaic is highly non-uniform: It decreases from about 200~$\\mu$Jy\/beam within the XMM-LSS subregion, to about 50~$\\mu$Jy\/beam in the remaining area. Although a factor of 3.8 difference in sensitivity is significant, we note that the data processing applied here achieved a background noise reduction of 50\\% in the XMM-LSS area relative to the previous data release ($rms\\sim300~\\mu$Jy\/beam; \\citealt{tasse07}). 
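The pixel-level combination described above can be summarised in a short sketch. This is a simplified illustration only: it assumes that the pointing images have already been convolved to the common beam and regridded onto the mosaic grid, and that a local rms map is available for each pointing (in practice estimated with the sliding box mentioned above).

\begin{verbatim}
# Illustrative sketch only: inverse-variance (1/rms^2) co-addition of
# regridded pointing images into a mosaic; pixels outside a pointing are NaN.
import numpy as np

def coadd(images, rms_maps):
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, rms in zip(images, rms_maps):
        w = 1.0 / rms**2                       # weight = inverse variance
        good = np.isfinite(img) & np.isfinite(w)
        num[good] += w[good] * img[good]
        den[good] += w[good]
    safe_den = np.where(den > 0, den, 1.0)
    mosaic = np.where(den > 0, num / safe_den, np.nan)
    mosaic_rms = np.where(den > 0, 1.0 / np.sqrt(safe_den), np.nan)
    return mosaic, mosaic_rms
\end{verbatim}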
For comparison, in \\f{fig:lssxxlmap} \\ we show the image of the central part of the XMM-LSS area of the XXL-N field obtained here, and within the previous data release \\citep{tasse07}. \n\nThe overall areal coverage of the GMRT-XXL-N mosaic as a function of rms is shown in \\f{fig:visibility} . For 60\\% of the total 30.4 square degree area an rms better than 150~$\\mu$Jy\/beam is achieved (with a median value of about 50~$\\mu$Jy\/beam). For the remainder (corresponding to the XMM-LSS subarea of the XXL-N field) a median rms of about 200~$\\mu$Jy\/beam is achieved. \n \n\\begin{figure*}[ht!]\n\\centering\n\\includegraphics[clip, trim=0.cm 7.5cm 0.cm 7.5cm, scale=0.85]{XXL-N_GMRT610_14x10}\n\\protect\\protect\\caption{\nGrayscale mosaic with overlayed pixel flux distributions within small areas, encompassed by the panel (the local rms in $\\mu$Jy\/beam is indicated in each panel). The x and y ranges for all the panels are indicated in the bottom left panel. \\label{fig:mosaic}\n}\n\\end{figure*}\n\n\\begin{figure*}\\includegraphics[clip, trim=0.cm 0cm 0.cm 0.cm]{XXL_XMMLSS_center.pdf}\n\\protect\\protect\\caption{\nMosaiced images of the XMM-LSS area of the XXL-N field obtained by the procedure presented here (left panel) and previously published (right panel; \\citealt{tasse07}), using the same colour scale.\n\\label{fig:lssxxlmap}\n}\n\\end{figure*}\n\n\\begin{figure}\n\\includegraphics[clip, trim=0.cm 3cm 0.cm 4.1cm, scale=0.6]{visibility.pdf}\n\\protect\\protect\\caption{\nAreal coverage as a function of the rms noise in the mosaic.\n\\label{fig:visibility}\n}\n\\end{figure}\n\n\\section{Cataloguing\\label{sec:catalog}}\n\nWe describe the source extraction (\\s{sec:pybdsm} ), corrections performed to account for bandwidth smearing, and the measurement of the flux densities of resolved and unresolved sources (\\s{sec:res} ). We also describe the process of combining multiple detections, physically associated with single radio sources (\\s{sec:multi} ), and present the final catalogue (\\s{sec:finalcat} ). \n\n\\subsection{Source extraction}\n\\label{sec:pybdsm}\n\n\n\nTo extract sources from our mosaic we used the {\\sc PyBDSF} software \\citep{mohan15}. We set {\\sc PyBDSF} to search for islands of pixels with flux density values greater than or equal to three times the local rms noise (i.e $\\geq3\\sigma$) surrounding peaks above $5\\sigma$.\nTo estimate the local rms a box of 195 pixels per side was used, leading to a good trade-off between detecting real objects and limiting false detections (see \\s{sec:falsdet} ). Once islands are located, {\\sc PyBDSF} fits Gaussian components, and their flux density is estimated after deconvolution of the clean beam. These components are grouped into single sources when necessary, and final source flux densities are reported, as well as flags indicating whether multiple Gaussian fits were performed. The procedure resulted in 5\\,434 sources with signal-to-noise ratios $\\geq 7$. \n\n\n\\subsection{Resolved and unresolved sources and smearing}\n\\label{sec:res}\n\nTo estimate smearing due to bandwidth- and time-averaging, and to separate resolved from unresolved sources we follow the standard procedure, which relies on a comparison of the sources' total and peak flux densities \\citep[e.g.,][]{bondi08,intema14,smo17}. In \\f{fig:stotspeak} \\ we show the ratio of the total and peak flux densities ($S_\\mathrm{T}$ and $S_\\mathrm{P}$, respectively) for the 5\\,434 sources as a function of the signal-to-noise ratio ($\\mathrm{S\/N}\\geq7$). 
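For completeness, a minimal {\sc PyBDSF} call reflecting the extraction parameters quoted in the source-extraction subsection above might look as follows. The file names and the rms-box step size are placeholders, and the options actually used for the published catalogue may differ in detail.

\begin{verbatim}
# Illustrative sketch only: PyBDSF run with a 3-sigma island threshold,
# 5-sigma peak threshold, and a 195-pixel local-rms box (step size assumed).
import bdsf

img = bdsf.process_image("xxl_n_610MHz_mosaic.fits",   # placeholder file name
                         thresh_isl=3.0,               # islands >= 3 x local rms
                         thresh_pix=5.0,               # peaks   >= 5 x local rms
                         rms_box=(195, 65))            # box size, step (assumed)

# Write the Gaussian components grouped into sources as a FITS table.
img.write_catalog(outfile="xxl_n_610MHz_cat.fits", format="fits",
                  catalog_type="srl", clobber=True)
\end{verbatim}

The S/N~$\geq7$ selection described above would then be applied to the resulting source list.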
While the increasing spread of the points with decreasing S\/N ratio reflects the noise properties of the mosaic, smearing effects will be visible as a systematic, positive offset from the $S_\\mathrm{T}\/S_\\mathrm{P}=1$ line as they decrease the peak flux densities, while conserving the total flux densities. To quantify the smearing effect we fit a Gaussian to the logarithm of the $S_\\mathrm{T}\/S_\\mathrm{P}$ distribution obtained by mirroring the lower part of the distribution over its mode (to minimise the impact of truly resolved sources). We infer a mean value of 6\\%, and estimate an uncertainty of 1\\% based on a range of binning and S\/N ratio selections. After correcting the peak flux densities for this effect we fit a lower envelope encompassing 95\\% of the sources below the $S_\\mathrm{T}\/S_\\mathrm{P}=1$ line, and mirror it above this line. Lastly, we consider all sources below this curve, of the form \n\\begin{equation}\n\\label{eq:curve}\nS_\\mathrm{T}\/S_\\mathrm{P}=1 + 3.2 \\times (S\/N)^{-0.9},\n\\end{equation}\nas unresolved and set their total flux densities equal to their peak flux densities (corrected for the smearing effects). We note that the four extreme outliers ($S_\\mathrm{T}\/S_\\mathrm{P}<0.5$) are due to blending issues, a locally high rms, or being possibly spurious.\n\n\\begin{figure}\n\\includegraphics[clip, trim=0.cm 7cm 0.cm 7cm, width=\\columnwidth]{StotSpeak.pdf}\n\\protect\\protect\\caption{\nTotal ($S_\\mathrm{T}$) over peak ($S_\\mathrm{P}$) flux as a function of signal-to-noise ratio. The horizontal dashed line shows the $S_\\mathrm{T}\/S_\\mathrm{P}=1$ line. The upper curve was obtained by mirroring the lower curve (which encompasses 95\\% of the sources below the $S_\\mathrm{T}\/S_\\mathrm{P}=1$ line; see Eq.~\\ref{eq:curve}) over the $S_\\mathrm{T}\/S_\\mathrm{P}=1$ line. All sources below the upper curve are considered unresolved, while those above are considered resolved (see text for details).\n\\label{fig:stotspeak}\n}\n\\end{figure}\n\n\\subsection{Complex sources}\n\\label{sec:multi}\n\nRadio sources appear in many shapes, and it is possible that sources with complex radio morphologies (e.g. core, jet, and lobe structures that may be warped or bent) are listed as multiple sources within the source extraction procedure. To identify these sources we adopt the procedure outlined in \\citet{tasse07}. We identify groups of components within a radius of $60\\arcsec$ from each other. Based on the source density in the inner part of the field, the Poisson probability is 0.22 $\\%$ that two components are associated by chance. For the outer part of the field, an additional flux limit of $S_\\mathrm{610MHz}>1.4$~mJy is imposed on the components prior to identifying the groups they belong to. This flux limit is justified by the size-flux relation for radio sources \\citep[e.g.][]{bondi03}, i.e. brighter sources are larger in size and are thus more likely to break into multiple components, and it also assures a Poisson probability of 0.39 $\\%$ for a chance association. All of these groups were visually checked a posteriori, and verified against an independent visual classification of multicomponent sources performed by six independent viewers. 
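The smearing correction and the envelope criterion of Eq. (1) above can be applied to the extracted flux densities as in the following sketch. This is illustrative only: the input arrays are assumed, and the 6\% factor is the mean smearing offset quoted above.

\begin{verbatim}
# Illustrative sketch only: correct peak flux densities for the mean 6%
# smearing offset and separate resolved from unresolved sources using the
# envelope of Eq. (1).
import numpy as np

SMEAR = 1.06   # mean S_T/S_P offset attributed to bandwidth/time smearing

def classify(s_total, s_peak, rms):
    s_peak_corr = s_peak * SMEAR              # smearing-corrected peak flux
    snr = s_peak_corr / rms
    envelope = 1.0 + 3.2 * snr**-0.9          # Eq. (1)
    resolved = (s_total / s_peak_corr) > envelope
    # Unresolved sources: total flux set equal to the corrected peak flux.
    s_total_out = np.where(resolved, s_total, s_peak_corr)
    return resolved, s_total_out
\end{verbatim}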
In total we identify 768 sources belonging to 337 distinct groups of multiple radio detections likely belonging to a single radio source (see also next section).\n\n\n\n\n\\subsection{Final catalogue}\n\\label{sec:finalcat}\n\nIn our final catalog for each source we report the following:\n\\begin{itemize}\n\\item[ ]{Column 1:} Source ID;\\vspace{2mm}\n\n\\item[ ]{Columns 2-5:} RA and Dec position (J2000) and error on the position as provided by {\\sc PyBDSF};\\vspace{2mm}\n\n\\item[ ]{Column 6:} Peak flux density ($S_\\mathrm{P}$) in units of Jy\/beam, corrected for smearing effects as detailed in \\s{sec:res} ;\\vspace{2mm}\n\n\\item[ ]{Column 7:} Local rms value in units of Jy\/beam; \\vspace{2mm}\n\n\\item[ ]{Column 8:} S\/N ratio;\\vspace{2mm}\n\n\\item[ ]{Column 9 - 10:} Total flux density ($S_\\mathrm{T}$) and its error, in units of Jy;\\vspace{2mm}\n\n\\item[ ]{Column 11:} Flag for resolved sources; 1 if resolved, 0 otherwise. We note that the total flux density for unresolved sources equals the smearing corrected peak flux and the corresponding error lists rms scaled by the correction factor (see \\s{sec:res} \\ for details); \\vspace{2mm}\n\n\\item[ ]{Column 12:} Complex source identifier, i.e., ID of the group obtained by automatic classification of \n multicomponent sources (see \\s{sec:multi} \\ for details); if 0 no group associated with the source, otherwise the integer corresponds to the group ID;\\vspace{2mm}\n \n\\item[ ]{Column 13:} Spectral index derived using the 610 MHz and 1.4 GHz (NVSS) flux densities, where available (-99.99 otherwise); \\vspace{2mm}\n \n\\item[ ]{Column 14:} Area flag; $0$ if the source is in the inner mosaic area (within the XMM-LSS field and with higher rms), $1$ if it is in the outer field area (with better rms; see \\s{sec:finalmosaic} \\ and \\f{fig:mosaic} \\ for details);\\vspace{2mm}\n\n\\item[ ]{Column 15:} Edge flag; 0 if the source is on the edge where the noise is high, $1$ otherwise. We note that selecting this flag to be 1 in the inner (outer) area, i.e. for area flag 0 (1) selects sources within areas of 7.7 (12.66) square degrees, while the area corresponds to 9.63 square degrees for the inner area (area flag 0) and an edge flag of 0 or 1. \\vspace{2mm}\n\\end{itemize}\nThe full catalogue is available as a queryable database table\n(XXL$\\_$GMRT$\\_$17) via the XXL Master Catalogue browser\\footnote{\\url{http:\/\/cosmosdb.iasf-milano.inaf.it\/XXL}}. A copy will also be deposited at the\nCentre de Donn\\'ees astronomiques de Strasbourg (CDS)\\footnote{\\url{http:\/\/cdsweb.ustrasbg.fr}}.\n\n\\section{Reliability and completeness}\n\\label{sec:tests}\n\nIn this section we assess the astrometric accuracy and false detection rate within the catalogue presented above (\\s{sec:mosaicastrom} \\ and \\s{sec:falsdet} , respectively). We compare the source flux densities to those derived from the previous XMM-LSS GMRT 610 MHz data release (\\s{sub:Flux-comparison} ). 
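As a short, purely illustrative example of how the column layout and flags described in the final-catalogue subsection might be used, the sketch below selects sources in the outer, lower-rms area away from the noisy edges and with a measured spectral index. The column names are hypothetical; the released table may use different headers.

\begin{verbatim}
# Illustrative sketch only: read the released catalogue and apply the flags
# described above (column names are hypothetical).
from astropy.table import Table

cat = Table.read("XXL_GMRT_17.fits")
# Outer, lower-rms area (area flag = 1), away from the noisy edges (edge flag = 1).
clean = cat[(cat["area_flag"] == 1) & (cat["edge_flag"] == 1)]
# Sources with a 610 MHz - 1.4 GHz spectral index (-99.99 means no NVSS match).
with_alpha = clean[clean["alpha"] > -99.0]
print(len(cat), len(clean), len(with_alpha))
\end{verbatim}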
We then derive average 610 MHz -- 1.4 GHz spectral indices for unbiased subsamples of bright sources (\\s{sec:alpha} ), and use these to construct 1.4~GHz radio source counts, which we compare to counts from much deeper radio continuum surveys and thereby assess and quantify the incompleteness of our survey as a function of flux density (\\s{sec:counts} ).\n\n\\subsection{Astrometric accuracy}\n\\label{sec:mosaicastrom}\n\nTo assess the astrometric accuracy of the presented mosaic and catalogue, we compared the positions of our compact sources with those given in the FIRST survey. Using a search radius of $5\\arcsec$ we found 1\\,286 matches. The overall median offset is (as expected) small ($\\Delta\\mathrm{RA}=-0.01\\arcsec$, $\\Delta\\mathrm{Dec}=0.01\\arcsec$), and the 1$\\sigma$ position scatter radius is $0.62\\arcsec$. This is similar to the internal positional accuracy seen in pointing overlap regions, and corresponds to roughly one-tenth of the angular resolution in the mosaic ($6.5\\arcsec$).\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{XXL_FIRST_astrometry_v2.pdf}\n\\protect\\protect\\caption{\nPositional offsets of sources detected in the XXL mosaic, relative to those catalogued within the FIRST survey. The median offsets in RA and Dec are indicated by the vertical and horizontal lines, respectively, while the dotted circle represents the $1\\sigma$ deviation (also indicated at the top).\n\\label{fig:astromfinal}\n}\n\\end{figure}\n\n\n\\subsection{False detection rate}\n\\label{sec:falsdet}\n\nTo assess the false detection rate, i.e. estimate the number of spurious sources in our catalogue, we ran {\\sc PyBDSF} on the inverted (i.e. multiplied by $-1$) mosaic. Each 'detection' in the inverted mosaic can be considered spurious as no real sources exist in the negative part of the mosaic. Running {\\sc PyBDSF} with the same set-up as for the catalogue presented in \\s{sec:catalog} \\ , we found only one detection with S\/N ratio $\\geq7$ in the inverted mosaic. Since there are 5\\,434 detections with $\\mathrm{S\/N}\\geq7$ in the real mosaic, false detections are not significant ($<10^{-3}$ of the source population).\n\n\n\\subsection{Flux comparison within the XMM-LSS area}\n\\label{sub:Flux-comparison}\n\nWe compared the flux densities for our sources extracted within the XMM-LSS area of the XXL-N field with those extracted in the same way, but over the XMM-LSS mosaic published by \\citet{tasse07}. Using a search radius of $5\\arcsec$ we find 924 sources common to the two catalogues. In Fig.~\\ref{fig:lssxxlflux} we compare the {\\sc PyBDSF}-derived flux densities (prior to corrections for bandwidth smearing, and total flux densities). Overall, we find good agreement, with an average offset of 2.4\\%. This can be due to the use of different flux standards (\\citealt{perley13} and \\citealt{scaife12}). We note that the comparison remains unchanged if flux densities from the \\citet{tasse07} catalogue are used instead.\n\n\n\n\\begin{figure}\n\\includegraphics[clip, trim=0.cm 7cm 0.cm 7cm, width=\\columnwidth]{GMRT_TASSE_GRAF.pdf}\n\\includegraphics[clip, trim=0.cm 7cm 0.cm 7cm, width=\\columnwidth]{GMRT_TASSE_HISTOGRAM.pdf}\n\\protect\\protect\\caption{\nComparison within the XMM-LSS area of the XXL-N field between the flux densities obtained here (x-axis) and those extracted in the same way, but over the mosaic published by \\citet[][y-axis; top panel]{tasse07}. The solid line is the diagonal. 
The distribution of the flux ratio, with a fitted Gaussian, is shown in the bottom panel. \n\\label{fig:lssxxlflux}\n}\n\\end{figure}\n\n\n\n\\subsection{Spectral indices}\n\\label{sec:alpha}\n\nWe derived the 610~MHz -- 1.4 GHz spectral indices for the sources in our catalogue that are also detected in the NVSS survey \\citep{condon98}. We found 1\\,395 associations (470 within the XMM-LSS subarea) using a search radius of $20\\arcsec$, which corresponds to about half of the NVSS synthesised beam (FWHM~$\\sim45\\arcsec$), and assures minimal false matches. In Fig.~\\ref{fig:alpha} we show the derived spectral indices as a function of 610~MHz flux density, separately for the XMM-LSS and the outer XXL areas (given the very different rms reached in the two areas). The different source detection limits of the NVSS and GMRT-XXL-N surveys would bias the derived spectral indices if taken at face value. Thus, to construct unbiased samples we defined flux density cuts of \n$S_\\mathrm{610MHz}\\geq20$~mJy and $\\geq2$~mJy for the XMM-LSS and outer XXL areas, respectively. As illustrated in Fig.~\\ref{fig:alpha}, these cuts conservatively assure samples with unbiased $\\alpha$ values. For samples defined in this way we find average spectral indices of $-0.65$ and $-0.75$ with standard deviations of 0.36 and 0.34, respectively. These values correspond to the spectral indices typically observed for radio sources at these flux levels \\citep[e.g.,][]{condon92, kimball08}, and they are consistent with those inferred by \\citet{tasse07} based on the previous (XMM-LSS) data release.\n\n\\begin{figure}\n\\includegraphics[clip, trim=0.cm 0.cm 0.cm 1.5cm, width=1\\columnwidth]{Plot_Final_MINUS} \n\\protect\\protect\\caption{\nSpectral index based on 610 MHz and 1.4 GHz (NVSS) data ($\\alpha$) as a function of 610~MHz flux density, separately shown for the XMM-LSS (top panel) and outer XXL-N (bottom panel) areas. The dashed line in both panels indicates the constraint on $\\alpha$ placed by the NVSS detection limit (2.5~mJy), and the vertical full line indicates the threshold beyond which the sample is not expected to be biased by the different detection limits. The horizontal line indicates the average spectral index for sources with flux densities above this threshold.\n\\label{fig:alpha}\n}\n\\end{figure}\n\n\\subsection{Source counts and survey incompleteness}\n\\label{sec:counts}\n\nIn \\f{fig:counts} \\ we show the Euclidean-normalised source counts at 1.4~GHz derived for our GMRT-XXL-N survey, separately for the inner (XMM-LSS) and outer XXL-N areas. We have chosen a reference frequency of 1.4~GHz for easier comparison with the counts from the literature, which reach deeper than our data and are based on both observations and simulations \\citep{condon84,wilman08, smo17}. The 610 MHz flux densities were converted to 1.4~GHz using a spectral index of $\\alpha=-0.7$, as derived in the previous section. The area considered for deriving the GMRT-XXL-N counts within the inner (XMM-LSS) area (with an rms of $\\sim200~\\mu$Jy\/beam) was 9.6 square degrees (green symbols in \\f{fig:counts} ). Two areas were considered for the outer XXL-N region: the full area of 20.75 square degrees (also containing the noisy edges; see \\f{fig:mosaic} ) and an area of 12.66 square degrees, which excludes the noisy edges and is characterised by a fairly uniform rms of $\\sim45~\\mu$Jy\/beam (black and yellow symbols, respectively, in \\f{fig:counts} ). 
We find that the counts within these 12.66 square degrees are $40-60$\\% higher than those derived using the full outer area (below 0.4 mJy). This suggests that including the noisy edges (as would be expected) further contributes to survey incompleteness; hereafter we thus only consider the 12.66 square degree area for the outer GMRT-XXL-N field.\n\nIn \\f{fig:counts} \\ we also show the 1.4~GHz counts derived by \\citet{condon84}, \\citet{wilman08} and \\citet{smo17}. \\citet{condon84} developed a model for the source counts using the \nlocal 1.4 GHz luminosity function for two dominant, spiral and elliptical galaxy populations combined with source counts, redshift, and spectral-index distributions for various 400 MHz to 5 GHz\nflux limited samples. The Square Kilometre Array Design Study (SKADS) simulations of the radio source counts were based on evolved luminosity functions of various radio populations, also accounting for large-scale clustering \\citep{wilman08}. Lastly, the counts taken from \\citet{smo17} were constructed using the VLA-COSMOS 3~GHz Large Project, to date the deepest ($\\mathrm{rms}\\sim2.3~\\mu$Jy\/beam) radio survey of a relatively large field (2 square degrees). Their 1.4 GHz counts were derived using the average spectral index ($\\alpha=-0.7$) inferred for their 3 GHz sources. We note that the various counts from the literature are consistent down to the flux level reached by our GMRT-XXL-N data ($\\sim0.15~$mJy), and that they are in good agreement with our GMRT counts down to $\\sim2$~mJy, implying that our survey is complete at 1.4 GHz flux densities $>2$~mJy. This corresponds to 610 MHz flux densities of $>4$~mJy, beyond which we take the detection fractions and completeness to equal unity (see below).\n\nThe decline at 1.4 GHz (610 MHz) flux densitites $\\lesssim2$~mJy ($\\lesssim4$~mJy) of the derived GMRT-XXL-N counts, compared to the counts from the literature (see \\f{fig:counts} ) , can be attributed to the survey incompleteness. In radio continuum surveys this is due to a combination of effects, such as real sources remaining undetected due to their peak brightnesses falling below the detection threshold given the noise variations across the field, or source extendedness, and the flux densities of those detected being over- or underestimated due to these noise variations. Commonly, such survey incompleteness is accounted for by statistical corrections (as a function of flux density) taking all these combined effects into account \\citep[e.g.][]{bondi08,smo17}. The approach we take here makes use of \n the availability of radio continuum surveys that reach much deeper than our GMRT-XXL-N 610 MHz survey. In particular, for the inner (XMM-LSS; 9.6 square degrees, $\\mathrm{rms}\\sim200~\\mu$Jy\/beam) and outer (12.66 square degrees, $\\mathrm{rms}\\sim45~\\mu$Jy\/beam) GMRT-XXL-N areas, we derive the measurements relative to the VLA-COSMOS 3 GHz source counts. The $\\sim2.3~\\mu$Jy\/beam depth of the VLA-COSMOS 3 GHz Large Project means these counts are 100\\% complete in the flux regime encompassed by the GMRT-XXL-N survey (see \\citealt{smo17} for details). \n \n The GMRT-XXL-N survey differential incompleteness measurements (i.e. detection fractions as a function of flux density) are computed as the ratio of the GMRT-XXL-N and VLA-COSMOS source counts, are shown in \\f{fig:compl} , and listed in \\t{tab:compl} \\ separately for the inner and outer areas of the mosaic. 
\n The listed corrections, combined with the 'edge flag' given in the catalogue (see \\s{sec:finalcat} ), can be used to statistically correct the source counts for incompleteness, which is important, e.g., when deriving luminosity functions. \n In \\f{fig:compl} \\ we also show the total completeness of the survey within the given areas, i.e. the fraction of sources detected above the lower limit of the given flux bin; in \\f{fig:compl2} \\ we show the same, but for the full 30.4 square degree mosaic. We note that we consider the survey complete beyond $S_\\mathrm{610MHz}=4.6$~mJy, below which the detection fraction decreases. The overall completeness for the full area (30.4 square degree survey) reaches 50\\% at $S_\\mathrm{610MHz}\\approx400~\\mu$Jy (i.e. about $250~\\mu$Jy in the outer XXL area and about $900~\\mu$Jy in the inner XXL area). Furthermore, comparing these detection fractions with those derived relative to the SKADS simulation, we estimate a possible uncertainty of the derived values of the order of 10\\%.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[clip, trim=0.8cm 9cm 0.cm 8cm, width=1.56\\columnwidth]{counts_xxl.pdf}\n\\protect\\protect\\caption{\nEuclidean-normalised source counts at 1.4~GHz for various surveys (symbols) and simulations (lines), as indicated in the panel. The 610 MHz flux densities have been translated to 1.4 GHz using a spectral index of $\\alpha=-0.7$, as derived in \\s{sec:alpha} . The rapid decline of the GMRT-XXL-N counts (at $S_\\mathrm{1.4GHz}<2$~mJy) is due to survey incompleteness (see text for details). \n\\label{fig:counts}\n}\n\\end{center}\n\\end{figure*}\n\n\n\\begin{figure}\n\\includegraphics[clip, trim=0.cm 8.5cm 0.cm 9cm, width=1.06\\columnwidth]{complet_xxl_lss.pdf}\n\\includegraphics[clip, trim=0.cm 8.5cm 0.cm 9cm, width=1.06\\columnwidth]{complet_xxl_outer_cutedges.pdf}\n\\protect\\protect\\caption{\nDetection fraction as a function of 610~MHz flux density relative to the VLA-COSMOS 3~GHz Large Project (dashed line) for the inner (9.6 square degrees, coincident with the XMM-LSS field; left panel) and outer (12.66 square degrees) areas. Also shown is the completeness, i.e. the fraction of sources detected above the lower limit of the given flux bin, as a function of flux density (full line). \n\\label{fig:compl}\n}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[clip, trim=0.cm 8.5cm 0.cm 9cm, width=1.06\\columnwidth]{complet_xxl_entire.pdf}\n\\protect\\protect\\caption{\nSame as \\f{fig:compl} \\ but for the full 30.4 square degree observed GMRT-XXL-N area. \n\\label{fig:compl2}\n}\n\\end{figure}\n\n\n\\begin{table}\n\\begin{center}\n\\caption{Differential incompleteness measures, i.e. the detection fraction as a function of 610~MHz flux density for the GMRT-XXL-N 610~MHz survey relative to the VLA-COSMOS 3 GHz Large Project source counts, separated into two areas (inner and outer field) with different $1\\sigma$ sensitivity limits. The estimated uncertainty of the detection fractions computed in this way is $\\sim10\\%$. 
Beyond the flux densities given here, we consider the survey complete (see text for details).}\n\\renewcommand{\\arraystretch}{1.5}\n\\begin{tabular}[t]{c c | c c}\n\\hline\ninner area & (9.6 deg$^2$) & outer area & (12.66 deg$^2$) \\\\\n \\hline\n $S_\\mathrm{610MHz}$ & detection & $S_\\mathrm{610MHz}$ & detection \\\\\n $[$mJy$]$ & fraction & $[$mJy$]$ & fraction\\\\\n \\hline\n 1.92 & 0.45 & 0.39 & 0.34 \\\\\n 2.96 & 0.65 & 0.51 & 0.75 \\\\\n 4.58 & 0.94 & 0.67 & 0.80 \\\\\n & & 0.87 & 0.87 \\\\\n & & 1.24 & 0.87 \\\\\n & & 1.92 & 0.92 \\\\ \n & & 2.96 & 0.96 \\\\\n & & 4.58 & 0.95 \\\\\n \\hline\n\\end{tabular}\n\\label{tab:compl}\n\\end{center}\n\\end{table}\n\n\\section{Summary and conclusions\\label{sec:summary}}\n\nBased on a total of 192 hours of observations with the GMRT towards the XXL-N field we have presented the GMRT-XXL-N 610 MHz radio continuum survey. Our final mosaic encompasses a total area of 30.4 square degrees with a non-uniform rms, being $\\sim200~\\mu$Jy\/beam in the inner area (9.6 square degrees) within the XMM-LSS field, and $\\sim45~\\mu$Jy\/beam in the outer area (12.66 square degrees). We have presented a catalogue of 5\\,434 radio sources with S\/N ratios down to $7\\times$\\,rms. Of these, 768 have been identified as components of 337 larger sources with complex radio morphologies, and flagged in the final catalogue. The astrometry, flux accuracy, false detection rate and completeness of the survey have been assessed and constrained. \n\nThe derived 1.4 GHz radio source counts reach down to flux densities of $\\sim150~\\mu$Jy (corresponding to $\\sim290~\\mu$Jy at 610~MHz frequency assuming a spectral index of $\\alpha=-0.7$, which is consistent with the average value derived for our sources at the bright end). Past studies have shown that the radio source population at these flux densities is dominated by AGN, rather than star forming galaxies \\citep[e.g.][]{wilman08,padovani15,smo08,smo17}. This makes the GMRT-XXL-N 610 MHz radio continuum survey, in combination with the XXL panchromatic data a valuable probe for studying the physical properties, environments, and cosmic evolution of radio AGN.\n\n\\section*{Acknowledgements}\n\nThis research was funded by the European Union's Seventh Framework programme under grant agreements 333654 (CIG, 'AGN feedback'). V.S., M.N. and J.D. acknowledge support from the European Union's Seventh Framework programme under grant agreement 337595 (ERC Starting Grant, 'CoSMass'). W.L.W. acknowledges support from the UK Science and Technology Facilities Council [ST\/M001008\/1]. We thank the staff of the GMRT who made these observations possible. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. The Saclay group acknowledges long-term support from the Centre National d'Etudes Spatiales (CNES).\nXXL is an international project based on an XMM Very Large \\foreignlanguage{british}{P}rogramme\nsurveying two 25~square degree extragalactic fields at a depth of $\\sim5\\times10^{-15}$~erg~cm$^{-2}$~s$^{-1}$\nin the {[}0.5-2{]}~keV band for point-like sources. The XXL website\nis \\url{http:\/\/irfu.cea.fr\/xxl}. Multiband information and spectroscopic\nfollow-up of the X-ray sources are obtained through a number of survey\nprogrammes, summarised at \\url{http:\/\/xxlmultiwave.pbworks.com\/}.\n\n \\bibliographystyle{aa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section{Introduction}\nLet $q>p$ be two integers. 
We use $[p,q]$ to denote the set $\\{p,p+1,\\cdots,q\\}$,\nand simply write $[1,q]$ as $[q]$.\nGraphs considered in this paper are simple, and digraphs in consideration are those which are orientations of simple graphs.\n Let $m\\ge 1$ be an integer and $G$ be a graph with $m$ edges. We use $V(G),E(G),\\Delta(G)$ and $\\delta(G)$ to denote the vertex set, the edge set, the maximum degree and the minimum degree of $G$, respectively. An \\emph{antimagic labeling} of $G$ is a bijection $\\tau$ from $E(G)$ to $[m]$ such that for any two distinct vertices $u$ and $v$ in $G$, the sum of labels on the edges incident with $u$ differs from that of $v$. A graph is said to be \\emph{antimagic} if it admits an antimagic labeling. The concept of antimagic labeling was proposed by Hartsfield and Ringel in 1990 \\cite{HR1990}. In the same paper, they conjectured that every connected graph other than $K_2$ is antimagic and every tree other than $K_2$ is antimagic. This topic was investigated by many researchers, for instance, see \\cite{CLPZ2016,LZ2014, YHYWW2018}.\n One of the best known results for trees was due to Kaplan, Lev, and Roditty \\cite{KLR2009}, who proved that any tree having more than two vertices and at most one vertex of degree two\nis antimagic (see also \\cite{LWZ2014}).\nLozano, Mora and Seara \\cite{LMS2019} proved that caterpillars are antimagic, where a \\emph{caterpillar} is a tree of order at least 3 such that the removal of its leaves produces a path. For related results the readers are referred to the\nsurvey of Gallian \\cite{G2017}.\n\nIn 2010, Hefetz, M\\\"{u}tze and Schwartz introduced a variation of antimagic labeling, i.e., antimagic labeling of a digraph \\cite{HMS2010}. Let $D$ be a digraph. We use $V(D)$ and $A(D)$ to denote the set of vertices and the set of arcs of $D$, respectively. Let $X,Y\\subseteq V(D)$ be two subsets, we use $A(X,Y)$ to denote the set of arcs with tail in $X$ and head in $Y$. The notation $A(X,\\overline{X})$ is also denoted by $\\partial(X)$. Let $|A(D)|=m$. An \\emph{antimagic labeling} of $D$ is a bijection $\\tau$ from $A(D)$ to $[m]$ such that no two vertices receive the same vertex-sum, where the \\emph{vertex-sum} of a vertex $u\\in V(D)$ is the sum of labels of all arcs entering $u$ minus the sum of labels of all arcs leaving $u$. We use $s_D(u)$ to denote the vertex-sum of the vertex $u\\in V(D)$,\nand simply write $s(u)$ if $D$ is understood.\n We say $(D,\\tau)$ is an \\emph{antimagic orientation} of a graph $G$ if $D$ is an orientation of $G$ and $\\tau$ is an antimagic labeling of $D$. Hefetz, M\\\"{u}tze and Schwartz~\\cite{HMS2010} proposed the following conjecture.\n\n\\begin{conj} \\label{orientation-conj} Every connected graph admits an antimagic orientation.\n\\end{conj}\n\n\nIn the same paper, Hefetz, M\\\"{u}tze and Schwartz proved that Conjecture \\ref{orientation-conj} is affirmative for some classes of\ngraphs, such as stars, wheels, cliques, and dense graphs (graphs of order $n$ with minimum degree at least $C\\log n$ for an absolute\nconstant $C$); in fact, the authors proved a stronger result that every orientation of these graphs is antimagic. Recently, Conjecture \\ref{orientation-conj} has been verified for regular graphs \\cite{HMS2010,LSWYZ2019,Y2019,SH2019}, biregular bipartite graphs with minimum degree at least two \\cite{SY2017}, Halin graphs \\cite{YCZ2019}, and graphs with large maximum degree \\cite{YCOP2019}. Trees are widely investigated for graph labeling problems. 
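The definitions above are easy to check mechanically. The following small sketch (purely illustrative, not part of any proof) computes the vertex-sums of a labelled orientation and tests whether they are pairwise distinct.

\begin{verbatim}
# Check the antimagic property of a labelled digraph D: the vertex-sum of u is
# the sum of labels on arcs entering u minus the sum on arcs leaving u, and an
# antimagic labeling must give pairwise distinct vertex-sums.
from collections import defaultdict

def is_antimagic_labeling(arcs, labels):
    # arcs: list of (tail, head); labels[i] is the label of arcs[i];
    # labels is assumed to be a bijection onto {1, ..., |arcs|}.
    assert sorted(labels) == list(range(1, len(arcs) + 1))
    s = defaultdict(int)
    for (tail, head), lab in zip(arcs, labels):
        s[head] += lab    # label counts positively at the head
        s[tail] -= lab    # and negatively at the tail
    sums = list(s.values())
    return len(sums) == len(set(sums))

# Example: the path v0 v1 v2 oriented v0 -> v1 <- v2 with labels 1 and 2 has
# vertex-sums -1, 3, -2, so this is an antimagic orientation of that path.
print(is_antimagic_labeling([("v0", "v1"), ("v2", "v1")], [1, 2]))   # True
\end{verbatim}

(Isolated vertices do not occur in the connected graphs considered here and are not handled by this sketch.)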
For antimagic orientations, Conjecture \\ref{orientation-conj} has been proved for caterpillars \\cite{L2018} and complete $k$-ary trees \\cite{SH2019tree}.\n\nIt is easy to observe that all antimagic\nbipartite graphs admit an antimagic orientation where all edges are oriented\nin the same direction between the partite sets. Therefore, all subclasses\nof trees that are known to be antimagic admit antimagic orientations.\nA \\emph{lobster} is a tree such that the removal of its leaves produces a caterpillar.\nIt is still unknown whether every\nlobster is antimagic.\nIn this paper, we prove that Conjecture \\ref{orientation-conj} holds for every lobster.\n\n\n\n\\begin{thm}\\label{Thm} Every lobster admits an antimagic orientation.\n\\end{thm}\n\n\n\\section{Preliminary Lemmas}\n\nIn this section, we prove two lemmas that will be used in proving Theorem~\\ref{Thm}.\n\n\n\\begin{lem}\\label{lemma3} Every path with at least two edges is antimagic and every path admits an antimagic orientation.\n\\end{lem}\n\n\\begin{proof}\n\tSince $K_2$ admits an antimagic orientation, and every antimagic bipartite graph admits an antimagic orientation, it suffices to prove the first part of the statement.\n\tLet $m\\ge 2$ be an integer and $P=v_0v_1\\cdots v_m$ be a path. We define a labeling of $P$ as follows.\n\t\n\tIf $m$ is even, starting from the edge incident to $v_0$\n\tconsecutively assign labels\n\t$$\n\t1,2,3, \\cdots, m-1, m.\n\t$$\n\t\n\tIf $m$ is odd,\n\tstarting from the edge incident to $v_0$\n\tconsecutively assign labels\n\t$$\n\t1,2,3, \\cdots, m-2, m, m-1.\n\t$$\n\t\n\tIt is clear that the assignment is an antimagic labeling of $P$: the two end vertices receive the sums $1$ and $m$ (or $m-1$ when $m$ is odd), each internal vertex receives the sum of the labels on its two incident edges, and these sums are pairwise distinct.\n\\end{proof}\n\n\nLet $P$ be a path and $u,v\\in V(P)$. We denote by $P[u,v]$ the subpath of $P$ with one end $u$ and the other end $v$. 
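The labelings used in the proof of the preceding lemma can likewise be verified mechanically; the following sketch (illustrative only) rebuilds them and checks that the resulting vertex sums are pairwise distinct.

\begin{verbatim}
# Rebuild the path labeling from the proof above and check that the vertex
# sums (the single label at an end vertex; the sum of the two incident labels
# at an internal vertex) are pairwise distinct.
def path_labels(m):
    if m % 2 == 0:
        return list(range(1, m + 1))              # 1, 2, ..., m
    return list(range(1, m - 1)) + [m, m - 1]     # 1, ..., m-2, m, m-1

def is_antimagic_path_labeling(labels):
    sums = ([labels[0]]
            + [labels[i] + labels[i + 1] for i in range(len(labels) - 1)]
            + [labels[-1]])
    return len(sums) == len(set(sums))

print(all(is_antimagic_path_labeling(path_labels(m)) for m in range(2, 200)))  # True
\end{verbatim}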
The following lemma will be used for giving an antimagic orientation for the central path of a lobster, where vertices in $U$ are corresponding to the vertices of degree at least three in the lobster.\n\n\\begin{lem}\\label{lemma1} Let $m\\ge 2$ be an integer, $P=v_0v_1\\cdots v_m$ be a path, and $U=\\{v_{h_1}, v_{h_2},\\cdots,v_{h_t}\\}\\subseteq V(P)$ with $|U|\\ge 1$, where $h_1] (v\\i) edge (v\\j);\n\t\t}\n\t\t\n\t\t\\foreach \\i in {10,11}\n\t\t{\t\n\t\t\t\\pgfmathint{\\i + 1}\n\t\t\t\\edef\\j{\\pgfmathresult}\n\t\t\t\\draw[<-] (v\\i) edge (v\\j);\n\t\t}\n\t\t\n\t\t\\node at (-1,-0.5) {$v_{0}$};\n\t\t\\node at (0,-0.5) {$v_{1}$};\n\t\t\\node at (2,-0.5) {$v_{h_1}$};\n\t\t\\node at (3,-0.5) {$v_{h_2}$};\n\t\t\\node at (6,-0.5) {$v_{h_3}$};\n\t\t\\node at (8,-0.5) {$v_{h_4}$};\n\t\t\\node at (9,-0.5) {$v_{h_5}$};\n\t\t\\node at (10,-0.5) {$v_{h_6}$};\n\t\t\\node at (12,-0.5) {$v_{13}$};\n\t\t\\path[draw,thick,black!60!green]\n\t\t\n\t\t(v-1) edge node[name=la,pos=0.5, above] {\\color{blue} $13$} (v0)\n\t\t(v0) edge node[name=la,pos=0.5, above] {\\color{blue} $1$} (v1)\n\t\t(v1) edge node[name=la,pos=0.5, above] {\\color{blue}$11$} (v2)\n\t\t(v2) edge node[name=la,pos=0.5, above] {\\color{blue} $10$} (v3)\n\t\t(v3) edge node[name=la,pos=0.5, above] {\\color{blue} $8$} (v4)\n\t\t(v4) edge node[name=la,pos=0.5, above] {\\color{blue} $3$} (v5)\n\t\t(v5) edge node[name=la,pos=0.5, above] {\\color{blue} $9$} (v6)\n\t\t(v6) edge node[name=la,pos=0.5, above] {\\color{blue} $4$} (v7)\n\t\t(v7) edge node[name=la,pos=0.5, above] {\\color{blue} $7$} (v8)\n\t\t(v8) edge node[name=la,pos=0.5, above] {\\color{blue} $6$} (v9)\n\t\t(v9) edge node[name=la,pos=0.5, above] {\\color{blue} $5$} (v10)\n\t\t(v10) edge node[name=la,pos=0.5, above] {\\color{blue} $2$} (v11)\n\t\t(v11) edge node[name=la,pos=0.5, above] {\\color{blue} $12$} (v12);\n\t\t\n\t\t\\node at (6,-1.5) {Labeling of $\\overrightarrow{P}$ when $\\ell$ is even and $s=1$};\n\t\t\n\t\t\\end{tikzpicture}\n\t\t\n\t\t\\smallskip\n\t\t\n\t\t\\begin{tikzpicture}\n\t\t\\foreach \\i in {-1,0,...,12}\n\t\t{\n\t\t\t\n\t\t\t\\node[draw, circle,fill=white,minimum size=6pt, inner sep=0pt]\n\t\t\t(v\\i) at (\\i,0) {};\n\t\t\t\n\t\t}\n\n\\node[draw, circle,fill=black,minimum size=6pt, inner sep=0pt] at (3,0) {};\t\n\\node[draw, circle,fill=black,minimum size=6pt, inner sep=0pt] at (4,0) {};\t\n\\node[draw, circle,fill=black,minimum size=6pt, inner sep=0pt] at (6,0) {};\n\\node[draw, circle,fill=black,minimum size=6pt, inner sep=0pt] at (7,0) {};\n\\node[draw, circle,fill=black,minimum size=6pt, inner sep=0pt] at (10,0) {};\n\n\n\t\t\n\t\t\\foreach \\i in {-1,0,...,8,9}\n\t\t{\t\n\t\t\t\\pgfmathint{\\i + 1}\n\t\t\t\\edef\\j{\\pgfmathresult}\n\t\t\t\\draw[->] (v\\i) edge (v\\j);\n\t\t}\n\t\t\n\t\t\n\t\t\\foreach \\i in {10,11}\n\t\t{\t\n\t\t\t\\pgfmathint{\\i + 1}\n\t\t\t\\edef\\j{\\pgfmathresult}\n\t\t\t\\draw[<-] (v\\i) edge (v\\j);\n\t\t}\n\t\t\n\t\t\\node at (-1,-0.5) {$v_{0}$};\n\t\t\\node at (0,-0.5) {$v_{1}$};\n\t\t\\node at (4,-0.5) {$v_{h_2}$};\n\t\t\\node at (3,-0.5) {$v_{h_1}$};\n\t\t\\node at (6,-0.5) {$v_{h_3}$};\n\t\t\\node at (7,-0.5) {$v_{h_4}$};\n\t\n\t\t\\node at (10,-0.5) {$v_{h_5}$};\n\t\t\\node at (12,-0.5) {$v_{13}$};\n\t\t\\path[draw,thick,black!60!green]\n\t\t\n\t\t(v-1) edge node[name=la,pos=0.5, above] {\\color{blue} $13$} (v0)\n\t\t(v0) edge node[name=la,pos=0.5, above] {\\color{blue} $1$} (v1)\n\t\t(v1) edge node[name=la,pos=0.5, above] {\\color{blue}$11$} (v2)\n\t\t(v2) edge node[name=la,pos=0.5, above] {\\color{blue} $10$} 
(v3)\n\t\t(v3) edge node[name=la,pos=0.5, above] {\\color{blue} $9$} (v4)\n\t\t(v4) edge node[name=la,pos=0.5, above] {\\color{blue} $3$} (v5)\n\t\t(v5) edge node[name=la,pos=0.5, above] {\\color{blue} $8$} (v6)\n\t\t(v6) edge node[name=la,pos=0.5, above] {\\color{blue} $7$} (v7)\n\t\t(v7) edge node[name=la,pos=0.5, above] {\\color{blue} $6$} (v8)\n\t\t(v8) edge node[name=la,pos=0.5, above] {\\color{blue} $4$} (v9)\n\t\t(v9) edge node[name=la,pos=0.5, above] {\\color{blue} $5$} (v10)\n\t\t(v10) edge node[name=la,pos=0.5, above] {\\color{blue} $2$} (v11)\n\t\t(v11) edge node[name=la,pos=0.5, above] {\\color{blue} $12$} (v12);\n\t\t\n\t\t\\node at (6,-1.5) {Labeling of $\\overrightarrow{P}$ when $\\ell$ is even and $s=2$};\n\t\t\n\t\t\\end{tikzpicture}\n\t\t\n\t\t\\smallskip\n\t\t\n\t\t\\begin{tikzpicture}\n\t\t\\foreach \\i in {0,...,13}\n\t\t{\t\t\n\t\t\t\\node[draw, circle,fill=white,minimum size=6pt, inner sep=0pt]\n\t\t\t(v\\i) at (\\i,0) {};\n\t\t\t\n\t\t}\n\n\\node[draw, circle,fill=black,minimum size=6pt, inner sep=0pt] at (3,0) {};\t\n\\node[draw, circle,fill=black,minimum size=6pt, inner sep=0pt] at (5,0) {};\t\n\\node[draw, circle,fill=black,minimum size=6pt, inner sep=0pt] at (6,0) {};\n\\node[draw, circle,fill=black,minimum size=6pt, inner sep=0pt] at (7,0) {};\n\\node[draw, circle,fill=black,minimum size=6pt, inner sep=0pt] at (10,0) {};\n\n\t\t\n\t\t\\foreach \\i in {0,...,9}\n\t\t{\t\n\t\t\t\\pgfmathint{\\i + 1}\n\t\t\t\\edef\\j{\\pgfmathresult}\n\t\t\t\\draw[->] (v\\i) edge (v\\j);\n\t\t}\n\t\t\n\t\t\n\t\t\\foreach \\i in {10,11,12}\n\t\t{\t\n\t\t\t\\pgfmathint{\\i + 1}\n\t\t\t\\edef\\j{\\pgfmathresult}\n\t\t\t\\draw[<- ] (v\\i) edge (v\\j);\n\t\t}\n\t\t\n\t\t\\node at (0,-0.5) {$v_{0}$};\n\t\t\\node at (3,-0.5) {$v_{h_1}$};\n\t\t\\node at (5,-0.5) {$v_{h_2}$};\n\t\t\\node at (6,-0.5) {$v_{h_3}$};\n\t\t\\node at (7,-0.5) {$v_{h_4}$};\n\t\n\t\t\\node at (10,-0.5) {$v_{h_5}$};\n\t\t\n\t\t\\node at (13,-0.5) {$v_{13}$};\n\t\t\\path[draw,thick,black!60!green]\n\t\t\n\t\t(v0) edge node[name=la,pos=0.5, above] {\\color{blue} $13$} (v1)\n\t\t(v1) edge node[name=la,pos=0.5, above] {\\color{blue}$1$} (v2)\n\t\t(v2) edge node[name=la,pos=0.5, above] {\\color{blue} $11$} (v3)\n\t\t(v3) edge node[name=la,pos=0.5, above] {\\color{blue} $3$} (v4)\n\t\t(v4) edge node[name=la,pos=0.5, above] {\\color{blue} $9$} (v5)\n\t\t(v5) edge node[name=la,pos=0.5, above] {\\color{blue} $8$} (v6)\n\t\t(v6) edge node[name=la,pos=0.5, above] {\\color{blue} $7$} (v7)\n\t\t(v7) edge node[name=la,pos=0.5, above] {\\color{blue} $5$} (v8)\n\t\t(v8) edge node[name=la,pos=0.5, above] {\\color{blue} $4$} (v9)\n\t\t(v9) edge node[name=la,pos=0.5, above] {\\color{blue} $6$} (v10)\n\t\t(v10) edge node[name=la,pos=0.5, above] {\\color{blue} $10$} (v11)\n\t\t(v11) edge node[name=la,pos=0.5, above] {\\color{blue} $2$} (v12)\n\t\t(v12) edge node[name=la,pos=0.5, above] {\\color{blue} $12$} (v13);\n\t\t\n\t\t\\node at (7,-1.5) {Labeling of $\\overrightarrow{P}$ when $\\ell$ is odd and $s=0$};\n\t\t\\end{tikzpicture}\n\t\t\n\t\t\n\t\t\\smallskip\n\t\t\n\t\t\\begin{tikzpicture}\n\t\t\\foreach \\i in {0,...,13}\n\t\t{\t\t\t\n\t\t\t\\node[draw, circle,fill=white,minimum size=6pt, inner sep=0pt]\n\t\t\t(v\\i) at (\\i,0) {};\t\t\t\n\t\t}\n\n\\node[draw, circle,fill=black,minimum size=6pt, inner sep=0pt] at (4,0) {};\t\n\\node[draw, circle,fill=black,minimum size=6pt, inner sep=0pt] at (5,0) {};\t\n\\node[draw, circle,fill=black,minimum size=6pt, inner sep=0pt] at (6,0) {};\n\\node[draw, circle,fill=black,minimum size=6pt, 
inner sep=0pt] at (7,0) {};\n\\node[draw, circle,fill=black,minimum size=6pt, inner sep=0pt] at (10,0) {};\n\n\t\n\t\t\\foreach \\i in {0,...,9}\n\t\t{\t\n\t\t\t\\pgfmathint{\\i + 1}\n\t\t\t\\edef\\j{\\pgfmathresult}\n\t\t\t\\draw[->] (v\\i) edge (v\\j);\n\t\t}\n\t\t\n\t\t\n\t\t\\foreach \\i in {10,11,12}\n\t\t{\t\n\t\t\t\\pgfmathint{\\i + 1}\n\t\t\t\\edef\\j{\\pgfmathresult}\n\t\t\t\\draw[<- ] (v\\i) edge (v\\j);\n\t\t}\n\t\t\n\t\t\\node at (0,-0.5) {$v_{0}$};\n\t\t\\node at (4,-0.5) {$v_{h_1}$};\n\t\t\\node at (5,-0.5) {$v_{h_2}$};\n\t\t\\node at (6,-0.5) {$v_{h_3}$};\n\t\t\\node at (7,-0.5) {$v_{h_4}$};\n\t\n\t\t\\node at (10,-0.5) {$v_{h_5}$};\n\t\t\n\t\t\\node at (13,-0.5) {$v_{13}$};\n\t\t\\path[draw,thick,black!60!green]\n\t\t\n\t\t(v0) edge node[name=la,pos=0.5, above] {\\color{blue} $13$} (v1)\n\t\t(v1) edge node[name=la,pos=0.5, above] {\\color{blue}$1$} (v2)\n\t\t(v2) edge node[name=la,pos=0.5, above] {\\color{blue} $11$} (v3)\n\t\t(v3) edge node[name=la,pos=0.5, above] {\\color{blue} $9$} (v4)\n\t\t(v4) edge node[name=la,pos=0.5, above] {\\color{blue} $8$} (v5)\n\t\t(v5) edge node[name=la,pos=0.5, above] {\\color{blue} $7$} (v6)\n\t\t(v6) edge node[name=la,pos=0.5, above] {\\color{blue} $6$} (v7)\n\t\t(v7) edge node[name=la,pos=0.5, above] {\\color{blue} $4$} (v8)\n\t\t(v8) edge node[name=la,pos=0.5, above] {\\color{blue} $3$} (v9)\n\t\t(v9) edge node[name=la,pos=0.5, above] {\\color{blue} $5$} (v10)\n\t\t(v10) edge node[name=la,pos=0.5, above] {\\color{blue} $10$} (v11)\n\t\t(v11) edge node[name=la,pos=0.5, above] {\\color{blue} $2$} (v12)\n\t\t(v12) edge node[name=la,pos=0.5, above] {\\color{blue} $12$} (v13);\n\t\t\n\t\t\\node at (7,-1.5) {Labeling of $\\overrightarrow{P}$ when $\\ell$ is odd and $s=1$};\n\t\t\\end{tikzpicture}\t\n\t\t\\caption{Labelings of $P$ satisfy requirements of Lemma~\\ref{lemma1}}\n\t\t\\label{fig1}\n\t\t\n\t\\end{center}\n\t\n\\end{figure}\n\n\n\n\nFor the vertex $v_{h_t}$, by the orientation of $P$, we have $v_{h_t-1}v_{h_t}\\in A(\\overrightarrow{P})$ and $ v_{h_t+1}v_{h_t}\\in A(\\overrightarrow{P})$, thus $s(v_{h_t})>0$.\nBy Step 2 and step 3, $\\tau(v_{h_1-1}v_{h_1})>\\tau(v_{h_1}v_{h_1+1})$. Therefore,\n$s(v_{h_1})=\\tau(v_{h_1-1}v_{h_1})-\\tau(v_{h_1}v_{h_1+1})>0$.\nFor each $v_{h_i}\\in U\\setminus \\{v_{h_1}, v_{h_t}\\}$, by Step 3,\nwe always have \t$\\tau(v_{h_i-1}v_{h_i})>\\tau(v_{h_i}v_{h_i+1})$. Therefore,\n$s(v_{h_i})=\\tau(v_{h_i-1}v_{h_i})-\\tau(v_{h_i}v_{h_i+1})>0$. This proves (i).\n\nStatement (ii) is obvious as for each $v_i\\in V(P)\\setminus U$ with $i\\ne 0, m$,\n$1\\le |s(v_i)|=|\\tau(v_{i-1}v_i)-\\tau(v_iv_{i+1})|\\le m-1$, $s(v_0)=-m$ and $s(v_m)=-(m-1)$.\n\t\n\nWe now show (iii). Let $v_i,v_j \\in V(P)\\setminus U$ with $i |s(v_j)|$. Also, by Step 3 and Step 4, if $v_i,v_j \\in V(P)\\setminus (U\\cup V(P_0)) $ such that $v_i$ and $v_j$ are from distinct subpaths,\nwe have $|s(v_i)|> |s(v_j)|$ unless $v_j$ is from $V(P_{t})$.\nIn this case, $|s(v_i)|< |s(v_j)|$.\nThus, we are only left to show that $s(v_i)\\ne s(v_j)$ when $v_i\\in V(P_0)$ and $v_j\\in V(P)\\setminus (U\\cup V(P_0)) $. 
Note that by Step 2, the set of vertex-sums on vertices from $V(P_0)$ is as follows:\n\n$$\n\\begin{cases}\n\t\\{-m, m-1, -(m-3), m-5, \\cdots, m-2\\ell+3,\\\\ -(m-2\\ell+1)\\}, & \\text{if $\\ell$ is even and $s=1$};\\\\\n\\{-m, m-1, -(m-3), m-5, \\cdots, m-2\\ell+3,\\\\ -(m-2\\ell+1), m-2\\ell-1, -(m-2\\ell-2), m-2\\ell-3,\\\\ \\cdots,-(m-2\\ell-s+1)\\}, & \\text{if $\\ell$ is even, $s$ is odd and $s\\ge 3$};\\\\\n\\{-m,2\\}, & \\text{if $\\ell=2$ and $s=0$};\\\\\n\\{-m, m-1, -(m-3), m-5, \\cdots, m-2\\ell+7,\\\\ -(m-2\\ell+5), 2\\}, & \\text{if $\\ell\\ge 4$ is even and $s=0$};\\\\\n\\{-m, m-1, -(m-3), m-5, \\cdots, m-2\\ell+3, \\\\-(m-2\\ell+1), 1\\}, & \\text{if $\\ell$ is even and $s=2$};\\\\\n \\{-m, m-1, -(m-3), m-5, \\cdots, m-2\\ell+3,\\\\ -(m-2\\ell+1), m-2\\ell-1, -(m-2\\ell-2), m-2\\ell-3,\\\\ \\cdots, m-2\\ell-s+3,-(m-2\\ell-s+2),1\\}, & \\text{if $\\ell$ is even, $s$ is even and $s\\ge 4$};\\\\\n \\{-m, m-1, -(m-3), m-5, \\cdots, m-2\\ell+5, \\\\-(m-2\\ell+3)\\}, & \\text{if $\\ell$ is odd and $s=0$};\\\\\n \\{-m, m-1, -(m-3), m-5, \\cdots, m-2\\ell+5,\\\\ -(m-2\\ell+3), m-2\\ell+1, -(m-2\\ell-1)\\}, & \\text{if $\\ell$ is odd and $s=2$};\\\\\n \\{-m, m-1, -(m-3), m-5, \\cdots, m-2\\ell+5, \\\\ -(m-2\\ell+3),m-2\\ell+1, -(m-2\\ell-1), m-2\\ell-2,\\\\-(m-2\\ell-3), \\cdots, m-2\\ell-s+2,-(m-2\\ell-s+1)\\}, & \\text{if $\\ell$ is odd, $s$ is even and $s\\ge 4$};\\\\\n \\{-m,2\\}, & \\text{if $\\ell=1$ and $s=1$};\\\\\n\\{-m, m-1, -(m-3), m-5, \\cdots, m-2\\ell+5, \\\\-(m-2\\ell+3),2\\}, & \\text{if $\\ell\\ge 3$ is odd and $s=1$};\\\\\n\\{-m, m-1, -(m-3), m-5, \\cdots, m-2\\ell+5, \\\\-(m-2\\ell+3), m-2\\ell+1, -(m-2\\ell-1), 1\\}, & \\text{if $\\ell$ is odd and $s=3$};\\\\\n \\{-m, m-1, -(m-3), m-5, \\cdots, m-2\\ell+5,\\\\ -(m-2\\ell+3), m-2\\ell+1, -(m-2\\ell-1), m-2\\ell-2,\\\\-(m-2\\ell-3), \\cdots,-(m-2\\ell-s+2),1\\}, & \\text{if $\\ell$ is odd, $s$ is odd and $s\\ge 5$}.\n\t\\end{cases}\n\t$$\n\n\n\nWe consider two cases regarding where the vertex $v_j$ is.\n\n{\\bf \\noindent Case 1: $v_j\\in V(P_t) $.}\n\n\n\nBy Step 1, if $\\ell $ is even, then $s(v_j)\\in \\{-(m-1), m-3, -(m-5), m-7, \\cdots, -(m-2\\ell+3), m-2\\ell+1\\}$; and\nif $\\ell$ is odd, then $s(v_j)\\in \\{-(m-1), m-3, -(m-5), m-7, \\cdots, (m-2\\ell+3), -(m-2\\ell+1)\\}$.\nSince the value $s(v_i)=-(m-2\\ell+1)$ is only achieved when $\\ell$ is even, we see that\n $s(v_j)\\ne s(v_i)$ in either case.\n\n\n{\\bf \\noindent Case 2: $v_j\\in V(P_k) $ for some $k\\in [t-1]$.}\n\n\nBy Steps 3 and 4, $|s(v_j)|\\le m-2\\ell -sp$. Therefore, if $u,v\\in V(P)$, it holds that\n$s_4(u)\\neq s_4(v)$.\n\nConsider now that $u,v\\in X$.\nIf $u,v\\in X_1$, then $s_4(u)\\neq s_4(v)$ as $u$ and $v$ are leaves of $T$.\nIf $u,v\\in X\\setminus X_1$, then $s_4(u)\\neq s_4(v)$ by~\\eqref{ine2}. Thus we assume that\n$u\\in X_1$ and $v\\in X\\setminus X_1$.\nBy Steps 3 and 4, labels assigned to edges in $A( X_1, V(P))$ is a subset of $[p+|Y|+1, p+|Y|+|X_1|-r+|M_1|]. $\nThus $|s_4(u)|\\leq p+|Y|+|X_1|-r+|M_1|$. Every vertex in $X\\setminus X_1$ is adjacent in $T$ to a\nvertex from $Y$ and is adjacent to a vertex from $V(P)$. By Steps 3 and 4, the smallest label that is assigned to edges in $A(X,Y)$\nis $p+1$, and the smallest label that is assigned to edges in $A(X\\setminus X_1, V(P))$\nis $p+|Y|+|X_1|-r+1$.\nTherefore $|s_4(v)|\\geq p+1+p+|Y|+|X_1|-r+1>p+|Y|+|X_1|-r+|M_1|+2$, as $p>|M_1|$. Thus $s_4(v) \\ne s_4(u)$. 
Therefore if $u,v\\in X$, then $s_4(u)\\neq s_4(v)$.\n\nSince any vertex in $Y$ is a leaf of $T$, if $u,v\\in Y$, then it naturally holds that $s_4(u)\\neq s_4(v)$.\n\nThus we only need to prove that when $u$ and\n$v$ are from two distinct sets among $V(P), X$, and $Y$, it holds that $s_4(u)\\neq s_4(v)$.\nThis follows by the following arguments.\nFor every $y\\in Y$, by Step 2, the arc incident to $y$ is entering $y$. Thus, by Steps 3 and 4, $s_4(y)\\in [p+1,p+|Y|]$.\nFor every $x\\in X$, by Step 2, the arcs incident to $x$ are leaving $x$. Thus, by Steps 3 and 4, $s_4(x)<0$ and $|s_4(x)|\\ge p+|Y|+1 $.\nFor every $z\\in V(P)$, by Step 1, we have either\n$|s_4(z)|\\le p$ or $s_4(z)\\ge p+|Y|+1$.\nTherefore $\\tau_4$ is an antimagic labeling of $\\overrightarrow{T}$.\n\\section{Open problem}\n\nIn this paper, we showed that every lobster admits an antimagic orientation. Since every bipartite antimagic graph\n$G$ admits an antimagic orientation, it is natural to ask that whether lobsters are antimagic.\nWe believe this is true.\n\n\\begin{conj} Every lobster admits an antimagic labeling.\n\\end{conj}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section{Introduction\n\\label{sec:intro}\n\nLet $M$ be a finitely generated module over a local ring $R$. \nFrom its minimal free resolution\n\\[\n0 \\gets M \\gets R^{ b_0}\\gets R^{ b_1}\\gets R^{ b_2}\\gets \\cdots\n\\]\nwe obtain the \\defi{Betti sequence} $b^R(M):=(b_0,b_1,b_2, \\dots)$ of\n$M$. Questions about the possible behavior of $b^R(M)$ arise in many\ndifferent contexts (see \\cite{peeva-stillman} for a recent survey).\nFor instance, the Buchsbaum--Eisenbud--Horrocks Rank Conjecture\nproposes lower bounds for each $b^R_i(M)$, at least when $R$ is\nregular, and this conjecture is related to multiplicative structures\non resolutions~\\cite{buchs-eis-gor}*{p.\\ 453}, vector bundles on\npunctured discs~\\cite{hartshorne-vector}*{Problem~24}, and equivariant\ncohomology of products of spheres (\\cite{carlsson2}\nand~\\cite{carlsson}*{Conj~II.8}). When $R$ is not regular, there are even\nmore\nquestions about the possible behavior of\n$b^R(M)$~\\cite{avramov-notes}*{\\S4}.\n\n\nHere we consider the qualitative behavior of these sequences; we\ndefine the \\defi{shape} of the free resolution of $M$ as the Betti\nsequence $b^R(M)$ viewed \\emph{up to scalar multiple}. Instead of\nasking if there exists a module $M$ with a given Betti sequence,\nsay $\\mathbf{v}=(18,20,4,4,20,18)$, we ask if there exists a\nBetti sequence $b^R(M)$ with the same shape as $\\mathbf{v}$, i.e.,\nwhether $b^R(M)$ is a scalar multiple of $\\mathbf{v}$. In a sense,\nthis approach is orthogonal to questions like the\nBuchsbaum--Eisenbud--Horrocks Rank Conjecture,\nwhich focus on the {\\em size} of a free resolution.\n\nIn this article, we show that this shift in approach, which was\nmotivated by ideas of~\\cite{boij-sod1}, provides a clarifying\nviewpoint on Betti sequences over local rings. First, we completely\nclassify shapes of resolutions when $R$ is regular. To state the\nresult, we let $\\mathbb V= \\mathbb{Q}^{n+1}$ be a vector space with standard basis\n$\\{\\epsilon_i\\}_{i=0}^n$.\n\n\\begin{theorem}\\label{thm:regmain}\nLet $R$ be an $n$-dimensional regular local ring, \n$\\mathbf{v} := (v_i)_{i=0}^n \\in \\mathbb V$, and $0 \\leq d \\leq n$. 
\nThen the following are equivalent:\n\\begin{enumerate}[\\rm (i)]\n\\item\\label{thm:regmain:mod} There exists a finitely generated\n $R$-module $M$ of depth $d$ such that $b^R(M)$ has shape\n $\\mathbf{v}$, i.e., there exists $\\lambda\\in \\mathbb{Q}_{>0}$ such that\n $b^R(M)=\\lambda \\mathbf{v}$.\n\\item\\label{thm:regmain:sum} \n\tThere exist $a_{-1}\\in\\mathbb Q_{\\geq 0}$ and $a_i\\in \\mathbb Q_{>0}$ for $i\\in\\{0, \\dots, n-d-1\\}$ such that\n\t\\[\n \\mathbf{v} = a_{-1}\\epsilon_0 + \n \\sum_{i=0}^{n-d-1} a_i(\\epsilon_i+\\epsilon_{i+1}).\n\t\\]\n\\end{enumerate}\nIf $a_{-1}=0$ in \\eqref{thm:regmain:sum}, \nthen $M$ can also be chosen to be Cohen--Macaulay.\n\\end{theorem}\n\nThis demonstrates that there are almost no bounds on the shape of \na minimal free $R$-resolution. While showing \nthat~\\eqref{thm:regmain:mod} implies \\eqref{thm:regmain:sum} is\nstraightforward, the converse is more interesting, as it leads\nto examples of free resolutions with unexpected behavior.\nFor instance, let $R=\\mathbb{Q}[[x_1,\\dots,x_{14}]]$, fix some $0 < \\delta \\ll 1$, \nand let $\\mathbf{v}=(1-\\frac{\\delta}{2}, 1, \\delta, \\delta, \\delta, \\delta,\n\\delta, \\delta, 4, 4, \\delta, \\delta, \\delta, 1, 1-\\frac{\\delta}{2})$.\nPlotting its entries, the shape of $\\mathbf{v}$ is shown in Figure~\\ref{fig:v}. \nAs $\\mathbf{v}$ satisfies\nTheorem~\\ref{thm:regmain}\\eqref{thm:regmain:sum}, \nthere exists a finite length $R$-module $M$ whose minimal free resolution \nhas this shape. \nSimilar pathological examples abound\n\\begin{figure\n\\vspace*{.65cm}\n\\begin{tikzpicture}\n\\filldraw (0,.63) circle (1pt);\n\\filldraw (.4,.7) circle (1pt);\n\\filldraw (.8,.1) circle (1pt);\n\\filldraw (1.2,.1) circle (1pt);\n\\filldraw (1.6,.1) circle (1pt);\n\\filldraw (2,.1) circle (1pt);\n\\filldraw (2.4,.1) circle (1pt);\n\\filldraw (2.8,.1) circle (1pt);\n\\filldraw (3.2,2.8) circle (1pt);\n\\filldraw (3.6,2.8) circle (1pt);\n\\filldraw (4,.1) circle (1pt);\n\\filldraw (4.4,.1) circle (1pt);\n\\filldraw (4.8,.1) circle (1pt);\n\\filldraw (5.2,.7) circle (1pt);\n\\filldraw (5.6,.63) circle (1pt);\n\\draw[thick] (0,.63) --(.4,.7) --(.8,.1) --(1.2,.1) --(1.6,.1) --(2.0,.1)\n--(2.4,.1) --(2.8,.1) --\n (3.2,2.8) --(3.6,2.8) --(4,.1) --(4.4,.1) --\n (4.8,.1) --(5.2,.7) --(5.6,.63);\n\\draw[thin,dashed](0,2.8)--(0,0)--(5.7,0);\n\\draw (2.8,-.5) node {$i$};\n\\draw (-.5,1.4) node {$b_i(M)$};\n\\end{tikzpicture}\n\\hspace{1cm}\n\\begin{tikzpicture}\n\\filldraw (0,1.9) circle (1pt);\n\\filldraw (.4,2) circle (1pt);\n\\filldraw (.8,.1) circle (1pt);\n\\filldraw (1.2,2) circle (1pt);\n\\filldraw (1.6,2) circle (1pt);\n\\filldraw (2,.1) circle (1pt);\n\\filldraw (2.4,2) circle (1pt);\n\\filldraw (2.8,2) circle (1pt);\n\\filldraw (3.2,.1) circle (1pt);\n\\filldraw (3.6,2) circle (1pt);\n\\draw (4,.2) node {\\dots};\n\\filldraw (4.4,2) circle (1pt);\n\\filldraw (4.8,.1) circle (1pt);\n\\filldraw (5.2,2) circle (1pt);\n\\filldraw (5.6,1.9) circle (1pt);\n\\draw[thick] (0,1.9) -- (.4,2) -- (.8,.1)\n--(1.2,2)--(1.6,2)--(2,.1)--(2.4,2)--(2.8,2)--(3.2,.1)--(3.6,2);\n\\draw[thick] (4.4,2) -- (4.8,.1) -- (5.2,2) --(5.6,1.9);\n\\draw[thin,dashed](0,2.8)--(0,0)--(5.7,0);\n\\draw (2.8,-.5) node {$i$};\n\\draw (-.5,1.4) node {$b_i(N)$};\n\\end{tikzpicture}\n\\caption{On the left, we illustrate the shape of \n$\\mathbf{v}=(1-\\frac{\\delta}{2}, 1, \\delta, \\delta, \\delta, \\delta,\n\\delta, \\delta, 4, 4, \\delta, \\delta, \\delta, 1, 1-\\frac{\\delta}{2})$\nwhere $0<\\delta\\ll 1$ is a rational number.\nOn the 
right, we illustrate an oscillating shape,\nas in Example~\\ref{ex:oscillate}. \nEach arises as the shape of some minimal free resolution.}\n\\label{fig:v}\n\\end{figure}\nAs mentioned above, our work is inspired by the Boij--S\\\"oderberg \nperspective that the numerics of minimal free resolutions over a graded \npolynomial ring $S$ are easier to understand if one works up to scalar multiple. \nThey introduced the cone of Betti diagrams over $S$ and provided \nconjectures about the structure of this cone. Their conjectures were proven and \nextended in a series of papers~\\cites{boij-sod1,boij-sod2,efw,ES-JAMS}. \n(See also~\\cite{ES:ICMsurvey} for a survey.)\n\nTo provide a local version of Boij--S\\\"oderberg theory, we study the\n\\defi{cone of Betti sequences} $\\mathrm{B}_{\\mathbb{Q}}(R)$, which we define to be \nthe convex cone spanned by all points $b^R(M)\\in \\mathbb{V}$, \nwhere $M$ is a finitely generated $R$-module. \nTheorem~\\ref{thm:regmain} implies that the closure of $\\mathrm{B}_{\\mathbb{Q}}(R)$ \nis spanned by the rays corresponding to $\\epsilon_0$ and \n$(\\epsilon_i+\\epsilon_{i+1})$ for $i=0,\\dots, n-1$. \nThe point $(\\epsilon_i+\\epsilon_{i+1})$ can be\ninterpreted as the Betti sequence of the non-minimal complex\n$(R^1\\stackrel{\\sim}{\\longleftarrow}R^1)$, where the copies of $R$ lie in\nhomological positions $i$ and $i+1$. Since this is not itself a\nminimal free resolution, it follows that $\\mathrm{B}_{\\mathbb{Q}}(R)$ is not a closed\ncone, in contrast with the graded case. \nThe facet equation description of $\\mathrm{B}_{\\mathbb{Q}}(R)$ is also simpler than in the graded case: by Proposition~\\ref{prop:moredetailed} \nbelow, all facets are given by partial Euler characteristics.\n\nOur proof of Theorem~\\ref{thm:regmain} relies on a limiting \ntechnique that is possible because we study Betti sequences \nin $R$ only up to scalar multiple; the introduction of the \nrational points of $\\mathrm{B}_{\\mathbb{Q}}(R)$, which can be thought of as \nformal $\\mathbb{Q}$-Betti sequences, enables the use of this technique. \nTo produce the necessary limiting sequences, we first produce \nlocal analogues of the Eisenbud--Schreyer pure resolutions, \nas we have precise control over their Betti numbers. \n\nWe emphasize here the fact that $\\mathrm{B}_{\\mathbb{Q}}(R)$ depends only on \nthe dimension of $R$. In particular, the result is the same \nfor both equicharacteristic and mixed characteristic rings. \n\n\\subsection*{Hypersurface rings}\nWe also examine the shapes of\nminimal free resolutions over the simplest singular local rings:\nhypersurface rings. \nGiven a regular local ring $(R,\\mathfrak m_R)$, \nwe say that $Q$ is a \\defi{hypersurface ring of} $R$ \nif $Q=R\/\\$ and $f\\in \\mathfrak m_R^2$.\n\nUnlike the regular local case, free resolutions are not necessarily\nfinite in length over a hypersurface ring. Hence Betti sequences\n$b^Q(M)$ lie in an infinite dimensional vector space\n$\\mathbb{W}:=\\prod_{i=0}^\\infty \\mathbb{Q}$. We let $\\{\\epsilon_i\\}$ denote the\ncoordinate vectors of $\\mathbb{W}$ and we write elements of $\\mathbb{W}$ as possibly\ninfinite sums $\\sum_{i=0}^\\infty a_i\\epsilon_i$. We also view $\\mathbb{V}$ as\na subspace of $\\mathbb{W}$ in the natural way.\n\nThe key tool for studying free resolutions over a hypersurface ring is\nthe \\defi{standard construction} (which is briefly reviewed in\n\\S\\ref{sec:cone:all:hyp}). 
Given a $Q$-module $M$, this builds a\n(generally non-minimal) $Q$-free resolution of $M$ from the minimal\n$R$-free resolution of $M$. The numerics of this free resolution of\n$M$ are easy to understand in terms of $b^R(M)$. Define $\\Phi \\colon\n\\mathbb{W}\\to \\mathbb{W}$ by\n\\[\n\\Phi(v_0,v_1,v_2,\\dots):=(v_0, v_1, v_0+v_2,v_1+v_3,v_0+v_2+v_4,\\dots).\n\\]\nThe standard construction for $M$ yields a (generally non-minimal)\nresolution $G_\\bullet$ with Betti sequence\n$b^Q(G_\\bullet)=\\Phi(b^R(M))$.\n\nDue to this close connection between free resolutions over $R$ and\nover $Q$, it is tempting to conjecture that the numerics of $\\mathrm{B}_{\\mathbb{Q}}(Q)$\nshould be controlled by the cone $\\mathrm{B}_{\\mathbb{Q}}(R)$ and the map $\\Phi$.\nHowever, additional ingredients are clearly required. First, the\nsequence $\\Phi(b^R(M))$ always has infinite length, whereas there do\nexist minimal free resolutions over $Q$ with finite projective\ndimension. Second, if an $R$-module $M$ is annihilated by some\npolynomial $f$, then it automatically has rank $0$ as an\n$R$-module. Thus we should only be interested in applying $\\Phi$ to\nmodules of rank $0$.\n\nThe following theorem shows that \nall minimal free resolutions over hypersurface \nrings of $R$ are controlled by correcting precisely these two factors. \n\n\\begin{theorem}\\label{thm:hypmain}\nLet $(R, \\mathfrak{m}_R)$ be an $n$-dimensional regular local ring,\nlet $\\overline R$ be an $(n-1)$-dimensional regular local ring, \nand fix $\\mathbf{w} := (w_i)_{i=0}^\\infty \\in \\mathbb{W}$. \nThen the following are equivalent:\n\\begin{enumerate}[\\rm (i)]\n\\item\\label{thm:hypmain:mod} There exists $f\\in \\mathfrak\n m_R$, a positive integer $\\lambda$, and a finitely generated $R\/\\$-module\n $M$ such that $b^{R\/\\}(M)=\\lambda\n \\mathbf{w}$.\n\\item\\label{thm:hypmain:sum} \n\tThere exists an $R$-module $M_1$ of rank $0$ \n\tand an $\\overline{R}$-module $M_2$ such that\n\n\\mathbf{w}=\\Phi(b^R(M_1))+b^{\\overline{R}}(M_2).\n\n\\end{enumerate}\n\\end{theorem}\n\nThis demonstrates that, except for eventual periodicity, \nthere are essentially no bounds on the shape of a minimal free resolution \nover a hypersurface ring of $R$. As in the regular local case, this \nleads to examples of free resolutions with surprising behavior.\nFor instance, fix any $\\delta>0$ and let $R=\\mathbb{Q}[[x_1,\\dots,x_{14}]]$.\nApplying Theorem~\\ref{thm:regmain}, there exist $M_1$ and $M_2$ \nso that \n$\\mathbf{w}=\\Phi(b^R(M_1))+b^{\\overline{R}}(M_2)$, where\n\\[\n\\mathbf{w}:=(\\tfrac{\\delta}{2},4,4,\\delta,\\delta,\\delta,\\delta,\\delta,\n\\delta,\\delta,\\delta,1,1,\\delta,6+\\tfrac{\\delta}{2},6,6,6,\\dots).\n\\] \nSince $\\mathbf{w}$ satisfies\nTheorem~\\ref{thm:hypmain}\\eqref{thm:hypmain:sum}, there exists a\nmodule $M$ over a hypersurface ring of $R$ whose\nminimal free resolution has this shape. 
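\n\nFor the simplest illustration of the map $\\Phi$ and of Theorem~\\ref{thm:hypmain}, take $R=\\Bbbk[[x,y]]$, $M_1=\\Bbbk$ the residue field (which has rank $0$), and $M_2=0$. Then $b^R(M_1)=(1,2,1,0,0,\\dots)$ and\n\\[\n\\mathbf{w}=\\Phi(1,2,1,0,0,\\dots)=(1,\\,2,\\,1+1,\\,2+0,\\,1+1+0,\\,\\dots)=(1,2,2,2,2,\\dots),\n\\]\nwhich is indeed the (eventually periodic) Betti sequence of the residue field over any hypersurface ring of $R$ defined by an element of $\\mathfrak m_R^2$. 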
\n\nWe now make the connection with local Boij--S\\\"oderberg theory explicit.\n\n\\vspace{.25mm}\n\n\\begin{definition}\\label{defn:Rtot}\nThe \\defi{total hypersurface cone} $\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}$ is the closure in\n $\\mathbb{W}$ of the union $\\bigcup_{f \\in \\mathfrak m_R} \\mathrm{B}_{\\mathbb{Q}}(R\/\\)$.\n\\end{definition}\n\n\\vspace{.35mm}\nWe show in Remark~\\ref{rmk:equals asymptotic} \nthat the cone $\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}$ may also be realized as a limit of cones\n\\begin{equation}\\label{eqn:equals asymptotic}\n\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})} = \\lim_{t\\to \\infty} \\mathrm{B}_{\\mathbb{Q}}(R\/\\)\\subseteq \\mathbb{W}\n\\end{equation}\nfor any sequence $(f_t \\in \\mathfrak m^t_R)_{ t \\geq 1}$. \n\nThe following result provides an \nextremal rays description of this cone.\n\n\\begin{proposition}\\label{prop:hypersurface rays}\nThe cone $\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}$ is an $(n+1)$-dimensional subcone of $\\mathbb{W}$ \nspanned by the following list of $(n+2)$ extremal rays:\n\\begin{enumerate}[\\rm (i)]\n\\item the ray spanned by $\\epsilon_0$,\n\\item the rays spanned by $(\\epsilon_i+\\epsilon_{i+1})$ for\n $i\\in\\{0, \\dots, n-2\\}$, and \n\\item the rays spanned by\n \\[\n \\sum_{i=n-2}^\\infty \\epsilon_i \\quad \\text{ and }\\quad\n \\sum_{i=n-1}^\\infty \\epsilon_i.\n \\]\n\\end{enumerate}\n\\end{proposition}\n\n\\smallskip\nThe proofs of Theorem~\\ref{thm:hypmain} and Proposition~\\ref{prop:hypersurface rays}\nrely on two types of asymptotic arguments. First, as in the proof of Theorem~\\ref{thm:regmain},\nwe study sequences of formal $\\mathbb{Q}$-Betti sequences. Second, we use\nthat the cone $\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}$ is itself a limit, as illustrated in \\eqref{eqn:equals asymptotic}.\n\nIn Proposition~\\ref{prop:moreDetailedHyper}, we also describe the cone\n$\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}$ in terms of defining hyperplanes. In addition, we observe\nthat, as in the description of $\\mathrm{B}_{\\mathbb{Q}}(R)$, most of the extremal rays of\n$\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}$ do not correspond to actual minimal free resolutions. Note\nthat, based on \\eqref{eqn:equals asymptotic}, the cone $\\mathrm{B}_{\\mathbb{Q}}(R\/\\)$\nis closely approximated by $\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}$, at least when the Hilbert--Samuel\nmultiplicity of $R\/\\$ is large.\n\nWe end by considering the more precise question of completely describing \n$\\mathrm{B}_{\\mathbb{Q}}(R\/\\)$ for a fixed $f\\in \\mathfrak m_R$. \nThe following conjecture claims that the cone $\\mathrm{B}_{\\mathbb{Q}}(R\/\\)$ depends \nonly on the dimension and multiplicity of the hypersurface ring $R\/\\$.\n\n\\begin{conjecture}\\label{conj:hyper:equiv}\nLet $Q$ be a hypersurface ring of embedding dimension $n$ \nand multiplicity $d$. 
\nThen $\\mathrm{B}_{\\mathbb{Q}}(Q)$ is an $(n+1)$-dimensional cone, and its closure is defined \nby the following $(n+2)$ extremal rays:\n\\begin{enumerate}[\\rm (i)]\n\\item the ray spanned by $\\epsilon_0$, \n\\item the rays spanned by $(\\epsilon_i+\\epsilon_{i+1})$ \n\tfor $i=\\{0, \\dots, n-2\\}$, and \n\\item the rays spanned by\n \\[\n \\tfrac{d-1}{d}\\epsilon_{n-2}+\\sum_{i=n-1}^\\infty \\epsilon_i \\quad\n \\text{ and }\\quad \\tfrac{1}{d}\\epsilon_{n-2}+\\sum_{i=n-1}^\\infty\n \\epsilon_i.\n\t\\]\n\\end{enumerate}\n\\end{conjecture}\n\nProposition~\\ref{prop:one direction} proves one direction of this\nconjecture, by showing that $\\mathrm{B}_{\\mathbb{Q}}(Q)$ belongs to the cone spanned by\nthe proposed extremal rays. We also prove\nConjecture~\\ref{conj:hyper:equiv} when $\\operatorname{edim}(Q)=2$. Observe also\nthat Proposition~\\ref{prop:hypersurface rays} is essentially the\n$d=\\infty$ version of this conjecture.\n\n\\subsection*{Notation}\nThroughout the rest of this document $R$ will be a regular local ring\nand $Q$ will be a quotient ring of $R$. If $M$ is an $R$-module or a\n$Q$-module, then $e(M)$ is the Hilbert--Samuel multiplicity of $M$ and\n$\\mu(M)$ is the minimal number of generators for $M$. Given a\nsurjection $R^{\\mu(M)} \\to M$, we denote the kernel by $\\Omega(M)$,\nand in general, we set $\\Omega^{j}(M) = \\Omega^1(\\Omega^{j-1}(M))$,\nwith the convention $\\Omega^0(M) = M$, and we call $\\Omega^j(M)$ the\n{\\bf $j$th syzygy module} of $M$.\n\n\\subsection*{Acknowledgements}\nSignificant parts of this work were done when the second author\nvisited Purdue University and during a workshop funded by the Stanford\nMathematics Research Center; the paper was completed while the first \nauthor attended the program ``Algebraic Geometry with a view towards\napplications\" at Institut Mittag-Leffler; we are grateful for all of\nthese opportunities. Throughout the course of this work, calculations\nwere performed using the software {\\tt Macaulay2}~\\cite{M2}. We thank\nMatthias Beck for pointing out the reference~\\cite{triangulations}.\nWe also thank Jesse Burke, David Eisenbud, Courtney Gibbons, \nMel Hochster, Frank-Olaf Schreyer, and Jerzy Weyman for insightful \nconversations. \nWe also thank the referee for suggestions that greatly streamlined this paper. \n\n\\section{Passage of graded pure resolutions to a regular local ring}\n\\label{sec:pass:pure}\n\nTo prove Theorem~\\ref{thm:regmain}, we produce a collection of Betti\nsequences that converge to each extremal ray of $\\overline{\\mathrm{B}_{\\mathbb{Q}}(R)}$.\nThe key step in constructing these sequences is the construction of\nlocal analogues of the pure resolutions of Eisenbud and Schreyer.\n\nLet $S=\\mathbb{Z}[x_1, \\dots, x_n]$. Fix $d=(d_0,\\dots,d_s)\\in \\mathbb{Z}^{s+1}$\nwith $d_i0$). Adopt the notation of \\S\\ref{sec:pass:pure} and define\n$\\mathbf{v}_j(d)$ to be the unique scalar multiple of $\\mathbf{v}(d)$\nsuch that $\\mathbf{v}(d)_j=1$. Based on the formula for\n$\\mathbf{v}(d)$ from Proposition~\\ref{prop:pureoverRLR}, view\n$\\mathbf{v}_j$ as a map from $\\mathbb{Z}^{n+1}\\to \\mathbb{V}$ (with poles) defined\nby the formula\n\\[\n\\mathbf{v}_j(d_0,\\dots,d_n)=\\left(\\frac{\\prod_{i\\ne j}\n|d_i-d_j|}{\\prod_{i\\ne 0} |d_i-d_0|}, \\frac{\\prod_{i\\ne j}\n|d_i-d_j|}{\\prod_{i\\ne 1} |d_i-d_1|}, \\dots, \\frac{\\prod_{i\\ne j}\n|d_i-d_j|}{\\prod_{i\\ne n} |d_i-d_n|}\\right)\\in \\mathbb{V}.\n\\]\n\nAnd now for the crucial choice, \nwhich is explored further in Example~\\ref{ex:djk}. 
\nFor each $j$, consider the sequence $\\{ d^{j,t}\\}_{t\\geq0}$ defined by\n$d^{j,t}:=(0, t, 2t, \\ldots, jt, jt+1,(j+1)t+1, \\ldots, (n-1)t+1)$.\nIn other words, \n\\[\nd^{j,t}_k=\\begin{cases}\nkt &\\text{if $k\\leq j$,}\\\\\n(k-1)t+1 &\\text{if $k>j$.} \n\\end{cases}\n\\]\nWe claim that\n\n\\rho_j = \\lim_{t\\to \\infty} \\mathbf{v}_j(d^{j,t}).\n$\nThis would imply, by Proposition~\\ref{prop:pureoverRLR}, that\n$\\rho_i\\in \\overline{\\mathrm{B}_{\\mathbb{Q}}(R)}$, thus completing the proof.\nTo prove this claim, we observe that the $j$th coordinate function of\n$\\mathbf{v}_j$ equals $1$ and $\\mathbf{v}_j(d)$ lies in the hyperplane\ndefined by $\\chi_{[0,n]}=0$. So it suffices to prove that the\n$\\ell$th coordinate function of $\\mathbf{v_j}$ goes to $0$ for all\n$\\ell\\ne j,j+1$. We directly compute\n\\begin{align*}\n \\lim_{t\\to \\infty} \\mathbf{v}_j(d^{j,t})_{\\ell}&=\\lim_{t\\to\n \\infty}\\frac{\\prod_{i\\ne j} |d^{j,t}_i-d^{j,t}_j|}{\\prod_{i\\ne\n \\ell} |d^{j,t}_i-d^{j,t}_\\ell|}=\\lim_{t\\to\n \\infty}\\frac{O(t^{n-1})}{O(t^{n})} =0. \\qedhere\n\\end{align*} \n\\end{proof}\n\n\\begin{example}\\label{ex:djk}\nIf $n = 4$, then \n$d^{1,t} = (0,t,t+1,2t+1,3t+1)$.\nOver $S = \\Bbbk [x_1,\\dots,x_4]$ with the standard grading, \nthis degree sequence corresponds to the Betti diagram \n\\[\n\\beta^S(M(d^{1,t})) = \n\\begin{bmatrix}\n\\beta_0^{1,t} & - & - & - & - \\\\\n\\vdots & \\vdots & \\vdots & \\vdots & \\vdots\\\\\n- & \\beta_1^{1,t} & \\beta_2^{1,t} & - & - \\\\\n\\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\\\\n- & - & - & \\beta_3^{1,t} & - \\\\\n\\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\\\\n- & - & - & - & \\beta_4^{1,t}\n\\end{bmatrix}\n\\hspace{-.8cm}\n\\mbox{\n\\scriptsize\n\\begin{tabular}{l}\n\t$\\left.\\begin{tabular}{l}\n\t \\ \\\\ \\ \\\\ \\ \n\t\\end{tabular}\\right\\}$ \n\t\\\\\n\t $\\left.\\begin{tabular}{l}\n\t\\ \\\\ \\ \\\\ \\ \n\t\\end{tabular}\\right\\}$ \n\t\\\\ \n\t $\\left.\\begin{tabular}{l}\n\t\\ \\\\ \\ \\\\ \\ \n\t\\end{tabular}\\right\\}$ \n\t\\\\\n\t $\\begin{tabular}{l}\n\t\\ \\\\ \n\t\\end{tabular}$ \n\\end{tabular}\n}\n\\hspace{-.5cm}\n\\mbox{\n\\begin{tabular}{l}\n\\ \\vspace{.2cm} \\\\\n$t-1$ rows \\\\ \\vspace{.15cm} \\\\\n$t-1$ rows \\\\ \\vspace{.15cm} \\\\\n$t-1$ rows \\\\ \\\\\n\\ \n\\end{tabular}\n}\n\\]\nwhere there are gaps of $t-3$ rows of zeroes between the various \nnonzero entries.\nNotice that as $t\\to\\infty$, this Betti diagram gets longer. \nIt is thus necessary to consider the total Betti \nnumbers $\\beta_{i}$ (i.e., to forget about the individual graded Betti numbers \n$\\beta_{i,j}$) before it makes sense to \nconsider a limit. \n\\end{example}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:regmain}]\nFirst we show that \\eqref{thm:regmain:mod} implies \\eqref{thm:regmain:sum}.\nLet $M$ be any module of depth $d$ such that $b^R(M)=\\lambda \\mathbf{v}$. Since $\\chi_{[i,n]}(b^R(M))= \\operatorname{rank} \\Omega^i(M)$ for $i\\geq 0$, the Auslander--Buchsbaum formula\nimplies that this is strictly positive for $i=1, \\dots, n-d$ and $0$ for $i>n-d$. The proof of Proposition~\\ref{prop:moredetailed} then shows\nthat $b^R(M)$ has the desired form.\n\nNext we show that \\eqref{thm:regmain:sum} implies \\eqref{thm:regmain:mod}.\nIf there exists any $M$ such that $b^R(M)=\\mathbf{v}$, then the Auslander--Buchsbaum formula implies\nthat $M$ has depth $d$. It thus suffices to produce a module $M$ with the desired Betti sequence. 
\nWe may also assume that the coefficient $a_{-1}$ of $\\rho_{-1}$ equals $0$.\n\n Let $C$ denote the cone spanned by $\\rho_0, \\dots, \\rho_{n-d-1}$, so that \n $\\mathbf{v}$ now belongs to the interior of $C$.\n The proof of Proposition~\\ref{prop:moredetailed} illustrates that \n for each $i=0, \\dots, n-d-1$, we can construct $\\rho_i$ as the limit\n of Betti sequences of Cohen--Macaulay modules of codimension $n-d$. \n Since we can construct every extremal ray of $C$ via such a sequence, \n it follows that every interior point of $C$ can be written as a \n $\\mathbb{Q}$-convex combination of the Betti sequences of \n Cohen--Macaulay $R$-modules of codimension $n-d$. In particular,\n $\\mathbf{v}$ has this property, and hence $\\mathbf{v}\\in \\mathrm{B}_{\\mathbb{Q}}(R)$, as\n desired. This construction also implies the final sentence of the\n theorem, as we have written $\\mathbf{v}$ as the sum of Betti\n sequences of Cohen--Macaulay modules of codimension $n-d$. \n\\end{proof}\n\n\n\\begin{example}[Oscillation of Betti numbers]\\label{ex:oscillate}\nLet $n=\\dim R$ be congruent to $1$ mod $3$. Let $0 < \\delta \\ll 1$ be a\nrational number and set\n\\[\na_i':=\\begin{cases}\n0&\\text{if } i=-1,\\\\\n1-\\frac{\\delta}{2} & \\text{if } i \\ge 0 \\text{ and } i \\equiv 0 \\pmod 3,\\\\\n\\frac{\\delta}{2} & \\text{if } i \\ge 0 \\text{ and } i \\equiv \\pm 1 \\pmod 3.\n\\end{cases}\n\\]\nLet $\\mathbf{v}':=\\sum_{i} a_i'\\rho_i$, so that the entries of \n$\\mathbf{v}'$ oscillate between $1$ and $\\delta$. \nThen there exists a finite length $R$-module\n$N$ such that $b^R(N)$ is a scalar multiple of $\\mathbf{v}'$. See\nFigure~\\ref{fig:v}.\n\\end{example}\n\n\\begin{remark}\nFor a finite length module, the Buchsbaum--Eisenbud--Horrocks Rank Conjecture proposes that \n$b_i(M) \\geq \\binom{n}{i}$ for $i=0, 1, \\dots, n.$ \nIt is natural to seek a sharper lower bound $B_i$ that depends on the number \nof generators of $M$ and the dimension of the socle of $M$. \nFor $B_1$ we may set\n$B_1(b_0,b_n):=b_0-1+n$, and then $b_1\\geq B_1(b_0,b_n)$; something similar holds for $B_{n-1}$. However, Theorem~\\ref{thm:regmain}\nimplies that when $i\\ne 1,n-1$ there is no such linear bound. This follows immediately from the fact\nthat, for any $0< \\delta \\ll 1$, there is a resolution with shape $(1,1+\\frac{\\delta}{2},\\delta, \\dots, \\delta,1+\\frac{\\delta}{2}, 1)$.\n\\end{remark}\n\n\\begin{question}\nAre there nonlinear functions $B_i(b_0,b_n)$ such that $b_i(M)\\geq B_i(b_0(M),b_n(M))$ for all finite length modules $M$?\n\\end{question}\n\n\n\\begin{remark}[The graded\/local comparison]\\label{rmk:comparison}\nIf \n$S=\\Bbbk[x_1, \\dots, x_n]$ (with the standard grading) and\n$R=\\Bbbk[x_1, \\dots, x_n]_{(x_1, \\dots, x_n)}$,\nthen there is a map \n$\\mathrm{B}_{\\mathbb{Q}}(S)\\to \\mathrm{B}_{\\mathbb{Q}}(R)$ obtained by ``forgetting the grading'' and\nlocalizing. \nTheorem~\\ref{thm:regmain} implies that this map is surjective. \nIt would be interesting to understand if a similar statement is true \nif we replace $S$ by a more general graded ring.\n\\end{remark}\n\n\n\\section{Betti sequences over hypersurface rings I: the cone $\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}$}\n\\label{sec:cone:all:hyp}\n\nWe say that $Q$ is a \\defi{hypersurface ring} of a regular local ring \n$(R, \\mathfrak m)$ if $Q = R\/\\$ for some nonzerodivisor $f\\in R$. \nTo avoid trivialities, we assume that $f \\in \\mathfrak m^2$. 
\nLet $n := \\dim R$ and $d := {\\rm ord}(f)$, i.e., the unique\ninteger $d$ such that $f \\in \\mathfrak m^d \\ensuremath{\\!\\smallsetminus\\!} \\mathfrak m^{d+1}$. \nThe following result is the basis for the ``standard construction.'' \nSee~\\cite{shamash}, \\cite{eisenbud-ci}*{\\S 7}, or \\cite{avramov-notes} \nfor more details.\n\n\\begin{theorem}[Eisenbud, Shamash] \\label{thm:standardconstruction} \n Given a $Q$-module $M$, let ${\\bf F}_\\bullet \\to M$ be its minimal\n free resolution over $R$. Then there are maps $s_k \\colon {\\bf F}_\\bullet\n \\to {\\bf F}_{\\bullet + 2k-1}$ for $k \\ge 0$ such that\n \\begin{enumerate}[\\rm (i)]\n \\item $s_0$ is the differential of ${\\bf F}_\\bullet$.\n \\item \\label{item:firsthomotopy} $s_0s_1 + s_1s_0$ is multiplication\n by $f$.\n \\item \\label{item:higherhomotopy} $\\sum_{i=0}^k s_i\n s_{k-i} = 0$ for all $k > 1$. \n \\end{enumerate}\n\\end{theorem}\n\nWe note that if $R$ and $Q$ are graded local rings, then the maps $s_k$ can\nbe chosen to be homogeneous. \nUsing the $s_k$, we may form a new complex ${\\bf F}'_\\bullet$\nwith terms \n\\[\n{\\bf F}'_i = \\bigoplus_{j \\ge 0} {\\bf F}_{i-2j} \\otimes_R Q\n\\]\nand with differentials given by taking the sum of the maps \n\\begin{align*} {\\bf F}_i \\otimes_R Q \\xrightarrow{(s_0, s_1, s_2,\n \\dots)} ({\\bf F}_{i-1} \\oplus {\\bf F}_{i-3} \\oplus {\\bf F}_{i-5}\n \\oplus \\cdots) \\otimes_R Q.\n\\end{align*}\nThen ${\\bf F}'_\\bullet \\to M$ is a $Q$-free resolution which need not\nbe minimal.\n\nWith $\\mathbb{W}=\\prod_{i=0}^\\infty \\mathbb{Q}$ and $\\epsilon_i\\in \\mathbb{W}$ the $i$th\ncoordinate vector, we define $\\Phi \\colon \\mathbb{W} \\to \\mathbb{W}$ by\n\\[\n\\Phi(w_0, w_1, \\dots) := \n(w_0, w_1, w_0+w_2, w_1+w_3, w_0+w_2+w_4, \\dots).\n\\]\nIn other words, the $\\ell$th coordinate function of $\\Phi$ is given by\n\\[\n\\Phi_{\\ell}(w_0,w_1,\\dots)=\\begin{cases}\n\\sum_{i=0}^{\\frac{\\ell}{2}} w_{2i} & \\text{if $\\ell$ is even,}\\\\\n \\vspace{-.3cm}\\\\\n\\sum_{i=0}^{\\frac{\\ell-1}{2}} w_{2i+1} & \\text{if $\\ell$ is odd.}\n\\end{cases}\n\\]\nAs in Section~\\ref{sec:cone:reg}, let $\\rho_{-1}:=\\epsilon_0$ and \n$\\rho_i:=\\epsilon_i+\\epsilon_{i+1}$ for $i\\geq 0$. \n\nFree resolutions over a hypersurface ring can be infinite in length,\nbut they are periodic after $n$ steps~\\cite{eisenbud-ci}*{Corollary~6.2}, so that $b^Q_i(M)=b^Q_{i+1}(M)$ for all $i\\geq\nn$~\\cite{eisenbud-ci}*{Proposition~5.3}. Thus, if we seek to describe\nthe cone of Betti sequences in the hypersurface case, it is necessary\nto include some rays with infinite support. We define\n\\[\n\\tau^\\infty_{i}:=\\sum_{j=i}^\\infty \\epsilon_j\\in \\mathbb{W}\n\\]\nand note that $\\tau^\\infty_i=\\Phi(\\rho_i)$. \nThe rays $\\tau^\\infty_{n-2}$ and $\\tau^\\infty_{n-1}$ \nwill be especially important for us.\n\nWe now give a precise description of the \ntotal hypersurface cone $\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}$ from Definition~\\ref{defn:Rtot}. \n\n\\begin{proposition}\\label{prop:moreDetailedHyper}\nThe following three $(n+1)$-dimensional cones in $\\mathbb{W}$ coincide: \n\\begin{enumerate}[\\rm (i)]\\newcounter{savenum}\n\\item \\label{item:full hyp cone} \n\tThe total hypersurface cone ${\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}}$. 
\n\\item \\label{item:hyperextremal} \n\tThe cone spanned by the rays \n\t$\\mathbb{Q}_{\\geq 0}\\langle \\rho_{-1}, \\rho_0, \\dots, \\rho_{n-2}, \n\t\\tau^\\infty_{n-2}, \\tau^\\infty_{n-1}\\rangle$.\n\\item \\label{item:hyperfacets} \n\tThe cone defined by the functionals \n \\[\n \\begin{cases}\n \\chi_{[i,j]}\\geq 0 & \\text{ for all } i\\leq j \\le n \\text{ with }\n i-j \\text{ even},\\\\ \n \\chi_{[i,i+1]}=0 & \\text{ for all } i\\geq n, \\text{ and}\\\\\n \\chi_{[n-1,n]}\\geq 0.\\\\\n \\end{cases}\n \\]\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof\n It is straightforward to check that the extremal rays satisfy the\n desired facet inequalities, and hence we have\n \\eqref{item:hyperextremal}$ \\subseteq $\\eqref{item:hyperfacets}.\n The reverse inclusion is more difficult than the analogous statement\n in Proposition~\\ref{prop:moredetailed} because\n here~\\eqref{item:hyperextremal} is not a simplicial cone. We first\n identify the boundary facets, and then show that for each boundary\n facet, one of the listed functionals vanishes on it.\n\n To do this, we use that these rays satisfy a unique linear\n dependence relation. When $n$ is even, the relation is given by\n\\[\n\\tau^{\\infty}_{n-1}+\\rho_{n-3}+\\dots +\\rho_{-1}=\\tau^{\\infty}_{n-2}+\\rho_{n-4}+\\dots +\\rho_0,\n\\]\nand a similar relation holds when $n$ is odd.\nWe now consider subsets of these rays \nof size $n$, which we index by the two rays that we omit from the collection. \nThese fall into three categories:\n\\begin{enumerate}[\\rm (a)]\n\\item \\label{item:functionalA} $\\{ \\rho_i,\\rho_j\\}$ with $i$ and $f\\in \\mathfrak m$ is arbitrary. \nWe thus reduce to the consideration of a point $\\mathbf{w}=b^{Q}(M)$, \nwhere $M$ is a $Q$-module. In this case, \n\\[\n\\chi_{[i,j]}(b^Q(M))=\\frac{1}{e(Q)}\\left(e(\\Omega^i(M))+(-1)^{i-j}e(\\Omega^j(M))\\right),\n\\]\nwhich is certainly nonnegative when $i$ and $j$ have the same parity.\nIt follows from~\\cite{eisenbud-ci}*{Proposition~5.3, Corollary~6.2}\nthat $\\chi_{[i,i+1]}(b^Q(M))=0$ for $i\\geq n$. Thus it remains to\ncheck the inequality $\\chi_{[n-1,n]}(b^Q(M))\\geq 0$. Using $\\mu(N)$\nto denote the minimal number of generators of a module $N$, we have\n\\[\n\\chi_{[n-1,n]}(b^Q(M))=\\mu(\\Omega^{n-1}(M))-\\mu(\\Omega^{n}(M)).\n\\]\nBoth of these syzygy modules are maximal Cohen--Macaulay \n$Q$-modules. The key difference is that $\\Omega^{n-1}(M)$ \nmight have a free summand, whereas $\\Omega^n(M)$ does not. \nSince maximal Cohen--Macaulay modules without free summands \nover hypersurface rings have a periodic resolution \nby~\\cite{eisenbud-ci}*{Theorem~6.1(ii)}, it follows that \n$\\chi_{[n-1,n]}(b^Q(M))$ computes the number of free summands in\n$\\Omega^{n-1}(M)$, so it is nonnegative.\n\nTo complete the proof, we show\nthat~\\eqref{item:hyperextremal}$\\subseteq$\\eqref{item:full hyp cone}\nby showing that each extremal ray lies in $\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}$. We first show\nthat $\\rho_i$ belongs to $\\overline{\\mathrm{B}_{\\mathbb{Q}}(R\/\\)}$ for any $f$. Choose\na regular local subring $R'\\subseteq R\/\\$ of dimension $n-1$ and\nan $R'$-module $M'$. Then\n$b^{R\/\\}(M'\\otimes_{R'}R\/\\)=b^{R'}(M')$ because $R\/\\$ is\nfinite and flat over $R'$. In particular,\n$\\overline{\\mathrm{B}_{\\mathbb{Q}}(R')}\\subseteq\\overline{\\mathrm{B}_{\\mathbb{Q}}(R\/\\)}$. 
Since $\\rho_i\n\\in \\overline{\\mathrm{B}_{\\mathbb{Q}}(R')}$ by Proposition~\\ref{prop:moredetailed}, we\nhave $\\rho_i \\in \\overline{\\mathrm{B}_{\\mathbb{Q}}(R\/\\)}$.\n\nFinally, we must show that $\\tau^{\\infty}_{n-2}$ and\n$\\tau^{\\infty}_{n-1}$ belong to $\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}$. \nThis is where the advantage of working with $\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}$ becomes clear, \nas it enables a second limiting argument that, roughly speaking, \nmakes the standard construction exact. \nThe key observation is summarized in Lemma~\\ref{lem:phiexact} below.\n\nIn fact, we now show the more general statement that $\\Phi(\\rho_i)\\in\n\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}$ for $i=0,\\dots, n-1$. Fix $i$ and let $d^{i,t}$ be the\nsequence of degree sequences defined in the proof of\nProposition~\\ref{prop:moredetailed}. \nFor each $t$, we choose any polynomial \n$f_t\\in \\mathfrak m^{d^{i,t}_n-d^{i,t}_0+1}$. \nWe now apply Lemma~\\ref{lem:phiexact}, \nalong with the fact that $\\Phi$ is continuous, to conclude that\n\\begin{align*}\n \\tau^{\\infty}_i&=\\Phi(\\rho_i)\\\\\n &= \\Phi \\left( \\lim_{t\\to \\infty} b^{R}(M(d^{i,t})\\otimes_S R) \\right)\\\\\n &= \\lim_{t\\to \\infty} \\Phi \\left( b^{R}(M(d^{i,t})\\otimes_S R) \\right)\\\\\n &= \\lim_{t\\to \\infty} b^{R\/\\}(M(d^{i,t})\\otimes_S R).\n\\end{align*}\nSince $b^{R\/\\}(M(d^{i,t})\\otimes_S R) \\in \\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}$ for all $t$,\nit follows that the final limit lies in $\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}$.\n\\end{proof}\n\n\\begin{remark}\\label{rmk:equals asymptotic}\n The proof of Proposition~\\ref{prop:moreDetailedHyper} goes through\n if we replace $\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}$ by the closure of the limit cone $\\lim_{t\\to\n \\infty} \\mathrm{B}_{\\mathbb{Q}}(R\/\\)$, illustrating that these two cones are\n equal as well. This justifies equation~\\eqref{eqn:equals\n asymptotic}.\n\\end{remark}\n\n\\begin{lemma}\\label{lem:phiexact}\n Let $M$ be an $R$-module that is annihilated by\n $\\mathfrak m^{N_0}$ and let $f\\in \\mathfrak m^N$ with $N\\gg N_0$. Then\n \\[\n \\Phi(b^R(M))=b^{R\/\\}(M).\n \\]\n More specifically, let $d=(d_0,\\dots,d_n)$ be a degree sequence, \n $M(d)\\otimes_S R$ be defined as in Proposition~\\ref{prop:pureoverRLR}, and \n $f\\in \\mathfrak m^{d_n-d_0+1}$. Then\n \\[\n \\Phi(b^R(M(d)\\otimes_S R))=b^{R\/\\}(M(d)\\otimes_S R).\n \\]\n\\end{lemma}\n\\begin{proof}\n Since $R$ is a regular local ring, the minimal $R$-free resolution\n of $M$ has finite length. So there are only finitely many $j$ such\n that the $s_j$ in Theorem~\\ref{thm:standardconstruction} are\n nonzero, and there is some positive integer $P$ such that the matrix\n entries in the minimal $R$-free resolution of $M$ belong to\n $\\mathfrak m^P$. To conclude, we need to know that the entries of each $s_j$\n belong to the maximal ideal $\\mathfrak m$. From\n Theorem~\\ref{thm:standardconstruction}\\eqref{item:higherhomotopy},\n this will be true if it holds for $j=1$, and this in turn is true if\n we set $N_0 = P$ and apply\n Theorem~\\ref{thm:standardconstruction}\\eqref{item:firsthomotopy}.\n\\end{proof}\n\n\\begin{remark} \nAssume that $n\\geq 3$. By \\cite{triangulations}*{Lemma~2.4.2}, there are \nexactly two triangulations of the cone $\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}$, which we now describe. \nFirst, we project from $\\mathbb{W}$ onto the first $n+1$ coordinates. 
\nThis does not change the combinatorial structure of the cone.\nThe hyperplane section of the projection given by \n$\\epsilon_{0} + \\cdots + \\epsilon_{n} = 1$ \nis an $n$-dimensional polytope with vertices \n$\\rho_{-1}$, $\\frac{1}{2}\\rho_0$, $\\frac{1}{2}\\rho_1$, $\\dots$, \n$\\frac{1}{2}\\rho_{n-2}$, $\\frac{1}{3}\\tau^\\infty_{n-2}$, \n$\\frac{1}{2}\\tau^\\infty_{n-1}$. \n\nTo express the triangulations, let $\\Delta_r$ denote the polytope \ngenerated by all vertices other than $r$. \nIf $n$ is odd, then the two triangulations are\n \\[\n \\{ \\Delta_{\\rho_i} \\mid i \\text{ odd}, i\\ne n-2 \\} \\cup \\{\n \\Delta_{\\tau^\\infty_{n-1}} \\}\\quad \\text{ or } \\quad \\{ \\Delta_{\\rho_i} \\mid i\n \\text{ even} \\} \\cup \\{\\Delta_{\\tau^\\infty_{n-2}}\\}.\n \\]\n If $n$ is even, then the two triangulations are\n \\[\n \\{ \\Delta_{\\rho_i} \\mid i \\text{ odd} \\} \\cup \\{\n \\Delta_{\\tau^\\infty_{n-2}} \\},\\quad \\text{ or } \\quad \\{ \\Delta_{\\rho_i} \\mid i\n \\text{ even}, i \\ne n-2 \\} \\cup \\{\\Delta_{\\tau^\\infty_{n-1}}\\}. \\qedhere\n \\]\n\\end{remark}\n\n\\section{Betti sequences over hypersurface rings II: A fixed hypersurface}\n\\label{sec:cone:one:hyp}\n\nFor a regular local ring $(R,\\mathfrak m)$ and $f\\in \\mathfrak m_R$, the cone $\\overline{\\mathrm{B}_{\\mathbb{Q}}(R_{\\infty})}$ \nis larger than $\\mathrm{B}_{\\mathbb{Q}}(Q)$ for the hypersurface ring $Q = R\/\\$. \nIn this section, we seek to make this relationship precise. \nSet $Q:=R\/\\$ and $d := {\\rm ord}(f)$, \ni.e., $f \\in \\mathfrak m^d \\setminus \\mathfrak m^{d-1}$. We note that $e(Q)=d$.\nWe define the vectors \n\\[\n\\tau^d_{n-2}:= \\left( \\tfrac{d-1}{d}\\epsilon_{n-2}+\\sum_{j=n-1}^\\infty \\epsilon_j \\right)\n\\qquad \\text{ and } \\qquad \n\\tau^d_{n-1}:=\\left( \\tfrac{1}{d}\\epsilon_{n-2}+\\sum_{\\ell=n-1}^\\infty\n \\epsilon_\\ell \\right). \n\\]\nWe also define the functionals\n\\[\n\\xi^d_{[i,j]}:=\n\\begin{cases}\n-\\epsilon_j^*+d\\chi_{[i,j-1]}&\\text{ if } i-j \\text{ is odd,}\\\\\n(d-1)\\epsilon_j^*+d\\chi_{[i,j-1]}&\\text{ if } i-j \\text{ is even}.\\\\\n\\end{cases}\n\\]\n\n\n\n\n\nThe following proposition gives some partial information about\nConjecture~\\ref{conj:hyper:equiv}. \n\n\\begin{proposition}\\label{prop:one direction}\n The following two $(n+1)$-dimensional cones in $\\mathbb{W}$ coincide:\n\\begin{enumerate}[\\rm (i)]\n\\item\\label{conj:cone:rays} \n\tThe cone spanned by the rays \n\t$\\mathbb{Q}_{\\geq 0}\\langle \\rho_{-1}, \\rho_0, \\dots, \\rho_{n-2}, \n\t\\tau^d_{n-2}, \\tau^d_{n-1}\\rangle$.\n\\item\\label{conj:cone:facets} \n\tThe cone defined by the functionals \n\t\\[\n\t\\begin{cases}\n\t\\xi^d_{[i,n]}\\geq 0 & \\text{for all } 0\\leq i \\leq n,\\\\\n\t\\chi_{[i,j]}\\geq 0 & \\text{for all } i\\leq j\\leq n \\text{ and } i-j \\text{ even,}\\\\\n\t\\chi_{[i,i+1]}=0 & \\text{for all } i\\geq n, \\text{ and}\\\\\n\t\\chi_{[n-1,n]}\\geq 0. &\\\\\n\t\\end{cases}\n\t\\]\n\t\\end{enumerate}\nFurthermore, this cone contains $\\overline{\\mathrm{B}_{\\mathbb{Q}}(Q)}$.\n\\end{proposition}\n\n\\begin{proof}\nOne may check that the cones \\eqref{conj:cone:rays} and \\eqref{conj:cone:facets} coincide by an argument entirely analogous\nto that used in the proof of Proposition~\\ref{prop:moreDetailedHyper}.\nIt thus suffices to check that the functionals in~\\eqref{conj:cone:facets} \nare satisfied by all points in $\\mathrm{B}_{\\mathbb{Q}}(Q)$. 
\nBy applying Proposition~\\ref{prop:moreDetailedHyper}, \nwe immediately reduce to the case of showing that $\\xi^d_{[i,n]}$ \nis nonnegative on any Betti sequence $b^Q(M)$. \n\nFix a finitely generated $Q$-module $M$ and a minimal resolution of $M$:\n$0 \\leftarrow M \\leftarrow Q^{b_0} \\leftarrow Q^{b_1} \\leftarrow \\cdots$. \nTo compute $\\xi^d_{[i,n]}(b^Q(M))$, we consider the exact sequence\n\\[\n\\xymatrix{0&\\Omega^i(M)\\ar[l]&Q^{b_i}\\ar[l]&Q^{b_{i+1}}\\ar[l]&\\dots \\ar[l]&Q^{b_n}\\ar[l]&\\Omega^{n+1}(M)\\ar[l]&0\\ar[l]\n}.\n\\]\nAssume now that $n-i$ is even and that $i\\geq 1$. \nTaking multiplicities, we obtain the equation\n\\begin{align*}\n e(\\Omega^i(M))+e(Q^{b_{i+1}}\n +\\dots+ e(Q^{b_{n-1}})+e(\\Omega^{n+1}(M)) &=\n e(Q^{b_{i}})+e(Q^{b_{i+2}})+\\dots+ e(Q^{b_{n}}),\n\\end{align*}\nwhich can be rewritten as\n\\begin{align*}\n e(\\Omega^i(M))&= d\\chi_{[i,n]} \\left( b^Q(M)\\right)-e(\\Omega^{n+1}(M)).\\\\\n \\intertext{Since $\\Omega^{n+1}(M)$ is Cohen--Macaulay, \n $e\\left(\\Omega^{n+1}(M)\\right)\\geq\n \\mu\\left(\\Omega^{n+1}(M)\\right)=b_{n+1}^Q(M)=b_{n}(M).$ Hence}\n e(\\Omega^i(M))&\\leq d\\chi_{[i,n]} \\left( b^Q(M)\\right)-b_{n}(M)\n =\\xi^d_{[i,n]}\\left( b^Q(M)\\right).\n\\end{align*}\nIt follows that $\\xi^d_{[i,n]}\\left( b^Q(M)\\right)$ is nonnegative, as desired.\n\nWhen $n-i$ is odd and $i\\geq 1$, essentially the same argument holds,\nstarting instead from the exact sequence\n\\[\n\\xymatrix{0&\\Omega^i(M)\\ar[l]&Q^{b_i}\\ar[l]&Q^{b_{i+1}}\\ar[l]&\\dots\n \\ar[l]&Q^{b_{n-1}}\\ar[l]&\\Omega^{n}(M)\\ar[l]&0\\ar[l] }.\n\\]\nThe same argument also holds when $i=0$, after one replaces \n$e(\\Omega^i(M))$ by the number\n\\[\ne':=\\begin{cases}\ne(M) & \\text{ if } \\dim(M)=\\dim(Q),\\\\\n0& \\text{ otherwise}. \n\\end{cases} \n\\qedhere\n\\]\n\\end{proof}\n \nThe opposite inclusion also holds when $Q$ has embedding dimension $2$.\n\n\\begin{proposition} If $Q$ is a hypersurface ring of embedding\n dimension $2$, then $\\mathrm{B}_{\\mathbb{Q}}(Q)$ satisfies\n Conjecture~\\ref{conj:hyper:equiv}.\n\\end{proposition}\n\\begin{proof} \n By Proposition~\\ref{prop:one direction}, it suffices to show that the\n desired extremal rays lie in $\\overline{\\mathrm{B}_{\\mathbb{Q}}(Q)}$. We may quickly\n reduce to showing that $\\tau_0^d, \\tau_1^d\\in \\overline{\\mathrm{B}_{\\mathbb{Q}}(Q)}$.\n Let $\\mathfrak m_Q$ denote the maximal ideal of $Q$, $Q':=Q\/\\mathfrak m_Q^{d-1}$, and\n$\\omega_{Q'}$ be its canonical module. A direct computation confirms that\n $\n d \\tau_1^d=b^Q(Q')$ and $\n d\\tau_0^d=b^Q\\left( \\omega_{Q'}\\right).\n $\n\\end{proof}\n\n\\begin{remark}[Codimension 2 complete intersections]\\label{rmk:codim2}\n For arbitrary quotient rings $Q$ of a regular local ring $R$, the\n cone of Betti sequences $\\mathrm{B}_{\\mathbb{Q}}(Q)$ need not be finite dimensional. For\n instance, consider $Q=\\mathbb{Q}[[x,y]]\/\\$ for any regular\n sequence $f_1, f_2$ inside $\\^2$. Let $\\mathbf{T}_\\bullet$ be the\n Tate resolution of the residue field of $Q$. Since $Q$ is Gorenstein, and\n hence self-injective, we may construct a doubly infinite acyclic\n complex $\\mathbf{F}_\\bullet$ as below:\n\\[\n\\mathbf{F}_\\bullet\\colon \\quad \n\\cdots \\longleftarrow \\mathbf{T}_1^* \\longleftarrow \\mathbf{T}_0^* \n\\longleftarrow \\mathbf{T}_0 \\longleftarrow \n\\mathbf{T}_1 \\longleftarrow \\mathbf{T}_2 \\longleftarrow \\cdots.\n\\smallskip\n\\]\nFor all $i\\geq 0$, let $M_i$ be the kernel of $\\mathbf{T}^*_i\\to\n\\mathbf{T}^*_{i+1}$, and set $\\tau_i:=b^Q(M_i)$. 
The $\\tau_i$ are\nlinearly independent since $\\operatorname{rank} \\mathbf{T}_i=i+1$ for all $i$\n(see~\\cite{avramov-buchweitz}*{Example~4.2} for details). So we see\nthat $\\mathrm{B}_{\\mathbb{Q}}(Q)$ is infinite dimensional. In particular, $\\mathrm{B}_{\\mathbb{Q}}(Q)$ is\nspanned by infinitely many extremal rays.\n\\end{remark}\n\n\\begin{bibdiv\n\\begin{biblist\n\n\n\\bib{avramov-notes}{article}{\n author={Avramov, Luchezar L.},\n title={Infinite free resolutions},\n conference={\n title={Six lectures on commutative algebra},\n },\n book={\n series={Mod. Birkh\\\"auser Class.},\n publisher={Birkh\\\"auser Verlag},\n place={Basel},\n },\n date={2010},\n pages={1--118},\n \n}\n\n\n\\bib{avramov-buchweitz}{article}{\n author={Avramov, Luchezar L.},\n author={Buchweitz, Ragnar-Olaf},\n title={Homological algebra modulo a regular sequence with special\n attention to codimension two},\n journal={J. Algebra},\n volume={230},\n date={2000},\n number={1},\n pages={24--67},\n}\n\n\n\\bib{BEKStensor}{article}{\n author={Berkesch, Christine},\n author={Erman, Dan},\n author={Kummini, Manoj},\n author={Sam, Steven~V},\n title={Tensor complexes: Multilinear free resolutions\n constructed from higher tensors},\n note={\\tt arXiv:1101.4604},\n date={2011},\n}\n\n\\bib{boij-sod1}{article}{\n AUTHOR = {Boij, Mats},\n AUTHOR = {S{\\\"o}derberg, Jonas},\n TITLE = {Graded {B}etti numbers of {C}ohen--{M}acaulay modules and the\n multiplicity conjecture},\n JOURNAL = {J. Lond. Math. Soc. (2)},\n VOLUME = {78},\n YEAR = {2008},\n NUMBER = {1},\n PAGES = {85--106},\n}\n\n\\bib{boij-sod2}{article}{\n author={Boij, Mats},\n author={S{\\\"o}derberg, Jonas},\n title={Betti numbers of graded modules and the multiplicity\n conjecture in the non-{C}ohen--{M}acaulay case},\n journal={Algebra Number Theory (to appear)},\n year={2008},\n note={\\tt arXiv:0803.1645v1},\n}\n\n\\bib{bruns-vetter}{book}{\n author={Bruns, Winfried},\n author={Vetter, Udo},\n title={Determinantal rings},\n series={Lecture Notes in Mathematics},\n volume={1327},\n publisher={Springer-Verlag},\n place={Berlin},\n date={1988},\n pages={viii+236},\n}\n\n\\bib{buchs-eis-gor}{article}{\n AUTHOR = {Buchsbaum, David A.},\n AUTHOR = {Eisenbud, David},\n TITLE = {Algebra structures for finite free resolutions, and some\n structure theorems for ideals of codimension {$3$}},\n JOURNAL = {Amer. J. Math.},\n \n VOLUME = {99},\n YEAR = {1977},\n NUMBER = {3},\n PAGES = {447--485},\n}\n\n\\bib{carlsson2}{article}{\n author={Carlsson, Gunnar},\n title={On the rank of abelian groups acting freely on $(S^{n})^{k}$},\n journal={Invent. 
Math.},\n volume={69},\n date={1982},\n number={3},\n pages={393--400},\n}\n\n\\bib{carlsson}{article}{\n AUTHOR = {Carlsson, Gunnar},\n TITLE = {Free {$({\\bf Z}\/2)\\sp k$}-actions and a problem in commutative\n algebra},\n BOOKTITLE = {Transformation groups, {P}ozna\\'n 1985},\n SERIES = {Lecture Notes in Math.},\n VOLUME = {1217},\n PAGES = {79--83},\n PUBLISHER = {Springer},\n ADDRESS = {Berlin},\n YEAR = {1986},\n}\n\n\\bib{triangulations}{book}{\n author={De~Loera, Jes{\\'u}s A.},\n author={Rambau, J{\\\"o}rg},\n author={Santos, Francisco},\n title={Triangulations},\n series={Algorithms and Computation in Mathematics},\n volume={25},\n note={Structures for algorithms and applications},\n publisher={Springer-Verlag},\n place={Berlin},\n date={2010},\n pages={xiv+535},\n}\n\n\\bib{eisenbud-ci}{article}{\n author={Eisenbud, David},\n title={Homological algebra on a complete intersection, with an\n application to group representations},\n journal={Trans. Amer. Math. Soc.},\n volume={260},\n date={1980},\n number={1},\n pages={35--64},\n}\n\n\\bib{efw}{article}{\n author={Eisenbud, David},\n author={Fl\\o ystad, Gunnar},\n author={Weyman, Jerzy},\n title={The existence of pure free resolutions},\n journal={Ann. Inst. Fourier (Grenoble)},\n volume={61},\n date={2011},\n number={3},\n pages={905\\ndash 926}}\n\n\\bib{ES-JAMS}{article}{\n author={Eisenbud, David},\n author={Schreyer, Frank-Olaf},\n title={Betti numbers of graded modules and cohomology of vector\n bundles},\n date={2009},\n journal={J. Amer. Math. Soc.},\n volume={22},\n number={3},\n pages={859\\ndash 888},\n}\n\n\\bib{ES:ICMsurvey}{inproceedings}{\n author={Eisenbud, David},\n author={Schreyer, Frank-Olaf},\n title={Betti numbers of syzygies and cohomology of coherent\n sheaves},\n date={2010},\n booktitle={Proceedings of the {I}nternational {C}ongress of\n {M}athematicians},\n note={Hyderabad, India},\n}\n\n\\bib{hartshorne-vector}{article}{\n AUTHOR = {Hartshorne, Robin},\n TITLE = {Algebraic vector bundles on projective spaces: a problem list},\n JOURNAL = {Topology},\n \n VOLUME = {18},\n YEAR = {1979},\n NUMBER = {2},\n PAGES = {117--128},\n}\n\n\\bib{HerzogKuhlPure84}{article}{\n author={Herzog, J.},\n author={K{\\\"u}hl, M.},\n title={On the {B}etti numbers of finite pure and linear resolutions},\n date={1984},\n journal={Comm. Algebra},\n volume={12},\n number={13-14},\n pages={1627\\ndash 1646},\n}\n\n\n\\bib{M2}{misc}{\n label={M2},\n author={Grayson, Daniel~R.},\n author={Stillman, Michael~E.},\n title = {Macaulay 2, a software system for research\n\t in algebraic geometry},\n note = {Available at \\url{http:\/\/www.math.uiuc.edu\/Macaulay2\/}},\n}\n\\bib{peeva-stillman}{article}{\n author={Peeva, Irena},\n author={Stillman, Mike},\n title={Open problems on syzygies and Hilbert functions},\n journal={J. Commut. Algebra},\n volume={1},\n date={2009},\n number={1},\n pages={159--195},\n}\n\n\n\\bib{shamash}{article}{\n author={Shamash, Jack},\n title={The Poincar\\'e series of a local ring},\n journal={J. Algebra},\n volume={12},\n date={1969},\n pages={453--470},\n}\n\n\n\n\\end{biblist}\n\\end{bibdiv}\n\n\\end{document\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\n\n\\section{Introduction}\n\nRecommender systems are ubiquitous in our lives, from the prioritization of content in news feeds to matching algorithms for dating or hiring.\nThe objective of recommender systems is traditionally formulated as maximizing a proxy for user satisfaction such as ranking performance. 
However, it has been observed that these recommendation strategies can have undesirable side effects. For instance, several authors discussed popularity biases and winner-take-all effects that may lead to disproportionately expose a few items even if they are assessed as only slightly better than others \\citep{abdollahpouri2019unfairness,singh2018fairness,biega2018equity}, or disparities in content recommendation across social groups defined by sensitive attributes \\citep{sweeney2013discrimination,imana2021auditing}. An approach to mitigate these undesirable effects is to take a more general perspective to the objective of recommendation systems. Considering recommendation as an allocation problem \\citep{singh2018fairness,patro2020fairrec} in which the ``resource'' is the exposure to users, the objective of recommender systems is to allocate this resource fairly, i.e., by taking into account the interests of the various stakeholders -- users, content producers, social groups defined by sensitive attributes -- depending on the application context. This perspective yields the traditional objective of recommendation when only the ranking performance averaged over individual users is taken into account.\n\nThere are two main challenges associated with the fair allocation of exposure in recommender systems. The first challenge is the specification of the formal objective function that defines the trade-off between the possibly competing interests of the stakeholders in a given context. The second challenge is the design of a scalable algorithmic solution: when considering the exposure of items across users in the objective function, the system needs to account for what was previously recommended (and, potentially, to whom) when generating the recommendations for a user. This requires solving a global optimization problem in the space of the rankings of all users. In contrast, traditional recommender systems simply sort items by estimated relevance to the user, irrespective of what was recommended to other users.\n\nIn this paper, we address the algorithmic challenge, with a solution that is sufficiently general to capture many objective functions for ranking with fairness of exposure, leaving the choice of the exact objective function to the practitioner. Following previous work on fairness of exposure, we consider objective functions that are concave functions that should be optimized in the space of randomized rankings \\citep{singh2018fairness,singh2019policy,morik2020controlling,do2021two}. Our algorithm, \\textsc{Offr}\\xspace (\\texttt{O}nline \\texttt{F}rank-Wolfe for \\texttt{F}air \\texttt{R}anking), is a computationally efficient algorithm that optimizes such objective functions \\textit{online}, i.e., by generating rankings on-the-fly as users request recommendations. The algorithm dynamically modifies item scores to optimize for both user utility and the selected fairness of exposure objective. We prove that the objective function converges to the optimum in $O(1\/\\sqrt{t})$, where $t$ is the number of time steps. The computational complexity of \\textsc{Offr}\\xspace at each time step is dominated by the cost of sorting, and it requires only $O(\\#users +\\#items)$ storage. The computation cost of \\textsc{Offr}\\xspace are thus of the same order as what is required in traditional recommenders systems. 
Consequently, using \\textsc{Offr}\\xspace, taking into account fairness of exposure in the recommendations is (almost) free.\nOur main technical insight is to observe that in the context of fair ranking, the usage of Frank-Wolfe algorithms \\citep{frank1956algorithm} resolves two difficulties:\n\\begin{enumerate}[leftmargin=*]\n \\item Frank-Wolfe algorithms optimize in the space of probability distributions but use at each round a deterministic outcome as the update direction. In our case, it means that \\textsc{Offr}\\xspace outputs a (deterministic) ranking at each time step while implicitly optimizing in the space of randomized rankings.\n \\item Even though the space of rankings is combinatorial, the objective functions used in fairness of exposure have a linear structure that Frank-Wolfe algorithms can leverage, as already noticed by \\citet{do2021two}.\n\\end{enumerate}\n\nCompared to existing algorithms, \\textsc{Offr}\\xspace is the first widely applicable and scalable algorithm for fairness of exposure in rankings. Existing online ranking algorithms for fairness of exposure \\citep{morik2020controlling,biega2018equity,yang2021maximizing} are limited in scope as they apply to only a few possible fairness objectives, and only have weak theoretical guarantees. \\citet{do2021two} show how to apply the Frank-Wolfe algorithm to general smooth and concave objective functions for ranking. However, they only solve the problem in a \\textit{batch} setting, i.e., computing the recommendations of all users at once, which makes the algorithm impractical for large problems, because of both computation and memory costs. Our algorithm can be seen as an online variant of this algorithm, which resolves all scalability issues.\n\nWe showcase the generality of \\textsc{Offr}\\xspace on three running examples of objective functions for fairness of exposure. The first two objectives are welfare functions for two-sided fairness \\citep{do2021two}, and the criterion of quality-weighted exposure \\citep{singh2018fairness,biega2018equity}. The third objective, which we call \\textit{balanced exposure to user groups}, is novel. Taking inspiration from audits of job advertisement platforms \\citep{imana2021auditing}, this objective considers maximizing ranking performance while ensuring that each item is evenly exposed to different user groups defined by sensitive attributes. \n\n\nIn the remainder of the paper, we present the recommendation framework and the different fairness objectives we consider in the next section. In Sec.~\\ref{sec:online}, we present our online algorithm in its most general form, as well as its regret bound. In Sec.~\\ref{sec:algorithms}, we instantiate the algorithm on three fairness objectives and provide explicit convergence rates in each case. We present our experiments in Sec.~\\ref{sec:xp}. We discuss the related work in Sec.~\\ref{sec:relatedwork}. Finally, in Sec.~\\ref{sec:discussion}, we discuss the limitations of this work and avenues for future research. \n}\n\n\n{\n\\section{Fairness of exposure in rankings}\n\\label{sec:framework}\n\nThis paper addresses the online ranking problem, where users arrive one at a time and the recommender system produces a ranking of $k$ items for that user. We focus on an abstract framework where the recommender system has two informal goals. First, the recommended items should be relevant to the user. 
Second, the exposure of items should be distributed ``fairly'' across users, for some definition of fairness which depends on the application context. We formalize these two goals in this section, by defining objective functions composed of a weighted sum of two terms: the \\textit{user objective} which depends on the ranking performance from the user's perspective, and the \\textit{fairness objective}, which depends on the exposure of items. In this section, we focus on the \\textit{ideal}\\xspace objective functions, which are defined in a \\textit{static}\\xspace ranking framework. In the next sections, we focus on the online ranking setting, where at each time step, an incoming user requests recommendations and the recommender system produces the recommendation list on-the-fly while optimizing these ideal objective functions. \n\nIn order to disentangle the problem of learning user preferences from the problem of generating fair recommendations, we consider that user preferences are given by an oracle. We start this section by describing the recommendation framework we consider. We then present the fairness objectives we focus on throughout the paper. \n\n\\paragraph{Notation} Integer intervals are denoted within brackets, i.e., $\\forall n\\in\\mathbb{N}$, $\\intint{n}=\\{1, ..., n\\}$. \nWe use the Dirac notation $\\dotp{x}{y}$ for the dot product of two vectors $x$ and $y$ of the same dimension. Finally, $\\indic{{\\rm expr}}$ is $1$ when ${\\rm expr}$ is true, and $0$ otherwise.\n\n\n\\subsection{Recommendation framework}\n\nWe consider a recommendation problem with $n$ users and $m$ items. We identify the set of users with $\\intint{n}$ and the set of items with $\\intint{m}$. We denote by $\\mu_{ij} \\in[0,1]$ the value of recommending item $j$ to user $i$ (e.g., a rating normalized in $[0,1]$). To account for the fact that users are more or less frequent users of the platform, we define the \\textit{activity} of user $i$ as a weight $w_i\\in[0,1]$. We consider that $w=(w_1, ..., w_n)$ is a probability distribution, so that in the online setting described later in this paper, $w_i$ is the probability that the current user at a given time step is $i$.\n\nThe recommendation for a user is a \\textit{top-$k$ ranking} (or simply \\textit{ranking} when the context is clear), i.e., a sorted list of $k$ unique items, where typically $k\\ll m$. Formally, we represent a ranking by a mapping $\\sigma:\\intint{k}\\rightarrow\\intint{m}$ from ranks to recommended items with the constraint that different ranks correspond to different items. \nThe ranking performance on the user side follows the \\textit{position-based} model, similarly to previous work \\citep{singh2018fairness,singh2019policy,patro2020fairrec,biega2018equity,do2021two}. Given a set of non-negative, non-increasing \\textit{exposure weights} $b=(b_1, ..., b_k)$, the ranking performance of $\\sigma$ for user $i$, denoted by $u_i(\\sigma)$, is equal to:\n\\begin{align}\\label{eq:position_based}\n u_i(\\sigma) = \\sum_{\\k=1}^{k} \\mu_{i,\\sigma(\\k)}b_\\k && \\text{with $b_1 \\geq ... \\geq b_k\\geq 0$}.\n\\end{align}\nWe use the shorthand \\textit{user utility} to refer to $u_i$.\nFollowing previous work on fairness of exposure, we interpret the weights in $b$ as being commensurate with the exposure an item receives given its rank. The weights are non-increasing to account for the \\textit{position bias}, which means that the user attention to an item decreases with the rank of the item.
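\n\nAs a small numerical illustration (the numbers here are our own, chosen only for exposition): with $k=2$, exposure weights $b=(1, 0.63)$, and values $\\mu_{i,\\sigma(1)}=0.8$, $\\mu_{i,\\sigma(2)}=0.5$, \\eqref{eq:position_based} gives $u_i(\\sigma) = 0.8\\times 1 + 0.5\\times 0.63 \\approx 1.12$, whereas swapping the two items would give $0.5\\times 1 + 0.8\\times 0.63 \\approx 1.00$: placing the more relevant item first yields a higher utility.\n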
\nGiven a top-$k$ ranking $\\sigma$, the \\textit{exposure vector induced by $\\sigma$}, denoted by $E(\\sigma)\\in\\Re^m$ assigns each item to its exposure in $\\sigma$:\n\\begin{align}\n \n \\forall j\\in\\intint{m}, E_j(\\sigma) = \\begin{cases}\n b_{\\k}&\\text{if~}\\exists \\k\\in\\intint{k}, \\sigma(\\k)=j\\\\\n 0&\\text{otherwise}\n \\end{cases}.\n\\end{align}\nThe user utility is then equal to $\n u_i(\\sigma) = \\sum\\limits_{j=1}^m \\mu_{ij}E_j(\\sigma) = \\dotp{\\mu_i}{E(\\sigma)}$.\n\nIn practice, the ranking given by a recommender system to a user is not necessarily unique: previous work in \\textit{static}\\xspace rankings consider randomization in their rankings \\citep{singh2018fairness}, while in our case of online ranking, it is possible that the same user receives different rankings at different time steps. In that case, we are interested in averages of user utilities and item exposures.\nTo formally define these averages, we use the notation: \n\\begin{equation}\n\\begin{aligned}\n {\\mathcal{E}}&=\\xset[\\big]{E(\\sigma): \\sigma \\text{~is a~top-$k$ ranking}}\\\\ \\overline{\\arms} &= \\mathrm{convexhull}({\\mathcal{E}}) & \\Pi &= \\overline{\\arms}^{n}.\n \\end{aligned}\n\\end{equation}\n${\\mathcal{E}}$ is the set of possible item exposures vectors and $\\overline{\\arms}$ is the set of possible average exposure vectors. $\\Pi$ is an \\textit{exposure matrix}, where $\\pi_{ij}$ is the average exposure of item $j$ to user $i$. \nUnder the position-based model, a matrix $\\pi\\in\\Pi$ characterizes a recommender system since it specifies the average exposure of every item to every user. We use $\\pi$ as a convenient mathematical device to study the optimization problems of interests, keeping in mind that out algorithms effectively produce a ranking at each time step.\n\nRecalling that $w$ represents the user activities, the user utilities and total item exposures under $\\pi$ are defined as\n\\begin{equation}\n\\begin{aligned}\\label{eq:position_based model_exposures}\n \\text{(utility of user $i$)} && u_i(\\pi) = \\dotp{\\mu_i}{\\pi_i}\\\\ \\text{(exposure of item $j$)}&& v_j(\\pi) = \\sum_{i=1}^n w_i \\pi_{ij}.\n \\end{aligned}\n\\end{equation}\nFairness of exposure refers to objectives in recommender systems where maximizing average user utility is not the sole or main objective of the system. Typically, the exposure of items $v_j$, or variants of them, should also be taken into account in the recommendation. We formulate the goal of a recommender system as optimizing an objective function $f(\\pi)$ over $\\pi\\in\\Pi$, where $f$ accounts for both the user utility and the fairness objectives. \n\n\n\\subsection{Fairness Objectives}\\label{sec:fairness_objectives}\n\nWe now present our three examples of objective functions $f(\\pi)$ in order of ``difficulty'' to perform online ranking compared to static ranking. \nIn all three cases, it is easy to see that the objective functions are \\textit{concave} with respect to the recommended exposures $\\pi$. The objective functions should be maximized, so the optimal exposures $\\pi^*$ satisfy\n\\begin{equation}\n \\pi^* \\in \\argmax_{\\pi\\in\\Pi} f(\\pi).\n\\end{equation}\nSince our algorithm works on any concave function of the average exposures respecting some regularity conditions, we emphasize that the three objective functions below are only a few examples among many. 
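\n\nTo make these quantities concrete before specifying the objectives, the following minimal Python sketch (our own illustration; the function and variable names are ours, not part of the paper) computes the user utilities $u_i(\\pi)$ and item exposures $v_j(\\pi)$ from an exposure matrix $\\pi$ and user activities $w$:\n\\begin{verbatim}\nimport numpy as np\n\ndef position_based_quantities(pi, mu, w):\n    # pi: (n, m) average exposure matrix, pi[i, j] = exposure of item j to user i\n    # mu: (n, m) matrix of (user, item) values\n    # w:  (n,)   user activities, summing to 1\n    u = (mu * pi).sum(axis=1)  # u[i] = <mu_i, pi_i>, utility of user i\n    v = w @ pi                 # v[j] = sum_i w_i pi[i, j], exposure of item j\n    return u, v\n\\end{verbatim}\nThe three objectives below are concave functions of $\\pi$ built from such utilities and (for balanced exposure) per-group variants of these exposures.\n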
\n\n\n\\paragraph{Two-sided fairness} The first example is from \\citet{do2021two} who optimize an additive concave welfare function of user utilities and item exposures. Interpreting item exposure as the utility of the item's producer, this approach is grounded in notions of distributive justice from welfare economics and captures both user- and item-fairness \\citep{do2021two}. \nFor $\\eta>0$, $\\beta>0$ and $\\alpha_1\\in (-\\infty, 1), \\alpha_2\\in(-\\infty, 1)$, the objective function is:\n\\begin{equation}\n\\begin{aligned}\n \\label{eq:welfobj}\n f(\\pi) &= \\sum_{i=1}^n w_i\\psi_{\\alpha_1}\\!\\big(u_i(\\pi)\\big) + \\frac{\\beta}{m} \\sum_{j=1}^m \\psi_{\\alpha_2}\\!\\big(v_j(\\pi)\\big) \n \\\\\n \\text{where~} \\psi_{\\alpha}(x) &= \\begin{cases}\n {\\rm sign}(\\alpha)(\\eta+x)^\\alpha&\\text{~if~} \\alpha\\neq 0\\\\\n \\log(\\eta+x)&\\text{~if~}\\alpha=0\n \\end{cases}.\n \\end{aligned}\n\\end{equation}\nHere, $\\eta>0$ avoids infinite derivatives at $0$, $\\beta>0$ controls the relative weight of user-side and item-side objectives, and $\\alpha_1<1$ (resp. $\\alpha_2<1$) controls how much we focus on maximizing the utility of the worse-off users (resp. items) \\citep{do2021two}.\n\n\\paragraph{Quality-weighted exposure} One of the main criteria for fairness of exposure is \\emph{quality-weighted} exposure \\citep{biega2018equity,wu2021tfrom} (also called merit-based fairness \\citep{singh2018fairness,morik2020controlling}). A measure $q_j$ of the overall quality of an item is taken as reference, and the criterion stipulates that the exposure of an item should be proportional to its quality. $q_j$ is often defined as the average value $\\mu_{ij}$ over users. Using this definition of $q_j$, as noted by \\citet{do2021two}, it is possible to optimize trade-offs between average user utility and proportional exposure using a penalized objective of the form:\n\\begin{equation}\n\\begin{aligned}\\label{eq:quaobj}\n &f(\\pi) = \\sum_{i=1}^n w_iu_i(\\pi) - \\beta \\sqrt{\\eta+\\frac{1}{m}\\sum_{j=1}^m \\Big(\\quaavg v_j(\\pi)-q_j\\norm{b}_1\\Big)^2} \\\\\n &\\text{~where~} q_j = \\sum_{i=1}^n w_i\\mu_{ij} \\text{~~and~~} q_{\\mathrm{avg}}=\\frac{1}{m}\\sum_{j=1}^m q_j.\n\\end{aligned}\n\\end{equation}\nAs before, $\\beta>0$ controls the trade-off between user utilities and the fairness of exposure penalty and $\\eta>0$ avoids infinite derivatives at $0$. This form of the exposure penalty was chosen because it is concave and differentiable, and it is equal to zero when exposure is exactly proportional to quality, i.e., when $\\forall j, j', \\frac{v_j}{q_j} = \\frac{v_{j'}}{q_{j'}}$. We use $\\quaavg v_j(\\pi)-q_j\\norm{b}_1$ rather than \n$\\frac{v_j(\\pi)}{q_j} -\\frac{\\norm{b}_1}{q_{\\mathrm{avg}}}$ because the former is more stable when qualities are close to $0$ or estimated.\n\n\\paragraph{Balanced exposure to user groups} \nWe also propose to study a new criterion we call \\textit{balanced exposure to user groups}, which aims at exposing every item evenly across different user groups. For instance, a designer of a recommendation system might want to ensure a job ad is exposed in similar proportions to men and women \\citep{imana2021auditing}, or to even proportions within each age category. \nLet $\\mathcal{S} = (s_1, ..., s_{\\card{\\mathcal{S}}})$ be a set of non-empty groups of users. We do not need $\\mathcal{S}$ to contain all users, and groups may be overlapping.
Let $\\v_{j|\\group}$ be the exposure of item $j$ within the group $s}%{\\mathfrak{s}$, i.e., the amount of exposure $j$ receives in group $s}%{\\mathfrak{s}$ with respect to the total exposure available for this group. That is, for any $\\pi \\in \\Pi$, define\n\\begin{equation}\\nonumber\n\\v_{j|\\group}(\\pi):=\\sum_{i\\ins}%{\\mathfrak{s}} \\frac{w_i}{\\overline{w}_{\\group}} \\pi_{ij},\\text{ with }\\overline{w}_{\\group} := \\sum_{i\\ins}%{\\mathfrak{s}} w_i.\n\\quad\n\\v_{j|\\mathrm{avg}} = \\frac{1}{\\card{\\mathcal{S}}}\\sum_{s}%{\\mathfrak{s}\\in\\mathcal{S}} \\v_{j|\\group}(\\pi)\n\\end{equation}\nAlso, let $\\v_{j|\\mathrm{avg}} := (1\/\\card{\\mathcal{S}})\\sum_{s}%{\\mathfrak{s}\\in\\mathcal{S}} \\v_{j|\\group}(\\pi)$ be the average exposure for item $j$, across all the groups.\nThe objective function we consider takes the following form, where $\\beta>0$ and $\\eta>0$ play the same roles as before:\n\\begin{align}\\label{eq:balancedobj}\n \n \n f(\\pi) = \\sum_{i=1}^n w_i u_i(\\pi) - \\frac{\\beta}{m} \\sum_{j=1}^m \\sqrt{\\eta + \\sum_{s}%{\\mathfrak{s}\\in\\mathcal{S}} \\Big(\\v_{j|\\group}(\\pi)-\\v_{j|\\mathrm{avg}}(\\pi)\\Big)^2}.\n \n \n\\end{align}\n}\n\n{\n\\section{Fast online ranking}\\label{sec:online}\n\n\\subsection{Online ranking}\nThe \\emph{online setting} we consider is summarized as follows. At each time step $t\\geq 1$:\n\\begin{enumerate}\n\\item \nA user $\\it\\in\\intint{n}$ asks for recommendations. We assume $\\it$ is drawn at random from the fixed but unknown distribution of user activities with parameters $w$, i.e., $\\it\\sim{\\rm Categorical}(w)$.\n\\item The recommender system picks a ranking $\\sigma^{(t)}$.\n\\end{enumerate}\nNote that as stated before, the main assumptions of this framework are the fact that incoming users are sampled independently at each step from a distribution that remains constant over time. In our setting, we consider that the (user, item) values $\\mu_{ij}$ are known to the system. However, the user activities $w_i$ are unknown. \n\n\n\nLet $\\at=E(\\sigma^{(t)})$ be the exposure vector induced by $\\sigma^{(t)}$, and define, for every user $i$:\n\\begin{itemize}\n \\item The user counts at time $t$: $\\displaystylec^{(t)}_i = \\sum_{\\tau \\le t}\\indic{\\itp=i}$;\n \\item The average exposure at time $t$: \n $\\displaystyle \\pih^{(t)}_i=\\frac{1}{c^{(t)}_i}\\sum_{\\tau \\le t}\\indic{\\itp=i} \\atp$.\n\\end{itemize}\n\nGiven an objective function $f$ such as the ones defined in the previous section, our goal is to design computationally efficient algorithms with low \\textit{regret} when $t$ grows to infinity. More formally the goal of the algorithm is to guarantee:\n\\begin{align}\\label{eq:regret}\n R^{(t)} = \\max_{\\pi\\in\\Pi}\\big[f(\\pi)\\big] - \\mathbb{E}[f(\\pih^{(t)})] \\xrightarrow[t\\to\\infty]{} 0\n \n\\end{align}\nwhere the expectation in $R^{(t)}$ is taken over the random draws of ${i^{(1)}}, ..., \\it$ and the $O(.)$ hides constants that depend on the problem, such as the number of users or items.\n\n\\subsection{The \\textsc{Offr}\\xspace algorithm}\n\nWe describe in this section our generic algorithm, called \\textsc{Offr}\\xspace for \\texttt{O}nline \\texttt{F}rank-Wolfe for \\texttt{F}air \\texttt{R}anking. \n\\textsc{Offr}\\xspace works with an abstract objective function \n$\\obj_\\w:\\Pi\\rightarrow\\Re$, which is parameterized by the vector of user activities $w$. 
The $\\obj_\\w(\\pi)$ of this section is exactly the $f(\\pi)$ of the previous section, except that we make explicit the dependency on $w$ because it plays an important role in the algorithm and its analysis.\n\n\\paragraph{Assumptions on $w$ and $\\obj_\\w$} In the remainder, we assume $w$ is fixed and non-degenerate, i.e., $\\forall i\\in\\intint{n}, w_i>0$. We assume that for every $w$, $\\pi\\mapsto\\obj_\\w(\\pi)$ is \\textit{concave} and \\textit{differentiable}. More importantly, the fundamental objects in our algorithm are the partial derivatives of $f$ with respect to $\\pi_i$, normalized by $w_i$. Given a user index $i$, let \n\\begin{align}\n g_{w, i}(\\pi) = \\frac{1}{w_i}\\frac{\\partial \\obj_\\w}{\\partial \\pi_i}(\\pi) \\in \\Re^m.\n\\end{align}\nWe assume that $g_{w, i}$ is bounded and Lipschitz with respect to $\\pi_i$: for every $\\pi\\in\\overline{\\arms}^n$ and every $\\pi'_i\\in\\overline{\\arms}$, we have:\n\\begin{itemize}\n \\item Bounded gradients: $\\norm{g_{w, i}(\\pi)}_\\infty \\leq G_i$;\n \\item Lipschitz gradients: \n\\end{itemize}\n\\begin{equation*}\n \\norm[\\big]{g_{w, i}(\\pi) - g_{w, i}(\\pi_1, \\ldots, \\pi_{i{-}1}, \\pi'_i, \\pi_{i{+}1}, \\ldots, \\pi_{n})}_2 \\leq \\lipgwi\\norm{\\pi_i-\\pi'_i}_2.\n\\end{equation*}\nNotice that with the normalization by $w_i$ in $g_{w, i}$, these assumptions guarantee that the importance of a user is commensurable with their activity, i.e., that the objective does not depend disproportionately on users we never see.\n\\paragraph{Online Frank-Wolfe with an approximate gradient} \nOur algorithm is described by the following rule for choosing $\\at$. First, we rely on $\\hat{g}^{(t)}_i$, which is an approximation of the gradient $g_{w, i}(\\pih^{(t-1)})$. We describe later the properties required of this approximation (see Th.~\\ref{thm:boundgeneral} below), and the approximation we use in practice (see \\eqref{eq:which_approximate_gradient} below). Notice that we rely on an approximation because the user activities are unknown. Then, choose $\\at$ as:\n\\begin{align}\\label{eq:fwmain}\n \\at\\in\\argmax_{\\a\\in{\\mathcal{E}}} \\dotp{\\hat{g}^{(t)}_{\\it}}{\\a}.\n\\end{align}\nSince we compute a maximum dot product with a gradient (or an approximation thereof), our algorithm is a variant of online Frank-Wolfe algorithms. We discuss in more detail the relationship with this literature in Sec.~\\ref{sec:relatedwork}. \n\nFrank-Wolfe algorithms shine when the $\\argmax$ in \\eqref{eq:fwmain} can be computed efficiently. As previously noted by \\citet{do2021two} who only study \\textit{static}\\xspace ranking, Frank-Wolfe algorithms are particularly suited to ranking because \\eqref{eq:fwmain} only requires a top-$k$ sorting. Let $\\mathrm{top\\K}(x)$ be a routine that returns the indices of the $k$ largest elements in vector $x$.\\footnote{Formally, $\\sigma=\\mathrm{top\\K}(x) \\implies \\Big(x_{\\sigma(1)} \\geq ... \\geq x_{\\sigma(k)}$ and $\\forall j\\not\\in\\xset[\\big]{\\sigma(1), ..., \\sigma(k)}, x_j \\leq x_{\\sigma(k)}\\Big)$, using an arbitrary tie breaking rule as it does not play any role in the analysis.} We have: \n\\begin{proposition} \n\\protect{\\citep[Thm.
1]{do2021two}}\n\\label{prop:sort}\n\\begin{equation*}\\sigma^{(t)} = \\mathrm{top\\K}\\big(\\hat{g}^{(t)}_i\\big) \\implies E\\big(\\sigma^{(t)}\\big) \\in \\argmax_{\\a\\in{\\mathcal{E}}} \\dotp{\\hat{g}^{(t)}_i}{\\a}.\n\\end{equation*}\n\\end{proposition}\n\nWe call \\textsc{Offr}\\xspace (Online Frank-Wolfe for Fair Ranking) the usage of the online Frank-Wolfe update \\eqref{eq:fwmain} in ranking tasks, i.e., using Prop. \\ref{prop:sort} to efficiently perform the $\\argmax$ computation of \\eqref{eq:fwmain}.\n\nWe are now ready to state our main result regarding the convergence of \\textit{dynamic}\\xspace ranking. The result does not rely on the specific structure of ranking problems. The result below is valid as long as ${\\mathcal{E}}\\subset \\Re^m$ is a finite set with $\\forall \\a\\in{\\mathcal{E}}, 0\\leq \\a_j\\leq 1$. We denote by $B_\\arms = \\max_{\\a\\in{\\mathcal{E}}} \\norm{\\a}_1$ ($B_\\arms = \\norm{b}_1$ in our case). \n\\begin{theorem}[Convergence of \\eqref{eq:fwmain}]\n\\label{thm:boundgeneral} \nLet $\\pih_0\\in\\overline{\\arms}^n$, and assume there exists $D_i$ such that $\\forall t\\geq 1$ and $\\forall i\\in\\intint{n}$, we have:\n\\begin{align}\\label{eq:approximategradient}\n \n \n \n \\mathbb{E}\\Big[\n \\norm[\\Big]{\\hat{g}^{(t)}_i - g_{w, i}\\big(\\pih^{(t-1)}\\big)}_\\infty\n \\Big] \\leq \\frac{D_i}{\\sqrt{t}},\n\\end{align}\nwhere the expectation is taken over ${i^{(1)}}, \\ldots, {i^{(t-1)}}$. \n\nThen, with $\\at$ chosen by \\eqref{eq:fwmain} at all time steps, we have $\\forall t\\geq 1$:\n\\begin{align} \n R^{(t)} \\leq 2B_\\arms\\sum_{i=1}^n (\\lipgwi+ G_i)\\frac{\\ln(e t)}{t} + \\frac{6B_\\arms \\sum_{i=1}^n \\sqrt{w_i}(G_i+D_i)}{\\sqrt{t}}\n \\label{eq:generalregret}\n\\end{align}\n\\end{theorem}\nAppendix A is devoted to the proof of this result. The main technical difficulty comes from the fact that we only update the parameters of the incoming user $\\it$ with possibly non-uniform user activities, and we need a stochastic step size $1\/c^{(t)}_i$ so that the iterates of the optimization algorithm match the resulting average exposures. Notice that the guarantee does not depend on the choice of $\\pih_0$ because it only affects the first gradient computed for the user. In practice we set $\\pih_0$ to the average exposure profile of a random top-$k$ ranking.\n\nSince we do not have access to the exact gradient because user activities are unknown, we use in practice the approximate gradient built using the empirical user activities:\n\\begin{align}\\label{eq:which_approximate_gradient}\n \\hat{g}^{(t)}_i = g_{{\\wh^{(t-1)}}, i}\\big(\\pih^{(t-1)}\\big) && \\text{where~~} {\\hat{\\w}^{(t)}}=\\frac{c^{(t)}}{t}\n\\end{align} \nwith a fallback formula when ${\\wh^{(t-1)}_i}=0$. In the next section, we discuss the computationally efficient implementation of this rule for the three objectives of Sec.~\\ref{sec:fairness_objectives}, and we provide explicit bounds for $D_i$ of \\eqref{eq:approximategradient} in each case.\n}\n\n{\n\\section{Applications of \\textsc{Offr}\\xspace} \\label{sec:algorithms}\n\n\\input{offr_algorithms}\n\nPractical implementations of \\textsc{Offr}\\xspace do not rely on naive computations of $\\hat{g}^{(t)}_i = g_{{\\wh^{(t-1)}}, i}\\big(\\pih^{(t-1)}\\big)$, because they would require explicitly keeping track of $\\pih^{(t)}$. 
$\\pih^{(t)}$ is a matrix of size $n\\times m$, which is impossible to store explicitly in large-scale applications.\\footnote{$n\\times m$ is also the size of the matrix of (user, item) values $\\mu$, which in practice is not stored explicitly. Rather, the values $\\mu_{ij}$ are computed on-the-fly (possibly using caching for often-accessed values) and the storage uses compact representations, such as latent factor models \\citep{koren2015advances} or neural networks \\citep{he2017neural}.} Importantly, as we illustrate in this section, for the objectives of Sec.~\\ref{sec:fairness_objectives} it is unnecessary to maintain explicit representations of $\\pih^{(t)}$ because the gradients depend on $\\pih^{(t)}$ only through utilities or exposures, for which we can maintain online estimates.\n\n\\subsection{Practical implementations}\nThe implementations of \\textsc{Offr}\\xspace for the three fairness objectives \\eqref{eq:welfobj}, \\eqref{eq:quaobj} and \\eqref{eq:balancedobj} are described in Alg.~\\ref{alg:twosided}, \\ref{alg:quaexpo} and \\ref{alg:balancedexpo} respectively, where we dropped the superscripts ${}^{(t-1)}$ and ${}^{(t)}$ for better readability. \nAt every round $t$, there are three steps:\n\\begin{enumerate}\n \\item compute approximate gradients based on online estimates of user values and exposures,\n \\item update the relevant online estimates of user utility and item exposures,\n \\item perform a top-$k$ sort of the scores computed in step (1) to obtain $\\sigma^{(t)}$.\n\\end{enumerate}\n\nWe omit the details of the calculation of $\\gh_{ij}$ in Alg.~\\ref{alg:twosided}, \\ref{alg:quaexpo} and \\ref{alg:balancedexpo}, which are obtained by differentiation of $\\obj_{\\wh}$ using \\eqref{eq:which_approximate_gradient}.\\footnote{In Alg.~\\ref{alg:balancedexpo}, we use a factor $\\frac{t}{c_{\\groupofi}+1}$ while the direct calculation would give a factor $\\frac{t}{c_{\\groupofi}}$. The formula we use more gracefully deals with the case $c_{\\groupofi}=0$ and enjoys similar bounds when $t$ is large.}\n\n\\paragraph{Two-sided fairness} For two-sided fairness \\eqref{eq:welfobj}, we have:\n\\begin{align}g_{w, ij}\\big(\\pih^{(t)}\\big) = \\psi'_{\\alpha_1}\\big(u_i(\\pih^{(t)})\\big)\\mu_{ij} + \\frac{\\beta}{m}\\psi'_{\\alpha_2}\\big(v_j(\\pih^{(t)})\\big).\n\\end{align}\nLet:\n\\begin{equation}\n\\begin{aligned}\n \\forall i\\in\\intint{n}, \\hat{u}^{(t)}_i &= u_i\\big(\\pih^{(t)}\\big) = \n \\frac{1}{c^{(t)}_i}\\sum_{\\tau \\le t} \\indic{\\itp=i} \\dotp{\\mu_i}{\\atp},\n \\\\\n \\forall j\\in\\intint{m}, \\hat{v}^{(t)}_j &= \\sum_{i=1}^n {\\hat{\\w}^{(t)}_i} \\pih^{(t)}_{ij} = \\frac{1}{t}\\sum_{\\tau=1}^t \\atpj.\n\\end{aligned}\n\\end{equation}\n(Recall $\\at=E(\\sigma^{(t)})$.) This gives the formula computed in Alg.~\\ref{alg:twosided} for $\\hat{g}^{(t)}_i = g_{{\\wh^{(t-1)}}, i}\\big(\\pih^{(t-1)}\\big)$.\n\nFor the online updates of $\\hat{u}$ and $\\hat{v}$, we use as initial value $\\uh^{(0)}_i$ the utility of the random ranking\nand $\\vh^{(0)}_j=0$.
Since $\\hat{u}^{(t)}_i$ only changes when $\\it=i$, the online updates are given by:\n\\begin{equation}\n\\begin{aligned}\\label{eq:welfupdate}\n \\forall i, \\uh^{(0)}_i&=\\frac{\\norm{b}_1}{m}\\sum_{j=1}^m \\mu_{ij} \\\\\n \\uh^{(t)}_{\\it} &= \\uh^{(t-1)}_{\\it} +\\frac{1}{c^{(t)}_{\\it}}\\Big(\\dotp{\\mu_{\\it}}{\\at} - \\uh^{(t-1)}_{\\it} \\Big) \\\\\n \\hat{v}^{(t)} &= \\vh^{(t-1)}+\\frac{1}{t}\\Big(\\at - \\vh^{(t-1)}\\Big).\n\\end{aligned}\n\\end{equation}\n\n\n\\paragraph{Quality-weighted exposure} Similarly, for quality-weighted exposure \\eqref{eq:quaobj}, approximate gradients $g_{\\wht, i}(\\pih^{(t)})$ use online estimates of exposures $\\hat{v}^{(t)}$ as in \\eqref{eq:welfupdate}, as well as online estimates of the qualities using $\\forall j, \\quah^{(0)}_j = 0$:\n\\begin{equation}\n\\begin{aligned}\\label{eq:quaupdate}\n \\quah^{(t)} &= \\frac{1}{t} \\sum_{\\tau=1}^t \\mu_{\\itp} = \\quah^{(t-1)} + \\frac{1}{t}\\Big(\\mu_{\\it} - \\quah^{(t-1)} \\Big), \\\\\n \\quah^{(t)}_{\\mathrm{avg}} &= \\frac{1}{m} \\sum_{j=1}^m \\quah^{(t)}_j.\n\\end{aligned}\n\\end{equation}\n\n\\paragraph{Balanced exposure} Balanced exposure to user groups \\eqref{eq:balancedobj} works similarly, except that we need to keep track of user counts within each group, which we denote by $c^{(t)}_{\\group}$, as well as exposures within each group:\n\\begin{equation}\n\\begin{aligned}\\label{eq:balancedupdate}\nc^{(t)}_{\\group} &= \\sum_{i\\in s} c^{(t)}_i,\\\\ \n\\forall j\\in\\intint{m},\\ \\hat{\\v}^{(t)}_{j|\\group} &= \\frac{1}{c^{(t)}_{\\group}}\\sum_{\\tau=1}^t \\indic{\\itp\\in s} \\atpj, \\\\ \\hat{\\v}^{(t)}_{j|\\mathrm{avg}} &= \\frac{1}{\\card{\\mathcal{S}}} \\sum_{s\\in\\mathcal{S}} \\hat{\\v}^{(t)}_{j|\\group}.\n\\end{aligned}\n\\end{equation}\nWe use $\\hat{\\v}^{(t)}_{j|\\group} = 0$ if $c^{(t)}_{\\group}=0$, since an item has received no exposure within a group we have not yet seen. \nAs for $\\hat{v}$ and $\\hat{q}$, these estimates are updated online in $O(m)$ operations because they only change for the group of user $\\it$.\n\nThe guarantees we obtain for these algorithms are the following. The proof is given in App.~B.\n\\begin{proposition}\\label{prop:approx_gradients} The approximate gradients of Alg.~\\ref{alg:twosided}, \\ref{alg:quaexpo} and \\ref{alg:balancedexpo} satisfy:\n\\begin{itemize}[leftmargin=1.5cm]\n \\item[Alg.~\\ref{alg:twosided}:] $\\displaystyle \\mathbb{E}\\Big[\n \\norm[\\Big]{\\hat{g}^{(t)}_i - g_{w, i}\\big(\\pih^{(t-1)}\\big)}_\\infty\n \\Big] \\leq \\frac{\\beta \\norm{\\psi''_{\\alpha_2}}_{\\infty}}{m}\\sqrt{\\frac{n}{t-1}}$,\n \\item[Alg.~\\ref{alg:quaexpo}:] $\\displaystyle \\mathbb{E}\\Big[\n \\norm[\\Big]{\\hat{g}^{(t)}_i - g_{w, i}\\big(\\pih^{(t-1)}\\big)}_\\infty\n \\Big] \\leq \\frac{\\beta\\big(2+\\norm{b}_1\\big)^2}{m\\min(\\eta,\\sqrt{\\eta})}\\sqrt{\\frac{n}{t-1}}$,\n \\item[Alg.~\\ref{alg:balancedexpo}:] $\\displaystyle \\mathbb{E}\\Big[\n \\norm[\\Big]{\\hat{g}^{(t)}_i - g_{w, i}\\big(\\pih^{(t-1)}\\big)}_\\infty\n \\Big] \\leq \\frac{\\beta\\Big(\n \\frac{1}{\\overline{w}_{\\groupofi} t}\n +\n 8\\sum_{s\\in\\mathcal{S}}\\sqrt{\\frac{\\card{s}}{\\overline{w}_{\\group} (t-1)}}\\Big)}{m\\overline{w}_{\\groupofi}\\min(\\eta,\\sqrt{\\eta})}$.\n\\end{itemize}\n\\end{proposition}\nOverall, they all decrease in $O(\\frac{1}{\\sqrt{t}})$ as desired to apply our convergence result Th.~\\ref{thm:boundgeneral}.
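\n\nAs a concrete instance of the formulas above (our own specialization, with the values $\\alpha_1=\\alpha_2=0$ also used in the experiments of Sec.~\\ref{sec:xp}), the scores computed in step (1) of Alg.~\\ref{alg:twosided} reduce to\n\\begin{align*}\n \\gh_{ij} = \\frac{\\mu_{ij}}{\\eta+\\uh_i} + \\frac{\\beta}{m}\\,\\frac{1}{\\eta+\\vh_j},\n\\end{align*}\nso an item's score is its value to the incoming user, discounted when that user is already well served, plus a bonus that shrinks as the item accumulates exposure.\n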
More interestingly, the bounds do not depend on $w$, which means that the objectives are well-behaved even when some users have low probabilities. The balanced exposure criterion does depend on $\\frac{1}{\\overline{w}_{\\group}}$, which means that the bound becomes arbitrarily bad when some groups have small cumulative activity. This is natural, since achieving balanced exposure across groups dynamically is necessarily a difficult task if one group is only very rarely observed.\n\n\nPutting together Thm.~\\ref{thm:boundgeneral} and Prop.~\\ref{prop:approx_gradients}, we obtain regret bounds of order $1\/\\sqrt{t}$:\n\\begin{corollary\n\\label{cor:boundalgos}\nIgnoring constants and assuming $\\eta\\leq 1$, the regrets $R(t)$ of Alg.~\\ref{alg:twosided}, ~\\ref{alg:quaexpo} and \\ref{alg:balancedexpo} are bounded as in Table~\\ref{tab:rates}.\n\\end{corollary}\n\n\\begin{table}[h!]\n \\begin{center}\n {\\tabulinesep=1.2mm\n \\begin{tabu}{|c|c|c|}\n \\hline\n \n Algorithm & Order of magnitude of $R^{(t)}$\\\\\n \\hline\n \n Alg.~\\ref{alg:twosided} & $\\bigg(\\sqrt{n}\\|b\\|_1 + \\dfrac{n\\|b\\|_1\\beta}{m}\\bigg)\\Big(\\eta^{\\alpha_1-1} + \\eta^{\\alpha_2-2}\\Big)\\sqrt{\\dfrac{1}{t}}$\\\\\n \\hline\n \n Alg.~\\ref{alg:quaexpo} & $\\bigg(\\sqrt{n}\\|b\\|_1 + \\dfrac{n\\|b\\|_1^3\\beta}{m}\\bigg)\\eta^{-1}\\sqrt{\\dfrac{1}{t}}$\\\\\n \\hline\n \n Alg.~\\ref{alg:balancedexpo} \n & $\\bigg(\\sqrt{n}\\|b\\|_1 + \\dfrac{n\\|b\\|_1\\beta}{m}\\sqrt{\\dfrac{|\\mathcal S|}{\\overline{w}_{\\min}^3}}\\bigg)\\eta^{-1}\\sqrt{\\dfrac{1}{t}}$\\\\\n \\hline\n \\end{tabu}\n }\n \\end{center}\n \\caption{Upper bounds on regret $R(t)$, ignoring constants and assuming $\\eta\\leq 1$. In all cases, we have $R(t) = \\mathcal O(1\/\\sqrt{t})$. For balanced exposure, the regret bound also depends on the minimum total weight of a group $\\overline{w}_{\\min}= \\min_{s \\in \\mathcal S}\\overline{w}_{\\group}$. }\n \\label{tab:rates}\n\\end{table}\n\nThe proof of Cor.~\\ref{cor:boundalgos} is given in App.~C. Compared to a batch Frank-Wolfe algorithm for the same objectives, we obtain a convergence in $O(1\/\\sqrt{t})$ instead of $1\/t$ \\citep[Prop 4.]{do2021two}. Part of this difference is due to the variance in the gradients due to unknown user activities, but Th.~\\ref{thm:boundgeneral} would be of order $1\/\\sqrt{t}$ even with true gradients (i.e., $D_i=0$). We do not believe our bound can be improved because our online setting we consider is only equivalent to a Frank-Wolfe algorithm if we consider a stochastic ``stepsize'' of $1\/c^{(t)}_i$ for user $i$ at time step $t$ (which yields the average exposures $\\pih^{(t)}$, our object of study), which introduces additional variance in the optimization. We leave the proof of lower bounds to future work.\n\n\\subsection{Computational complexity} \n\nTo simplify the discussion on computational complexity, we assume the number of groups in balanced exposure is $O(1)$ (i.e., negligible compared to $n$ and $m$), which is the case in practice for groups such as gender or age. For each of the algorithms, the necessary computations involve $O(m)$ floating point operations to compute the scores of all items (step (1)), $O(m+k\\lnk)$ (amortized) comparisons for the top-$k$ sort (step (3)). The update of the online estimates (step (2)) requires $O(m)$ operations. 
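\n\nTo make the per-round cost concrete, the following is a minimal Python sketch of one round of \\textsc{Offr}\\xspace for the two-sided objective with $\\alpha_1=\\alpha_2=0$ (a simplified illustration of Alg.~\\ref{alg:twosided} and \\eqref{eq:welfupdate}; the code and names are ours, not the reference implementation):\n\\begin{verbatim}\nimport numpy as np\n\ndef offr_step_two_sided(i, t, mu, u_hat, v_hat, c, beta, eta, b, k):\n    m = len(v_hat)\n    # step (1): approximate gradient scores for the incoming user i\n    scores = mu[i] / (eta + u_hat[i]) + (beta / m) / (eta + v_hat)\n    # step (3): top-k sort of the scores gives the ranking sigma\n    # (np.argsort is used for brevity; a partial sort would suffice)\n    sigma = np.argsort(-scores)[:k]\n    # exposure vector a = E(sigma) induced by the ranking\n    a = np.zeros(m)\n    a[sigma] = b\n    # step (2): online updates of the estimates\n    c[i] += 1\n    u_hat[i] += (mu[i] @ a - u_hat[i]) / c[i]\n    v_hat += (a - v_hat) / t\n    return sigma\n\\end{verbatim}\nEach round therefore costs $O(m)$ score and update operations plus one (partial) sort, in line with the counts above.\n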
\nMore involved implementations of step (2) require only $O(k)$ operations by only updating the recommended items, but they require additional computations in step (1), which remains in $O(m)$.\nIn all cases, the computation cost is dominated by the top-$k$ sort, which would likely be required in practice even without consideration for fairness of exposure. Thus, \\textsc{Offr}\\xspace provides a general approach to fairness of exposure in online ranking that does not involve significantly more computations than having no consideration for fairness of exposure at all, despite optimizing an objective function where the optimal ranking of each user depends on the rankings of all other users.\n\n\\paragraph{Memory requirements} For two-sided fairness (Alg.~\\ref{alg:twosided}), we need $O(n+m)$ bytes for $\\hat{u}$ and $\\hat{v}$. For quality-weighted exposure we need $O(m)$ bytes to store $\\hat{v}$ and $\\hat{q}$, while we need $O(m\\card{\\mathcal{S}})$ bytes for balanced exposure. In all cases, storage is of the order $O(n+m)$. Notice that in practice, it is likely that counters of item exposures and user utility are computed to monitor the performance of the system anyway. The additional storage of our algorithm is then negligible.\n}\n{\n\\section{Experiments}\\label{sec:xp}\n\nWe provide in this section experiments on simulated ranking tasks following the protocol of \\citet{do2021two}. Our experiments have two goals. First, we study the convergence of \\textsc{Offr}\\xspace to the desired trade-off values for the three objectives of Sec.~\\ref{sec:fairness_objectives}, by comparing objective function values of \\textsc{Offr}\\xspace and the batch Frank-Wolfe algorithm for fair ranking of \\citet{do2021two} at comparable computational budgets. Second, we compare the dynamics of \\textsc{Offr}\\xspace and \\textit{FairCo}\\xspace \\citep{morik2020controlling}, an online ranking algorithm designed to asymptotically achieve equal quality-weighted exposure for all items. We also provide a comparison between \\textsc{Offr}\\xspace and \\textit{FairCo}\\xspace on balanced exposure by proposing an ad-hoc extension to \\textit{FairCo}\\xspace for that task. In the following subsections, we first describe our experimental protocol (Subsection \\ref{sec:xp:setup}). Then, we give qualitative results in terms of the trade-offs achieved by \\textsc{Offr}\\xspace by varying the weight of the exposure objective (Subsection~\\ref{sec:xp:quali}). Finally, we dive into the comparison between \\textsc{Offr}\\xspace and batch Frank-Wolfe (Subsection~\\ref{sec:xp:results_convergence}) and between \\textsc{Offr}\\xspace and \\textit{FairCo}\\xspace (Subsection~\\ref{sec:xp:results_fairco}).\n\n\\subsection{Experimental setup}\\label{sec:xp:setup}\n\n\\paragraph{Data} We use the Last.fm dataset of \\citet{Celma:Springer2010}, which includes $360k$ users and $180k$ items (artists), from which we select the top $15k$ users and $15k$ items having the most interactions. We refer to this subset of the dataset as \\textit{lastfm15k}\\xspace. The (user, item) values are estimated using a standard matrix factorization for learning from positive feedback only \\citep{hu2008collaborative}. Details of this training can be found in App.~D.1.
Since we focus on ranking given the preferences rather than on the properties of the matrix factorization algorithm, we consider these preferences as our ground truth and given to the algorithm, following previous work\\citep{patro2020fairrec,wu2021tfrom,chakraborty2019equality,do2021two}. In App.~D.3, we present results on the MovieLens dataset \\cite{harper2015movielens}. The results are qualitatively similar. Both datasets come with a ``gender'' table associated to user IDs. It is a ternary value 'male', 'female', 'other' (see \\citep{Celma:Springer2010,harper2015movielens} for details on the datasets). On \\textit{lastfm15k}\\xspace, the resulting dataset contains $~10k$\/$~3.6k$\/$1.4k$ users of category 'male'\/'female'\/'other' respectively.\n\n\\paragraph{Tasks} We study the three tasks described in Sec.~\\ref{sec:fairness_objectives}: two-sided fairness, quality-weighted exposure and balanced exposure to user groups. Note that it is possible to study weighted combinations of these objective, since the combined objective would remain concave and smooth. We focus on the three canonical examples to keep the exposition simple. We use the gender category described above as user groups, and they are only used for the last objective. We study the behavior of the algorithm as we vary $\\beta>0$, which controls the trade-off between user utility and item fairness. For two-sided fairness, we take $\\alpha_1 = \\alpha_2=0$ in \\eqref{eq:welfobj}, which are recommended values in \\citet{do2021two} to generate trade-offs between user and item fairness. In all cases, we use $\\eta=1$ in this section, and show the results for $\\eta=0.01$ (a less smooth objective function) in App.~D.2. We assume that the true user activities are uniform (but the online algorithm does not know about these activities). We consider top-$k$ rankings with $k=40$. We set the exposure weights to $b_\\k=\\frac{1}{\\log_2(1+\\k)}$, which correspond to the well-known DCG measure, as in \\citep{biega2018equity,patro2020fairrec,do2021two}. In the following, we use the term \\textit{iteration} to refer to a time step, and \\textit{epoch} to refer to $n$ timesteps (which correspond to the order of magnitude of time steps required to see every user). Notice that in the online setting, users are sampled with replacement at each iteration following our formal framework of Sec.~\\ref{sec:framework}, so the online algorithms are not guaranteed to see every user at every epoch.\n\n\\paragraph{Comparisons} We compare to two previous works:\n\\begin{enumerate}[leftmargin=*]\n \\item Batch Frank-Wolfe (\\textit{batch-FW}\\xspace): The only tractable algorithm we know of for all these objectives is the \\textit{static}\\xspace algorithm of \\citet{do2021two}. We compare offline vs online learning in terms of convergence to the objective value for a given computational budget. The algorithm of \\citet{do2021two} is based on Frank-Wolfe as well, which we refer to as \\textit{batch-FW}\\xspace and our approach (referred to as \\textsc{Offr}\\xspace) is an online version of \\textit{batch-FW}\\xspace. Thus the cost per user per epoch (one top-$k$ sort) are the same. We use this baseline to benchmark how fast \\textsc{Offr}\\xspace convergence to the optimal value.\n \\item \\textit{FairCo}\\xspace \\citep{morik2020controlling}: We use the approach from \\citep{morik2020controlling} introduced for quality-weighted exposure in dynamic ranking. 
In our notation, dropping the time superscripts, given user $i$ at time step $t$, \\textit{FairCo}\\xspace outputs\n \\begin{equation}\n \\begin{aligned}\\label{eq:fairco}\n \\text{(\\textit{FairCo}\\xspace \\citep{morik2020controlling})}&&\n \\sigma &= \\mathrm{top\\K}(\\tilde{\\mu}) \\\\\n &&\\text{with~} \\tilde{\\mu}_j &= \\mu_{ij} + \\beta (t-1) \\max_{j'}\\Big(\\frac{\\vh_{\\jp}}{\\quah_{\\jp}} - \\frac{\\vh_j}{\\quah_j} \\Big)\n \\end{aligned}\n \\end{equation}\n where $\\beta$ trades off the importance of the user values $\\mu_{ij}$ and the discrepancy between items in terms of quality-weighted exposure. Notice that the item realizing the maximum in \\eqref{eq:fairco} is the same for all $j$, so the computational complexity of \\textit{FairCo}\\xspace is similar to that of \\textsc{Offr}\\xspace.\n \n A fundamental difference between \\textit{FairCo}\\xspace and \\textsc{Offr}\\xspace is that the weight given to the fairness objective increases with $t$. The authors prove that the average $\\frac{1}{m(m-1)} \\sum_{j, j'} \\Big|\\frac{\\vh_{\\jp}}{\\quah_{\\jp}} - \\frac{\\vh_j}{\\quah_j} \\Big|$ converges to $0$ at a rate $O(1\/t)$. However, they do not discuss the convergence of the user utilities depending on $\\beta$. Fundamentally, \\textit{FairCo}\\xspace and \\textsc{Offr}\\xspace address different problems, since \\textsc{Offr}\\xspace aims for trade-offs where the relative weight of the user objective and the item objective is fixed from the start. Even though they should converge to different outcomes, we compare the intermediate dynamics at the early stage of optimization.\n\n\\end{enumerate}\nIn addition, we compare to an extension of \\textit{FairCo}\\xspace to balanced exposure. Even though \\textit{FairCo}\\xspace was not designed for balanced exposure, we propose to follow a similar recipe to \\eqref{eq:fairco} as a baseline for balanced exposure:\n \\begin{align}\\label{eq:fairco_balanced}\n \\sigma = \\mathrm{top\\K}(\\tilde{\\mu}) &&\\text{with~} \\tilde{\\mu}_j = \\mu_{ij} + \\beta (t-1) \\max_{s\\in\\mathcal{S}}\\Big(\\hat{\\v}_{j|\\group} - \\hat{\\v}_{j|\\groupofi} \\Big)\n \\end{align}\n\nAll our experiments are repeated and averaged over three seeds for sampling the users at each step. The online algorithms are run for $5000$ epochs, and the batch algorithms for $50,000$ epochs.\n\n\n\\subsection{Qualitative results: effect of varying $\\protect\\beta$}\\label{sec:xp:quali}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.85\\linewidth]{trade-offs_1.png}\n \\caption{Trade-offs between user objective ($y$-axis) and item fairness ($x$-axis), at the beginning of the online process ($10$ epochs) and closer to the end ($1000$ epochs). For two-sided fairness, both user and item objectives should be maximized, while for quality-weighted and balanced exposure the item objectives ($x$-axis) should be minimized. As expected, varying the weight of the item objective $\\beta$ leads to different trade-offs between user utility and item exposure. Comparing epochs $10$ and $1000$ on quality-weighted and balanced exposure, we observe that with large $\\beta$, \\textsc{Offr}\\xspace tends to prioritize the item objective and has low user utility at the beginning of training.}\n \\label{fig:tradeoffs}\n\\end{figure*}\n\nWe first present qualitative results regarding the trade-offs that are obtained by varying the weight of the fairness penalty $\\beta$ from 0.001 to 100 by powers of $10$, for all three tasks in Fig.~\\ref{fig:tradeoffs}.
The $y$-axis is the user objective for two-sided fairness and the average user utility for quality-weighted and balanced exposure. The $x$-axis is the item objective (higher is better) for two-sided fairness, and the item penalty term with $\\eta=0$ of \\eqref{eq:quaobj} and \\eqref{eq:balancedobj} for quality-weighted and balanced exposure respectively. \n\nAt a high level, we observe as expected a Pareto front spanning a large range of (user, item) objective values. We also observe on all three tasks but specifically quality-weighted and balanced exposure that at the beginning of training (epoch $10$), the item objective values are close to the final values, but the user objective values increase a lot on the course of training. We will get back to this observation in our comparison with \\textit{FairCo}\\xspace. As anecdotal remarks, we first observe that on two-sided ranking, convergence is very fast and the trade-offs obtained at epoch $10$ and $1000$ are relatively close. Second, we observe that for balanced exposure on this dataset, it is possible to achieve near perfect fairness (item objective $\\leq 10^{-3}$) at very little cost of user utility at the end of training.\n\n\\subsection{Online convergence}\\label{sec:xp:results_convergence}\n\nTo compare the convergence of \\textsc{Offr}\\xspace compared to \\textit{batch-FW}\\xspace, \nFig.~\\ref{fig:convergence} plots the regret, i.e., the difference between the maximum value and the obtained objective value on the course of the optimization for both algorithms,\\footnote{The maximum value for the regret is taken as the maximum between the result of \\textit{batch-FW}\\xspace after $50$k epochs and \\textsc{Offr}\\xspace after $5$k epochs. For both \\textsc{Offr}\\xspace and \\textit{batch-FW}\\xspace, we compute the ideal objective function, i.e., knowing the user activities $w$. Notice that \\textsc{Offr}\\xspace does not know $w$ but \\textit{batch-FW}\\xspace does.} in log-scale as a function of the number of epochs for the three tasks for $\\beta\\in\\{0.01, 1.0\\}$. We first observe that convergence is slower for larger values of $\\beta$, which is coherent with the theoretical analysis. We also observe that for the first $1000$ epochs (recall that an epoch has the same compatational cost for both algorithms), \\textsc{Offr}\\xspace fares better than the batch algorithm. Looking at more epochs or different values of $\\eta$ (shown in Fig.~4 and 6 in App.~D.2), we observe that \\textit{batch-FW}\\xspace eventually catches up. This is coherent with the theoretical analysis, as the \\textit{batch-FW}\\xspace converges in $1\/t$ \\citep{do2021two}, but \\textsc{Offr}\\xspace in $O(1\/\\sqrt{t})$. In accordance with the well-known performance of stochastic gradient descent in machine learning \\citep{bottou2007tradeoffs}, the online algorithm seems to perform much better at the beginning, which suggests that it is practical to run the algorithm online.\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.85\\linewidth]{convergence_1_1000.png}\n \\caption{Convergence speed of \\textsc{Offr}\\xspace compared to \\textit{batch-FW}\\xspace on the three fairness objectives, for $\\beta \\in\\{0.01, 1\\}$ and $\\eta=1$. The $y$-axis is the regret in log-scale. 
We observe that \\textsc{Offr}\\xspace is faster than \\textit{batch-FW}\\xspace at the beginning, especially for large $\\beta$.}\n \\label{fig:convergence}\n\\end{figure*}\n\n\\subsection{Comparison to \\textit{FairCo}\\xspace}\\label{sec:xp:results_fairco}\n\nWe give in Fig.~\\ref{fig:fairco} illustrations of the dynamics of \\textsc{Offr}\\xspace compared to \\textit{FairCo}\\xspace \\citep{morik2020controlling} described in \\eqref{eq:fairco} and \\eqref{eq:fairco_balanced}. Since \\textit{FairCo}\\xspace aims at driving a disparity to $0$, it cannot be applied to two-sided fairness, so we focus on quality-weighted and balanced exposure. Contrarily to the previous section, we cannot compare objective functions at convergence or convergence rates because \\textit{FairCo}\\xspace does not optimize an objective function. Our plots show the item objective ($x$-axis, lower is better, log-scale) and the average user utility on the $y$-axis. Then, for \\textsc{Offr}\\xspace and \\textit{FairCo}\\xspace for two values of $\\beta$, we show the (item objective, user utility) values obtained on the course of the algorithm. For \\textit{FairCo}\\xspace, we chose $\\beta=0.001$ which gives overall the highest user utility values we observed, and $\\beta=1$, a representative large value. For \\textsc{Offr}\\xspace we chose two different values of $\\beta$ that achieve different trade-offs. This choice has no impact on the discussion. The left plots show vanilla \\textsc{Offr}\\xspace, the right plots show \\textsc{Offr}\\xspace with a ``pacing'' heuristic described later.\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.48\\linewidth]{fairco.png}\n \\hfill\n \\includegraphics[width=0.48\\linewidth]{fairco_pacing.png}\n \\caption{Convergence of \\textsc{Offr}\\xspace compared to the dynamics of \\textit{FairCo}\\xspace. Each point is the average user utility ($y$-axis) vs item objective ($x$-axis, log-scale) for an algorithm and value of $\\beta$ (color\/marker), at a given epoch (the size of the markers increase with the epoch number). The trajectory describes the online dynamics of each algorithm in terms of the trade-offs they achieve. (left) \\textsc{Offr}\\xspace converges to the trade-off dictated by its value of $\\beta$ while keeping its item objective near the target value from the beginning, increasing the user utility with time. (right) the pacing heuristic added to \\textsc{Offr}\\xspace provides a way to approach the final trade-off while keeping user utility high during the entire course of optimization.}\n \\label{fig:fairco}\n\\end{figure*}\n\n\\paragraph{Convergence properties} Looking at the left plot, we see \\textsc{Offr}\\xspace converging to its trade-off dictated by the value of $\\beta$. On the other hand, \\textit{FairCo}\\xspace does not converge. As expected, as time goes by, \\textit{FairCo}\\xspace reduces the item objective to low values. Interestingly though, it seems that the average user utility \\textit{seems to} converge for \\textit{FairCo}\\xspace to a value that depends on $\\beta$. It is likely an effect of the experimental setup: with $\\beta=0.001$, \\textit{FairCo}\\xspace is far from the regime where it achieves low values of the item objective within our $5000$ epochs (as seen by the discrepancy in item objective between $\\beta=1$ and $\\beta=0.001$). 
Overall, since \\textit{FairCo}\\xspace has neither a clear objective function nor theoretical guarantees regarding the user utility, \\textit{FairCo}\\xspace does not allow one to choose the desired trade-off between user and item objectives. On the bright side, \\textit{FairCo}\\xspace does happen to reduce the item objective to very low values for $\\beta=1$ as the number of iterations increases. \n\n\\paragraph{Trade-offs} Interestingly, \\textit{FairCo}\\xspace and \\textsc{Offr}\\xspace have different dynamics. The plots show that over the course of the iterations, \\textsc{Offr}\\xspace rapidly reaches its item objective, but takes time to reach its user objective (as seen by the ``vertical'' pattern of \\textsc{Offr}\\xspace in the left plot of Fig.~\\ref{fig:fairco}, which means that the item objective does not change a lot). In contrast, \\textit{FairCo}\\xspace for small $\\beta$ starts from high user utility and decreases the item objective from there. Evidently, \\textsc{Offr}\\xspace and \\textit{FairCo}\\xspace strike different trade-offs, and neither of them is universally better: it depends on whether we prioritize user utility or item fairness at the early stages of the algorithm. Nonetheless, to emulate \\textit{FairCo}\\xspace's trade-offs, we propose a ``pacing'' heuristic which uses a time-dependent $\\beta$ in our objective, using $\\beta_t = \\min(\\beta, \\gamma\\frac{t}{n})$ where $\\gamma>0$ is the pacing factor. The right plots of Fig.~\\ref{fig:fairco} show the results with $\\gamma=0.01$. We now observe a more ``horizontal'' pattern in the dynamics of \\textsc{Offr}\\xspace, similarly to \\textit{FairCo}\\xspace, meaning that \\textsc{Offr}\\xspace successfully prioritizes user utility over item fairness in the early stages of the algorithm. Whether or not such a pacing should be used depends on the requirements of the application.\n\n}\n{\n\\section{Related Work}\\label{sec:relatedwork}\n\nThe question of the social impact of recommender systems started with independent audits of bias against groups defined by sensitive attributes \n\\citep{sweeney2013discrimination,kay2015unequal,hannak2014measuring,mehrotra2017auditing,lambrecht2019algorithmic}.\nAlgorithms for fairness of exposure have been studied since then\n\\citep{celis2017ranking,burke2017multisided,biega2018equity,singh2018fairness,morik2020controlling,zehlike2020reducing,do2021two}. The goal is often to prevent winner-take-all effects or popularity bias \\citep{singh2018fairness,abdollahpouri2019unfairness} or promote smaller producers to incentivize production \\citep{liu2019personalized,mehrotra2018towards,mladenov2020optimizing}.\n\nThe question of online ranking is often studied in conjunction with learning to rank, i.e., learning the (user, item) values $\\mu$. The previous work by \\citet{morik2020controlling}, which we compare to in the experiments (the \\textit{FairCo}\\xspace baseline), includes this dimension, which we do not address. On the other hand, as we discussed, their algorithm has limited scope because it only aims at asymptotically removing any disparity. The algorithm cannot be used on other forms of loss function such as two-sided fairness, and cannot be used to converge to intermediate trade-offs. Their theoretical guarantee is also relatively weak, since they only prove that the exposure objective converges to $0$, without any guarantee on the user utility.
In contrast, we show that the regret of our algorithm converges to 0 for a wide range of objectives.\n\n\\citet{yang2021maximizing} also proposes an online algorithm combining learning to rank and fairness of exposure, but they compare the exposure of groups of items within single rankings, as opposed to considering exposure of items across users. Their fairness criterion does not involve the challenge we address, since the optimal rankings in their case can still be computed individually for every user.\n\nRecently, fairness of exposure has been studied in the bandit setting \\citep{jeunen2021top,mansoury2021unbiased}. These works provide experimental evaluations of bandit algorithms with fairness constraints, but they do not provide theoretical guarantees. \\citet{wang2021fairness} also consider fairness of exposure in bandits, but without ranking.\n\nCompared to this literature on dynamic ranking, we decided to disentangle the problem of learning the user preferences from the problem of generating the rankings online while optimizing a global exposure objective. We obtain a solution to the ranking problem that is more general than what was proposed before, with stronger theoretical guarantees. Our approach unlocks the problem of online ranking with a global objective function, and we believe that our approach is a strong basis for future exploration\/exploitation algorithms.\n\nWe studied online ranking in a stationary environment. Several works consider multi-step recommendations scenarios with dynamic models of content production \\citep{mladenov2020optimizing,zhan2021towards}. They study the effect of including an exposure objective on the long-term user utilities, but they do not focus on how to efficiently generate rankings. \n\n\n\\paragraph{Relationship to Frank-Wolfe algorithms} The problem of inferring ranking lies in between convex bandit optimization \\citep[see ][and references therein]{berthet2017fast} and stochastic optimization. Our problem is easier than bandit optimization since the function is known -- at least partially, and in all cases there is no need for active exploration. The main ingredient we add to the convex bandit optimization literature is the multi-user structure, where parameters are decomposed into several blocks that can only be updated one at a time, while optimizing for a non-decomposable objective function. The similarity with the bandit optimization algorithm of \\citet{berthet2017fast} is the usage of the Frank-Wolfe algorithm to generate a deterministic decision at each step while implicitly optimizing in the space of probability distributions.\n\nOur algorithm is a Frank-Wolfe algorithm with a stochastic gradient \\citep{hazan2012projection,lafond2015online} and block-separable constraints \\citep{lacoste2013block,kerdreux2018frank}. The difference with this line of work is twofold.\nFirst, the distribution $w$ is not necessarily uniform. Second, in our case, different users have different ``stepsizes'' for their parameters (the stepsize is $\\frac{1}{c^{(t)}_i}$ for the user $i$ sampled at time $t$), rather than a single predefined stepsize. These two aspects complicate the analysis compared to that of the stochastic Frank-Wolfe with block-separable constraints of \\citet{lacoste2013block}.\n}\n{\n\\section{Conclusion and discussion}\\label{sec:discussion}\nWe presented a general approach to online ranking by optimizing trade-offs between user performance and fairness of exposure. 
The approach only assumes the objective function is concave and smooth. We provided three example tasks involving fairness of exposure, and the scope of the algorithm is more general. For instance, it also applies to the formulation of \\citet{do2021two} for reciprocal recommendation tasks such as dating applications.\n\nDespite the generality of the framework, there are a few technical limitations that could be addressed in future work. First, the assumption of the position-based model \\eqref{eq:position_based} is important in the current algorithmic approach, because it yields the linear structure with respect to exposure that is required in our Frank-Wolfe approach. Dealing with more general cascade models \\citep{craswell2008experimental,mansoury2021unbiased} is an interesting open problem. Second, we focused on the problem of generating rankings, assuming that (user, item) values $\\mu_{ij}$ are given by an oracle and are stationary over time. Relatedly to this stationarity assumption, we ignored the feedback loops involved in recommendation. These include feedback loops due to learning from prior recommendations \\citep{bottou2013counterfactual}, the impact of the recommender system on users' preferences themselves \\citep{kalimeris2021preference}, as well as the impact that fairness interventions on content production \\citep{mladenov2020optimizing}. Third, our approach to balanced exposure is based on the knowledge of a discrete sensitive attribute of users. Consequently, this criterion cannot be applied when there are constraints on the direct usage of the sensitive attribute within the recommender system, when the sensitive attribute is not available, or when the delineation of groups into discrete categories is not practical or ethical \\citep{tomasev2021fairness}. \n\nFinally, while we believe fairness of exposure is an important aspect of fairness in recommender systems, it is by no means the only one. For instance, the alignment of the system's objective with human values \\citep{stray2021you} critically depends on the definition of the quality of a recommendation, the values $\\mu_{ij}$ in our framework. The fairness of the underlying system relies on careful definitions of these $\\mu_{ij}$s and on unbiased estimations of them from user interactions -- in particular, taking into account the non-stationarities and feedback loops mentioned previously.\n}\n\n\n\\section*{Acknowledgements}\nThe authors thank Alessandro Lazaric and David Lopez-Paz for their feedback on the paper.\n\n\n\\bibliographystyle{unsrtnat}\n\n\\section{Proof of Theorem \\ref{thm:boundgeneral}}\n\\label{sec:proof:thm:boundgeneral}\n\nIn this section, we prove Theorem \\ref{thm:boundgeneral}.\n\n\\subsection{Preliminary remarks}\n\nWe make here some preliminary observations that we take for granted in the proof. These are well-known in the analysis of variants of Frank-Wolfe.\n\nWe start with an observation that is crucial to the analysis of Frank-Wolfe algorithms, which is the following direct consequence of the concavity and differentiability of $f$:\n\\begin{align}\\label{eq:regsmallerthan}\n \\obj_\\w^*-\\obj_\\w \\leq \\max_{\\bar{\\a}\\in\\overline{\\arms}^n} \\dotp{\\nabla \\obj_\\w(\\pi)}{\\bar{\\a}}\n\\end{align}\n\nThe second observation is that the block-separable structure in Frank-Wolfe allows to solve for each user independently \\citep[also see][Eq. 
16]{lacoste2013block}:\n\\begin{align}\\label{eq:decomposemax}\n \\forall\\pi\\in\\overline{\\arms}^n,~~ \\max_{\\bar{\\a}\\in\\overline{\\arms}^n} \\dotp{\\nabla \\obj_\\w(\\pi)}{\\bar{\\a}} = \\sum_{i=1}^n \\max_{\\bar{\\a}\\in\\overline{\\arms}} \\dotp{\\nabla_i \\obj_\\w(\\pi)}{\\bar{\\a}} = \\sum_{i=1}^n \\max_{\\a\\in{\\mathcal{E}}} \\dotp{\\nabla_i \\obj_\\w(\\pi)}{\\a}.\n\\end{align}\nThese equalities are straightforward, and the last inequality comes from the fact that for a linear program solved over a polytope, there exists an extreme point of the polytope that is optimal.\n\n\n\n\nThe second observation relates to the use of approximate gradients \\citep{lafond2015online,berthet2017fast}. Recall that $B_\\arms = \\max_{\\a\\in{\\mathcal{E}}} \\norm{\\a}_1$ is the maximum $1$-norm of arms. Let $g, \\hat{g}\\in\\Re^m$ and let \\begin{align*}\n \\tilde{\\a}&\n \\in\\argmax_{\\a\\in{\\mathcal{E}}} \\dotp{g}{\\a}& \n {\\hat{\\a}}&\n \\in\\argmax_{\\a\\in{\\mathcal{E}}} \\dotp{\\hat{g}}{\\a}\n\\end{align*}\nThen\n\\begin{align}\\label{eq:approxgrad}\n \\dotp{g}{{\\hat{\\a}}} & = \\dotp{g}{\\tilde{\\a}} + \\dotp{g}{{\\hat{\\a}}-\\tilde{\\a}} =\n \\dotp{g}{\\tilde{\\a}} + \\underbrace{\\dotp{\\hat{g}}{{\\hat{\\a}}-\\tilde{\\a}}}_{\\geq 0} + \\dotp{g-\\hat{g}}{{\\hat{\\a}}-\\tilde{\\a}}\n \\geq \\dotp{g}{\\tilde{\\a}} - 2B_\\arms\\norm{g-\\hat{g}}_\\infty\n\\end{align}\n}\n\n\\subsection{Proof of Thm~\\ref{thm:boundgeneral}}\n\nIn the remainder, we denote by $\\mathbb{E}_t[X]$ the conditional expectation $\\mathbb{E}[X|i_1, ..., \\it]$.\n\nLet $t\\geq 1$. to simplify notation, we use the following shorcuts:\n\\begin{align}\n g^{(t)}_i&=\\frac{1}{w_i} \\frac{\\partial \\obj_\\w(\\pih^{(t-1)})}{\\partial \\pi_i},\n & \\aoptit &\\in\\argmax_{\\a\\in{\\mathcal{E}}} \\dotp{g^{(t)}_i}{\\a}, & \n \\ahit &\\in\\argmax_{\\a\\in{\\mathcal{E}}} \\dotp{\\hat{g}^{(t)}_i}{\\a}.\n\\end{align}\n\nWe also denote by $\\objwpii(\\pi_i')$ the partial function with respect to $\\pi_i$:\n\\begin{align}\n \\objwpii(\\pi_i') = \\obj_\\w(\\pi_1, \\ldots, \\pi_{i{-}1}, \\pi_i', \\ldots, \\pi_{i{+}1}, \\ldots, \\pi_{n}).\n\\end{align}\n\nWe now start the proof.\nFirst, let us fix ${i^{(t+1)}}$ and notice that $\\pih^{(t+1)}$ is such that:\n\\begin{align}\n \\pih^{(t+1)}_i = \\begin{cases}\n \\pih^{(t)}_i&\\text{~if~} i\\neq{i^{(t+1)}}\\\\\n \\pih^{(t)}_i + \\frac{1}{c^{(t)}_i+1}(\\ahitpo - \\pih^{(t)}_i)&\\text{~if~} i={i^{(t+1)}}\\\\\n \\end{cases}\n\\end{align}\n\n{\n\nThus, only $\\piht_{\\itpo}$ changes. \nLet $\\curvf_i = \\frac{1}{2}\\lipgwi \\max_{\\a,\\a'\\in{\\mathcal{E}}}\\norm{\\a-\\a'}_2^2 \\leq \\lipgwiB_\\arms$ because we assumed $\\forall j, 0\\leq \\a_j\\leq 1$.\\footnote{In more details: $\\forall j, 0\\leq \\a_j\\leq 1$ implies $\\norm{\\a-\\a'}_2^2 = \\sum_{j=1}^m (\\a_j-\\a'_j)^2 \\leq \\sum_{j=1}^m |\\a_j-\\a'_j| \\leq 2\\max_{\\a\\in{\\mathcal{E}}}\\norm{\\a}_1=2B_\\arms$.}\n\nBy the concavity of $\\obj^{\\w,\\piht}_{\\itpo}\\!$ and its Lipschitz continuous gradients, we have \\citep[see e.g.][Sec. 4.1]{bottou2018optimization}:\n\\begin{align}\n \\obj^{\\w,\\piht}_{\\itpo}\\!\\big(\\pihtpo_{\\itpo}\\big) \\geq \\obj^{\\w,\\piht}_{\\itpo}\\!\\big(\\piht_{\\itpo}\\big) \n + \\dotp[\\Big]{w_{{i^{(t+1)}}}g^{(t+1)}_{\\itpo}}{\\frac{1}{c^{(t)}_\\itpo+1}\\big(\\hat{\\a}^{(t+1)}_{\\itpo} - \\piht_{\\itpo}\\big)}\n - \\frac{w_{{i^{(t+1)}}}}{\\big(c^{(t)}_\\itpo+1\\big)^2}C_{\\itpo}\n\\end{align}\n\nLet $R^{(t)} = \\obj_\\w^* - \\obj_\\w(\\pih^{(t+1)})$. 
Noticing that given $i_1, ..., {i^{(t+1)}}$, we have $\\obj^{\\w,\\piht}_{\\itpo}\\!\\big(\\pihtpo_{\\itpo}\\big) = \\obj_\\w(\\pih^{(t+1)})$, we can rewrite \n\\begin{align}\n R^{(t+1)} \\leq R^{(t)}\n - \\dotp[\\Big]{g^{(t+1)}_{\\itpo}}{\\frac{w_{{i^{(t+1)}}}}{c^{(t)}_\\itpo+1}\\big(\\hat{\\a}^{(t+1)}_{\\itpo} - \\piht_{\\itpo}\\big)}\n + \\frac{w_{{i^{(t+1)}}}}{\\big(c^{(t)}_\\itpo+1\\big)^2}C_{\\itpo}\n\\end{align}\n\n}\n\n\nTaking the expectation over ${i^{(t+1)}}$ (still conditional to $i_1, ..., \\it$) gives:\n\\begin{align}\\label{eq:condexpreg}\n \\mathbb{E}_t[R^{(t+1)}] \\leq & R^{(t)} \n \\underbrace{ - \\sum_{i=1}^n\\dotp[\\Big]{w_ig^{(t+1)}_i}{\\frac{w_i}{c^{(t)}_i+1}\\big(\\ahitpo - \\pih^{(t)}_i\\big)}}_{\\mathcal{R}^{(t)}}\n + \\sum_{i=1}^n\\frac{w_i^2}{\\big(c^{(t)}_i+1\\big)^2}\\curvf_i\n\\end{align}\n\n\n\\subsubsection{Step 1: stepsize} \nLet $\\hat{\\gamma}^{(t+1)}_i = \\frac{w_i}{c^{(t)}_i+1}$ and $\\gamma^{(t+1)} = \\frac{1}{t+1}$. Using $\\hat{\\gamma}^{(t+1)}_i = \\gamma^{(t+1)} + (\\hat{\\gamma}^{(t+1)}_i - \\gamma^{(t+1)})$, $\\norm{\\ahitpo - \\pih^{(t)}_i}_1 \\leq 2B_\\arms$ and $\\norm{g^{(t+1)}_i}_\\infty \\leq G_i$ in $\\mathcal{R}^{(t)}$, we have:\n\\begin{align}\\label{eq:approxgammastep}\n\\mathcal{R}^{(t)} \\leq - \\gamma^{(t+1)}\\sum_{i=1}^n\\dotp[\\Big]{w_ig^{(t+1)}_i}{\\ahitpo - \\pih^{(t)}_i} \n + 2B_\\arms\\underbrace{\\sum_{i=1}^n w_i\\big| \\hat{\\gamma}^{(t+1)}_i - \\gamma^{(t+1)}\\big| G_i}_{\\mathcal{G}^{(t)}}\n\\end{align}\n\n\\paragraph{Step 2: approximate gradients} \nNow using the fact that $\\ahitpo$ maximizes the dot product with the approximate gradient $\\hat{g}^{(t+1)}_i$ and using the inequality \\eqref{eq:approxgrad} in \\eqref{eq:approxgammastep}, we obtain:\n\\begin{align}\\label{eq:approxgradstep}\n\\mathcal{R}^{(t)} \\leq\\underbrace{ - \\gamma^{(t+1)}\\sum_{i=1}^n\\dotp[\\Big]{w_ig^{(t+1)}_i}{\\aoptitpo - \\pih^{(t)}_i}}_{\\leq - \\gammatpoR^{(t)} \\text{ ~~by \\eqref{eq:regsmallerthan} and \\eqref{eq:decomposemax}}} + 2\\gammatpoB_\\arms\\underbrace{\\sum_{i=1}^nw_i\\norm{g^{(t+1)}_i-\\hat{g}^{(t+1)}_i}_\\infty}_{\\mathcal{D}^{(t)}} + 2B_\\arms\\mathcal{G}^{(t)}\n\\end{align}\nPlugging into \\eqref{eq:condexpreg}, we finally obtain (recall $\\gamma^{(t+1)} = \\frac{1}{t+1}$):\n\\begin{align}\n \\mathbb{E}_t[R^{(t+1)}] \\leq & (1-\\gamma^{(t+1)})R^{(t)} \n + 2\\gammatpoB_\\arms\\mathcal{D}^{(t)}+2B_\\arms\\mathcal{G}^{(t)}+\\gamma^{(t+1)}\\underbrace{\\sum_{i=1}^n\\frac{w_i^2(t+1)}{\\big(c^{(t)}_i+1\\big)^2}\\curvf_i}_{\\mathcal{C}^{(t)}}\n\\end{align}\nTaking the full expectation over $i^{(1)}, ..., \\it$, using the notation $\\overline{R}^{(t)}=\\mathbb{E}[R^{(t+1)}]$ and dividing by $\\gamma^{(t+1)}$ yields:\n\\begin{align}\n (t+1)\\overline{R}^{(t+1)} \\leq & t\\overline{R}^{(t)} \n + \\mathbb{E}\\big[\\mathcal{C}^{(t)}+2B_\\arms\\mathcal{D}^{(t)}+2(t+1)B_\\arms\\mathcal{G}^{(t)}\\big]\n\\end{align}\n\n\nWe separately bound each term in order in Lemmas \\ref{lem:Ct}, \\ref{lem:Dt} and \\ref{lem:Gt}, we obtain:\n\\begin{align}\n t\\overline{R}^{(t)} &\\leq \\sum_{\\tau=0}^{t-1}\\Big(2\\frac{\\sum_{i=1}^n \\curvf_i}{\\tau+1} + \\frac{3B_\\arms\\sum_{i=1}^n \\sqrt{w_i}G_i}{\\sqrt{\\tau+1}}+\\frac{2B_\\arms\\sum_{i=1}^n G_i}{\\tau+1}\\Big) + 4B_\\arms\\sqrt{t}\\sum_{i=1}^nw_iD_i\\\\ \n &\\leq 2\\sum_{i=1}^n (\\curvf_i+B_\\armsG_i)\\ln(e t) + 6B_\\arms\\sum_{i=1}^n (G_i+D_i)\\sqrt{w_i t}\n\\end{align}\nusing $\\sum_{\\tau=1}^{t}\\frac{1}{\\sqrt{\\tau}} \\leq 2\\sqrt{t}$ as in Lemma \\ref{lem:Dt}.\n\n\n\\subsection{Technical 
lemmas}\\label{sec:proofs:lemma}\n\nThese lemmas depend heavily on the following standard equality regarding a Binomial variable with success probability $p$ and $t$ trials \\citep{chao1972negative}:\n\\begin{align}\\label{eq:expectation_one_over_binomial}\n \\mathbb{E}_{X\\sim{\\rm Bin}(p, t)} \\Big[\\frac{1}{X+1}\\Big] = \\frac{1}{p(t+1)}(1-(1-p)^{t+1}) \\leq \\frac{1}{p(t+1)}.\n\\end{align}\n\n\\begin{lemma}\\label{lem:Ct}\n\\begin{align}\n \\mathbb{E}[\\mathcal{C}^{(t)}] = \\sum_{i=1}^n\\curvf_i\\mathbb{E}\\Big[\\frac{w_i^2(t+1)}{\\big(c^{(t)}_i+1\\big)^2}\\Big] \\leq 2\\frac{\\sum_{i=1}^n \\curvf_i}{t+1}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nWe use the following result, where ${\\rm Bin}(w_i, t)$ denotes the binomial distribution with probability of success $w_i>0$ and $t$ trials:\n\\begin{align}\n \\mathbb{E}_{X\\sim{\\rm Bin}(w_i, t)} \\Big[\\frac{1}{(X+1)^2}\\Big] \\leq \\frac{2}{w_i^2(t+1)^2}\n\\end{align}\nThe proof uses $\\frac{1}{(X+1)^2} \\leq \\frac{2}{(X+1)(X+2)}$ and the result is obtained by direct computation of the expectation. The arguments are the same as those required to obtain the standard equality \\eqref{eq:expectation_one_over_binomial}.\n\nThe result follows by noticing that $c^{(t)}_i\\sim {\\rm Bin}(w_i, t)$, using the inequality above if $w_i\\neq 0$ and noticing that the inequality $\\frac{w_i^2(t+1)}{(c^{(t)}_i+1)^2} \\leq \\frac{2}{t+1}$ holds when $w_i=0$.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:Dt}\n{\n\nUsing \\eqref{eq:approximategradient}, we have:\n\\begin{align}\n \\sum_{\\tau=0}^{t-1} \\mathbb{E}[\\mathcal{D}^{(t)}] = \\sum_{i=1}^nw_i \\sum_{\\tau=1}^t \\mathbb{E}\\Big[\\norm{g^{(\\tp)}_i-\\gh^{(\\tp)}_i}_\\infty\\Big] \\leq 2\\sqrt{t}\\sum_{i=1}^nw_iD_i\n\\end{align}\n}\n\\end{lemma}\n\\begin{proof}\nIt is straightforward to show by induction that $\\sum_{\\tau=1}^t \\frac{1}{\\sqrt{\\tau}}\\leq 2\\sqrt{t}$ and recall that $\\sum_{i=1}^nw_i=1$.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:Gt}\n\\begin{align}\n (t+1)\\mathbb{E}[\\mathcal{G}^{(t)}] = \\sum_{i=1}^n w_iG_i\\mathbb{E}\\Big[ \\Big|\\frac{w_i(t+1)}{c^{(t)}_i+1} - 1\\Big|\\Big] \\leq \\frac{3 \\sum_{i=1}^n \\sqrt{w_i}G_i}{2\\sqrt{t+1}}+\\frac{ \\sum_{i=1}^nG_i}{t+1}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nLet $i\\in\\intint{n}$ and $t>0$ we have\n\\begin{align}\\label{eq:bound_on_one_over}\n \\mathbb{E}\\Big[ \\Big|\\frac{w_i(t+1)}{c^{(t)}_i+1} - 1\\Big|\\Big] \n \\leq \\underbrace{\\mathbb{E}\\Big[\\frac{|w_i t-c^{(t)}_i|}{c^{(t)}_i+1}\\Big]}_{\\tilde{\\mathcal{G}}^{(t)}} + \\underbrace{\\mathbb{E}{\\Big[\\frac{|1-w_i|}{c^{(t)}_i+1}}\\Big]}_{\\leq \\frac{1}{w_i(t+1)} \\text{ because } c^{(t)}_i\\sim{\\rm Bin}(w_i, t)}\n\\end{align}\nFocusing on $\\tilde{\\mathcal{G}}^{(t)}$ and writing it as:\n\\begin{align}\n \\tilde{\\mathcal{G}}^{(t)} = \\mathbb{E}\\Big[\\underbrace{\\frac{|(w_i+c^{(t)}_i) - w_i (t+1)|}{w_i (t+1)}}_{= a}\\times\\underbrace{\\frac{w_i (t+1)}{c^{(t)}_i+1}}_{=b}\\Big]\n\\end{align}\nusing $ab\\leq \\frac{1}{2} \\lambda a^2+\\frac{1}{2} \\frac{b^2}{\\lambda}$ with $\\lambda=\\sqrt{w_i}\\sqrt{t+1}$ we obtain:\n\\begin{align}\n \\tilde{\\mathcal{G}}^{(t)} \\leq \\frac{\\sqrt{t+1}}{2w_i^{3\/2}} \\underbrace{\\mathbb{E}\\Big[\\Big(\\frac{w_i+c^{(t)}_i}{t+1}-w_i\\Big)^2\\Big]}_{\\leq\\frac{w_i(1-w_i)}{t+1} \\text{(variance of sums of independent r.v.s)}}\n + \\frac{1}{2\\sqrt{w_i}\\sqrt{t+1}} \\underbrace{\\mathbb{E}\\Big[\\frac{(w_i (t+1))^2}{\\big(c^{(t)}_i+1\\big)^2}\\Big]}_{\\leq 2 \\text{ by Lemma \\ref{lem:Ct}}}.\n\\end{align}\nAnd the result 
follows.\n\\end{proof}\n\n}\n{\n\\section{Proof of Proposition \\ref{prop:approx_gradients}}\n\\label{sec:proof:approx_gradients}\n\nWe give here the computations that lead to the bounds of \\ref{prop:approx_gradients}. The bounds themselves mostly rely on calculations of bounds on second-order derivatives, but they rely on a set of preliminary results regarding the deviation of multinomial distributions. We first give these results, and then detail the calculations for each fairness objective in separate subsections.\n\n\\textbf{In this section, we drop the superscript in $t$. All quantities with a hat $\\hat{.}$ are implicitly taken at time step $t$, i.e. they should be read $\\hat{.}^{(t)}$. This is also the case for $\\pi}%{\\hat{\\pi}$, which should read $\\pih^{(t)}$.} We also remind that $\\mu_{ij}\\in[0,1]$ so user utilities and item qualities are also in $[0,1]$, and $\\norm{w}_\\infty \\leq 1$ so all exposures are also in $[0,1]$.\n\n\\subsection{Main inequalities}\nAll our bounds are based on the following result \\citep[Thm. 1]{han2015minimax}:\n\\begin{align}\\label{eq:expectation_w_norm}\n \\Big(\\multiliner{8em}{1-norm distance between $\\hat{\\w}$ and $w$}\\Big)&&\\mathbb{E}\\Big[\\norm[\\big]{\\hat{\\w}-w}_1\\Big] \\leq \\sqrt{\\frac{n-1}{t}},\n\\end{align}\nwhere the expectation is taken over the random draws of ${i^{(1)}}, \\ldots, \\it$.\n\nFrom \\eqref{eq:expectation_w_norm} we obtain a bound on the deviation of online estimates of exposures from their true value:\n\\begin{align}\\label{eq:deviation_exposure}\n\\mathbb{E}\\Big[\\big|\\vh_j - v_j(\\pi}%{\\hat{\\pi})\\big|\\Big] \n&\\leq \n\\mathbb{E}\\Big[\\sum_{i=1}^n\\big| {\\wh_i}-w_i\\big| \\pih_{ij} \\Big] \n\\leq \\mathbb{E}\\Big[\\norm[\\big]{\\hat{\\w}-w}_1\\Big] \\leq \\sqrt{\\frac{n-1}{t}}\n\\end{align}\nFor balanced exposure, we need a refined version of \\eqref{eq:expectation_w_norm} regarding the convergence of per-group user activities:\n\\begin{lemma} For every group $s}%{\\mathfrak{s}$\n\\begin{align}\n\\mathbb{E}\\bigg[\\Big|\\sum_{i\\ins}%{\\mathfrak{s}} \\frac{c_{i}}{c_{\\group}} - \\frac{w_i}{\\overline{w}_{\\group}}\\Big|\\bigg] \\leq \\sqrt{\\frac{2(\\card{s}%{\\mathfrak{s}}-1)}{\\overline{w}_{\\group} t}}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nThe lemma is a consequence of \\eqref{eq:expectation_w_norm}, by first conditioning on the groups sampled at each round. Let $\\group^{(1)}, \\ldots, \\group^{(t)}$ correspond to the sequence of groups sampled at each round. Since given the group, the sampled users are i.i.d. 
within that group, we can use \\eqref{eq:expectation_w_norm}:\n\\begin{align}\n \\mathbb{E}\\bigg[\\Big|\\sum_{i\\ins}%{\\mathfrak{s}} \\frac{c_{i}}{c_{\\group}} - \\frac{w_i}{\\overline{w}_{\\group}}\\Big|\\bigg | \\group^{(1)}, \\ldots, \\group^{(t)}\\bigg] \\leq \\indic{c_{\\group}>0}\\sqrt{\\frac{\\card{s}%{\\mathfrak{s}}-1}{c_{\\group}}} + \\indic{c_{\\group}=0}.\n\\end{align}\nNotice that in the equation above we have $c^{(t)}_{\\group} = \\sum_{\\tau=1}^t \\indic{\\group^{(\\tp)} = s}%{\\mathfrak{s}}$.\n\nUsing $\\frac{\\card{s}%{\\mathfrak{s}}-1}{c_{\\group}} \\leq 2\\frac{\\card{s}%{\\mathfrak{s}}-1}{c_{\\group}+1}$ when $c_{\\group}>0$ and $\\sqrt{2}>1$ when $c_{\\group}=0$, taking the expectation over the random draws of groups, we obtain:\n\\begin{align}\n \\mathbb{E}\\bigg[\\Big|\\sum_{i\\ins}%{\\mathfrak{s}} \\frac{c_{i}}{c_{\\group}} - \\frac{w_i}{\\overline{w}_{\\group}}\\Big|\\bigg] \\leq \\mathbb{E}\\Big[\\sqrt{2\\frac{\\card{s}%{\\mathfrak{s}}-1}{c_{\\group}+1}}\\Big]\n \\leq \n \\sqrt{2\\mathbb{E}\\Big[\\frac{\\card{s}%{\\mathfrak{s}}-1}{c_{\\group}+1}\\Big]}\n\\end{align}\nby Jensen's inequality. The result follows from the expectation of $\\frac{1}{X+1}$ when $X$ follows a binomial distribution \\eqref{eq:expectation_one_over_binomial}.\n\\end{proof}\nThe lemma above allows us to extend \\eqref{eq:deviation_exposure} to exposures within a group:\n\\begin{align}\\label{eq:deviation_exposure_withingroup}\n \\mathbb{E}\\Big[\\big|\\hat{\\v}_{j|\\group} - \\v_{j|\\group}(\\pi}%{\\hat{\\pi})\\big|\\Big] \\leq \\sqrt{\\frac{2(\\card{s}%{\\mathfrak{s}}-1)}{\\overline{w}_{\\group} t}} \n && \n \\text{and~~}\n \\mathbb{E}\\Big[\\big|\\hat{\\v}_{j|\\mathrm{avg}} - \\v_{j|\\mathrm{avg}}(\\pi}%{\\hat{\\pi})\\big|\\Big] \\leq \\frac{1}{\\card{\\mathcal{S}}}\\sum_{s}%{\\mathfrak{s}\\in\\mathcal{S}}\\sqrt{\\frac{2(\\card{s}%{\\mathfrak{s}}-1)}{\\overline{w}_{\\group} t}} \n\\end{align}\n\n\n\n\n\\subsection{Two-sided fairness (Alg.~\\ref{alg:twosided})}\nWe have\n$g_{w, ij}\\big(\\pi}%{\\hat{\\pi}\\big) = \\psi'_{\\alpha_1}\\big(u_i(\\pi}%{\\hat{\\pi}\\big)\\mu_i + \\frac{\\beta}{m}\\psi'_{\\alpha_2}\\big(v_j(\\pi}%{\\hat{\\pi})\\big)$, and, by definition, $\\uh_i=u_i(\\pi}%{\\hat{\\pi})$ and $g_{\\hat{\\w}, ij}(\\pi}%{\\hat{\\pi}) =\\psi'_{\\alpha_1}(\\uh_i)\\mu_{ij} + \\frac{\\beta}{m}\\psi'_{\\alpha_2}(\\vh_j)$.\nWe thus have:\n\\begin{align}\n \\Big|g_{w, ij}\\big(\\pi}%{\\hat{\\pi}\\big) -g_{\\wht, ij}(\\pi}%{\\hat{\\pi})\\Big|\n &=\n \\frac{\\beta}{m}\\Big |\\psi'_{\\alpha_2}\\big(v_j(\\pi}%{\\hat{\\pi})\\big) - \\psi'_{\\alpha_2}(\\vh_j)\\Big | \n \\leq \\frac{\\beta\\norm{\\psi''_{\\alpha_2}}_\\infty}{m} \n \\Big|v_j(\\pi}%{\\hat{\\pi})- \\vh_j\\Big| \\\\\n\\end{align}\nTaking the expectation over ${i^{(1)}}, \\ldots,\\it$ and using \\eqref{eq:deviation_exposure}, we obtain the desired result:\n\\begin{align}\n \\mathbb{E}\\Big[\\Big|g_{w, ij}\\big(\\pi}%{\\hat{\\pi}\\big) -g_{\\hat{\\w}, ij}(\\pi}%{\\hat{\\pi})\\Big|\\Big] \\leq \\frac{\\beta\\norm{\\psi''_{\\alpha_2}}_\\infty}{m}\\sqrt{\\frac{n-1}{t}}\n\\end{align}\n\n\\subsection{Quality-weighted exposure (Alg.~\\ref{alg:quaexpo})} By similar direct calculations, let\n\\begin{align}\n \\displaystyle \\hat{Z} = \\sqrt{\\eta+\\frac{1}{m} \\sum_{j=1}^m \\Big(\\quah_{\\mathrm{avg}}\\vh_j-\\quah_j\\norm{b}_1\\Big)^2}\n&&\\text{and~~} \\displaystyle \\hat{Z} = \\sqrt{\\eta+\\frac{1}{m} \\sum_{j=1}^m 
\\Big(\\quaavgv_j(\\pi}%{\\hat{\\pi})-q_j\\norm{b}_1\\Big)^2}.\n\\end{align}\n\\begin{align}\\label{eq:starter_qua_approx_gradient}\n \\Big|g_{w, ij}\\big(\\pih^{(t)}\\big) -g_{\\hat{\\w}, ij}(\\pi}%{\\hat{\\pi})\\Big| \n &= \\frac{\\beta}{m}\\Big|\\frac{\\quah_{\\mathrm{avg}}\\vh_j - \\quah_j\\norm{b}_1}{\\hat{Z}}- \\frac{\\quaavgv_j(\\pi}%{\\hat{\\pi}) - q_j\\norm{b}_1}{Z}\\Big|\n\\end{align}\nNotice first that given some $B>0$, for $x\\in[-B,B]^m$, the function $h(x) =\\sqrt{\\eta+\\frac{1}{m}\\sum_{j=1}^m x_j^2}$ has derivatives bounded by $\\frac{B}{m\\sqrt{\\eta}}$ in $\\norm{.}_\\infty$. Thus, we have, for every $x, x'\\in[-B,B]^m$:\n\\begin{align}\\label{eq:main_simplification}\n \\Big|\\sqrt{\\eta+\\frac{1}{m}\\sum_{j=1}^m x_j^2} - \\sqrt{\\eta+\\frac{1}{m}\\sum_{j=1}^m {x'_j}^2}\\Big| \\leq \\frac{B}{m\\sqrt{\\eta}}\\sum_{j=1}^m |x_j-x'_j|\n\\end{align}\nWith $x_j = \\quaavgv_j(\\pi}%{\\hat{\\pi})-q_j\\norm{b}_1$ and $x'_j = \\quah_{\\mathrm{avg}}\\vh_j-\\quah_j\\norm{b}_1$, we have $B\\leq 1+\\norm{b}_1$. Moreover, the bound \\eqref{eq:starter_qua_approx_gradient} writes:\n\\begin{align}\\label{eq:details_approx_grad}\n \\frac{m}{\\beta}\\Big|g_{w, ij}\\big(\\pi}%{\\hat{\\pi}\\big) -g_{\\hat{\\w}, ij}(\\pi}%{\\hat{\\pi})\\Big| \n &= \\Big| \\frac{x_j}{h(x)}- \\frac{x'_j}{h(x')}\\Big| \\leq \\frac{|x_j-x'_j|h(x)}{h(x)h(x')}\n + |x'_j|\\frac{\\big| h(x) - h(x')\\big|}{h(x)h(x')}\\\\\n & \\leq \\frac{|x_j-x'_j|}{\\sqrt{\\eta}} + \\frac{B}{m\\eta}\\norm{x-x'}_1 \n \\leq \\frac{1+B}{m\\min(\\eta,\\sqrt{\\eta})}\\norm{x-x'}_{\\infty},\n\\end{align}\nwhere we used \\eqref{eq:main_simplification} and the fact that $h(x) \\geq \\sqrt{\\eta}$ and $h(x') \\geq |x'_j|$.\nNow, since all exposures and qualities are upper bounded by $1$, notice that\n\\begin{align}\n |x_j - x'_j| \\leq |q_{\\mathrm{avg}} - \\quah_{\\mathrm{avg}}| + |\\vh_j-v_j(\\pi}%{\\hat{\\pi})| + \\norm{b}_1 |q_j - \\quah_j| \\leq (2+\\norm{b}_1)\\norm{\\hat{\\w}-w}_1,\n\\end{align}\nwhere we used similar calculations as in \\eqref{eq:deviation_exposure} to bound for $q_j - \\quah_j$ and $q_{\\mathrm{avg}} - \\quah_{\\mathrm{avg}}$.\n\nPutting it all together, and using \\eqref{eq:deviation_exposure} for the expectation, we obtain:\n\\begin{align*}\n \\mathbb{E}\\Big[\\Big|g_{w, ij}\\big(\\pi}%{\\hat{\\pi}\\big) -g_{\\hat{\\w}, ij}(\\pi}%{\\hat{\\pi})\\Big|\\Big] \n \n \n \\leq \n \\frac{\\beta\\big(2+\\norm{b}_1\\big)^2}{m\\min(\\eta,\\sqrt{\\eta})}\\sqrt{\\frac{n-1}{t}}.\n\\end{align*}\n\n\\subsection{Balanced exposure} For balanced exposure, let us denote by\n\\begin{align}\n \\hat{Z}_j = \\sqrt{\\eta + \\sum_{s}%{\\mathfrak{s}\\in\\mathcal{S}} \\Big(\\hat{\\v}_{j|\\group}-\\hat{\\v}_{\\mathrm{avg}|\\group}\\Big)^2}\n &&\n \\text{and~~} Z_j = \\sqrt{\\eta + \\sum_{s}%{\\mathfrak{s}\\in\\mathcal{S}} \\Big(\\v_{j|\\group}(\\pi}%{\\hat{\\pi})-\\v_{j|\\mathrm{avg}}(\\pi}%{\\hat{\\pi})\\Big)^2}.\n\\end{align}\nWe then have\\footnote{We notice that in Alg.~\\ref{alg:balancedexpo} we have $\\frac{t}{c_{\\groupofi}+1}$ rather than $\\frac{t+1}{c_{\\groupofi}+1}$. 
This is because here we are considering $g_{\\wht, i}\\big(\\pih^{(t)}\\big) = \\hat{g}^{(t+1)}_i$ (see \\eqref{eq:which_approximate_gradient}), while the algorithm uses $\\hat{g}^{(t)}_i$.}\n\\begin{align}\n \\Big|g_{w, ij}\\big(\\pi}%{\\hat{\\pi}\\big) -g_{\\hat{\\w}, ij}(\\pi}%{\\hat{\\pi})\\Big| \n &= \\frac{\\beta}{m}\\Big|\\frac{t+1}{c_{\\groupofi}+1}\\frac{\\hat{\\v}_{j|\\groupofi} - \\hat{\\v}_{j|\\mathrm{avg}}}{ \\hat{Z}_j} - \\frac{1}{\\overline{w}_{\\groupofi}}\\frac{\\v_{j|\\groupofi}(\\pi}%{\\hat{\\pi}) - \\v_{j|\\mathrm{avg}}(\\pi}%{\\hat{\\pi})}{Z_j} \\Big|.\n\\end{align}\nSimilarly to quality of exposure, for $x\\in[-B,B]^{\\card{\\mathcal{S}}}$ let us denote by $h(x) = \\sqrt{\\eta + \\sum_{s}%{\\mathfrak{s}\\in\\mathcal{S}} x_{\\group}^2}$ for $x_{\\group}\\in[0,B]^{\\card{\\mathcal{S}}}$. We have $|h(x) - h(x')| \\leq \\frac{B}{\\sqrt{\\eta}}\\sum_{s}%{\\mathfrak{s}\\in\\mathcal{S}} |x_s}%{\\mathfrak{s}-x'_s}%{\\mathfrak{s}|$. Moreover, with \n\\begin{align}\n x_{\\group} =\\hat{\\v}_{j|\\group} - \\hat{\\v}_{j|\\mathrm{avg}} && x_{\\group}' = \\v_{j|\\group}(\\pi}%{\\hat{\\pi}) - \\v_{j|\\mathrm{avg}}(\\pi}%{\\hat{\\pi}) && \\alpha_j=\\frac{t+1}{c_{\\groupofi}+1} && \\alpha_j' = \\frac{1}{\\overline{w}_{\\groupofi}}\n\\end{align}\nwe can use $B=1$, and, using similar steps as \\eqref{eq:details_approx_grad} (here the gradients of $h$ are bounded by $\\frac{B}{\\sqrt{\\eta}}$ in infinity norm):\n\\begin{align}\n \\Big|g_{w, ij}\\big(\\pi}%{\\hat{\\pi}\\big) -g_{\\hat{\\w}, ij}(\\pi}%{\\hat{\\pi})\\Big| \n &= \\frac{\\beta}{m} \\bigg(\\frac{B}{\\sqrt{\\eta}} |\\alpha_i-\\alpha'_i| +\\alpha'_i\\Big|\\frac{x_{\\groupofi}}{h(x)} -\\frac{x_{\\groupofi}'}{h(x')} \\Big|\\bigg)\\\\\n &\\leq\n \\frac{\\beta}{m\\overline{w}_{\\groupofi}\\min(\\eta,\\sqrt{\\eta})} \\bigg( \\overline{w}_{\\groupofi}|\\alpha_j-\\alpha'_j| + |x_{\\groupofi} - x_{\\groupofi}'| + \\norm{x-x'}_1\\bigg)\n\\end{align}\nWe now notice that using \\eqref{eq:deviation_exposure_withingroup}, we have:\n\\begin{align}\\label{eq:balanced_bounds_onemorn}\n \\mathbb{E}\\Big[|x_{\\groupofi} - x_{\\groupofi}'|\\Big] \\leq \\sqrt{\\frac{2(\\card{\\group[i]}-1)}{\\overline{w}_{\\groupofi} t}} + \\frac{1}{\\card{\\mathcal{S}}}\\sum_{s}%{\\mathfrak{s}\\in\\mathcal{S}}\\sqrt{\\frac{2(\\card{s}%{\\mathfrak{s}}-1)}{\\overline{w}_{\\group} t}} \n && \\text{and thus } \n \\mathbb{E}\\Big[\\norm{x-x'}_1\\Big]\\leq 2\\sum_{s}%{\\mathfrak{s}\\in\\mathcal{S}}\\sqrt{\\frac{2(\\card{s}%{\\mathfrak{s}}-1)}{\\overline{w}_{\\group} t}} \n\\end{align}\nWe finish the proof with the bound on $|\\alpha_i-\\alpha_i'|$:\n\\begin{align}\n \\mathbb{E}\\big[|\\alpha_i-\\alpha'_i|\\big]\n & \n = \\mathbb{E}\\bigg[\n \\Big| \\frac{t+1}{c_{\\groupofi}+1} - \\frac{1}{\\overline{w}_{\\groupofi}}\\Big| \\Bigg]\n = \\frac{1}{\\overline{w}_{\\groupofi}}\\mathbb{E}\\bigg[\\frac{\\big|\\overline{w}_{\\groupofi} + (\\overline{w}_{\\groupofi}-1) \\big|}{c_{\\groupofi}+1}\\bigg]\n\\end{align}\nFollowing the same steps as \\eqref{eq:bound_on_one_over} in Lem.~\\ref{lem:Gt}, we obtain:\n\\begin{align}\n \\mathbb{E}\\bigg[\\frac{\\big|\\overline{w}_{\\groupofi} + (\\overline{w}_{\\groupofi}-1) \\big|}{c_{\\groupofi}+1}\\bigg]\n \\leq \\frac{3}{2\\sqrt{\\overline{w}_{\\groupofi}(t+1)}}+\\frac{1}{\\overline{w}_{\\groupofi} (t+1)}\n\\end{align}\nPutting it all together, we get:\n\\begin{align}\n \\mathbb{E} \\left[ \\Big|g_{w, ij}\\big(\\pi}%{\\hat{\\pi}\\big) -g_{\\hat{\\w}, ij}(\\pi}%{\\hat{\\pi})\\Big| \\right] \\leq\n 
\\frac{\\beta}{m\\overline{w}_{\\groupofi} \\min(\\eta,\\sqrt{\\eta})}\\bigg(\n \\frac{3}{2\\sqrt{\\overline{w}_{\\groupofi}(t+1)}} + \\frac{1}{\\overline{w}_{\\groupofi}(t+1)}\n +\n 4\\sum_{s}%{\\mathfrak{s}\\in\\mathcal{S}}\\sqrt{\\frac{2(\\card{s}%{\\mathfrak{s}} - 1)}{\\overline{w}_{\\group} t}}\\bigg).\n\\end{align}\nFinally, using $4\\sqrt{2(\\card{s}%{\\mathfrak{s}} - 1)} + \\frac{3}{2} \\leq 8\\sqrt{\\card{s}%{\\mathfrak{s}}}$ as long as $\\card{s}%{\\mathfrak{s}}\\geq 1$ (we assumed groups are non-empty),\\footnote{The function $s}%{\\mathfrak{s}\\mapsto 4\\sqrt{2(\\card{s}%{\\mathfrak{s}} - 1)} + \\frac{3}{2} - 8\\sqrt{\\card{s}%{\\mathfrak{s}}}$ is decreasing and is $\\leq 0$ when $\\card{s}%{\\mathfrak{s}}=1$.} we obtain \n\\begin{align}\n \\mathbb{E} \\left[ \\Big|g_{w, ij}\\big(\\pi}%{\\hat{\\pi}\\big) -g_{\\hat{\\w}, ij}(\\pi}%{\\hat{\\pi})\\Big| \\right] \\leq\n \\frac{\\beta}{m\\overline{w}_{\\groupofi}\\min(\\eta,\\sqrt{\\eta})}\\bigg(\n \\frac{1}{\\overline{w}_{\\groupofi}(t+1)}\n +\n 8\\sum_{s}%{\\mathfrak{s}\\in\\mathcal{S}}\\sqrt{\\frac{\\card{s}%{\\mathfrak{s}}}{\\overline{w}_{\\group} t}}\\bigg).\n\\end{align}\nwhich simplifies to the desired result when $\\eta\\leq 1$.\n\n\\section{Proof of Corollary \\ref{cor:boundalgos}}\\label{sec:proof:boundalgos}\nWhen $\\eta\\leq 1$, and using $A\\lesssim B$ as a shorthand for $A=O(B)$, one can bound the constants $D_i$ in Theorem \\ref{thm:boundgeneral} as follows.\n\\begin{itemize}\n \\item Alg.~\\ref{alg:twosided} (two-sided fairness): Using Proposition \\ref{prop:approx_gradients}, $D_i$ taken as follows is sufficient\n $$\n D_i \\lesssim \\dfrac{\\beta\\|\\psi''_{\\alpha_2}\\|_\\infty\\sqrt{n}}{m} \n \\lesssim \\dfrac{\\beta\\eta^{\\alpha_2-2}\\sqrt{n}}{m} \n \n $$\n \\item Alg.~\\ref{alg:quaexpo} (quality-weighted):\n $$\n D_i \\lesssim \\dfrac{\\beta(2+\\|b\\|_1)^2\\sqrt{n}}{m\\min(\\eta,\\sqrt{\\eta})}\n \n \\lesssim \\dfrac{\\|b\\|_1^2\\beta\\sqrt{n}}{m\\eta}\n \n $$\n \n \\item Alg.~\\ref{alg:balancedexpo} (balanced exposure):\n By euclidean Cauchy-Schwarz inequality, we have\n $$\n \\left(\\frac{1}{|\\mathcal S|}\\sum_{s \\in \\mathcal S} \\sqrt{\\frac{|s|}{\\overline{w}_{\\group}}}\\right)^2 \\le \\left(\\frac{1}{|\\mathcal S|}\\sum_s |s|\\right)\\left(\\frac{1}{|\\mathcal S|}\\sum_s \\frac{1}{\\overline{w}_{\\group}}\\right) \\le \\frac{n}{|\\mathcal S|\\overline{w}_{\\min}}\n $$\n where $\\overline{w}_{\\min} := \\min_{s \\in \\mathcal S}\\overline{w}_{\\group}$. 
Thus, we deduce that $\\sum_{s \\in \\mathcal S}\\sqrt{|s|\/\\overline{w}_{\\group}} \\le \\sqrt{n|\\mathcal S|\/\\overline{w}_{\\min}}$, and so\n $$\n D_i \\lesssim \\dfrac{\\beta}{m\\overline{w}_{\\groupofi}\\min(\\eta,\\sqrt{\\eta})} \\sum_{s \\in \\mathcal S}\\sqrt{\\dfrac{|s|}{\\overline{w}_{\\group}}} \\le \\dfrac{\\beta\\sqrt{n|\\mathcal S|\/\\overline{w}_{\\min}}}{m\\overline{w}_{\\groupofi}\\min(\\eta,\\sqrt{\\eta})}\n \n %\n \\lesssim\\frac{\\beta\\sqrt{n}}{m\\eta}\\sqrt{\\frac{|\\mathcal S|}{\\overline{w}_{\\min}^3}} \n \n $$\n\\end{itemize}\n\nThus, the gradient estimates \\eqref{eq:approximategradient} required in Theorem \\ref{thm:boundgeneral} hold with $D_i= \\mathcal O(C_\\star)$ for all $i$, where the constants $C_\\star$ are given above.\n\nIn addition, we can note that \n\\begin{itemize}\n \\item $\\ln(et)\/t \\ll 1\/\\sqrt{t}$ and so the first term in \\eqref{eq:generalregret} is dominated by the second, hence can be ignored.\n \\item The $L_i$'s are bounded independently of $t$ and only impact the $\\log(et)\/t$ term, so we can ignore them.\n \\item $B_\\arms = \\norm{b}_1$.\n \\item By Jensen's inequality, $(1\/n)\\sum_i \\sqrt{w_i} \\le \\sqrt{(1\/n)\\sum_i w_i} = \\sqrt{1\/n}$, and so $\\sum_i \\sqrt{w_i} \\le \\sqrt{n}$.\n\\end{itemize}\nThe regret bounds presented in Table~\\ref{tab:rates} then follow upon plugging these estimates for $D_i$ into the generic regret bound \\eqref{eq:generalregret} of Theorem \\ref{thm:boundgeneral}, together with the following bounds for $G_i$:\n\\begin{itemize}\n\\item Alg.~\\ref{alg:twosided} (two-sided fairness) $G_i \\lesssim \\norm{\\psi'_{\\alpha_1}}_\\infty+\\frac{\\beta}{m}\\norm{\\psi'_{\\alpha_2}}_\\infty \\leq \\eta^{\\alpha_1-1} + \\frac{\\beta}{m}\\eta^{\\alpha_2-1}$,\n\\item Alg.~\\ref{alg:quaexpo} (quality-weighted) $G_i \\leq 1 + \\frac{\\beta}{m\\sqrt{\\eta}}$,\n\\item Alg.~\\ref{alg:balancedexpo} (balanced exposure) $G_i \\leq 1 + \\frac{\\beta}{m\\overline{w}_{\\min}\\sqrt{\\eta}}$.\n\\end{itemize}\nNotice that the user activity does not appear explicitly in these upper-bounds on the normalized gradient because (1) the user objective weights $\\pi_i$ by $w_i$ and (2) in the definition of item exposure, $\\pi_{ij}$ is also weighted by $w_i$. The normalization removes these weights.\n\n}\n{\n\\section{Additional experimental details}\n\n\\subsection{Training}\\label{sec:xp_training}\n\nFor the \\textit{lastfm15k}\\xspace experiments, following \\citet{do2021two}, the training was performed by randomly splitting the dataset into 3 splits $70\\%\/10\\%\/20\\%$ of train\/validation\/test sets. The hyperparameters of the factorization\\footnote{Using the Python library \\texttt{Implicit} toolkit \\url{https:\/\/github.com\/benfred\/implicit}.} are selected on the validation set by grid search. The number of latent factors is chosen in $[16, 32, 64, 128]$, the regularization in $[0.1, 1., 10., 20., 50.]$, and the confidence weighting parameter in $[0.1, 1., 10., 100.]$. The estimated preferences we use are the positive part of the resulting estimates.\n\n\n\\subsection{More details on \\textit{lastfm15k}\\xspace}\\label{sec:xp:details:lastfm}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{convergence_1_5000.png}\n \n \\includegraphics[width=\\linewidth]{convergence_1_50000.png}\n \\caption{Convergence speed on \\textit{lastfm15k}\\xspace of \\textsc{Offr}\\xspace compared to \\textit{batch-FW}\\xspace on the three fairness objectives, for $\\beta \\in\\{0.01, 1\\}$ and $\\eta=1$. 
As expected, they converge to the same values. \\textsc{Offr}\\xspace was run for $5k$ epochs, while \\textit{batch-FW}\\xspace was run for $50k$ epochs. We see that \\textsc{Offr}\\xspace converges to the same objective function value as \\textit{batch-FW}\\xspace as expected, up to some noise on quality-weighted exposure for small values of $\\beta$.}\n \\label{fig:convergence_details_lastfm}\n\\end{figure}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{trade-offs_0.01.png}\n \n \\includegraphics[width=\\linewidth]{trade-offs_1.png}\n \\caption{Comparison of the trade-offs obtained by varying $\\beta$ for $\\eta=0.01$ (top row) and $\\eta=1$ (bottom row, repeating Fig.~\\ref{fig:tradeoffs} for better visibility), for \\textsc{Offr}\\xspace with $10$ and $1000$ epochs on \\textit{lastfm15k}\\xspace. }\n \\label{fig:tradeoffs_details_lastfm}\n\\end{figure}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{convergence_0_01_1000.png}\n \\caption{Convergence speed of \\textsc{Offr}\\xspace compared to \\textit{batch-FW}\\xspace on \\textit{lastfm15k}\\xspace for $\\eta=0.01$. The overall trends are similar to those for $\\eta=1$ in Fig.~\\ref{fig:convergence}, except that \\textit{batch-FW}\\xspace becomes more rapidly better than \\textsc{Offr}\\xspace on the balanced exposure objective for small $\\beta$.}\n \\label{fig:convergence_eta_lastfm}\n\\end{figure}\n\nIn this section, we provide additional details regarding convergence, as well as the choice of $\\eta$. \n\n\\paragraph{Convergence} In Fig.~\\ref{fig:convergence_details_lastfm} we give the results of the algorithms with more epochs than in Fig.~\\ref{fig:convergence} in the main paper. The online algorithm was run for 5000 epochs, while the batch algorithm was run for $50k$ epochs (which was necessary to reach the convergence value for batch for large values of $\\beta$). While the online algorithm converges cleanly on two-sided fairness and balanced exposure, the convergence on quality-weighted exposure is more noisy and seems to oscillate around $10^{-4}\/10^{-5}$ of the objective; it nonetheless converges to the desired value, with much faster convergence as $\\beta$ becomes large.\n\n\\paragraph{Changing $\\eta$} In our objective functions, the main purpose of $\\eta$ is to ensure that the objective functions are smooth. Note that fundamentally, we are looking for trade-offs between a user objective and a fairness objective by varying $\\beta$. Different values of $\\eta$ lead to different trade-offs for fixed $\\beta$, but not necessarily different Pareto fronts when varying $\\beta$ from very small to very large values. Nonetheless, as $\\eta$ controls the curvature of the function, it is important for the convergence of both \\textit{batch-FW}\\xspace and \\textsc{Offr}\\xspace (see, e.g., the analysis in \\citet{clarkson2010coresets} for more details on the convergence of batch Frank-Wolfe algorithms and the importance of the curvature). \n\nIn Fig.~\\ref{fig:tradeoffs_details_lastfm}, we show the trade-offs achieved with $\\eta=0.01$ compared to $\\eta=1$ as shown in the main paper, for the same values of $\\beta\\in\\{10^{x}, x\\in\\{-3, -2, -1, 0, 1, 2\\}\\}$, as in Fig.~\\ref{fig:tradeoffs}. \nFor both quality-weighted and balanced exposure, we observe that the smaller value of $\\eta$ (top row) reaches better trade-offs than $\\eta=1$ for this range of $\\beta$, which may indicate that $\\eta=0.01$ might be preferable in practice to $\\eta=1$. 
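\n\nTo make the role of $\\eta$ more concrete, the short numerical sketch below (our own illustration, not part of the paper's code) evaluates the curvature of the smoothed square-root penalty $\\sqrt{\\eta +\\frac{1}{m}\\sum_{j}x_{j}^{2}}$ that appears in the quality-weighted objective, around its minimum $x=0$: the curvature grows as $\\eta$ decreases, consistently with the $1\/\\sqrt{\\eta }$ factors in the gradient and smoothness bounds of the proofs.\n\\begin{verbatim}\nimport numpy as np\n\n# Sketch (ours): curvature of the smoothed penalty sqrt(eta + mean_j x_j^2),\n# where x_j stands for (q_avg*v_j - q_j*||b||_1) in the quality-weighted\n# objective.  Near x = 0 the curvature scales as 1/(m*sqrt(eta)): a smaller\n# eta gives a less smooth objective and hence slower Frank-Wolfe progress.\ndef penalty(x, eta):\n    return np.sqrt(eta + np.mean(x ** 2))\n\nm, eps = 50, 1e-4\ne0 = np.zeros(m)\ne0[0] = 1.0                      # perturb one coordinate around the minimum\nfor eta in (1.0, 0.01):\n    curv = (penalty(eps * e0, eta) - 2 * penalty(0 * e0, eta)\n            + penalty(-eps * e0, eta)) / eps ** 2\n    print(f'eta={eta}: curvature {curv:.4f}, 1/(m*sqrt(eta)) = {1/(m*np.sqrt(eta)):.4f}')\n\\end{verbatim}\nThe measured curvature is $0.02$ for $\\eta =1$ and $0.2$ for $\\eta =0.01$, matching $1\/(m\\sqrt{\\eta })$ with $m=50$; this larger curvature is the price paid for the better trade-offs observed with the smaller $\\eta$.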
\n\nIn Fig.~\\ref{fig:convergence_eta_lastfm}, we plot the convergence speed of \\textsc{Offr}\\xspace and \\textit{batch-FW}\\xspace for the first $1000$ epochs when $\\eta=0.01$. Comparing with Fig.~\\ref{fig:convergence}, we observe that, as expected, both \\textsc{Offr}\\xspace and \\textit{batch-FW}\\xspace converge to their objective function value more slowly overall. The relative convergence of \\textsc{Offr}\\xspace compared to \\textit{batch-FW}\\xspace follows similar trends as for $\\eta=1$, with \\textsc{Offr}\\xspace obtaining better values of the objective at the beginning, and \\textsc{Offr}\\xspace being significantly better than \\textit{batch-FW}\\xspace for large values of $\\beta$. \n\nInterestingly though, compared to \\textit{batch-FW}\\xspace, \\textsc{Offr}\\xspace still converges relatively fast, and seems less affected by the larger curvature than \\textit{batch-FW}\\xspace. This is coherent with the observation that \\textsc{Offr}\\xspace converges faster than \\textit{batch-FW}\\xspace at the beginning, especially for large values of $\\beta$. While the exact reason why \\textsc{Offr}\\xspace is less affected by large curvatures than \\textit{batch-FW}\\xspace is an open problem, these results are promising considering the wide applicability of online Frank-Wolfe algorithms.\n\n\\subsection{Results on MovieLens data}\\label{sec:xp_mlm}\n\nTo show the results on another dataset, we replicate the experiments on the MovieLens-1m dataset (\\textit{MovieLens-1m}\\xspace) \\cite{harper2015movielens}. The dataset contains ratings of movies, with $\\sim$3k users and 4k items, as well as a gender attribute. We transform the rating matrix into a binary problem by considering ratings $\\geq 3$ as positive examples, and setting all other entries to 0. This makes the problem similar to \\textit{lastfm15k}\\xspace, and we then follow exactly the same protocol as for \\textit{lastfm15k}\\xspace, including sampling and learning the matrix factorization of user values.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{trade-offs_ml1m.png}\n \\caption{Trade-offs in terms of user objective ($y$-axis) and item fairness ($x$-axis) for \\textit{MovieLens-1m}\\xspace. 
The observations are similar to those on \\textit{lastfm15k}\\xspace.}\n \\label{fig:tradeoffs_ml1m}\n\\end{figure}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{convergence_1_1000_ml1m.png}\n \n \\includegraphics[width=\\linewidth]{convergence_1_5000_ml1m.png}\n \n \\includegraphics[width=\\linewidth]{convergence_1_50000_ml1m.png}\n \\caption{Convergence speed of \\textsc{Offr}\\xspace compared to \\textit{batch-FW}\\xspace on \\textit{MovieLens-1m}\\xspace, for $\\beta \\in\\{0.01, 1\\}$ and $\\eta=1$, for the first $1k$ epochs (top row), $5k$ epochs (middle row) and $50k$ epochs (bottom row); note that only \\textit{batch-FW}\\xspace was run for $50k$ epochs.}\n \\label{fig:convergence_ml1m}\n\\end{figure}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.48\\linewidth]{fairco_ml1m.png}\n \\hfill\n \\includegraphics[width=0.48\\linewidth]{fairco_pacing_ml1m.png}\n \\caption{Convergence of \\textsc{Offr}\\xspace compared to the dynamics of \\textit{FairCo}\\xspace on \\textit{MovieLens-1m}\\xspace: (left) without the pacing heuristic; (right) with the pacing heuristic.}\n \\label{fig:fairco_ml1m}\n\\end{figure}\n\n\n\\paragraph{Qualitative trade-offs} The trade-offs obtained by varying $\\beta\\in\\{10^{x}, x\\in\\{-3, -2, -1, 0, 1, 2\\}\\}$ are shown in Fig.~\\ref{fig:tradeoffs_ml1m}. The plots are qualitatively very similar to those of\n\\textit{lastfm15k}\\xspace.\n\n\\paragraph{Convergence} The convergence of \\textsc{Offr}\\xspace compared to \\textit{batch-FW}\\xspace is shown in Fig.~\\ref{fig:convergence_ml1m}. The curves again look very similar to the plots of \\textit{lastfm15k}\\xspace, with \\textsc{Offr}\\xspace being better than \\textit{batch-FW}\\xspace at the beginning, and reaching the same values at convergence as \\textit{batch-FW}\\xspace, up to noise when the objective is close to the optimum.\n\n\\paragraph{Comparison to \\textit{FairCo}\\xspace} Fig.~\\ref{fig:fairco_ml1m} shows the comparison to \\textit{FairCo}\\xspace. The observations are once again similar to those on \\textit{lastfm15k}\\xspace, with the main trends exacerbated (\\textsc{Offr}\\xspace without the pacing heuristic converges in the fairness objective extremely fast and takes time to converge in the user objective, while \\textit{FairCo}\\xspace keeps a high value of the user objective as much as possible while consistently reducing the item objective). \n}\n\\end{document}\n\\endinput\n\n\n\\section{Introduction}\n\nOver the past half-century, excitons have been considered as notably interesting\ncandidates for Bose--Einstein condensation (BEC), in which collective\ncoherence may lead to intriguing macroscopic quantum phenomena (see Ref.~\\onlinecite{Moskal} and references therein). An exciton, being a bound state of\nan electron and a hole in a semiconductor, is a unique physical system with a\nrather small mass, comparable to the free electron mass. This is a crucial\nadvantage from the experimental point of view, since the BEC critical\ntemperature of an excitonic gas is much higher than that of an atomic gas with\nthe same number density. \\cite{Pethick} However, the BEC was first\nsuccessfully realized for trapped alkali atoms, \\cite{BEC} which are several\nthousand times heavier than excitons. The latter provided additional\nstimulus for the realization of BEC of various quasiparticles in condensed matter physics.
In this context, it is worth mentioning the realization of BEC\nof quasiparticles known as exciton--polaritons, \\cite{Kasp} existing even\nat room temperature. \\cite{BECS}\n\nAmong the variety of bosonic quasiparticles, the excitons of the yellow\nseries in the semiconductor cuprous oxide ($\\mathrm{Cu}_{2}\\mathrm{O}$) are\nstill considered as the most promising candidates for pure excitonic BEC. \n\\cite{Rev1,Rev2,Rev3} Experiments in this direction have been carried out since\n1986, \\cite{Exp1,Exp2,Exp3,Exp4,Exp5} and have continued up to the present \\cite{R1,R2,R3,R4,R5,R6,R7,R8} due to several favorable features of excitons in \n$\\mathrm{Cu}_{2}\\mathrm{O}$. First, the binding energy is large ($0.15\\ \\mathrm{eV}$), which increases the Mott density up to $10^{19}\\ \\mathrm{cm}^{-3}$. Second, the ground state of this series splits into the threefold\ndegenerate orthoexciton and the non-degenerate paraexciton. The latter is\nthe lowest-energy state, lying below the orthoexciton states. Due to the\nselection rules, one-photon decay of the paraexciton is forbidden. Its decay is\nonly possible via emission of an optical phonon and a photon, resulting in a long lifetime. \n\\cite{Mys,Shi} The lifetime is in the microsecond range, during which BEC may\nbe reached. To achieve excitonic BEC one should create a dense gas of\nexcitons either in a bulk crystal or in a potential trap. However,\nexperiments \\cite{Exp1,Exp2,Exp3,Exp4,Exp5,R3,R4,R5,R6} did not\nconclusively demonstrate excitonic BEC. The main reason for this failure is connected\nwith the fact that the lifetime of excitons in $\\mathrm{Cu}_{2}\\mathrm{O}$\ndecreases significantly at high gas densities. This effect has been\nattributed to an Auger recombination process between two excitons, resulting\nin a loss rate $\\Gamma _{\\mathrm{A}}=\\alpha n$, where $\\alpha $ is the Auger\nconstant and $n$ is the exciton gas density. However, there is no general\nconsensus on the value of the Auger constant. The reported values of $\\alpha $\nrange from $10^{-20}\\ \\mathrm{cm}^{3}\\mathrm{ns}^{-1}$ to $1.8\\times\n10^{-16}\\ \\mathrm{cm}^{3}\\mathrm{ns}^{-1}$ and differ for ortho- and\npara-excitons. \\cite{D1,D2,D3,D4,D5,D6}\n\nAs was mentioned above, the isolated exciton in $\\mathrm{Cu}_{2}\\mathrm{O}$\nis unstable and decays into a photon and a phonon. Due to BEC coherence, one can\nexpect collective radiative effects in the decay of a large number of\nexcitons. Such effects may serve as a tool that evidences the BEC state, and\nthey may also significantly reduce the lifetime of the BEC state. Such an\neffect has been revealed for positronium atoms, \\cite{Mer1,Mer2} which\nin some sense resemble excitons. It has been shown that the coupling of\ntwo coherent ensembles of bosons -- the BEC of positronium atoms and the photons --\nleads to an instability at which, starting from the vacuum state of the\nphotonic field, the expectation value of the photon mode occupation grows\nexponentially for a narrow interval of frequencies. For the excitons in \n$\\mathrm{Cu}_{2}\\mathrm{O}$ one has a coupling between three bosonic\nfields, and it is of interest to investigate how the excitonic BEC bursts into\nphotons\/phonons.\n\nIn this paper, the collective decay of excitons from an initial Bose--Einstein\ncondensate state is investigated within the second-quantized\nformalism. 
It is shown that, because of the intrinsic instability of the recoilless\ntwo-boson decay of the Bose--Einstein condensate, the spontaneously emitted\nbosonic pairs are amplified, leading to an exponential buildup of a\nmacroscopic population in certain modes. The exponential growth rate has a\nnonlinear dependence on the BEC density and is quite large for\nexperimentally achievable densities. For an elongated condensate, one can\nreach self-amplification of the end-fire modes. With an initial\nmonochromatic photonic beam, one can generate a monochromatic phononic\nbeam. Hence, the considered phenomenon may also be applied for the realization\nof a phonon laser.\n\nThe paper is organized as follows. In Sec. II the main Hamiltonian is\nintroduced. In Sec. III the spontaneous decay of an exciton is analyzed. In Sec. IV\nwe consider the intrinsic instability of the recoilless collective two-boson decay\nof the excitonic BEC. Finally, conclusions are given in Sec. V.\n\n\\section{Basic Hamiltonian}\n\nWe start our study with the construction of the Hamiltonian which governs\nthe quantum dynamics of the considered process. The total Hamiltonian consists\nof four parts: \n\\begin{equation}\n\\widehat{H}=\\widehat{H}_{\\mathrm{exc}}+\\widehat{H}_{\\mathrm{phot}}+\\widehat{H}_{\\mathrm{phon}}+\\widehat{H}_{\\mathrm{d}}. \\label{fH}\n\\end{equation}\nHere the first part is the Hamiltonian of free excitons\n\\begin{equation}\n\\widehat{H}_{\\mathrm{exc}}=\\int d\\Phi _{\\mathbf{p}}\\,\\mathcal{E}_{\\text{\\textsc{e}}}\\left( \\mathbf{p}\\right) \\widehat{\\text{\\textsc{e}}}_{\\mathbf{p}}^{+}\\widehat{\\text{\\textsc{e}}}_{\\mathbf{p}}, \\label{H_exc}\n\\end{equation}\nwhere $\\widehat{\\text{\\textsc{e}}}_{\\mathbf{p}}^{+}$ ($\\widehat{\\text{\\textsc{e}}}_{\\mathbf{p}}$) is the creation (annihilation) operator for an\nexciton. These operators satisfy the bosonic commutation rules for a\nrelatively small number density $n$ of excitons, that is at $n0$ is just given by the expansion\n\\begin{equation}\n|\\Psi \\rangle =C_{0}e^{-\\frac{i}{\\hbar }\\mathcal{E}_{\\text{\\textsc{e}}}\\left( 0\\right) t}|0\\rangle _{\\mathrm{phon}}\\otimes |0\\rangle _{\\mathrm{phot}}\\otimes \\widehat{\\text{\\textsc{e}}}_{0}^{+}|0\\rangle _{\\mathrm{exc}}+\\int d\\Phi _{\\mathbf{k}}d\\Phi _{\\mathbf{k}^{\\prime }}\\,C_{\\mathbf{k};\\mathbf{k}^{\\prime }}\\left( t\\right) e^{-i\\left( \\omega _{\\mathrm{ph}}\\left( \\mathbf{k}^{\\prime }\\right) +\\omega \\left( \\mathbf{k}\\right) \\right) t}\\widehat{b}_{\\mathbf{k}^{\\prime }}^{+}|0\\rangle _{\\mathrm{phon}}\\otimes \\widehat{c}_{\\mathbf{k}}^{+}|0\\rangle _{\\mathrm{phot}}\\otimes |0\\rangle _{\\mathrm{exc}}, \\label{Psit}\n\\end{equation}\nwhere $C_{\\mathbf{k};\\mathbf{k}^{\\prime }}\\left( t\\right) $ is the\nprobability amplitude for the photonic and phononic fields to be in the\nsingle-particle state, while the excitonic field is in the vacuum state. 
From the Schr\\\"{o}dinger equation one can obtain the evolution equations\n\\begin{equation}\ni\\frac{\\partial C_{\\mathbf{k};\\mathbf{k}^{\\prime }}\\left( t\\right) }{\\partial t}=\\frac{\\mathcal{M}\\left( \\mathbf{k}^{\\prime },\\mathbf{0}\\right) }{\\mathcal{V}^{1\/2}}C_{0}\\left( t\\right) \\frac{\\left( 2\\pi \\right) ^{3}}{\\mathcal{V}}\\delta \\left( \\mathbf{k}+\\mathbf{k}^{\\prime }\\right) e^{i\\left( \\omega _{\\mathrm{ph}}\\left( \\mathbf{k}^{\\prime }\\right) +\\omega \\left( \\mathbf{k}\\right) -\\omega _{\\mathrm{exc}}\\right) t}. \\label{ev1}\n\\end{equation}\nThen, according to perturbation theory we take $C_{0}\\left( t\\right) \\simeq 1$, and for the amplitude $C_{\\mathbf{k};\\mathbf{k}^{\\prime }}\\left( t\\rightarrow \\infty \\right) $ from Eq. (\\ref{ev1}) we obtain\n\\begin{equation}\nC_{\\mathbf{k};\\mathbf{k}^{\\prime }}=\\frac{\\mathcal{M}\\left( \\mathbf{k}^{\\prime },\\mathbf{0}\\right) }{i\\mathcal{V}^{1\/2}}\\frac{\\left( 2\\pi \\right) ^{4}}{\\mathcal{V}}\\delta \\left( \\omega _{\\mathrm{ph}}\\left( \\mathbf{k}^{\\prime }\\right) +\\omega \\left( \\mathbf{k}\\right) -\\omega _{\\mathrm{exc}}\\right) \\delta \\left( \\mathbf{k}+\\mathbf{k}^{\\prime }\\right) . \\label{pert3}\n\\end{equation}\nFor the decay of an exciton the modes lying in a narrow interval of wavenumbers are responsible. Hence, for the dispersion relations we assume $\\omega _{\\mathrm{ph}}\\left( \\mathbf{k}\\right) =\\mathrm{const}\\equiv \\omega _{\\mathrm{ph}}$ and $\\omega \\left( \\mathbf{k}\\right) =kc_{l}$, where $c_{l}$ is the light speed in the semiconductor. Then, returning to expansion (\\ref{Psit}), one can write\n\\begin{equation}\n|\\Psi \\rangle \\simeq C_{0}e^{-i\\omega _{\\mathrm{exc}}t}|0\\rangle _{\\mathrm{phon}}\\otimes |0\\rangle _{\\mathrm{phot}}\\otimes |1_{\\mathbf{0}}\\rangle _{\\mathrm{exc}}+\\frac{\\mathcal{V}^{1\/2}\\mathcal{M}\\left( k_{0},0\\right) k_{0}^{2}}{i\\left( 2\\pi \\right) ^{2}c_{l}}e^{-i\\omega _{\\mathrm{exc}}t}|0\\rangle _{\\mathrm{exc}}\\otimes \\int d\\widehat{\\mathbf{k}}\\,|1_{\\mathbf{k}}\\rangle _{\\mathrm{phon}}\\otimes |1_{-\\mathbf{k}}\\rangle _{\\mathrm{phot}}, \\label{fs}\n\\end{equation}\nwhere $\\widehat{\\mathbf{k}}=\\mathbf{k}\/\\left\\vert \\mathbf{k}\\right\\vert $, and \n\\begin{equation}\nk_{0}=\\frac{\\omega _{\\mathrm{exc}}-\\omega _{\\mathrm{ph}}}{c_{l}}. \\label{k0}\n\\end{equation}\nHere we have taken into account that the decay amplitude does not depend on the direction of $\\mathbf{k}$, and, as a result, the final state (\\ref{fs}) resulting from an exciton decay is a superposition of the states of oppositely propagating photon and phonon with the given momentum $k_{0}$. That is, we have a recoilless two-boson decay of the exciton, which is crucial for the development of the instability in the BEC, where the lowest energy single-particle state is occupied. For the decay rate one can write \n\\begin{equation*}\n\\Gamma =\\int d\\Phi _{\\mathbf{k}}d\\Phi _{\\mathbf{k}^{\\prime }}\\frac{\\left\\vert C_{\\mathbf{k};\\mathbf{k}^{\\prime }}\\right\\vert ^{2}}{t_{\\mathrm{int}}},\n\\end{equation*}\nwhere $t_{\\mathrm{int}}$ is the interaction time. With the help of Eq. (\\ref{pert3}) we obtain the well-known result: \n\\begin{equation}\n\\Gamma =\\frac{\\mathcal{M}^{2}}{\\pi c_{l}}k_{0}^{2}. \\label{456}\n\\end{equation}\nThe radiative lifetime of an isolated exciton is $\\Gamma ^{-1}$.\n\n\\section{Collective decay}\n\nFor the analysis of the collective photon--phonon decay of excitons we will use the Heisenberg representation, where the evolution of the operators is given by the equation \n\\begin{equation}\ni\\frac{\\partial \\widehat{L}}{\\partial t}=\\left[ \\widehat{L},\\widehat{H}\\right] , \\label{Heis}\n\\end{equation}\nand the expectation values are determined by the initial wave function $\\Psi _{0}$:\n\\begin{equation*}\n\\left\\langle \\widehat{L}\\right\\rangle =\\langle \\Psi _{0}|\\widehat{L}|\\Psi _{0}\\rangle .\n\\end{equation*}\nWe will assume that the excitonic field starts in the Bose-Einstein condensate state, while for the photonic and phononic fields we will consider both the vacuum state and states with a nonzero mean number of particles. Taking into account Hamiltonian (\\ref{fH}), from Eq. (\\ref{Heis}) we obtain the set of equations\n\\begin{equation}\ni\\frac{\\partial \\widehat{c}_{\\mathbf{k}}}{\\partial t}=\\omega \\left( \\mathbf{k}\\right) \\widehat{c}_{\\mathbf{k}}+\\int d\\Phi _{\\mathbf{p}}\\frac{\\mathcal{M}\\left( \\mathbf{p-k},\\mathbf{p}\\right) }{\\mathcal{V}^{1\/2}}\\widehat{b}_{\\mathbf{p-k}}^{+}\\widehat{\\text{\\textsc{e}}}_{\\mathbf{p}}, \\label{M1}\n\\end{equation}\n\\begin{equation}\ni\\frac{\\partial \\widehat{b}_{\\mathbf{k}}}{\\partial t}=\\omega _{\\mathrm{ph}}\\left( \\mathbf{k}\\right) \\widehat{b}_{\\mathbf{k}}+\\int d\\Phi _{\\mathbf{p}}\\frac{\\mathcal{M}\\left( \\mathbf{k},\\mathbf{p}\\right) }{\\mathcal{V}^{1\/2}}\\widehat{c}_{\\mathbf{p-k}}^{+}\\widehat{\\text{\\textsc{e}}}_{\\mathbf{p}}, \\label{M2}\n\\end{equation}\n\\begin{equation}\ni\\frac{\\partial \\widehat{\\text{\\textsc{e}}}_{\\mathbf{p}}}{\\partial t}=\\hbar ^{-1}\\mathcal{E}_{\\text{\\textsc{e}}}\\left( \\mathbf{p}\\right) \\widehat{\\text{\\textsc{e}}}_{\\mathbf{p}}+\\int d\\Phi _{\\mathbf{q}}\\frac{\\mathcal{M}^{\\ast }\\left( \\mathbf{q},\\mathbf{p}\\right) }{\\mathcal{V}^{1\/2}}\\widehat{c}_{\\mathbf{p-q}}\\widehat{b}_{\\mathbf{q}}. \\label{M3}\n\\end{equation}\n\nThese equations are a nonlinear set of equations with the photonic, phononic and excitonic field operators defined self-consistently. As we are interested in the quantum dynamics of the considered system in the presence of instabilities, we can decouple the excitonic field when treating the dynamics of the photonic and phononic fields. For this purpose we just use the Bogolubov approximation. If the lowest energy single-particle state has a macroscopic occupation, we can separate the field operators $\\widehat{\\text{\\textsc{e}}}_{\\mathbf{p}}$ into the condensate term and the non-condensate components, i.e. the operator $\\widehat{\\text{\\textsc{e}}}_{\\mathbf{p}}$ in Eqs. (\\ref{M1}) and (\\ref{M2}) is replaced by the c-number as follows\n\\begin{equation}\n\\widehat{\\text{\\textsc{e}}}_{\\mathbf{p}}=\\sqrt{n_{0}}\\frac{\\left( 2\\pi \\right) ^{3}}{\\mathcal{V}^{1\/2}}\\delta \\left( \\mathbf{p}\\right) e^{-i\\omega _{\\mathrm{exc}}t}, \\label{Bogol}\n\\end{equation}\nwhere $n_{0}$ is the number density of excitons in the condensate. 
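\n\nBefore proceeding, the effect of this replacement can be previewed numerically. Once the condensate is treated as a classical pump of amplitude $\\sqrt{n_{0}}$, each photon mode is coupled to the counter-propagating phonon mode, and the pair amplitudes obey two linearly coupled equations that are derived explicitly below. The following short script is our own toy illustration with arbitrary parameters (not values taken from the text): it propagates such a pump-coupled pair from a tiny seed and exhibits exponential growth of the mode amplitudes at the rate $\\sigma =\\sqrt{\\chi ^{2}-\\delta ^{2}\/4}$ that is obtained analytically in the remainder of this section.\n\\begin{verbatim}\nimport numpy as np\n\n# Toy sketch (ours): one photon mode coupled to the counter-propagating phonon\n# mode by the condensate pump.  In a rotating frame the pair v = (C, B*) obeys\n# dv/dt = -i A v with A = [[d/2, chi], [-chi, -d/2]], where chi plays the role\n# of the collective coupling sqrt(n0)*M and d of the detuning (arbitrary units).\nchi, d = 1.0, 0.5\nA = np.array([[d / 2, chi], [-chi, -d / 2]], dtype=complex)\n\nlam, V = np.linalg.eig(-1j * A)            # exact propagation via eigen-decomposition\nVinv = np.linalg.inv(V)\nv0 = np.array([0.0, 1e-6], dtype=complex)  # tiny seed mimicking a spontaneous pair\nfor t in np.linspace(0.0, 12.0, 7):\n    v = V @ (np.exp(lam * t) * (Vinv @ v0))\n    print(f't={t:5.1f}   phonon occupation |B|^2 = {abs(v[1])**2:.3e}')\n\nprint('analytic growth rate sigma =', np.sqrt(chi**2 - d**2 / 4))\n\\end{verbatim}\nThe occupations grow as $e^{2\\sigma t}$ once $\\chi >\\delta \/2$, whereas for $\\chi <\\delta \/2$ the same script shows only bounded oscillations, in line with the instability condition derived below.\n\n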
Making\nBogoloubov approximation we arrive at a linear set of the Heisenberg\nequation\n\\begin{equation}\ni\\frac{\\partial \\widehat{c}_{-\\mathbf{k}}}{\\partial t}=\\omega \\left(\nk\\right) \\widehat{c}_{-\\mathbf{k}}+\\chi \\left( k\\right) \\widehat{b}_{\\mathbf\nk}}^{+}e^{-i\\omega _{\\mathrm{exc}}t}, \\label{L1}\n\\end{equation\n\\begin{equation}\ni\\frac{\\partial \\widehat{b}_{\\mathbf{k}}}{\\partial t}=\\omega _{\\mathrm{ph}\n\\widehat{b}_{\\mathbf{k}}+\\chi \\left( k\\right) \\widehat{c}_{\\mathbf{-k\n}^{+}e^{-i\\omega _{\\mathrm{exc}}t}, \\label{L2}\n\\end{equation\nwhich couples photon modes with momentum $\\mathbf{k}$ to the phonons with\nmomentum $-\\mathbf{k}$. The coupling constant is \n\\begin{equation}\n\\chi \\left( k\\right) =\\sqrt{n_{0}}\\mathcal{M}\\left( k,0\\right) . \\label{CC}\n\\end{equation}\n\nEquations (\\ref{L1}) and (\\ref{L2}) compose a set of linearly coupled\noperator equations that can be solved by the method of characteristics whose\neigenfrequencies define the temporal dynamics of the bosonic fields. The\nexistence of an eigenfrequency with an imaginary part would indicate the\nonset of instability at which the initial spontaneously emitted bosonic\npairs are amplified leading to an exponential buildup of a macroscopic mode\npopulation. Solving Eqs. (\\ref{L1}) and (\\ref{L2}), we obtai\n\\begin{equation*}\n\\widehat{b}_{\\mathbf{k}}^{+}=e^{i\\left( \\omega _{\\mathrm{ph}}-\\frac{\\delta\n\\left( k\\right) }{2}\\right) t}\\left\\{ \\widehat{b}_{\\mathbf{k}}^{+}\\left(\n0\\right) \\cosh \\left( \\sigma \\left( k\\right) t\\right) +\\frac{i}{2\\sigma\n\\left( k\\right) }\\right.\n\\end{equation*\n\\begin{equation}\n\\left. \\times \\left( \\delta \\left( k\\right) \\widehat{b}_{\\mathbf{k\n}^{+}\\left( 0\\right) +2\\chi ^{\\ast }\\left( k\\right) \\widehat{c}_{-\\mathbf{k\n}\\left( 0\\right) \\right) \\sinh \\left( \\sigma \\left( k\\right) t\\right)\n\\right\\} , \\label{S1}\n\\end{equation\n\\begin{equation*}\n\\widehat{c}_{-\\mathbf{k}}=e^{i\\left( \\frac{\\delta \\left( k\\right) }{2\n-\\omega \\left( k\\right) \\right) t}\\left\\{ \\widehat{c}_{-\\mathbf{k}}\\left(\n0\\right) \\cosh \\left( \\sigma \\left( k\\right) t\\right) -\\frac{i}{2\\sigma\n\\left( k\\right) }\\right.\n\\end{equation*\n\\begin{equation}\n\\left. \\times \\left( 2\\chi \\left( k\\right) \\widehat{b}_{\\mathbf{k\n}^{+}\\left( 0\\right) +\\delta \\left( k\\right) \\widehat{c}_{-\\mathbf{k}}\\left(\n0\\right) \\right) \\sinh \\left( \\sigma \\left( k\\right) t\\right) \\right\\} ,\n\\label{S2}\n\\end{equation\nwher\n\\begin{equation}\n\\delta \\left( k\\right) =\\omega \\left( k\\right) -\\omega _{\\mathrm{exc\n}+\\omega _{\\mathrm{ph}} \\label{det}\n\\end{equation\nis the resonance detuning, and \n\\begin{equation}\n\\sigma \\left( k\\right) =\\sqrt{\\left\\vert \\chi \\left( k\\right) \\right\\vert\n^{2}-\\frac{\\delta ^{2}\\left( k\\right) }{4}}. \\label{CCC}\n\\end{equation\nAs is seen from Eqs. (\\ref{S1})-(\\ref{CCC}), the condition for the dynamic\ninstability is: \n\\begin{equation*}\n\\left\\vert \\chi \\left( k\\right) \\right\\vert >\\frac{\\left\\vert \\delta \\left(\nk\\right) \\right\\vert }{2}\n\\end{equation*\nleading to the exponential growth of the modes in the narrow interval of\nwavenumber\n\\begin{equation}\n\\omega _{\\mathrm{exc}}-\\omega _{\\mathrm{ph}}-2\\left\\vert \\chi \\left(\nk_{0}\\right) \\right\\vert >1$ and $\\delta $ is the width of distribution in the momentum\nspace $\\delta <