diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzgyfg" "b/data_all_eng_slimpj/shuffled/split2/finalzzgyfg" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzgyfg" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\n\\begin{table*}\n\\begin{center}\n\\caption{Details of the observations made of each target. Standard stars\n(HD49798, EG 274 and EG 21) for photometric calibration were observed with the\nsame instrumental configuration.}\\label{tab:obs}\n\\begin{tabular}{cccccccl}\n\\hline\nTarget & Redshift & \\multicolumn{2}{c}{Position (J2000)} & Date & Exposure & Axial & Comment \\\\\n & & RA & Dec & & Time (s) & Wavelength (\\AA) &\\\\\n\\hline\nMRC B1256-243 & 2.263 & \\ra{12}{59}{12.6} & \\dec{-24}{36}{05} & 2003 July 27 & 15 $\\times$ 60 & 3957.2 & Repeated \\\\\n& & & & & & 3967.1 & twice. \\\\\n& & & & & & 3977.1 & \\\\\n& & & & & & 3987.1 & \\\\\n\\\\\nMRC B2158-206 & 2.249 & \\ra{22}{01}{27.0} & \\dec{-20}{25}{36} & 2003 July 27 & 15 $\\times$ 60 & 3959.1 & Repeated \\\\\n& & & & & & 3969.1 & four \\\\\n& & & & & & 3979.1 & times. \\\\\n& & & & & & 3989.0 & \\\\\n& & & & & & 3999.0 & \\\\\n\\\\\nBR B0019-1522 & 4.528 & \\ra{00}{22}{08.0} & \\dec{-15}{05}{39} & 1997 Nov. 6 & 600 & 6709.5 & Repeated \\\\\n& & & & & & 6725.9 & eight \\\\\n& & & & & & 6742.3 & times. \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\nThe evolution of clustering with cosmic time is widely recognised as one of\nthe most rigid tests of the cold dark matter paradigm \\citep{Kaiser91,\nSpringel05}. However, locating high redshift clusters is challenging. The\ntraditional methods of X-ray and blind optical searches are limited: X-ray\nsurveys can detect only the most luminous sources at high-$z$, while optical\nsearches are highly vulnerable to projection effects. In order to overcome\nthese limitations, a way of targeting the search is needed.\n\nSince the earliest studies, it has been established that quasars are\nassociated with groups and clusters of galaxies \\citep{Bahcall69, Oemler72}.\nMore recently, \\citet{McLure01} argued that a close match between the space\ndensity of clusters and that of quasars indicates that practically all\nclusters contained an AGN at high redshift. Further, \\citet{Rawlings04}\npropose that radio jets from AGN are a major influence on cluster evolution.\nThey suggest that a galaxy merger within the cluster triggers a radio-jet\nepisode; the jets then delivery energy to the intracluster medium, heating it\nand preventing it from falling into the other developing cluster galaxies.\nThese galaxies are thus starved of fuel, and star formation within the cluster\nwill effectively shut down. \\citeauthor{Rawlings04} speculate that every\nprotocluster undergoes such an episode, strengthening the link postulated by\n\\citeauthor{McLure01}.\n\nThis relationship between galaxy overdensities and AGN suggests a method for\nlocating high-$z$ clusters: we can use quasars as convenient `anchors' for our\nsearch. This technique has already been exploited by others with notable\nsuccess: for example, \\citet{Stiavelli05} tentatively report the detection of\nclustering around a radio-quiet quasar at $z = 6.28$.\n\nTo date most galaxy clusters detected around AGN have been identified based on\nstatistical overdensities of objects observed in their vicinity. 
A better\nstrategy for overcoming foreground contamination is to identify individual\nstar forming galaxies in the AGN field by their characteristic redshift\ndependent features. In particular, Lyman $\\alpha$ emission has been used to\nidentify high redshift galaxies for some time. Among the first high redshift\nobjects identified by emission lines were the $z = 4.55$ Ly $\\alpha$ emitters\nobserved in the field of the quasar BR B2237-0607 by \\citet{Hu96}. Since then,\na series of highly profitable observations of Ly $\\alpha$ emitters in AGN\nfields have been carried out. \\citet{Kurk00} and \\citet{Pentericci00} used a\ncombination of narrow- and broad-band imaging with follow-up spectroscopy to\nidentify a galaxy overdensity within 1.5 Mpc of the $z = 2.156$ radio galaxy\nPKS B1138-262. Similar results have been achieved for the radio galaxies TN\nJ1338-1942 \\citep[$z=4.1$;][]{Venemans02}, TN J0924-2201\n\\citep[$z=5.2$;][]{Venemans04, Overzier06} and MRC B0316-257\n\\citep[$z=3.13$;][]{Venemans05} and 6C0140+326 \\citep[$z=4.413$;][]{Kuiper11}.\n\nWhile this combination of broad and narrowband imaging has produced\ndemonstrably successful results, the more direct antecedents of this work have\nadopted an alternative approach. The \\textit{Taurus Tunable Filter} (TTF)\ninstrument, installed on the Anglo-Australian Telescope, provided a powerful\nmethod of narrow-band (of order 10 \\AA) imaging over a large range of\nwavelengths \\citep{BH982}. \\citet{Bremer99} introduced the strategy used to\nsearch for line emitters at a given redshift with TTF: broadly, the tunable\nfilter is stepped across a range of wavelengths around the expected redshifted\nposition of the emission. Emission line galaxies then appear brighter in those\nframes centred on the spectral line.\n\nConsiderable success has been achieved at lower redshifts with this technique.\n\\citet{Baker01} located a cluster around the $z = 0.9$ radio-loud quasar MRC\nB0450-221 using TTF to search for $[$O\\,{\\sc ii}$]$ 3727 \\AA{} emission. The\nsame technique was used by \\citet{Barr04}, who examined six radio-loud quasars\nat redshifts $0.8 < z < 1.3$, identifying a total of 47 candidate emission\nline galaxies (ELGs), at an average space density around 100 times higher than\nthat found locally.\n\nFurther work with TTF was performed by \\citet{Francis04}, who targeted Ly\n$\\alpha$ emitters within 1 Mpc of the $z=2.159$ radio loud quasar PKS\nB0424-131 without making {\\it any} detections. These authors selected this\nextremely luminous UV source with the expectation of finding Ly $\\alpha$\nfluorescent clouds in the vicinity of the quasar but these were not detected.\nWith specific application to PKS B0424-131, \\citet{Bruns11} demonstrated that\nthe most intrinsically UV-luminous quasars observed beyond $z=1$ suppress star\nformation in low-mass haloes ($M_{\\rm vir} \\lesssim 10^{12}$ M$_\\odot$) within\na megaparsec of the quasar. The intense UV radiation field is expected to\nphoto-evaporate HI clouds which presumably accounts for the lack of\ndetections. We return to this point in our conclusion\n(\\S~\\ref{sec:conclusion}).\n\nThe present work continues to push TTF to higher redshifts, searching three\nquasar fields at redshifts up to $z \\sim 4.5$. The objects selected include\nexamples of both radio-loud and radio-quiet quasars, and their environments\nare compared. Section \\ref{sec:obs} of this paper describes the observations,\nincluding target selection, instrumental characteristics and a note on data\nreduction. 
Section \\ref{sec:sim} describes simulations performed to examine\nstatistical properties and completeness of our sample. Section \\ref{sec:id}\ndescribes how candidate ELGs were identified and presents details on the\ndetections, as well as considering the possible sources of mis-identified\n`interloper' objects. Section \\ref{sec:properties} analyses the distribution\nand properties of the sample. Our conclusions are summarised in Section\n\\ref{sec:conclusion}. Throughout, we assume an $H_0 = 70$ km s$^{-1}$\nMpc$^{-1}$, $\\Omega_{\\Lambda} = 0.7$, $\\Omega_{\\mathrm{M}} = 0.3$ cosmology.\n\n\\section{Observations}\n\\label{sec:obs}\n\n\n\\subsection{Target selection}\n\nTwo data sources were used for this analysis. The authors used TTF to observe\nobjects drawn from the Molonglo Quasar Sample \\citep[MQS;][]{Kapahi98} of\nlow-frequency-selected radio-loud quasars in July 2003. Six targets had been\nselected from the MQS on the basis of observability, suitable redshifts being\nlimited by the necessity to place Lyman $\\alpha$ within the wavelength ranges\naccessible to TTF's order-blocking filters. Due to weather constraints, only\ntwo quasars were observed: MRC B1256-243 ($z = 2.263$) and MRC B2158-206 ($z =\n2.249$). Immediately following each quasar observation, a standard star was\nobserved with the same instrumental settings for flux calibration. In\naddition, observations of BR B0019-1522, a $z = 4.528$ radio-quiet quasar,\nwere drawn from the Anglo-Australian Observatory archive. These data were\ntaken on 1997 November 6 by Bland-Hawthorn, Boyle and Glazebrook, and were\naccompanied by companion observations of a standard star. Details of each\ntarget are given in Table \\ref{tab:obs}.\n\n\\subsection{Instrumental setup and characteristics}\n\nThroughout this work, a distinction is drawn between a \\textit{frame}\n(corresponding to one set of data read from the CCD), an \\textit{image} (a\nnumber of frames at the same etalon settings which have been combined for\nanalysis) and a \\textit{field}, or stack of images of the same area of sky at\ndifferent etalon settings.\n\n\\subsubsection{Wavelength variation and the optical axis}\n\\label{sec:wlvariation}\n\nFabry-P\\'erot images have a quadratic radial wavelength dependence of the form\n$\\lambda_\\theta = \\lambda_{centre}(1 - \\theta^2\/2)$ \\citep{Bland89}, where\n$\\theta$ is the off-axis angle at the etalon. In a typical observation, the\nwavelength varies across the field by around 1\\% of $\\lambda_{centre}$.\nWavelength calibration is performed with respect to the axial wavelength; for\nany given pixel position on the image, it is then possible to calculate the\nwavelength observed at that point.\n\n\\subsubsection{Objects at $z \\sim 2.2$}\n\nThe TTF was used at $f\/8$ on the AAT in combination with the EEV2 CCD. This\nresulted in a scale of 0.33'' per pixel. After processing, the total useful\nrectangular field of view in the observations was around 7' by 5'. The radial\nwavelength variation described in Section \\ref{sec:wlvariation} resulted in a\nshift of 1.4~\\AA{} at 2' from the optical axis and 6.7~\\AA{} at 4' from the axis.\nConditions were photometric, and seeing was on the order of 1.5''. The full\nwidth at half maximum of the etalon transmission band was 7.5~\\AA.\n\nThe targets were scanned at etalon plate spacings corresponding to a series of\nwavelength steps of approximately 10~\\AA, the aim being to straddle the\nredshifted Ly $\\alpha$. 
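\n\nAs an illustrative aside (not part of the original reduction), the quadratic blueshift of Section \\ref{sec:wlvariation} is simple to evaluate numerically. The sketch below computes the off-axis wavelength for a given angle and inverts the relation to recover the angle implied by a quoted shift; the mapping from pixel position to the off-axis angle $\\theta$ depends on the camera optics and is treated here as known.\n\n\\begin{verbatim}\nimport math\n\ndef off_axis_wavelength(lambda_centre, theta):\n    # Quadratic radial dependence for a Fabry-Perot etalon:\n    # lambda(theta) = lambda_centre * (1 - theta**2 \/ 2), theta in radians.\n    return lambda_centre * (1.0 - 0.5 * theta ** 2)\n\ndef angle_for_shift(lambda_centre, delta_lambda):\n    # Off-axis angle (radians) that produces a blueshift of delta_lambda.\n    return math.sqrt(2.0 * delta_lambda \/ lambda_centre)\n\n# A shift of about 1.4 Angstroms at lambda_centre of about 3970 Angstroms\n# corresponds to roughly 0.027 rad (about 1.5 degrees) at the etalon.\nprint(angle_for_shift(3970.0, 1.4))\n\\end{verbatim}\n\n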
However, an intermediate-band order-blocking filter is\nnecessary to eliminate unwanted wavelengths and other orders of interference.\nIn this case, the AAT's B1 filter was the best available. Unfortunately, the\nobserved wavelengths were at the very edge of the filter transmission, as\nshown in Fig. \\ref{fig:trans}: the signal to noise ratio therefore decreases\nsignificantly with wavelength. Table \\ref{tab:obs} and Fig. \\ref{fig:trans}\nrecord observations of MRC B1256-243 at 3987.1 \\AA. When these data were\nanalysed, it was clear that the reduced filter transmission had resulted in no\nuseful results at this wavelength. These data are not considered further in\nthis work. The MRC B2158-206 observations at 3989.0 \\AA{} and 3999.0 \\AA{} are\nincluded hereafter, but did not include any useful detections.\n\nEach CCD frame contained a total of 30 minutes of observations, taken at two\nseparate axial wavelengths. Each wavelength was exposed for 60 seconds a total\nof 15 times. This procedure was repeated twice in the case of MRC B1256-243\nand four times for MRC B2158-206; the total exposure times at each wavelength\nare thus 30 minutes and 1 hour, respectively. Between each image, the\ntelescope pointing was shifted slightly: this enabled the easy identification\nand subsequent elimination of diametric ghosts in the data.\n\n\\subsubsection{Objects at $z \\sim 4.5$}\n\nThe TTF was used at $f\/8$ on the AAT in combination with the MITLL2 CCD. This\nresulted in a scale of 0.37'' per pixel. After processing, the total useful\nrectangular field of view in the observations was 9'17'' by 4'10''. The\nradial wavelength variation described in Section \\ref{sec:wlvariation}\nresulted in a shift of 5.1~\\AA{} at 2' from the optical axis and 20.3~\\AA{} at\n4' from the axis. Conditions were photometric, and the seeing was on the\norder of 1.5\". The full width at half maximum of the etalon transmission band\nwas 9.5~\\AA. The AAT's R0 intermediate-band order-blocking filter was used:\nthis provided effectively constant transmission across the wavelength range\nunder consideration.\n\nEach CCD frame contained a total of 30 minutes of observations: ten at each of\nthree axial wavelengths. Eight CCD frames were recorded, resulting in a total\nof 80 minutes exposure for each axial wavelength. As before, the telescope\nposition was shifted slightly between images.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics{fig1}\n\\caption{On-axis etalon transmission bands for each of the three fields\nobserved shown relative to the relevant order-blocking filter used on the\ntelescope. Away from the optical axis the etalon transmission shifts to\nshorter wavelengths (\\S\\ref{sec:wlvariation}).}\\label{fig:trans}\n\\end{center}\n\\end{figure}\n\n\n\\subsection{Data reduction and catalogue construction}\n\nData reduction proceeds broadly as for standard broadband imaging. A full\nconsideration of the issues surrounding tunable filter data is given by\n\\citet{Jones012} and \\citet{Jones02}. The various different images of each\nfield at the same axial wavelengths were aligned by a marginal centroid fit on\nbright stars and then combined. 
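\n\nThe alignment step can be sketched as follows. This is a schematic reconstruction rather than the pipeline actually used: it assumes that a background-subtracted cutout around a single bright star has already been extracted from each frame, whereas in practice several stars would be fitted.\n\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.ndimage import shift\n\ndef marginal_centroid(cutout):\n    # Centroid from the x and y marginal profiles of a 2D star cutout.\n    total = cutout.sum()\n    ys = np.arange(cutout.shape[0])\n    xs = np.arange(cutout.shape[1])\n    y_c = (cutout.sum(axis=1) * ys).sum() \/ total\n    x_c = (cutout.sum(axis=0) * xs).sum() \/ total\n    return y_c, x_c\n\ndef align_to_reference(frame, star_cutout, ref_centroid):\n    # Shift a whole frame so that its star centroid lands on the reference.\n    y_c, x_c = marginal_centroid(star_cutout)\n    dy, dx = ref_centroid[0] - y_c, ref_centroid[1] - x_c\n    return shift(frame, (dy, dx), order=1)\n\\end{verbatim}\n\nThe shifted frames at a common axial wavelength can then be co-added; the combination statistic used in the original reduction is not specified here.\n\n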
Wavelength calibration was performed through\nan emission line, as described by \\citeauthor{Jones02}; xenon and\ncopper-helium arc lamps were used for the $z \\sim 2.2$ fields, and a neon arc\nlamp for BR B0019-1522.\n\nAfter the data had been reduced, object detection and fixed aperture\nphotometry were performed on each image using {\\sc SExtractor}\n\\citep{Bertin96}. The object detection parameters were defined as described in\nthe next section.\n\n\\subsection{Photometry}\n\\label{sec:photo}\n\nThe observations of the standard stars were reduced in the same way. For each\nstar, {\\sc SExtractor} was used to perform aperture photometry yielding a\ncount $C_\\mathrm{s}$. This corresponds to a known magnitude $m_\\mathrm{s}$,\nbased on \\citet{Hamuy92} for the lower redshift fields or from the ESO\nStandard Star Catalogue for that of BR B0019-1522. If the exposure time on the\nstandard is $t_\\mathrm{s}$ and that on an object in the field is\n$t_\\mathrm{Obj}$, the AB magnitude of the object is\n\n\\begin{equation}\nm_\\mathrm{AB} = m_\\mathrm{s} - 2.5 \\log_{10} (C_\\mathrm{Obj}t_\\mathrm{s})\/(C_\\mathrm{s}t_\\mathrm{Obj}).\n\\end{equation}\n\nThe AB magnitude system \\citep{Oke74} is defined by $m_\\mathrm{AB} = -2.5\n\\log_{10} f_\\nu - 48.60$ where $f_\\nu$ is the flux in units of \\mbox{ergs\ncm$^{-2}$ s$^{-1}$ Hz$^{-1}$}. The monochromatic flux $f_\\lambda$, in units of\n\\mbox{ergs cm$^{-2}$ s$^{-1}$ \\AA$^{-1}$}, is then\n\n\\begin{equation}\n\\label{eq:abtoflux}\nf_\\lambda = (c \\times 10^{-\\left(m_{\\mathrm{AB}} + 48.60\\right)\/2.5})\/\\lambda^2.\n\\end{equation}\n\nConversion from $f_\\lambda$ to the total flux in the band, $f_\\mathrm{total}$\nis performed by multiplying by the effective width of the etalon transmission.\nThe etalon transmission band may be taken as Lorentzian, normalised to 1 at\nthe wavelength of peak transmission, thus:\n\n\\begin{equation}\n\\label{eq:ttfpass}\nT(\\lambda) = (\\lambda_{\\nicefrac{1}{2}}^2 \/ 4)\/((\\lambda - \\lambda_\\mathrm{c})^2 + \\lambda_{\\nicefrac{1}{2}}^2 \/ 4)\n\\end{equation}\n\nwhere $\\lambda$ is the wavelength, $\\lambda_c$ the central wavelength of the\nband and $\\lambda_{\\nicefrac{1}{2}}$ its full width at half maximum. Assuming\nthat $\\lambda_\\mathrm{c} \\gg \\lambda_{\\nicefrac{1}{2}}$, Equation\n\\ref{eq:ttfpass} may be integrated over $0 \\le \\lambda \\le \\infty$ to yield a\nwidth of $\\pi \\lambda_{\\nicefrac{1}{2}}\/2$. Combining this with Equation\n\\ref{eq:abtoflux} yields a total flux in the band of\n\n\\begin{equation}\n\\label{eq:fluxinband}\nf_{\\mathrm{total}} = (\\pi c \\lambda_{\\nicefrac{1}{2}} \\times 10^{-\\left(m_\\mathrm{AB} + 48.60\\right)\/2.5})\/2 \\lambda_\\mathrm{c}^2\n\\end{equation}\n\nwith units \\mbox{ergs cm$^{-2}$ s$^{-1}$}.\n\nIt is worth noting that this measures the flux received in the etalon\npassband, and is thus a lower limit of the line flux of the ELG: variations of\nline shapes and widths, and their positions relative to the etalon passband,\nwill cause the fluxes measured to be systematically underestimated. They\nshould therefore be regarded as lower limits.\n\n\\section{Simulations}\n\\label{sec:sim}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics{fig2}\n\\caption{Depths of each of the three fields as determined by the simulations\ndescribed in Section \\ref{sec:dof}. On the left, the data is plotted in terms\nof simulation inputs; on the right, in terms of the measurements made from\nthe simulated images. 
Note that the effects of the blocking filter are clearly\nseen in the two upper (lower redshift) fields, as the completeness tails off\nat higher wavelength. The higher redshift BR B0019-1522 field falls well\nwithin the blocking filter, so the depth is relatively constant with\nwavelength across the observed range.}\n\\label{fig:simresults}\n\\end{center}\n\\end{figure*}\n\nWe constructed a series of simulated images: data with properties similar to\nour observations, but containing a known population of objects. The analysis\nof these enables us to address the following questions:\n\n\\begin{itemize}\n\\item What are the most appropriate {\\sc SExtractor} parameters for\nextracting useful data from the images?\n\\item To what depth is each field complete--and how does that vary over the\nfield?\n\\item To what extent is our analysis prone to mis-identifying spurious `noisy'\nfeatures in an image as candidate emission line galaxies?\n\\end{itemize}\n\n\\subsection{Construction of simulated images}\n\nImages were simulated in two stages: first, a background was generated, then\nobjects were superimposed on top of it.\n\nDue to the properties of the blocking filter and the variation of wavelength\nacross the image, the background signal is not constant across the image. Each\ndata image was therefore divided into 100 by 100 pixel blocks, and the mean\nbackground signal and associated noise were measured in each block. Simulated\nblocks were then generated matching each of these, and then recombined to form\nan overall simulated background of the same shape as the data.\n\nA Ruby\\footnote{\\url{http:\/\/www.ruby-lang.org\/}} program was written to simulate\nthe expected properties of objects we might observe. Objects were simulated at\nrandom redshifts (over the range the observations might be expected to cover)\nand pixel positions within the images. Based on the work of\n\\citet{LeDelliou06}, our observations were not expected to be sensitive to\ncontinuum emission from ELGs, so this was not considered. Further, the ELGs\nare spatially unresolved, so were simulated with a Gaussian point spread\nfunction equal to the measured seeing. An emission line model was developed\nbased on the widths and profiles of high-$z$ Lyman $\\alpha$ emitters, chiefly\nthe $z \\sim 4.5$ objects observed by \\citet{Dawson04}.\nExperimentation suggested that the results obtained were not sensitive to line\nprofile; velocity widths in the range 100--1000 km\\,s$^{-1}$ were chosen\nbased on both \\citet{Dawson04} and the more extreme example documented by\n\\citet{Tapken04}.\n\nThe effects of the instrument on the objects' detectability were then\nconsidered before they were added to the background images. First a correction\nfor the order-blocking filter transmission was applied, using the position of\nthe object within the field to determine the observed wavelength and hence\nfilter transmission. The line profile was then multiplied by the transmission\nprofile of the etalon for the image under construction.\n\n\\subsection{Results of simulations}\n\nFollowing the procedure above, simulations were run of all three fields. For\neach data image, a total of 500 simulated images were constructed, each\ncontaining 500 simulated sources.\n\n\\subsubsection{Detection parameters}\n\\label{sec:detpar}\n\nSource extraction was run multiple times on each image with different\n{\\sc SExtractor} configuration parameters. In each case, the results were\ncompared with the catalogue of simulated objects in the image. 
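\n\nThe bookkeeping in that comparison can be summarised with a short sketch. Here the detected and simulated catalogues are assumed to be $N \\times 2$ arrays of pixel coordinates, and a detection is counted as recovered if it lies within a small matching radius of a simulated source; the radius and the data structures are illustrative rather than those actually used.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef score_catalogue(detected_xy, simulated_xy, match_radius=3.0):\n    # Count recovered simulated sources and spurious detections.\n    recovered = 0\n    for sx, sy in simulated_xy:\n        d = np.hypot(detected_xy[:, 0] - sx, detected_xy[:, 1] - sy)\n        if d.size and d.min() <= match_radius:\n            recovered += 1\n    spurious = 0\n    for dx, dy in detected_xy:\n        d = np.hypot(simulated_xy[:, 0] - dx, simulated_xy[:, 1] - dy)\n        if d.size == 0 or d.min() > match_radius:\n            spurious += 1\n    return recovered, spurious\n\\end{verbatim}\n\n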
The combination\nof parameters that produced the greatest number of detections of known objects\ncombined with the smallest number of spurious detections of noise was then\nused for the analysis of both the simulations and the observed data. These\nparameters are listed in Table \\ref{tab:sextractor}.\n\n\\begin{table}\n\\begin{center}\n\\caption{Optimal {\\sc SExtractor} parameters determined by simulations and\nused throughout this work.}\\label{tab:sextractor}\n\\begin{tabular}{ccp{4.1cm}}\n\\hline\nParameter & Value & Description \\\\\n\\hline\n{\\sc detect\\_minarea} & \\phantom{0}6\\phantom{.0} & Minimum number of pixels per detection. \\\\\n{\\sc detect\\_thresh} & \\phantom{0}1.3 & Detection threshold in $\\sigma$ above local background. \\\\\n{\\sc back\\_size} & 64\\phantom{.0} & Size in pixels of mesh used for background estimation. \\\\\n{\\sc phot\\_apertures} & \\phantom{0}6\\phantom{.0} & Aperture diameter (pixels). \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\subsubsection{Depths of fields}\n\\label{sec:dof}\n\nAs in the previous section, a source detection procedure was run on each\nimage and the results compared with the known simulation inputs. This time,\nthe fraction of the objects at each wavelength and magnitude which were\ndetected was recorded. The results are shown in Fig. \\ref{fig:simresults}.\n\nNote that these data can be recorded both in terms of the \\textit{simulated}\nwavelength and magnitude and their \\textit{detected} equivalents. For any\ngiven pixel position in a field, an object can only be detected as peaking at\none of a limited range of wavelengths, since its peak will be seen to appear\nat the wavelength of the image in which it occurs (of which there are at most\n5). Hence, an object which is simulated with a very bright magnitude, but at a\nwavelength far from the peak transmission of any of the filters, will be\ndetected with a somewhat dimmer magnitude at a wavelength corresponding to the\nimage in which it is brightest. Fig. \\ref{fig:simresults} shows both the\nsimulated (on the left) and detected (on the right) quantities for each of\nthe three fields.\n\n\\section{Identification of candidate ELGs}\n\\label{sec:id}\n\n\\begin{table*}\n\\begin{center}\n\\caption{ELG candidates in the three quasar fields. The AB magnitude given\nis that measured in the peak frame with no correction for galactic extinction\nor etalon transmission; the flux is calculated from that magnitude via Equation\n\\ref{eq:fluxinband}.}\\label{tab:elgresults}\n\\begin{tabular}{lccccccc}\n\\hline\nField & ELG & \\multicolumn{2}{c}{Position (J2000)} & Projected distance & Lyman $\\alpha$ Peak & AB & Flux in band \\\\\n & Id. & R.A. & Decl. & from Quasar (Mpc) & Wavelength (\\AA) & mag. 
& (ergs cm$^{-2}$ s$^{-1} \\times 10^{18}$)\\\\\n\\hline\nMRC B1256 & A & \\ra{12}{59}{23.2} & \\dec{-24}{37}{32.9} & 1.428 & 3966 & 20.9 & 371 \\\\\n & B & \\ra{12}{59}{15.7} & \\dec{-24}{37}{40.7} & 0.871 & 3966 & 21.1 & 293 \\\\\n & C & \\ra{12}{59}{02.7} & \\dec{-24}{37}{15.1} & 1.257 & 3957 & 20.9 & 363 \\\\\n & D & \\ra{12}{59}{05.3} & \\dec{-24}{37}{31.3} & 1.085 & 3960 & 20.7 & 424 \\\\\n\\\\\nMRC B2158 & A & \\ra{22}{01}{26.0} & \\dec{-20}{25}{08.0} & 0.263 & 3956 & 21.8 & 161 \\\\\n & B & \\ra{22}{01}{41.7} & \\dec{-20}{24}{03.5} & 1.986 & 3971 & 21.7 & 192 \\\\\n\\\\\nBR B0019 & A & \\ra{0}{21}{56.9} & \\dec{-15}{04}{04.3} & 1.229 & 6673 & 22.5 & \\phantom{0}37 \\\\\n & B & \\ra{0}{22}{03.8} & \\dec{-15}{07}{41.2} & 0.898 & 6706 & 22.5 & \\phantom{0}37 \\\\\n & C & \\ra{0}{22}{08.8} & \\dec{-15}{06}{58.8} & 0.531 & 6705 & 22.0 & \\phantom{0}57 \\\\\n & D & \\ra{0}{22}{08.8} & \\dec{-15}{06}{56.3} & 0.515 & 6704 & 21.7 & \\phantom{0}71 \\\\\n & E & \\ra{0}{21}{57.8} & \\dec{-15}{06}{58.7} & 1.105 & 6697 & 22.7 & \\phantom{0}31 \\\\\n & F & \\ra{0}{22}{14.5} & \\dec{-15}{06}{42.6} & 0.748 & 6717 & 22.1 & \\phantom{0}52 \\\\\n & G & \\ra{0}{22}{12.4} & \\dec{-15}{06}{17.8} & 0.491 & 6716 & 22.1 & \\phantom{0}51 \\\\\n & H & \\ra{0}{22}{12.7} & \\dec{-15}{06}{01.4} & 0.471 & 6697 & 22.5 & \\phantom{0}37 \\\\\n & I & \\ra{0}{22}{07.6} & \\dec{-15}{05}{27.1} & 0.087 & 6694 & 22.4 & \\phantom{0}39 \\\\\n & J & \\ra{0}{21}{58.6} & \\dec{-15}{04}{56.2} & 0.940 & 6701 & 22.3 & \\phantom{0}43 \\\\\n & K & \\ra{0}{22}{14.2} & \\dec{-15}{04}{20.6} & 0.785 & 6680 & 22.6 & \\phantom{0}32 \\\\\n & L & \\ra{0}{22}{14.8} & \\dec{-15}{07}{22.1} & 0.939 & 6719 & 22.5 & \\phantom{0}37 \\\\\n & M & \\ra{0}{22}{15.3} & \\dec{-15}{06}{52.7} & 0.849 & 6716 & 22.2 & \\phantom{0}48 \\\\\n & N & \\ra{0}{22}{11.5} & \\dec{-15}{05}{04.1} & 0.405 & 6706 & 22.3 & \\phantom{0}43 \\\\\n & O & \\ra{0}{22}{18.0} & \\dec{-15}{04}{36.8} & 1.038 & 6694 & 22.4 & \\phantom{0}39 \\\\\n & P & \\ra{0}{21}{53.9} & \\dec{-15}{05}{58.2} & 1.351 & 6685 & 22.4 & \\phantom{0}40 \\\\\n & Q & \\ra{0}{22}{13.9} & \\dec{-15}{05}{08.8} & 0.597 & 6689 & 22.5 & \\phantom{0}35 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics{fig3}\n\\caption{Relative positions of the ELG candidates detected in each of the\nthree fields. The dimensions of the plots indicate the size of the observed\nfields. The quasars are located at the origin. The letters refer to the ELG\ndesignations used throughout the text.}\\label{fig:elgcandidates}\n\\end{center}\n\\end{figure}\n\n{\\sc SExtractor} was used with the parameters determined in Section\n\\ref{sec:detpar} and a detection threshold of 5$\\sigma$ to build a catalogue\nof sources for each image. Within each field, the catalogues from each image\nwere cross-matched: objects were associated by position, with a three pixel\nthreshold.\n\nThese observations are not deep enough to observe continuum flux from a\ntypical Lyman $\\alpha$ emitting galaxy \\citep{LeDelliou06}. Given the likely\nrange of line widths \\citep{Dawson04, Tapken04}, we do not expect to observe\nLyman $\\alpha$ emitters in more than two adjacent passbands. 
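\n\nA minimal version of the cross-matching and flagging logic is sketched below. It assumes one $(x, y)$ catalogue per etalon image, ordered by axial wavelength, uses the same three-pixel association radius quoted above, and for brevity matches against the first catalogue only; the full analysis builds the association from the union of all catalogues.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef match_across_images(catalogues, radius=3.0):\n    # Associate sources across the per-image catalogues (a list of (N, 2)\n    # position arrays) and record the image indices where each appears.\n    reference = catalogues[0]\n    appearances = {i: [0] for i in range(len(reference))}\n    for img_idx, cat in enumerate(catalogues[1:], start=1):\n        for ref_idx, (rx, ry) in enumerate(reference):\n            d = np.hypot(cat[:, 0] - rx, cat[:, 1] - ry)\n            if d.size and d.min() <= radius:\n                appearances[ref_idx].append(img_idx)\n    return appearances\n\ndef flag_candidates(appearances):\n    # Keep sources seen in one band only, or in exactly two adjacent bands.\n    flagged = []\n    for ref_idx, imgs in appearances.items():\n        if len(imgs) == 1 or (len(imgs) == 2 and abs(imgs[0] - imgs[1]) == 1):\n            flagged.append(ref_idx)\n    return flagged\n\\end{verbatim}\n\n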
Objects which\nwere identified in either one or two bands were therefore flagged for further\ninvestigation.\n\nIn order to minimise the risk of contamination by noisy artefacts, all\nflagged objects were examined by eye, and those which appeared unphysical or\ncorresponded to sites of corruption by (for example) heavy cosmic ray\nor charge trapping activity in the original images were rejected.\n\n\\subsection{MRC B1256-243}\n\nFour candidate emission line galaxies were identified in the field of MRC\nB1256-243. Details are given in Table \\ref{tab:elgresults}, and their\nlocations are shown in Fig. 3(a). Thumbnail images of the\ncandidate galaxies from each field, together with the measured fluxes, are\nshown in Fig. \\ref{fig:1256objects}.\n\n\\subsection{MRC B2158-206}\n\nTwo candidate emission line galaxies were identified in the field of MRC\nB2158-206. Details are given in Table \\ref{tab:elgresults}, and their\nlocations are shown in Fig. 3(b). Thumbnail images of the\ncandidate galaxies from each field, together with the measured fluxes, are\nshown in Fig. \\ref{fig:2158objects}.\n\n\\subsection{BR B0019-1522}\n\nSeventeen candidate emission line galaxies were identified in the field of BR\nB0019-1522. Details are given in Table \\ref{tab:elgresults}, and their\nlocations are shown in Fig. 3(c). Thumbnail images of the\ncandidate galaxies from each field, together with the measured fluxes, are\nshown in Fig. \\ref{fig:0019objects}.\n\n\\subsection{Contaminants}\n\nThis section briefly addresses the likelihood that our method might\nincorrectly identify another sort of object as an ELG.\n\n\\subsubsection{Continuum objects}\n\nAs per Figs. \\ref{fig:trans} and \\ref{fig:simresults}, the sensitivity of\nour instrument varies from image to image. Therefore, it is possible that a\nflat-spectrum continuum object may be detected in some images but not others,\nthereby appearing to be a potential ELG.\n\nWe use the results of Section \\ref{sec:sim} to estimate the probability of\nthis occurring. Each of the 250,000 simulated objects was sorted into one of\n3,600 bins by wavelength and magnitude (each bin covering 1 \\AA{} and 0.1\nmagnitudes). It is then possible to calculate the completeness of the bin\n(i.e. the fraction of simulated objects which were recovered). Each candidate\nELG is assigned to a bin, and we then check the corresponding bins in adjacent\nimages for completeness. A low completeness value in these bins indicates that\na flat-spectrum object may have been `lost'.\n\nThis procedure calls into question four objects: A in the field of MRC\nB2158-206, B in the field of MRC B1256-243 and E and K in the field of BR\nB0019-1522. These sources were examined by eye, but there is no indication of\na faint detection in the crucial frame. They have not, therefore, been\nexcluded from this analysis.\n\n\\subsubsection{Lower redshift interlopers}\n\nAnother possibility is that other emission lines at lower redshift may appear in\nour observations. The lines which might be observed are listed in Table\n\\ref{tab:interlopers}.\n\n\\begin{table*}\n\\begin{center}\n\\caption{Potential low-redshift `interloper' emission lines, together with the\nredshifts at which they appear and the estimated number observed in each of\nthe fields. 
The flux of each line relative to \\mbox{H\\,$\\alpha$}{} in\na ``typical'' galaxy is given, based on \\citet{Kennicutt92}.}\\label{tab:interlopers}\n\\begin{tabular}{ccccccccc}\n\\hline\nLine & \\AA & Flux & \\multicolumn{2}{c}{MRC B2158-206} & \\multicolumn{2}{c}{MRC B1256-243} & \\multicolumn{2}{c}{BR B0019-1522} \\\\\n & (rest) & ratio & $z$ & Number & $z$ & Number & $z$ & Number \\\\\n\\hline\n\\fline{O}{ii} & 3727 & $0.41\\pm0.21$ & 0.065 & \\phantom{$^*$}0.05\\phantom{$^*$} & 0.060 & 0.02 & 0.803 & 1.93 \\\\\n\\mbox{H\\,$\\beta$} & 4860 & $0.14\\pm0.06$ & - & - & - & - & 0.383 & 1.68 \\\\\n\\fline{O}{iii} & 5007 & $0.20\\pm0.15$ & - & - & - & - & 0.342 & 1.41 \\\\\n\\mbox{H\\,$\\alpha$} & 6548 & $1.00\\pm0.00$ & - & - & - & - & 0.027 & \\phantom{$^*$}0.01\\phantom{$^*$} \\\\\n\\fline{N}{ii} & 6583 & $0.43\\pm0.16$ & - & - & - & - & 0.021 & \\phantom{$^*$}0.01\\phantom{$^*$} \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\\citet{Cowie97} and \\citet{Gallego95} provide number density counts for star\nforming galaxies at a range of redshifts. Both adopt an \\mbox{$H_0 =\n50$ km\\,s$^{-1}$\\,Mpc$^{-1}$}, $\\Omega_{\\Lambda} = 0$, $\\Omega_{\\mathrm{M}} =\n1$ cosmology, which we converted to match that used in this work (Section\n\\ref{sec:intro}). In addition, \\citeauthor{Gallego95} assume a \\citet{Scalo86}\nIMF; \\citeauthor{Cowie97} provide a conversion to a \\citet{Salpeter55} IMF,\nand it is these results we adopt in this work. Based on these, we can estimate\nthe number density of star forming galaxies along our line of sight: see\nFig. \\ref{fig:sfgs}.\n\n\\begin{figure}\n\\begin{center}\n\\rotatebox{270}{\\resizebox{!}{\\columnwidth}{\\includegraphics{fig4}}}\n\\caption{Variation of galaxy number density with star formation rate for a\nrange of redshifts. Based on data from \\citet{Cowie97} and \\citet{Gallego95}.}\n\\label{fig:sfgs}\n\\end{center}\n\\end{figure}\n\n\\citet{Kennicutt98} provides a conversion between star formation rate in a\ngalaxy and \\mbox{H\\,$\\alpha$}{} luminosity; the ratios given in Table \\ref{tab:interlopers}\nmake it possible to convert that into expected luminosities for the other\nlines. After applying a correction for instrumental effects and galactic\nextinction \\citep{Schlegel98}, a locus of points in the magnitude-wavelength\ncompleteness diagrams (Fig. \\ref{fig:simresults}) on which each line at a\ngiven redshift might be detected is determined. This locus is then integrated\nto estimate the total volume over which the line might be observed at this\nredshift. This procedure is then repeated along the full length of the curves\nshown in Fig. \\ref{fig:sfgs}. In this way, the total number of interlopers\nwhich might be observed is estimated. The results are shown in Table\n\\ref{tab:interlopers}.\n\nIt is clear that the estimated number of interlopers is negligible in the case\nof the two lower-redshift fields. However, it is possible that as many as five\nof the candidate ELGs in the BR B0019-1522 field are, in fact, low redshift\ninterlopers. This could only be confirmed by further observations.\n\n\\section{Properties of candidate ELGs}\n\\label{sec:properties}\n\nIn this section, we consider the distribution of candidate ELGs around the\nquasars to determine whether the quasar lies in an identifiable overdensity\nrelative to the field.\n\nThe small number of candidates around the lower-$z$ quasars renders a\nmeaningful statistical analysis of the individual fields unreliable. 
In an\nattempt to mitigate this, and given the apparent similarity of the fields,\nthey are both considered as one unit in this section.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics{fig5}\n\\caption{Distribution of ELG candidates around the quasars. On the left, the\nprojected distance seen on the sky for both the ELG candidates (boxes) and all\nthe objects observed (crosses); at right, the relative velocities.}\n\\label{fig:distribution}\n\\end{center}\n\\end{figure*}\n\nThe distribution of ELG candidates around the quasar is shown in both\nprojection on the sky (left) and velocity distribution (right) in Fig.\n\\ref{fig:distribution}. When calculating the projection on the sky, we have\nnormalised the total visible area on the sky in each distance bin. We also\nplot the distribution of all objects detected by {\\sc SExtractor} in the field\nfor comparison.\n\nBased on these figures, there is little evidence of projected clustering in\nthe low-$z$ fields. However, there is a notably higher density of objects\nwithin 1 Mpc (projected) of BR B0019-1522. This is consistent with what one\nmight expect from an examination of Fig. \\ref{fig:elgcandidates}: note the\nlarge number of objects to the east of the quasar in Fig. 3(c). It is also\nin-line with the scale lengths observed in clusters around other AGN\n\\citep{Venemans02, Bremer02, Barr04}.\n\nThere is no suggestion of clustering in velocity space in Fig.\n\\ref{fig:distribution}. In part, this may be due to the low number of\ndetections in the low-$z$ fields. In the field of BR B0019-1522, we note that\nall candidates were observed as bluer than the quasar itself; this is\nnoteworthy, but not implausible given the wavelength range probed (6650--6740\n\\AA, with the quasar at 6722 \\AA). Although the bluest velocity bins show a\nlower number of total counts, this can be attributed to the reduced\ninstrumental sensitivity at the relevant wavelengths (see Fig. 3(c)).\n\nThe space density of galaxies in the three fields may also be estimated. As\nalluded to in the previous section, the comoving volume being probed by our\nmeasurements varies with wavelength and magnitude. Consider for example Fig.\n2(a): a bright object--magnitude 19, say--may be detected at a range of\nwavelengths, from around 3920 \\AA{} to 4010 \\AA. A fainter object at, for\ninstance, magnitude 22 is only detected if it lies within a much smaller\nwavelength range: around 3940 \\AA{} to 3960 \\AA. Therefore, we define an\n`accessible volume', $\\mathcal{V}_n$, for each detected object $n$ within the\nfield. $\\mathcal{V}_n$ is calculated by taking the locus of points in Fig.\n\\ref{fig:simresults} occupied by a source with the observed properties and\nintegrating over all wavelengths. The density is taken as $\\rho =\n1\/\\mathcal{V}_1 + 1\/\\mathcal{V}_2 + ... + 1\/\\mathcal{V}_n$. The results for\nour fields are given in Table \\ref{tab:density}.\n\n\\begin{table}\n\\begin{center}\n\\caption{Estimated space and star formation rate densities, together with the\ntotal number of ELG candidates (\\#), for each of the fields\nobserved. 
Note that our observations are valid only to an approximately\ndefined lower limit of star formation.}\\label{tab:density}\n\\begin{tabular}{cccc}\n\\hline\nField & \\# & Number density & SFR density \\\\\n & & (Mpc$^{-3}\\,\\times\\,10^4$) & (M$_\\odot$\\,yr$^{-1}$\\,Mpc$^{-3}$) \\\\\n\\hline\nMRC B1256 & \\phantom{0}4 & $22.48 \\pm 11.64$ & $0.0346 \\pm 0.0174$ \\\\\nMRC B2158 & \\phantom{0}2 & $\\phantom{0}9.09 \\pm \\phantom{0}6.52$ & $0.0070 \\pm 0.0049$ \\\\\nBR B0019 & 17 & $49.09 \\pm 12.21$ & $0.0484 \\pm 0.0117$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nIt is also instructive to estimate the star formation rates found in these\nfields. Based on \\citet{Kennicutt94} combined with \\citet{Brocklehurst71} and\n\\citet{Hu96}, we arrive at the relationship:\n\n\\begin{equation}\n\\mathrm{SFR}(\\mathrm{M}_\\odot\\,\\mathrm{yr^{-1}}) = 0.91 \\times 10^{-42} L(\\mathrm{Ly} \\alpha) (\\mathrm{erg\\,s^{-1}})\n\\label{eq:sfr}\n\\end{equation}\n\nIt should be noted that \\mbox{Ly $\\alpha$}{} is a very poor indicator of star formation\nrate. It is resonantly scattered by neutral hydrogen, and hence has a high\nchance of absorption either before leaving the galaxy or by clouds in the\nintergalactic medium \\citep{Haiman99}. Further, \\citet{VG93} argues that \\mbox{Ly $\\alpha$}{}\nemission in starbursts is strongly dependent on the age of the burst,\nrendering the calibration of Equation \\ref{eq:sfr} unreliable from around\n$10^7$ years after the burst start. Nevertheless, \\mbox{Ly $\\alpha$}{} is the only\ndiagnostic available to us, so we persist in these estimates with caution.\n\nWe take the star formation rate density as $\\rho_{SFR} = SFR_1\/\\mathcal{V}_1 +\nSFR_2\/\\mathcal{V}_2 + ... + SFR_n\/\\mathcal{V}_n$, where $SFR_n$ is the star\nformation rate associated with ELG candidate $n$ as calculated by Equation\n\\ref{eq:sfr}. Recall from Section \\ref{sec:photo} that the line fluxes are\nsystematically underestimated since objects will fall outside the peaks of the\netalon passbands. Making the approximation that objects are evenly spread in\nwavelength around the etalon peaks, we apply a correction to the observed\nmagnitudes of 0.23 (in the low-$z$ field) or 0.27 (BR B0019-1522 field) to\naccount for this. We correct the results for completeness based on Fig.\n\\ref{fig:simresults}: a single detection in an area with a low detection rate\nis taken as representative of a larger population.\n\nThe results are shown in Table \\ref{tab:density}. Note that our observations\nare sensitive to galaxies only down to some minimum level of star formation\n(\\sfr{9} in the case of MRC B2158-206 and BR B0019-1522; \\sfr{25} in the case\nof MRC B1256-243): there may be a fainter population which we do not probe.\n\nIt is noteworthy that the star formation rate in the field of MRC B1256-243 is\nanomalously high, but the large uncertainties in the field and the higher\nminimum detectable rate render this result questionable. The best-constrained\nresult is that for BR B0019-1522; our results there are broadly\nsimilar to those reported by \\citet{Venemans02} around the $z = 4.1$ radio\ngalaxy TN J1338-1942. 
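\n\nFor orientation, the scale of these numbers can be recovered from Table \\ref{tab:elgresults} with a few lines of code. The sketch below converts a band flux of $4\\times10^{-17}$ \\mbox{ergs cm$^{-2}$ s$^{-1}$} (typical of the BR B0019-1522 candidates) into a \\mbox{Ly $\\alpha$}{} luminosity and then a star formation rate via Equation \\ref{eq:sfr}, assuming the candidate lies at the quasar redshift and ignoring the passband, completeness and extinction corrections discussed above; the flux value and the cosmology calls are purely illustrative.\n\n\\begin{verbatim}\nimport numpy as np\nimport astropy.units as u\nfrom astropy.cosmology import FlatLambdaCDM\n\ncosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # cosmology adopted in Section 1\n\nflux = 4e-17                  # band flux in erg s^-1 cm^-2\nz = 4.528                     # assume the quasar redshift\n\nd_l = cosmo.luminosity_distance(z).to(u.cm).value\nluminosity = 4.0 * np.pi * d_l ** 2 * flux   # Ly-alpha luminosity, erg s^-1\nsfr = 0.91e-42 * luminosity                  # calibration quoted in the text\n\n# Gives a star formation rate of order 10 solar masses per year, consistent\n# with the sensitivity limits quoted above.\nprint(sfr)\n\\end{verbatim}\n\n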
In all three fields, the number of objects detected is\nhigher than that which might be expected in the absence of any clustering.\nBased on \\citet{Cowie97}, we might expect on average 0.86 galaxies in the\nfield of MRC B2158-206, 0.25 in that of MRC B1256-243, and 1.3 in that of BR\nB0019-1522, while an extrapolation from the results of the LALA \\citep[`Large\nArea Lyman $\\alpha$';][]{Rhoads00} survey suggests we should observe 1.1\nobjects in the field of MRC B2158-206, 0.8 in that of MRC B1256-243 and 2.1 in\nthat of BR B0019-1522 (assuming that the density of \\mbox{Ly $\\alpha$}{} emitters is similar\nat $z \\sim 2.2$ to that observed at $z \\sim 4.5$).\n\n\\section{Conclusions}\n\\label{sec:conclusion}\n\nUntil recently, it has proved difficult to find high-redshift clusters and,\nindeed, there are very few known beyond $z \\sim 1$. The detection of hot\nX-ray emission from intracluster gas followed by optical imaging and\/or\nspectroscopic confirmation becomes inefficient for detecting more distant\nclusters; a manifestly higher success rate is achieved by targeting the\nvicinity of high redshift radio galaxies and quasars.\n\nWe have used tunable filter observations to identify a galaxy overdensity in\nthe field of BR B0019-1522, with a local number density an order of magnitude\nhigher than that which might be expected in the field. This is among the\nhighest-redshift clusters detected around a radio-quiet quasar. We have also\nidentified potential overdensities in the fields of MRC B1256-243 and MRC\nB2158-206, although deeper observations are required to confirm these\ndetections.\n\nThe current observations were made with the Taurus Tunable Filter, an\ninstrument which has now been decommissioned, on the 4 metre class\nAnglo-Australian Telescope. These observations have clearly demonstrated the\nsuccess of the tunable imaging technique. The prospects for further progress\nin this area are strong, as the next generation of tunable filter instruments\nis now available or becoming available on telescopes such as the GTC 10-m\n\\citep[OSIRIS;][]{Cepa00}, SOAR 4-m \\citep[BTFI;][]{Taylor10}, SALT 11-m\n\\citep[PFIS;][]{Smith06}, NTT 3.5-m \\citep[3D-NTT;][]{Marcelin08} and the\nMagellan 6.5-m \\citep[MMTF;][]{Veilleux10}.\n\nWith existing telescopes, it is very difficult to extract more information\nthan a few emission lines and broadband photometry for the host galaxies in\nthese high-redshift environments. More detailed spectral information will not\nbe possible until the next generation of extremely large telescopes or the\nJames Webb Space Telescope come on line. But there are other uses for these\nobservations: in particular, \\citet{Bruns11} have shown that quasar\nenvironments may act as a surrogate for studying the radiative suppression of\ngalaxy formation during the epoch of reionization. Interestingly, the UV\nsuppression reduces the star-forming galaxy counts by a factor of 2--3 but\ndoes not suppress them altogether. 
The time is therefore ripe to further\ndevelop this promising method of investigation in order to learn about the\noccurrence of high-redshift, star forming groups and the impact on these\ngroups by quasar activity.\n\n\\bibliographystyle{mn2e}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nScientists and engineers often study a physical system with the goal of making spatio-temporal predictions (e.g., temperature or glacier thickness) and inferring unknown quantities governing the system (e.g., atmospheric density or ice viscosity). This system's dynamics can often be phrased in terms of spatio-temporal partial differential equations (PDEs) that are based on approximations. The scientist or engineer may also be able to simulate the physical system with a computer simulator, such as a numerical PDE solver, which is subject to imperfections (e.g., numerical error). Moreover, the scientific constants entering into the system's dynamical equations such as density, friction, or viscosity may not be known precisely, but their range can be constrained to some set of plausible values. Additionally field data, though potentially scarce and noisy, can be incorporated into the analysis.\n\nSuch scenarios can be modeled with a variant of a Bayesian hierarchical spatio-temporal model that was introduced in \\citet{gopalan2018bayesian} for glacial dynamics, if considered more generally. We delineate three methods to make posterior inference efficient: the first is to utilize bandwidth limited linear-algebraic routines for likelihood evaluation \\citep{rue2001fast}, the second is to utilize an embarrassingly parallel approximation to the likelihood, and the third is to use first-order emulators \\citep{Hooten2011} for speeding up computer simulators. Though our modeling and numerical results are still within a glaciology context, we conclude with a discussion of how the model can be applied to other physical scenarios. Before introducing the Bayesian hierarchical model and associated methodology for computationally efficient posterior inference, it is appropriate to summarize relevant statistical literature developed over the last two decades.\n\nBayesian hierarchical modeling for geophysical problems was introduced in \\citet{10.1007\/978-94-011-5430-7_3} and \\citet{Wikle1998}, and summarized in \\citet{Berliner}, \\citet{cressie2011statistics}, and \\citet{Wikle2016}. In this modeling approach, prior distributions are specified for physical parameters of interest, a physical process is modeled at the intermediary, latent level (conditional on the physical parameters), and the data collection process is modeled conditional on the latent physical process values. Both numerical error and model uncertainty can be incorporated at the process level, while measurement errors can be modeled at the data level. This approach has been applied in a variety of scientific contexts, including the study of ozone concentrations \\citep{berrocal2014assessing}, sediment loads at the Great Barrier Reef \\citep{pagendam2014assimilating}, precipitation in Iceland \\citep{RePEc:wly:envmet:v:27:y:2016:i:1:p:27-41}, Antarctic contributions to sea level rise \\citep{zammit2014resolving}, and tropical ocean surface winds \\citep{wikle2001spatiotemporal} (among many others). In \\citet{gopalan2018bayesian}, the motivating example for the work in this paper, a Bayesian hierarchical model for shallow glaciers based on the shallow ice approximation (SIA) PDE was developed and evaluated. 
\n\n\\citet{kennedy2001bayesian} suggest constructing Bayesian statistical models that incorporate the output of a computer simulator of a physical process, such as a numerical solver for the underlying system of PDEs. Fundamental to their approach is the inclusion of a specific term that represents the deviation between the output of a computer simulator and the actual process values, known as \\textit{model discrepancy} or \\textit{model inadequacy}. This framework is developed in \\citet{higdon2004combining}, \\citet{higdon2008computer}, and \\citet{discrepancy}. In particular, \\citet{higdon2008computer} use a Bayesian model along with a principal components based approach for reducing the computational overhead of running a computer simulation with high dimensional output multiple times (an approach termed as \\textit{emulation}). \\citet{discrepancy} note that the prior for model discrepancy must be chosen carefully to mitigate bias of physical parameters and predictions. In particular, as more prior information is incorporated into a model discrepancy term through a constrained Gaussian process (GP) prior over a space of functions, the less biased inferences and predictions tend to become. The notions of an emulator, a computer simulator, and model discrepancy enter naturally into the aforementioned Bayesian hierarchical framework. Conditional on physical parameters coupled with initial and\/or boundary conditions, the physical process values at the latent level can be written as the sum of a computer simulator or emulator term and a model discrepancy term. \n\nTo be precise, let us assume that the physical process $\\bm{S}$ can be indexed through time, i.e., as $\\bm{S}_j$, and $\\bm{S}_j$ is a vector where each element corresponds to a distinct spatial location. One can specify the process level conditional on physical parameter $\\bm{\\theta}$ as \n\\begin{eqnarray}\n\\bm{S}_j &=& \\bm{f}(\\bm{\\theta},j)+\\bm{\\delta}(j)\n\\end{eqnarray}\nwhere $\\bm{\\delta}(.)$ is a vector valued model discrepancy function that is independent of $\\bm{\\theta}$, and $\\bm{f}(\\bm{\\theta},j)$ is the output of a computer simulation or emulator for physical parameter $\\bm{\\theta}$ at time index $j$. If, for instance, at each time point $j$ an observation $\\bm{Y}_j$ of $\\bm{S}_j$ is made with associated measurement error $\\bm{\\eta}_j$, then observations can be written as\n\\begin{eqnarray}\n\\bm{Y}_j &=& \\bm{f}(\\bm{\\theta},j)+\\bm{\\delta}(j)+\\bm{\\eta}_j,\n\\end{eqnarray}\nwhich is analogous to Eq. 5 of \\citet{kennedy2001bayesian}.\n\nIn \\citet{kennedy2001bayesian}, $\\bm{\\delta}(.)$ is a fixed but unknown function independent of $\\bm{\\theta}$ that is learned with a GP prior distribution. Similarly, $\\bm{\\delta}(.)$ has a constrained GP prior in \\citet{discrepancy}. The approach in this paper instead assumes a temporally indexed stochastic process (with spatial correlation) that follows a multivariate random walk, rather than a deterministic function. Additionally, in \\citet{liu2009}, the authors frame a computer emulator of time series run under multiple inputs as a dynamic linear model (DLM). 
As part of their approach, they allow for time varying auto-regressive coefficients that follow a random walk process, to embed non-stationarity into the model.\n\nWhile the approach taken in this paper most closely follows the above literature (i.e., Bayesian hierarchical modeling, model discrepancy, and emulation), we briefly review literature in probabilistic numerics and Bayesian numerical analysis; the emphasis in Bayesian numerical analysis is to use probabilistic methods to solve numerical problems, whereas, in the Bayesian hierarchical setup, one is also interested in inference of scientifically relevant parameters and predictions of the physical process. In \\citet{conrad2017statistical}, a probabilistic ordinary differential equation (ODE) solver is developed that adds stochasticity at each iteration; conditions for the convergence of this method to the ODE solution are given. \\citet{chkrebtii2016bayesian} utilize GPs for solving ODEs; moreover, \\citet{Calderhead:2008:ABI:2981780.2981808} use a GP regression based method to avoid explicitly solving nonlinear ODEs when performing inference for parameters that provides computational speed ups; additionally, \\citet{Owhadi} present a gamblet based solver that comes with provably computationally efficient solutions to PDEs. The approach is derived from a game theoretic and stochastic PDE framework.\n\nIn the spatio-temporal model described in this paper, stochasticity is induced with an error-correcting process that is separated from the numerical solution. In general, another way to achieve this is to define a stochastic process by equating a PDE to a white noise term -- that is, the solution $\\bm{X}$ to a stochastic partial differential equation (SPDE) $L[\\bm{X}] = \\bm{W}$, where $\\bm{L}$ is a differential operator and $\\bm{W}$ is a white noise process (indexed by spatio-temporal coordinates). For instance, a fractional Laplacian operator yields the Mat\\'{e}rn covariance function \\citep{whittle1954stationary,Whittle63,lindgren2011explicit}. We employ the former approach mainly because it is difficult to derive exact covariance functions for arbitrary differential equations (e.g., in the presence of nonlinearities), though we highlight the utility of the latter approach in situations where an analytical covariance function can be derived exactly.\n\n A major feature of this work is to represent the discrepancy between real physical process values and the output of a computer simulator for these physical process values as a multivariate random walk; typically, model discrepancy is endowed with a GP prior or a constrained GP prior over a space of functions as in \\citet{kennedy2001bayesian} and \\citet{discrepancy}. Along with this model is the development of two ways for making computations faster: the first is harnessing first-order emulator inference \\citep{Hooten2011} for speeding up the computation of a numerical solver, and the second is the use of bandwidth limited numerical linear algebra \\citep{rue2001fast} for computing the likelihood efficiently. Moreover, in the regime of a high signal-to-noise ratio, an embarrassingly parallel approximation to the likelihood can be employed. Finally, methodology to fit a spatial Gaussian field for the log of the scale of numerical errors is discussed.\n \nWe must also be clear about what distinguishes this work from its predecessor, \\citet{gopalan2018bayesian}. 
This includes the use of emulators, probing higher order random walks besides order 1, derivation of sparsity and computational complexity of log-likelihood evaluation, empirical run time results, and methodology to fit an error-correcting process when little prior information is available. The structure of this paper is as follows: First a test system from glaciology is described. Then the statistical model that is the focus of this work is presented in detail (in the context of the glaciology test case), followed by the exact and approximate likelihood. Then this model is analyzed in terms of computational run time and accuracy of inference, based on the test system from glaciology; moreover, the random walk error-correcting process is assessed with residual analysis. Afterward, we discuss how the model and associated methodology can be applied to other physical scenarios, and conclude by summarizing the model, method, and limitations of the approach.\n\n \\section{Description of a test system from glaciology}\nBefore delving into the specifics of the Bayesian hierarchical model and computational subtleties, we begin with a brief discussion of glaciology. Glaciology is the study of physical systems consisting mostly of ice and snow. This broad definition includes the study of the crystalline nature of ice, the transformation and compaction of snow into ice, the dynamics of the flow of ice and water in a glacier, the relationships between fundamental quantities like viscosity, temperature, and pressure, the relationships between precipitation and meteorology with said ice systems, the interaction of ice systems with other geological systems such as volcanoes and bedrock, and so on. As such, glaciology synthesizes elements from a multitude of scientific disciplines including continuum mechanics, fluid mechanics, hydraulics, chemistry, and meteorology.\n\n\\cite{Bueler} introduce analytical solutions for the SIA PDE, a commonly used model for the dynamics of glaciers \\citep{10.2307\/79748, doi:10.1080\/03091928208209013, flowers2005sensitivity,Paterson,vanderveen, Brinkerhoff, 2016arXiv161201454G, gopalan2018bayesian}. Based on the principle of conservation of mass, the SIA dictates that glacier flow is in the direction of the (negative) gradient of the glacier surface and is due to gravity and basal sliding (also referred to as friction or drag if in the direction of the positive gradient). While an explanation of the SIA PDE is given in \\cite{gopalan2018bayesian}, our focus is on ice viscosity, $B$. Intuitively, this parameter controls the softness of the ice. The other main physical parameter, which is not the subject of this paper, is $C_0\\gamma$. This controls basal sliding or friction. \n\nFor the analysis that follows, we focus on a periodic solution to the SIA in which the thickness of the glacier oscillates through time; $H(r,t)$, the thickness of the glacier as a function of two dimensional space (in polar coordinates) and time, is \n\\begin{eqnarray}\nH(r,t) &=& H_s(r)+P(r,t), \\\\\nP(r,t) &=& C_p\\sin(2\\pi t\/T_p)\\cos^2\\left[\\frac{\\pi(r-0.6L)}{.6L}\\right]; \\textrm{if } 0.3L < r < .9L, \\\\\nP(r,t) &=& 0; \\textrm{if } 0 \\leq r \\leq 0.3L \\textrm{ or if } r \\geq 0.9L. \n\\end{eqnarray}\nIn Eq. 3, $H_s$ is a static initial profile of the glacier (i.e., a dome as in Eq. 21 of \\citet{Bueler}), $P$ is a perturbation (e.g., precipitation) function, $L$ is the margin length, $C_p$ is the magnitude of the periodic perturbation, and $T_p$ is the period of the perturbation. 
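\n\nA minimal sketch of this test solution is given below; it is illustrative only. The static dome profile $H_s(r)$ from Eq. 21 of \\citet{Bueler} is not reproduced here and enters as an assumed function, and the default parameter values are those quoted later in this section ($L = 750$ km, $C_p = 200$ m, $T_p = 5000$ years).\n\n\\begin{verbatim}\nimport numpy as np\n\ndef perturbation(r, t, L=750e3, C_p=200.0, T_p=5000.0):\n    # P(r, t) of Eqs. 4-5: a periodic bump confined to 0.3 L < r < 0.9 L,\n    # with r in metres and t in years.\n    p = C_p * np.sin(2.0 * np.pi * t \/ T_p)\n    p = p * np.cos(np.pi * (r - 0.6 * L) \/ (0.6 * L)) ** 2\n    return np.where((r > 0.3 * L) & (r < 0.9 * L), p, 0.0)\n\ndef thickness(r, t, H_s):\n    # H(r, t) = H_s(r) + P(r, t), with H_s a user-supplied dome profile.\n    return H_s(r) + perturbation(r, t)\n\\end{verbatim}\n\n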
\\citet{Bueler} derive a mass balance function that achieves this periodic solution for the SIA PDE. Qualitatively, this test case appears like a dome with a periodic oscillation in thickness around an annulus defined by $0.3L < r < .9L$. In Figure 1, an illustration of the oscillations of glacier thickness through time is displayed.\n\nThe value of each surface elevation measurement is the value of the exact analytical solution above added to a zero-mean Gaussian random variable with standard deviation of 1 meter, larger than errors of the digital-GPS instruments employed by the UI-IES. We use the same values of parameters as in \\citet{Bueler} to make for easier comparison to that work and the EISMINT experiment. In particular, $H_0 = 3600$ m, $L = 750$ km, $C_p = 200$ m, and $T_p = 5000$ years.\n\nEmploying the same set up as \\citet{gopalan2018bayesian}, glacial surface elevation measurements are assumed to be collected for 20 years, twice a year, and at 25 fixed spatial locations across the glacier, to emulate how the glaciology team at the University of Iceland Institute of Earth Sciences (UI-IES) collects data at Icelandic glaciers (e.g., see Figure 2 illustrating Langj\\\"{o}kull and the mass balance measurement sites). \n\n\\begin{figure*}[h!]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{glacier_plot.jpeg}\n\\caption{An illustration of the periodic oscillatory exact solution to the SIA PDE that is used for the analysis. Since the solution is radially symmetric, only a radial cross section is illustrated. This solution is stationary except for an annulus defined by $0.3L < r < .9L$, where $L$ is 750 $km$; in the annulus, the glacier thickness vibrates back and forth periodically, as illustrated.}\n\\end{figure*}\n\n\\begin{figure*}[h!]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{lang.pdf}\n\\caption{A digital elevation map of Langj\\\"{o}kull along with measurement sites demarcated on the right, provided by the University of Iceland Institute of Earth Sciences (UI-IES). Langj\\\"{o}kull is Iceland's second largest glacier by area, at $900$ sq. km, and its mean thickness is 210 meters above sea level \\citep{bjornsson2008icelandic}, so Langj\\\"{o}kull is shallow.}\n\\end{figure*}\n\n\\section{The hierarchical spatio-temporal model and its properties}\nNow that we have acquainted the reader with some facts about glaciology and the particular test case used for the analysis in this paper, we next delineate the hierarchical spatio-temporal model that is the focus of this work, by specifying its variables, parameters, and properties, including efficient computation of the likelihood and connections to other modeling frameworks. For the sake of specificity of the presentation, the glaciology example is referenced, similarly to the set up in \\citet{gopalan2018bayesian}. We assume that $n$ spatial locations are modeled at the latent level, and $m$ of those locations are observed, where $m$ is typically much smaller than $n$. We use the index $j$ to refer to time indices and $i$ to refer to spatial indices; while space and time are discretized, the differences between successive time and spatial points can be made as small as desired depending on the context of the application and computational resources available. Throughout, we use bolded notation for vectors and uppercase, unbolded, and non-italic notation for matrices. 
All other mathematical symbols are scalars.\n\nWe introduce the Bayesian hierarchical model in the parameter, process, data level framework of \\citet{10.1007\/978-94-011-5430-7_3}. We denote the physical parameters as $\\bm{\\theta}$ and initial and\/or boundary conditions for the physical process as $\\bm{\\phi}$. At the parameter level, one possibility is to use a truncated normal distribution for $\\bm{\\theta}$ if the support of the parameter value can be constrained, as was done in \\citet{gopalan2018bayesian}, where $\\bm{\\theta}$ represented ice viscosity. However, more generally, the distribution can be specified based on domain knowledge or expertise. We denote the output of a computer simulator, which could be either a numerical solver or an emulator, at time $j$ with the notation $\\bm{f}(\\bm{\\theta},\\bm{\\phi}, j)$, which, in full generality, is an element of $\\mathbb{R}^n$. While some values could be negative (e.g., temperature), in many cases the computer simulator output can be restricted to the nonnegative real numbers. For a specific example, in Appendix A of \\citet{gopalan2018bayesian}, $\\bm{f}(\\bm{\\theta},\\bm{\\phi}, j)$ is a second-order finite difference solver for glacier thickness, which is constrained to be nonnegative based on a boundary condition. Evidence for a nonnegative support for the physical process, in glaciology, can be found in \\cite{gopalan2018bayesian}. Particularly, this is evident in Figure 6 of that paper, which shows the process (i.e., glacier thickness) predictions across the glacier, and the distributions are all greater than zero. Specifically, the minimum of the smallest box-plot is more than 750 m. Nonetheless, the reader is suggested to think carefully about whether a negligible amount of probability mass is below zero in different applications (e.g., temperature models).\n\nThe process level of the model, conditional on $\\bm{\\theta}$ and $\\bm{\\phi}$, can be written as:\n\n\\begin{eqnarray}\n\\bm{X}_{j} &=& \\bm{X}_{j-1} +\\bm{\\epsilon}_{j}, \\\\\n\\bm{S}_{j} &=& \\bm{f}(\\bm{\\theta},\\bm{\\phi},j) + \\bm{X}_j,\n\\end{eqnarray}\nwhere $\\bm{X}_{0}$ is a vector of zeros.\n\nIn the above expressions, $\\bm{\\epsilon}_{j}$ is $\\textrm{MVN}(0,\\Sigma)$ and independent of $\\bm{\\epsilon}_{l}$ for $j \\neq l$. Furthermore, $\\bm{X}_{j}$, $\\bm{\\epsilon}_{j}$, $\\bm{f}(\\bm{\\theta},\\bm{\\phi},j)$, and (consequently) $\\bm{S}_{j}$ are members of $\\mathbb{R}^n$. In \\citet{gopalan2018bayesian}, $\\{\\bm{X}_1, \\bm{X}_2, ..., \\}$ was referred to as an error-correcting process because it was meant to represent the difference between the numerical solver and the exact solution to the SIA PDE. Note that in \\citet{gopalan2018bayesian}, $\\bm{S}_j$ referred to glacial thickness at a particular time point, where each component referred to the glacial thickness at a particular grid point. In more generality, the error-correcting statistical process can be a random walk of higher order; a multivariate RW process of order $q$ ($RW(q)$) is given by:\n\\begin{eqnarray}\n\\bm{X}_{j}+ \\sum_{p=1}^q(-1)^{p}{q \\choose p}\\bm{X}_{j-p} &=& \\bm{\\epsilon}_j\n\\end{eqnarray}\nwhere $\\bm{\\epsilon}_1$,..., $\\bm{\\epsilon}_q$ are independent and marginally $\\textrm{MVN}(0,\\Sigma)$. This form of a higher order random walk is a multivariate extension of the integrated auto-regressive process given in Chapter 5.6 of \\citet{madsen2007time}. 
For $q=2$, this corresponds to RW(2) of \\citet{rue2005gaussian}.\n\nAt the data level, it is assumed that data are regularly sampled at every $k$-th time point, so that one observes $\\bm{Y}_k, \\bm{Y}_{2k},..., \\bm{Y}_{Nk} \\in \\mathbb{R}^m$; in the glaciology test case, the variables $\\bm{Y}$ referred to glacial surface elevation measurements, and $k$ was set to 5, to represent the fact that the glaciologists take a set of measurements in the summer and winter, or twice a year. The corresponding observation errors $\\bm{\\eta}_k, \\bm{\\eta}_{2k},..., \\bm{\\eta}_{Nk}$ are IID $\\textrm{MVN}(0,\\sigma^2\\textrm{I})$, and represent digital-GPS measurement errors in the glaciology example. We define the matrix $\\textrm{A} \\in \\mathbb{R}^{m \\times n}$ to be such that its rows are unit basis vectors (i.e., an incidence matrix as in \\citet{cressie2011statistics}). That is, $\\textrm{A}_{ab} = 1$ if and only if the $b$th index of the process level vector $\\bm{S}$ has been observed, and $\\textrm{A}_{ab} = 0$ for all other entries. Then the data level model, conditional on the process $\\bm{S}$, is\n\n\\begin{eqnarray}\n\\bm{Y}_{ck} &=& \\textrm{A}\\bm{S}_{ck}+\\bm{\\eta}_{ck}, \n\\end{eqnarray}\nwhere we assume that $j \\in \\{1,2,..., T\\}$ and $c \\in \\{1,2,..., N\\}$, so there are $N$ total observed spatial vectors, observed with a period of length $k$. \n\nConditional on $\\bm{\\theta}$, $\\bm{\\phi}$, and a computer simulator, the model can be thought of as a hidden Markov model (HMM) \\citep{baum1966}; the latent physical process evolves according to a RW(1) process added to a numerical solution, and it is observed indirectly with Gaussian noise. It can also be thought of as a conditional general state space model. This is because, conditioning on $\\bm{\\theta}$, $\\bm{\\phi}$, and a computer simulator, one can write:\n\n\\begin{eqnarray}\n\\bm{S}_{j} &=& \\bm{S}_{j-1}+[-\\bm{f}(\\bm{\\theta},\\bm{\\phi},j-1)+\\bm{f}(\\bm{\\theta},\\bm{\\phi},j)] + \\bm{\\epsilon}_j,\\\\\n\\bm{Y}_{ck} &=& \\textrm{A}\\bm{S}_{ck}+\\bm{\\eta}_{ck}.\n\\end{eqnarray}\nHere, the state evolves linearly with a time dependent offset term: $[-\\bm{f}(\\bm{\\theta},\\bm{\\phi},j-1)+\\bm{f}(\\bm{\\theta},\\bm{\\phi},j)]$. The notation $ck$ is used in Eq. 11 to indicate that the process is only observed at every $k$th time point, whereas the latent process evolves at every time step $j$. The reader who is interested in further understanding the connection between Gaussian processes and state space models may consult \\cite{pmlr-v33-solin14}. \n\n\\subsection{Exact likelihood}\nAn advantage of using this model is that the likelihood, $p(\\bm{Y}_k, \\bm{Y}_{2k},..., \\bm{Y}_{Nk}|\\bm{\\theta},\\bm{\\phi})$, can be computed exactly in an efficient manner. It can also be approximated in a way that leads to embarrassingly parallel computation when the signal-to-noise ratio is high. The next several sections provide more details for these considerations. 
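Before turning to the exact likelihood, a brief simulation sketch of the process and data levels (Eqs. 6, 7 and 9) may help fix ideas. This is a minimal Python sketch of ours with illustrative dimensions and a placeholder simulator standing in for $\\bm{f}$; it is not the authors' code.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn, m, k, N = 50, 10, 5, 8    # latent sites, observed sites, sampling period, observation times\nsigma = 1.0                  # observation error standard deviation\nSigma = 0.1 * np.eye(n)      # covariance of the random walk increments\nA = np.zeros((m, n))         # incidence matrix: one unit basis vector per row\nA[np.arange(m), rng.choice(n, size=m, replace=False)] = 1.0\n\ndef f(theta, phi, j):\n    # Placeholder for the computer simulator (e.g., a numerical PDE solver).\n    return np.full(n, theta + 0.01 * j)\n\ntheta, phi = 1.0, 0.0\nX = np.zeros(n)              # X_0 is a vector of zeros\nY = {}\nfor j in range(1, N * k + 1):\n    X = X + rng.multivariate_normal(np.zeros(n), Sigma)      # Eq. 6\n    S = f(theta, phi, j) + X                                  # Eq. 7\n    if j % k == 0:                                            # observe every k-th time point\n        Y[j] = A @ S + sigma * rng.standard_normal(m)         # Eq. 9\n\\end{verbatim}\n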
The likelihood of the model, $L(\\bm{\\theta},\\bm{\\phi}) = p(\\bm{Y}_k, \\bm{Y}_{2k},..., \\bm{Y}_{Nk}|\\bm{\\theta},\\bm{\\phi})$, has a multivariate normal PDF form:\n\\begin{eqnarray}\nL(\\bm{\\bm{\\theta}},\\bm{\\phi}) &=& \\frac{1}{(2\\pi)^{(mN)\/2}|\\Sigma_l|^{1\/2}}\\exp\\left[-(\\bm{Y-\\mu_l})^T\\Sigma_l^{-1}(\\bm{Y-\\mu_l})\/2\\right],\n\\end{eqnarray}\nwhere the mean is:\n\\begin{eqnarray}\n\\bm{\\mu}_l &=& (\\textrm{A}\\bm{f}(\\bm{\\theta},\\bm{\\phi},k),...,\\textrm{A}\\bm{f}(\\bm{\\theta},\\bm{\\phi},Nk)),\n\\end{eqnarray}\nand the covariance matrix is:\n\\begin{eqnarray}\n\\Sigma_l &=& \\textrm{U} \\otimes \\textrm{V} + \\sigma^2\\textrm{I},\n\\end{eqnarray}\nwhere $\\textrm{U}_{ab} = k \\min(a,b)$ with $\\textrm{U} \\in \\mathbb{R}^{N \\times N}$, and $\\textrm{V} = \\textrm{A}\\Sigma \\textrm{A}^{\\intercal}$. Also, the symbol $\\otimes$ stands for the Kronecker product. $\\bm{Y}_{ck}$ is multivariate normal (conditioning on $\\bm{\\theta}$ and $\\bm{\\phi}$) as a direct consequence of equations 7 and 9, noting that $\\bm{X}_{ck}$ and $\\bm{\\eta}_{ck}$ are independent conditional on $\\bm{\\theta}$ and $\\bm{\\phi}$. Moreover, the linearity property of expectations can be used to show that the mean of $\\bm{Y}_{ck}$ is $E[\\textrm{A}\\bm{S}_{ck}+\\bm{\\eta}_{ck}] = E[\\textrm{A}\\bm{S}_{ck}]+E[\\bm{\\eta}_{ck}] = E[\\textrm{A}\\bm{f}(\\bm{\\theta},\\bm{\\phi},ck)+\\textrm{A}\\bm{X}_{ck}]+E[\\bm{\\eta}_{ck}] = E[\\textrm{A}\\bm{f}(\\bm{\\theta},\\bm{\\phi},ck)]+E[\\textrm{A}\\bm{X}_{ck}]+E[\\bm{\\eta}_{ck}] = \\textrm{A}\\bm{f}(\\bm{\\theta},\\bm{\\phi},ck) + \\bm{0} + \\bm{0}$ (again, conditional on $\\bm{\\theta}$ and $\\bm{\\phi}$ being fixed). Appendix A contains more details of the covariance matrix.\n\nSince evaluating Eq. 12 requires the calculation of the inverse of the matrix $\\Sigma_l$ and its determinant, these must be calculated efficiently (generally this takes $O(N^3m^3)$ operations, which can grow very quickly with more space and time observations). Since $\\textrm{U}^{-1}$ is tridiagonal, the bandwidth of $\\textrm{U}^{-1}$ is 1, and the band-limited nature of $\\textrm{U}^{-1}$ allows us to compute $\\Sigma_l^{-1}$ and $|\\Sigma_l|$ in $O(Nm^3)$ time \\citep{rue2001fast, golub2012matrix}. More details for this derivation are given in Appendix A. While using band-limited linear algebra routines can improve computation, in the next subsection we derive an approximation to the likelihood that is embarrassingly parallel and can therefore accelerate computation even more. \n\n\\subsection{An approximate likelihood}\nHere we show how to approximate the likelihood in a way that leads to embarrassingly parallel computation. The likelihood $p(\\bm{Y_k,...,Y_{Nk}}|\\bm{\\bm{\\theta}},\\bm{\\phi})$ can be equivalently written as $p(\\bm{Y_k}|\\bm{\\bm{\\theta}},\\bm{\\phi})p(\\bm{Y_{2k}|Y_k},\\bm{\\bm{\\theta}},\\bm{\\phi})... 
\\\\p(\\bm{Y_{Nk}|Y_k,..,Y_{(N-1)k}},\\bm{\\bm{\\theta}},\\bm{\\phi})$.\nFirst note that:\n\\begin{eqnarray}\n\\bm{Y}_{k} &=& \\textrm{A}\\bm{f}(\\bm{\\theta},\\bm{\\phi},k)+\\bm{\\eta}_{k}+\\sum_{j=1}^{k} \\textrm{A}\\bm{\\epsilon}_{j}.\n\\end{eqnarray}\nHence, $p(\\bm{Y}_{k}|\\bm{\\theta},\\bm{\\phi})$ is multivariate normal with mean $\\textrm{A}\\bm{f}(\\bm{\\theta},\\bm{\\phi},k)$ and covariance matrix $\\textrm{A}(k\\Sigma)\\textrm{A}^{\\intercal}+\\sigma^2\\textrm{I}$.\nMore generally, we have the relationship:\n\\begin{eqnarray}\n\\label{eq:rec}\n\\bm{Y}_{ck} &=& \\bm{Y}_{(c-1)k} + \\textrm{A}[\\bm{f}(\\bm{\\theta},\\bm{\\phi},ck)-\\bm{f}(\\bm{\\theta},\\bm{\\phi},(c-1)k)] + \\bm{\\eta}_{ck} - \\bm{\\eta}_{(c-1)k} + \\sum_{j=(c-1)k+1}^{ck} \\textrm{A}\\bm{\\epsilon}_{j}.\n\\end{eqnarray}\nThus we can approximate $p(\\bm{Y}_{ck}|\\bm{Y}_k,..,\\bm{Y}_{(c-1)k},\\bm{\\theta},\\bm{\\phi})$ as an MVN distribution with mean $\\bm{Y}_{(c-1)k} + \\textrm{A}[\\bm{f}(\\bm{\\theta},\\bm{\\phi},ck)-\\bm{f}(\\bm{\\theta},\\bm{\\phi},(c-1)k)]$ and covariance matrix $\\textrm{A}(k\\Sigma)\\textrm{A}^{\\intercal} +2\\sigma^2\\textrm{I}$. Nonetheless, to clarify, $p(\\bm{Y}_{ck}|\\bm{Y}_k,..,\\bm{Y}_{(c-1)k},\\bm{\\theta},\\bm{\\phi})$ is not exactly an MVN with mean $\\bm{Y}_{(c-1)k} + \\textrm{A}[\\bm{f}(\\bm{\\theta},\\bm{\\phi},ck)-\\bm{f}(\\bm{\\theta},\\bm{\\phi},(c-1)k)]$ and covariance matrix $\\textrm{A}(k\\Sigma)\\textrm{A}^{\\intercal} +2\\sigma^2\\textrm{I}$ because $\\bm{Y}_{(c-1)k}$ and $\\bm{\\eta}_{(c-1)k}$ are dependent. However, when the magnitude of the observation error $\\bm{\\eta}_{(c-1)k}$ is much smaller than the magnitude of the observation $\\bm{Y}_{(c-1)k}$, then for $\\bm{Z} \\sim \\textrm{MVN}(0,\\sigma^2\\textrm{I})$ independent of $\\bm{Y}_{(c-1)k}$ we have $\\bm{Y}_{(c-1)k} - \\bm{\\eta}_{(c-1)k} \\approx \\bm{Y}_{(c-1)k} - \\bm{Z}$, which justifies the approximation. \n \nThis approximation is embarrassingly parallel because each of the $N$ terms in the product form of the likelihood $p(\\bm{Y}_k,...,\\bm{Y}_{Nk}|\\bm{\\theta},\\bm{\\phi}) = p(\\bm{Y}_k|\\bm{\\theta},\\bm{\\phi})p(\\bm{Y}_{2k}|\\bm{Y}_k,\\bm{\\theta},\\bm{\\phi})...p(\\bm{Y}_{Nk}|\\bm{Y}_k,..,\\bm{Y}_{(N-1)k},\\bm{\\theta},\\bm{\\phi})$ (or sum, if computing the log-likelihood) can be evaluated independently of each other. Therefore, in parallel, the computation comes down to evaluating a multivariate normal PDF of dimension $m$ -- this can be done in $O(m^3)$. \n\n\\subsection{Computational complexity summary}\nIf no attention is paid to the structure of $\\Sigma_l$, the cost of evaluating $L(\\bm{\\theta},\\bm{\\phi})$ is dominated by the evaluation of $\\Sigma_l^{-1}$ and $|\\Sigma_l|$, which generally takes $O(N^3m^3)$ operations. However, the exact likelihood evaluation can be reduced to $O(Nm^3)$ using band-limited numerical linear algebra. The computational complexity of the approximation is also $O(Nm^3)$ (if no parallelism is used). While an exact likelihood is preferred to an approximation, a benefit of the approximation is that it is embarrassingly parallel -- if parallelized, the time complexity is that of evaluating a multivariate normal PDF of dimension $m$, which is $O(m^3)$. Nonetheless, there also exist parallel versions of sparse Cholesky decomposition, for instance in \\citet{Gupta:1994:SPA:602770.602898}. Empirical comparisons of the exact and approximate likelihood computations are presented in Section 4. 
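To make the approximation of Section 3.2 concrete, the following Python sketch (our own minimal implementation, assuming SciPy and a simulator function f as in the model description; it is not the authors' code) evaluates the approximate log-likelihood term by term. Each term in the loop depends only on consecutive observation vectors, so the terms can be computed in parallel.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import multivariate_normal\n\ndef approx_loglik(theta, phi, Y, f, A, Sigma, sigma2, k):\n    # Y is the list [Y_k, Y_2k, ..., Y_Nk]; f(theta, phi, j) is the\n    # simulator output at time j.  Each term of the sum below is\n    # independent of the others and could be farmed out to a worker.\n    m = A.shape[0]\n    V = A @ (k * Sigma) @ A.T\n    total = 0.0\n    for c, y in enumerate(Y, start=1):\n        if c == 1:\n            mean = A @ f(theta, phi, k)\n            cov = V + sigma2 * np.eye(m)\n        else:\n            mean = Y[c - 2] + A @ (f(theta, phi, c * k)\n                                   - f(theta, phi, (c - 1) * k))\n            cov = V + 2 * sigma2 * np.eye(m)\n        total += multivariate_normal.logpdf(y, mean=mean, cov=cov)\n    return total\n\\end{verbatim}\n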
\n\n\n\\section{Analysis of the model and associated methodology}\nThe purpose of this section is to motivate the various modeling choices introduced in this paper using the previously described test system from glaciology, both in terms of computational run time and quality of inferences. In particular, we compare a posterior based on an emulator to a posterior based on a numerical PDE solver, motivate the use of the random walk error-correcting process with residual analysis, examine the impact of prior information encoded into the error-correcting process on the bias of posterior distributions for physical parameters, and compare the run-time and accuracy of the likelihood approximation versus the exact likelihood. The physical parameter of interest in these examples is ice viscosity, $B$, whose actual value is the same as \\citet{Bueler}, \\citet{EISMINT}, and \\citet{gopalan2018bayesian}: $31.7 \\times 10^{-25}$ in units of $s^{-1}Pa^{-3}$.\n\nConsistent with \\citet{gopalan2018bayesian} is the choice of settings for the numerical PDE solver: a 21 by 21 grid (so $n = 441$) is used with $\\Delta_x = \\Delta_y = 10^5$ m and $\\Delta_t = .1$ years. Note that, consequently, the number of simulator runs (25) is much smaller than the dimensionality of the output of the solver (441). \n\n\\subsection{Posterior inference of the ice viscosity parameter with an emulator compared to a numerical PDE solver}\nIn this section, we conduct an empirical study to examine how a first-order spatio-temporal emulator (i.e., an emulator based on the method in Appendix B) compares to a numerical solver of the PDE, both in terms of run-time of computations and posterior inference of ice viscosity. While the precise technical details for constructing a first-order spatio-temporal emulator are given in Appendix B, the idea is to approximate the numerical solver output for each time point that there is collected data. To do this, we train an emulator using the following values for ice viscosity: $\\{10, 12.5, 15.0,...,70.0\\}$ in units of $10^{-25} s^{-1}Pa^{-3}$, a grid of values that is intentionally coarser than the values used for posterior computation, since in this case the emulator must be used for parameter values not in the training set. We used the \\texttt{rbenchmark} \\citep{rbenchmark} package to benchmark the run-time of the log-likelihood of the model evaluated at the actual parameter value computed with a numerical solver versus a first-order spatio-temporal emulator, using a MacBook Pro early 2015 model with a 2.7 GHz Intel Core i5 processor and 8 GB 1867 MHz DDR3 memory. The emulator version performs 14.5 times faster (.354 seconds for the emulator based log-likelihood versus 5.148 seconds for the numerical solver based log-likelihood). We also generated samples from the posterior distribution of ice viscosity with grid sampling (grid [10,70] inclusive with grid width .50 in units of $10^{-25} s^{-1}Pa^{-3}$), using both the numerical PDE version and the emulated version. The summary statistics of $10^6$ posterior samples for ice viscosity using both the emulator and numerical solver are given in Table 1. Qualitatively, the summary statistics are similar. \n\nThe principle behind choosing the ice viscosity parameter values in the training set is to fill the space of the support for ice viscosity, but not to choose a grid as fine as the one used for posterior sampling. (Such an approach would be circular, in that the emulator would just be generating predictions inside of the training set.) 
However, such a heuristic will not be feasible as the number of parameters grows beyond one parameter (the number of design points would need to grow exponentially in the number of dimensions). In such cases, we suggest using other space-filling designs: notably, a latin hypercube design has been used extensively in the computer experiments literature, for instance in \\cite{higdon2008computer}. \n\n\n\\begin {table}[H]\n\\begin{center}\n\\begin{tabular}{ |c|c|c|c|c|c|c| }\n\\hline\nTest Case & Min & 1st Quartile & Median & Mean & 3rd Quartile & Max \\\\ \\hline\nEmulator SIA solver & $15.0$ & $26.5$ & $27.0$ & $27.4$ & $29.0$ & $38.5$\\\\ \\hline\nNumerical SIA solver & $15.0$ & $24.5$ & $26.5$ & $26.3$ & $28.0$ & $37.5$\\\\ \\hline\n \\hline\n \\end{tabular}\n\\caption{Summary statistics of $10^6$ posterior samples of the ice viscosity parameter using an emulator for the SIA and a numerical solver for the SIA; qualitatively, these posterior samples are similar. Units are in $10^{-25} s^{-1}Pa^{-3}$.}\n\\end{center}\n\\end {table}\n\n\n\\subsection{Assessing a random walk for representing model discrepancy}\nThe choice of using a random walk to correct for deviations between the output of a computer simulator and the actual physical process values has a few important motivations:\n\n\\begin{enumerate}\n \\item The inaccuracy of a spatio-temporal computer simulation is most likely going to increase as it is run further into the future. Conveniently, a random walk's variance increases with time -- for example a RW(1) has marginal variance $j\\Sigma$ at time $j$.\n \\item As shown in Appendix A and Section 3.1, the likelihood involves band-limited matrices, for which there exist specialized numerical linear algebra routines. However, there is a trade-off in bandwidth and the order of the random walk utilized.\n \\item Spatial correlations in the inaccuracies of a computer simulation can be captured with the covariance matrix $\\Sigma$.\n\\end{enumerate}\n\nIn addition to these motivations, the purpose of this section is to empirically assess how a random walk model performs for correcting the output of a numerical SIA PDE solver. To do this, we use the analytical SIA solution as a gold standard. This is a simplification in the sense that the real glacial dynamics will not follow the SIA PDE and therefore the analytical SIA solution exactly, but nonetheless this is a way to check the veracity of the random walk error model in some capacity -- at the very least, as a model for numerical error but not model uncertainty.\n\nFigure 3 displays the differences between the analytical SIA PDE solution for glacial thickness and the numerical SIA PDE solution for glacial thickness at all of the glacier grid points, run forward for 5000 time steps (i.e., 500 years). More precisely, the points in blue are at the margin of the glacier, the points in red are at the interior, and the points filled in black are close to the top (also referred to as the dome) of the glacier. Recall from Figure 1 that the glacier looks like a shallow ellipsoid sliced in half (in the x-y plane), and so the panel on the right of this figure is a top view of the glacier grid points, which looks like a circle of radius 750 km projected onto the x-y plane. In comparison, the height is 3600 m.\n\n\\begin{figure*}[h!]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{discrepancy.jpeg}\n\\caption{An illustration of the difference between the exact analytical solution and the numerical solution for the SIA PDE. 
On the \\textit{right} panel is a top view of the glacier, whose shape looks like a dome, and therefore the projection on to the x-y plane is a circle. The {\\color{blue}blue points} signify the margin of the glacier (where it drops down to zero thickness), the {\\color{red}red points} are at the interior of the glacier, and the {\\color{black}black points} are towards the top of the glacier. The points that are not filled in signify the border of the glacier, where there is no ice thickness. On the \\textit{left} panel the discrepancies between the analytical SIA PDE solution and the numerical SIA PDE solution for all grid points are shown. Specifically, the color of each path corresponds to the grid points on the right panel. Additionally, the paths are shown for 500 years, or 5000 time steps.\n}\n\\end{figure*}\n\n\nThe differences are all very smooth (i.e., continuous) functions of time, implying that the numerical SIA PDE solver is producing continuous output as well -- we know that the analytical solution is continuous based on the functional form in Eqs. 3-5. Thus, it appears that a random walk of at least a few orders is necessary to represent these differences. Moreover, as expected from \\citet{Bueler}, the largest errors occur at the margin, whereas the interior and dome differences are less extreme.\n\nTo assess if a random walk model is appropriate, for each time point $j$ and for orders 1-7, we computed residuals, in other words, the left hand side of Eq. 8, which should theoretically be distributed like $\\bm{\\epsilon_j}$ (i.e., independent $\\textrm{MVN}(0, \\Sigma)$ random variables). To compute $\\bm{X}_j$, we take the difference $\\bm{S}_j-\\bm{f}(\\bm{\\theta},j)$, where $\\bm{S}_j$ is the analytical glacial thickness solution to the SIA PDE at time $j$ (i.e., the real physical process for the purpose of this analysis), and $\\bm{f}(\\bm{\\theta},j)$ is the numerical glacial thickness solution to the SIA PDE at time $j$. We examine the residuals for two randomly selected grid points of the glacier (one at the interior and one at the margin) in Figures 4 and 5.\n\n\\begin{figure*}[h!]\n \\centering\n \\includegraphics[width=.6\\textwidth]{interior_RW_new.jpeg}\n\\caption{This figure displays residuals in units of meters (i.e., the term $\\bm{\\epsilon}_j$ in Eq. 8) for RW(q) of orders 1-7 for a randomly selected interior grid point. The first four panels display values on different scaled y-axes to better show the shapes, whereas the bottom four panels have the same scaling for the y-axis to be able to compare across the figures. RW(5) and above look like white noise processes, though RW(5) has the smallest variance. \n}\n\\end{figure*}\n\n\\begin{figure*}[h!]\n \\centering\n \\includegraphics[width=.6\\textwidth]{margin_RW_new.jpeg}\n\\caption{This figure also displays residuals in units of meters (i.e., the term $\\bm{\\epsilon}_j$ in Eq. 8) for RW(q) of orders 1-7 for a randomly selected margin grid point. Just like the previous figure, the first four panels display values on different scaled y-axes to better show the shapes, whereas the bottom four panels have the same scaling for the y-axis to be able to compare across the figures. Just as in the previous figure, RW(5) and above look like white noise processes, though RW(5) has the smallest variance.\n}\n\\end{figure*}\n\n\nA few important observations should be emphasized based on the empirical analysis displayed in these figures. 
The first is that even a first order random walk substantially filters the discrepancy; for the interior grid point, it is reduced from the order of 10 m to the order of .01 m (1000 times reduction in magnitude), and for the margin grid point from the order of 100 m to .05 m (more than 1000 times reduction). Additionally, for both the interior and margin grid points, it appears that RW(5) is optimal in the sense that the residuals closely resemble a white noise process and have the smallest variance. While the residuals from higher order RW processes also resemble white noise, the magnitude of the noise is larger. Nonetheless, we believe that real physical processes will not always be as smooth as the analytical SIA PDE solutions, and hence it is likely that a lower order RW process will be preferred for real scenarios. \n\n\\subsection{Reducing bias for the posterior distribution of $\\bm{\\theta}$}\nIn \\citet{discrepancy}, when prior information about the model discrepancy term (i.e., a constrained GP over a space of functions) is introduced for a simple physical system, the bias of the posterior distribution of a physically relevant parameter is reduced. We have found that a very similar phenomenon occurs in the glaciology test case, a result that was pointed out in \\citet{gopalan2018bayesian}. Specifically, in \\citet{Bueler}, it is shown that there is large spatial variation in the scale of deviations between the exact solution to the SIA and a numerical finite difference solver of the SIA. In particular, there is spatial variation between the dome, interior, and margin of a glacier, with deviations at the margin being markedly larger than at the interior and dome. To investigate the effect of such prior information, we choose the matrix $\\Sigma$ to be such that it is block diagonal with 3 blocks, $\\Sigma_{int}$, $\\Sigma_{dome}$, and $\\Sigma_{margin}$. Each of these blocks is derived from a squared exponential covariance kernel with the same length-scale parameter $\\bm{\\phi}$ = 70 km, but differing variance parameters $\\sigma^2_{int}$, $\\sigma^2_{dome}$, and $\\sigma^2_{margin}$. If we ignore prior information from \\citet{Bueler}, we assume that there is an equal prior probability that each of $\\sigma^2_{int}$, $\\sigma^2_{dome}$, and $\\sigma^2_{margin}$ is in the set $\\{.1, 1, 10, 100\\}$ in units of $m^2$. If we use prior information from \\citet{Bueler}, we instead assume equal prior probability on $\\{.1,1\\}$ for $\\sigma^2_{int}$, $\\{1,10\\}$ for $\\sigma^2_{dome}$, and $\\{10,100\\}$ for $\\sigma^2_{margin}$ (again, all units are $m^2$). As shown in \\citet{gopalan2018bayesian}, the posterior for ice viscosity is less biased in the case that incorporates prior information for the scale of errors; this phenomenon is explored again in the next section.\n\nWhile in the above discussion we have not been precise about the term bias, the following ought to make this notion more rigorous. Let $\\bm{\\theta}_0$ be the true parameter, and $\\hat{\\bm{\\theta}}$ be an estimator of $\\bm{\\theta}_0$. The frequentist definition of bias is usually $E[\\hat{\\bm{\\theta}} - \\bm{\\theta}_0]$, where the expectation (i.e., average) is taken over the sampling distribution, $p(\\bm{Y}|\\bm{\\bm{\\theta}}_0)$. 
The Bayesian notion of bias used informally in the preceding paragraph (and essentially the same notion as in \\citet{discrepancy}) is $b(\\bm{Y},\\bm{\\bm{\\theta}}_0) = E[\\bm{\\bm{\\theta}} - \\bm{\\bm{\\theta}}_0]$, where the expectation (i.e., average) is taken with respect to the posterior distribution of $\\bm{\\bm{\\theta}}$, $p(\\bm{\\theta}|\\bm{Y})$. Consider $E[b(\\bm{Y},\\bm{\\bm{\\theta}}_0)]$, where the (outer) expectation is taken with respect to the sampling distribution. Then $E[b(\\bm{Y},\\bm{\\bm{\\theta}}_0)] = E[E[\\bm{\\bm{\\theta}}-\\bm{\\bm{\\theta}}_0]] = E[E[\\bm{\\bm{\\theta}}]-\\bm{\\bm{\\theta}}_0] = E[\\hat{\\bm{\\bm{\\theta}}}-\\bm{\\bm{\\theta}}_0]$, which is the frequentist bias. In other words, the frequentist bias is equivalent to the average of $b(\\bm{Y},\\bm{\\bm{\\theta}}_0)$ over the sampling distribution, if the posterior mean is chosen as an estimator. In the glaciology test case, we have (informally) not noticed much variability in the posterior for ice viscosity over repeated sampling of the data, and hence the distinction between Bayesian bias and frequentist bias is not significant.\n\nThe reader may wonder why a fixed $\\bm{\\theta}_0$ was assumed in the preceding paragraph, despite that a Bayesian model has been presented in this paper. In fact, it is typical to assume that the actual value of a parameter is fixed, despite ascribing a probability distribution to it in the form of a prior or posterior. Conceptually, such a probability distribution is a representation of a modeler's uncertainty regarding the fixed, unknown value of the parameter. For more on this interpretation of Bayesian statistics, the reader can consult results of statistical decision theory (e.g., on admissibility) in \\cite{lehmann2003theory} and \\cite{robert2007bayesian}. This viewpoint is also taken in Bayesian asymptotic analysis, such as the Bernstein-von Mises theorem \\citep{van2000asymptotic, shen2001}.\n\n\\subsection{Inferring $\\Sigma$}\n\nThe covariance matrix $\\Sigma$, first introduced after Equation 7 in Section 3, determines the spatial correlation inherent in the error-correcting process, $\\bm{X}$. Since spatial correlation in the error-correcting process is important to model (which is particularly evident in the glaciology example of \\citet{Bueler}), we need to discuss how $\\Sigma$ ought to be specified. Choosing $\\Sigma$ can be difficult if no or little prior information is available, and in such a case, we suggest:\n\n\\begin{eqnarray*}\n\\Sigma &=& \\textrm{diag}(\\bm{v})\\textrm{R} \\ \\textrm{diag}(\\bm{v}), \n\\end{eqnarray*}\nwhere $\\textrm{log}(\\bm{v}) \\sim \\textrm{MVN}(\\bm{\\mu}_v,\\Sigma_v)$, $\\Sigma_v$ is derived from a GP kernel such as squared-exponential or Mat\\'{e}rn kernel, and $\\textrm{R}$ is a correlation matrix also derived from a GP kernel. To avoid non-identifiability and complexity of inference, it is suggested to pre-specify the parameters of these GP kernels. This approach is similar to the modeling strategy employed in \\citet{doi:10.1002\/env.2343}. 
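A minimal sketch of one way to implement this suggestion is given below (in Python); the kernel choices, the 70 km length scales (borrowed from the block diagonal example of the previous subsection), and the function names are ours and merely illustrative.\n\\begin{verbatim}\nimport numpy as np\n\ndef sq_exp_corr(coords, length_scale):\n    # Squared exponential correlation matrix for 2-D coordinates.\n    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)\n    return np.exp(-0.5 * d2 \/ length_scale ** 2)\n\ndef draw_Sigma(coords, rng, ls_R=70e3, ls_v=70e3, mu_v=0.0, jitter=1e-8):\n    # One draw of Sigma = diag(v) R diag(v) with log(v) ~ MVN(mu_v, Sigma_v);\n    # the GP kernel parameters are pre-specified, as suggested in the text.\n    n = coords.shape[0]\n    R = sq_exp_corr(coords, ls_R)\n    Sigma_v = sq_exp_corr(coords, ls_v) + jitter * np.eye(n)\n    log_v = rng.multivariate_normal(np.full(n, mu_v), Sigma_v)\n    v = np.exp(log_v)\n    return np.diag(v) @ R @ np.diag(v)\n\\end{verbatim}\n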
The intuition behind this approach is that the term $\\bm{v}$ encodes spatial variability in the scale of deviations between the output of a computer simulator and the true physical process, and spatial correlation in these deviations is strongly enforced with non-diagonal terms in both $\\Sigma_v$ and $\\textrm{R}$.\n\nFigure 6 illustrates a map of the mean posterior field for the variances of the error-correcting process, where the area of each circle is proportional to the inferred posterior mean of variance; due to a multivariate normal prior on $\\textrm{log}(\\bm{v})$, elliptical slice sampling is used as the method for posterior sampling \\citep{murray2010elliptical}. Consistent with \\citet{Bueler}, the variances tend to increase at the margins and are smaller at the interior. Additionally, according to the model, the differences between the analytical solution and the numerical solver at the final time point of the simulation, scaled by the inverse of the posterior mean of the standard deviation, should theoretically follow a mean zero normal distribution. The p-value for an Anderson-Darling test is .436, suggesting that the scaled differences between the analytical solution and numerical solver are consistent with a normal distribution. Moreover, the sample mean for these scaled differences is .079 and the sample standard deviation is .409.\n\n\\begin{figure*}[h!]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{post_var.jpeg}\n\\caption{Inferred posterior variance field of the error-correcting process, where the area of each circle is proportional to the variance at the grid point centered at the circle. Qualitatively, this field behaves as one would expect from the work of \\citet{Bueler}, where the authors demonstrate that numerical inaccuracies for the SIA PDE are greatest toward the margin, but much smaller at the interior of the glacier.}\n\\end{figure*}\n\nAs is discussed in the previous subsection, prior information for $\\Sigma$ has an effect on the inference of physical parameters (i.e., ice viscosity), and in particular, a lack of prior information can lead to a very biased posterior distribution for physical parameters. To compare the fitted $\\Sigma$ using a GP field against the $\\Sigma$ matrices discussed in the previous section, we show in Table 2 a comparison of posterior inference for the ice viscosity parameter for three choices of $\\Sigma$. The first choice of $\\Sigma$ is the posterior mean of samples assuming the structure $\\Sigma = \\textrm{diag}(\\bm{v})\\textrm{R} \\textrm{diag}(\\bm{v})$, with $\\textrm{log}(\\bm{v}) \\sim \\textrm{MVN}(\\bm{\\mu_v},\\Sigma_v)$. In the second and third scenarios, $\\Sigma$ is block diagonal with one variance parameter for each of the three blocks. A weakly informative case assumes that $\\sigma^2_{int} = \\sigma^2_{dome} = \\sigma^2_{margin} = .1$, whereas a more informative case (using prior information from \\citet{Bueler}) has $\\sigma^2_{int} = \\sigma^2_{dome} = .1$ and $\\sigma^2_{margin} = 10$ (all units are $m^2$). The scenario for weak prior information for $\\Sigma$ results in a very biased posterior distribution whose support does not cover the actual parameter value ($31.7 \\times 10^{-25}$ in units of $s^{-1}Pa^{-3}$) -- the maximum in this case is $26.5 \\times 10^{-25}$ in units of $s^{-1}Pa^{-3}$. 
While the (absolute) biases of the posterior for ice viscosity for the GP field version compared to the prior information from \\citet{Bueler} are comparable (5.09 versus 4.01 in units of $10^{-25} s^{-1}Pa^{-3}$), the posterior variance is markedly larger in the former case. This result suggests that prior knowledge from a domain expert is likely to be useful in determining $\\Sigma$, though in a case when that does not exist, the methodology described in this section is an adequate alternative.\n\n\\begin {table}[H]\n\\begin{center}\n\\begin{tabular}{ |c|c|c|c|c|c|c| }\n\\hline\nTest Case & Min & 1st Quartile & Median & Mean & 3rd Quartile & Max \\\\ \\hline\n$\\Sigma$ with GP field& $10.0$ & $21.0$ & $36.0$ & $35.7$ & $50.5$ & $70.0$\\\\ \\hline\n$\\Sigma$ with strong prior information & $18.0$ & $25.0$ & $26.5$ & $26.6$ & $28.0$ & $35.5$\\\\ \\hline\n$\\Sigma$ with weak prior information & $12.5$ & $18.5$ & $19.5$ & $19.5$ & $20.5$ & $26.5$\\\\ \\hline\n \\hline\n \\end{tabular}\n\\caption{Summary statistics of $10^6$ posterior samples of the ice viscosity parameter under three versions of $\\Sigma$. While the weakly-informative case leads to a very biased posterior, the biases for the ice viscosity posterior in the first two $\\Sigma$ matrices are comparable. Nonetheless, the posterior variance is much less in the case with prior information from \\citet{Bueler}.}\n\\end{center}\n\\end {table}\n\n\n\\subsection{Exact versus approximate likelihood}\nIn Section 3.1, we showed an exact way to calculate the model likelihood as well as an approximation in Section 3.2. In this subsection, our purpose is to compare these two methods of likelihood computation in terms of run-time and posterior inference. Using a MacBook Pro early 2015 model with a 2.7 GHz Intel Core i5 processor and 8 GB 1867 MHz DDR3 memory (as before), one component of the log-likelihood approximation (which can be computed in an embarrassingly parallel fashion with the other components of the sum) takes .0179 s, whereas the full log-likelihood calculation, as in Section 3.1, is .354 seconds (in both cases, using a first-order emulator). The results of comparing posterior samples for the ice viscosity parameter are given in Table 3 -- thus, while the mean, median, first, and third quartiles are comparable, the approximate version has larger posterior uncertainty than the exact version as is evidenced by the wider tails. These results suggest that, while there is likely a computational speed-up afforded by using the approximation (i.e., at least an order of magnitude), the price to pay is increased posterior uncertainty.\n\\begin {table}[H]\n\\begin{center}\n\\begin{tabular}{ |c|c|c|c|c|c|c| }\n\\hline\nTest Case & Min & 1st Quartile & Median & Mean & 3rd Quartile & Max \\\\ \\hline\nExact likelihood & $15.0$ & $26.5$ & $27.0$ & $27.4$ & $29.0$ & $39.5$\\\\ \\hline\nLikelihood approximation & $10.0$ & $26.0$ & $28.0$ & $27.7$ & $31.0$ & $52.5$\\\\ \\hline\n \\hline\n \\end{tabular}\n\\caption{Summary statistics of $10^6$ posterior samples of the ice viscosity parameter using an exact likelihood and a likelihood approximation (units are in $10^{-25} s^{-1}Pa^{-3}$). 
While the 1st quartile, median, mean, and 3rd quartile are similar, the tails in the approximation are much wider.}\n\\end{center}\n\\end {table}\n\n\\section{Generality of the model and methodology}\nThough we have tested the model and methodology in the previous sections in the context of a glaciology example, it should be noted that they can be used in other physical systems with similar components. In essence, this modeling and methodology can be applied in scenarios where:\n\\begin{enumerate}\n\\item A computer program (i.e., \\textit{computer simulator}) is available to simulate a continuous physical process through space and time, but there is a deviation between the output of the computer simulator and the actual physical process.\n\\item The deviations between the computer simulator output and the actual physical process values tend to grow with time and exhibit spatial correlation structure.\n\\item Measurements of the physical process are available, but they are potentially scarce both in space and time.\n\\item Physical parameters governing the physical process are uncertain but can be constrained with domain knowledge for the random walk error covariance (i.e., $\\Sigma$).\n\\end{enumerate}\nRecall that at the process level, the model stipulates that:\n\\begin{eqnarray}\n\\bm{S}_{j} &=& \\bm{f}(\\bm{\\theta},\\bm{\\phi},j) + \\bm{X}_j.\n\\end{eqnarray}\nTo apply the same setup to another physical scenario, a different version of $\\bm{f}(.,.,.)$, such as a numerical PDE solver for another system of spatio-temporal PDEs besides the SIA, can be used. However, while $\\bm{f}(.,.,.)$ will need to be tailored to another physical scenario based on a different numerical scheme or physical model, the $\\bm{X}_j$ term would be modeled in the same way (i.e., with a random walk).\n\n\\section{Conclusion}\nThe objective of this work has been to set forth a versatile physical-statistical model in the Bayesian hierarchical framework that incorporates a computer simulator for a physical process, such as a numerical solver for a system of PDEs. Posterior inference for physical parameters (and, consequently, posterior predictions of the physical process) can be computationally demanding within this model, since each evaluation of the likelihood requires a full PDE solve and computing the inverse and determinant of a large covariance matrix. Therefore, we have set forth two main ways to speed up computation: first is the use of bandwidth limited linear algebra in a manner similar to \\citet{rue2001fast} for quickly handling the covariance matrix in the likelihood, and the second is the use of spatio-temporal emulation in a manner similar to \\citet{Hooten2011} to emulate a PDE solver that is expensive to evaluate. An additional method for speeding up computation is to approximate the likelihood in a way that leads to embarrassingly parallel computation. The utility of this model and corresponding inference methodology is demonstrated with a test example from glaciology.\n\nA unique feature of this work is how we represent the discrepancy between a computer simulator for a physical process and the real physical process values. One approach, as in \\citet{kennedy2001bayesian} and \\citet{discrepancy}, is to assume that this is a fixed yet unknown function that can be learned with a GP (or constrained GP) prior distribution over a space of functions. 
Instead, we assume that this discrepancy is a spatio-temporal stochastic process (i.e., a random walk), which is motivated by the fact that a computer simulation is likely to become less accurate as it is run forward in time and to exhibit some degree of spatial correlation in its inaccuracies. An interesting consequence of this modeling decision is that linear algebraic routines for band-limited matrices can be utilized for evaluating the likelihood of the model in an efficient manner. Another interesting artifact of this approach is that when prior information is used for the random walk's error term (i.e., in $\\Sigma$), the bias for the posterior distribution of $\\bm{\\theta}$ is reduced. The same phenomenon is exhibited in the work of \\citet{discrepancy}, where a constrained GP prior over a space of functions ends up reducing the bias of the physical parameter posterior distribution. \n\nAlthough the model and methodology appear to perform well in the analysis of this paper, it is important to comment on some potential drawbacks of the approach, particularly when applied to other physical contexts. In this paper, emulation works adequately with a single parameter, though emulators do not always work well in other applications or higher dimensional parameter spaces. For example, \\cite{doi:10.1080\/01621459.2018.1514306} document some shortcomings of a principal components based emulator in climate modeling. The second set of computational advantages stems from log-likelihood evaluation speed-ups. Bandwidth limited matrix algebra for the exact log-likelihood can be used so long as the model holds, which may not always be the case (e.g., with a non-Gaussian data distribution). Additionally, the log-likelihood approximation holds when the measurement errors are small relative to the signal modeled, which depends on the measurement instruments used to collect the data. For instance, on common geophysical scales of thousands of meters, light detection and ranging (LIDAR) or digital-GPS data have maximum errors on the order of a meter.\n\nAdditionally, if it is not possible to program the computer simulator to produce output at the data measurement locations, there are essentially two main ways to handle such a scenario. The first is to use spatial kriging to predict the value of the computer simulator at the spatial locations where data are collected, given the output of the computer simulator at the grid points. A simpler approach is to use inverse-distance weighting of the simulator output at the nearest neighbors; that is, take a weighted average of the four nearest grid points of the simulator, where the weights are proportional to the inverse of distance. Such an approach, for example, has been used in \\citet{doi:10.1002\/env.2343}.\n\nFuture research will include predicting Langj\\\"{o}kull glacier surface elevation using the modeling and methodology within this paper, based on actual data collected by the UI-IES. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nExpander graphs are now ubiquitous in both mathematics and computer science. The problem of explicitly constructing these highly connected sparse graphs has drawn the attention of researchers from across both disciplines, who have uncovered deep and surprising connections to topics as diverse as Kazhdan's property (T) and the Ramanujan conjecture. 
Their usefulness has been known to computer scientists for some time, who have applied them to complexity theory, derandomisation, coding theory, cryptography and more, but they are now seeing increasing use in disparate areas of mathematics. We refer the interested reader to the excellent surveys~\\cite{HLW06} and~\\cite{L12} for further information. \n\nGiven these successes, there has been a strong push in recent years towards defining and constructing high-dimensional, or hypergraph, expanders. There has already been a great deal of interesting work in this area (see, for example, the survey~\\cite{L14}), but much more remains to be done. In particular, there are only a small number of examples known which satisfy the strongest notions of expansion. In the bounded-degree case, there is essentially only one such construction~\\cite{EK17, KKL16}, arising from the so-called Ramanujan complexes~\\cite{LSV05}, defined in analogy to Ramanujan graphs~\\cite{LPS88, M88} as finite quotients of certain affine buildings. The main result of this paper is a comparatively simple mechanism for constructing $3$-uniform expanders of low degree which satisfy most, and perhaps even all, of the expansion properties discussed in the literature. To say more, we first describe the mechanism.\n\nLet $S$ be a subset of the finite abelian group $\\mathbb{Z}_2^t$. We then let $H \\coloneqq H(\\mathbb{Z}_2^t, S)$ be the $3$-uniform hypergraph with vertex set $\\mathbb{Z}_2^t$ and edge set consisting of all triples of the form $(x + s_1, x+ s_2, x+ s_3)$, where $x \\in \\mathbb{Z}_2^t$ and $s_1, s_2, s_3$ are distinct elements of $S$. A useful alternative perspective on $H$ is to consider the Cayley graph Cay$(\\mathbb{Z}_2^t, S')$, where $S'$ is the set $\\{s_1 + s_2 : s_1, s_2 \\in S, s_1 \\neq s_2\\}$. This is the graph with vertex set $\\mathbb{Z}_2^t$ where $x, y \\in \\mathbb{Z}_2^t$ are joined if and only if $x + y \\in S'$. Then $H$ is the $3$-uniform hypergraph on the same vertex set whose triples are the triangles of Cay$(\\mathbb{Z}_2^t, S')$. Note that if $S$ contains no non-trivial solutions to the equation $s_1 + s_2 = s'_1 + s'_2$, then every vertex in $H$ is contained in exactly $3\\binom{|S|}{3}$ edges and every pair of vertices is contained in either $0$ or $2|S| - 4$ edges.\n\nWe will see that this hypergraph inherits its expansion properties from the Cayley graph Cay$(\\mathbb{Z}_2^t, S)$. However, when $|S| < t$, the Cayley graph Cay$(\\mathbb{Z}_2^t, S)$ is not even connected, showing that $|S|$ will have to be at least logarithmic in the number of vertices. On the other hand, a celebrated result of Alon and Roichman~\\cite{AR94} shows that logarithmic size will also suffice, in that if $S$ is a randomly chosen subset of $\\mathbb{Z}_2^t$ of size $C t$, for $C$ sufficiently large, then Cay$(\\mathbb{Z}_2^t, S)$ will, with high probability as $t \\rightarrow \\infty$, be an expander. The set $S$ may also be chosen explicitly, but the fact that a random choice works partially addresses a question raised repeatedly in the literature~\\cite{EK17, P14, P17} as to whether there are random models for sparse high-dimensional expanders. \n\nSince random Cayley graphs over many other groups are known to have much better expansion properties (see~\\cite{BGGT15} and its references), one might ask why we use $\\mathbb{Z}_2^t$. To see why, we define, for each $x \\in \\mathbb{Z}_2^t$, the set $C_x := \\{x + s : s \\in S\\}$. The hypergraph $H$ is then a union of $3$-uniform cliques, one on each set $C_x$. 
The key observation about $H$, and the reason why it serves as a hypergraph expander, is that each edge in Cay$(\\mathbb{Z}_2^t, S')$ appears in at least two different sets of the form $C_x$. More specifically, if $(x, x + s_1 + s_2)$ is an edge of Cay$(\\mathbb{Z}_2^t, S')$, then this edge is contained in both $C_{x+s_1}$ and $C_{x+s_2}$. This property, for which the fact that $\\mathbb{Z}_2^t$ is abelian is crucial, means, for instance, that a random walk on the edges of $H$ never becomes trapped inside a particular set $C_x$.\n\nTo say more about the notions of expansion satisfied by our construction, we require some notation. Given a $3$-uniform hypergraph $H$, we let $E$ be the collection of pairs of distinct vertices $(u, v)$ for which there exists an edge $(u, v, w)$ of $H$ containing $u$ and $v$. The corresponding graph will be referred to as the {\\it skeleton} of $H$. To distinguish the edges of $H$ from the set $E$, we will write $T$ for these edges and refer to them as the triples of $H$, reserving the term edges for the elements of $E$.\n\nFor any subset $F$ of the skeleton $E$ of a hypergraph $H$, we may define two notions of neighbourhood, the \\emph{edge neighbourhood $N_E(F)$} and the \\emph{triple neighbourhood $N_T(F)$}, by\n\\[N_E(F) = \\{e \\in E \\setminus F: e \\cup f \\in T \\textrm{ for some } f \\in F\\}\\]\nand, writing $\\Delta(F)$ for the set of triangles in $F$,\n\\[N_T(F) = \\{t \\in T: t \\supset f \\textrm{ for some } f \\in F \\textrm{ and } t \\notin \\Delta(F)\\}.\\]\nWe then let\n\\[h_E(H) = \\min_{\\{F : |F| \\leq |E|\/2\\}} \\frac{|N_E(F)|}{|F|} \\textrm{ and } h_T(H) = \\min_{\\{F : |F| \\leq |E|\/2\\}} \\frac{|N_T(F)|}{|F|}\\]\nand say that $H$ is an {\\it $\\epsilon$-edge-expander} if $h_E(H) \\geq \\epsilon$ and an {\\it $\\epsilon$-triple-expander} if $h_T(H) \\geq \\epsilon$. Note that these notions are the direct analogues of the standard notions of vertex and edge expansion in graphs. Our main result is that under suitable conditions on Cay$(\\mathbb{Z}_2^t, S)$, the hypergraph $H(\\mathbb{Z}_2^t, S)$ is an $\\epsilon$-edge-expander and an $\\epsilon |S|$-triple-expander for some $\\epsilon > 0$. \n\nTo state the result formally, recall that the eigenvalues of an $n$-vertex graph $G$ are the eigenvalues $\\lambda_1 \\geq \\dots \\geq \\lambda_n$ of its adjacency matrix $A$. When $G$ is $d$-regular, $\\lambda_1 = d$. One of the key facts about $d$-regular expander graphs is that if $\\lambda(G) = \\max_{i \\neq 1} |\\lambda_i|$ is bounded away from $d$, then the graph $G$ exhibits strong expansion properties. We show that this behaviour carries over to the derived hypergraph $H$.\n\n\\begin{theorem} \\label{thm:main}\nSuppose that $S \\subseteq \\mathbb{Z}_2^t$ contains no non-trivial solutions to the equation $s_1 + s_2 = s'_1 + s'_2$ and $\\lambda(Cay(\\mathbb{Z}_2^t, S)) \\leq (1 - \\epsilon)|S|$. Then the hypergraph $H(\\mathbb{Z}_2^t, S)$ is an $\\frac{\\epsilon^2}{128}$-edge-expander and an $\\frac{\\epsilon^2}{64} |S|$-triple-expander.\n\\end{theorem}\n\nBy the Alon--Roichman theorem~\\cite{AR94}, a random set $S \\subseteq \\mathbb{Z}_2^t$ of order $Ct$, for $C$ a sufficiently large constant, will a.a.s.~have the required properties with $\\epsilon = 1\/2$. Therefore, writing $n = |\\mathbb{Z}_2^t|$, the theorem implies there exists a hypergraph with $n$ vertices and $O(n \\log^3 n)$ triples which is a $2^{-9}$-edge-expander and a $(2^{-8} \\log n)$-triple-expander. 
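For readers who would like to experiment with this construction, the following short Python sketch (ours, intended only for small $t$) builds the triple set of $H(\\mathbb{Z}_2^t, S)$, encoding elements of $\\mathbb{Z}_2^t$ as integers with bitwise XOR as the group operation, and checks that $S$ contains no non-trivial solutions to $s_1 + s_2 = s'_1 + s'_2$.\n\\begin{verbatim}\nfrom itertools import combinations\n\ndef no_nontrivial_solutions(S):\n    # S has no non-trivial solution of s1 + s2 = s1' + s2' exactly when\n    # the pairwise sums (XORs) of distinct elements are all distinct.\n    sums = [s1 ^ s2 for s1, s2 in combinations(S, 2)]\n    return len(sums) == len(set(sums))\n\ndef hypergraph_triples(t, S):\n    # Triples of H(Z_2^t, S): for every x in Z_2^t and every 3-subset\n    # {s1, s2, s3} of S, the triple (x + s1, x + s2, x + s3).\n    triples = set()\n    for x in range(2 ** t):\n        for s1, s2, s3 in combinations(S, 3):\n            triples.add(frozenset({x ^ s1, x ^ s2, x ^ s3}))\n    return triples\n\nS = [1, 2, 4, 8, 16]   # a basis of Z_2^5, so all pairwise XORs are distinct\nassert no_nontrivial_solutions(S)\ntriples = hypergraph_triples(5, S)\n\\end{verbatim}\n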
\n\nGetting back to random walks, we follow Kaufman and Mass~\\cite{KM16, KM162} in defining the {\\it random walk} on a $3$-uniform hypergraph $H$ to be a sequence of edges $e_0, e_1, \\dots \\in E$ such that\n\n\\begin{enumerate}\n\n\\item\n$e_0$ is chosen from some initial probability distribution $\\mathbf{p}_0$ on $E$,\n\n\\item\nfor every $i \\geq 1$, $e_i$ is chosen uniformly at random from the neighbours of $e_{i-1}$, that is, the set of $f \\in E$ such that $e_{i-1} \\cup f$ is an edge of $H$.\n\n\\end{enumerate}\n\nWe say that the random walk is {\\it $\\alpha$-rapidly mixing} if, for any initial probability distribution $\\mathbf{p}_0$ and any $i \\in \\mathbb{N}$,\n\\[\\|\\mathbf{p}_i - \\mathbf{u}\\|_2 \\leq \\alpha^i,\\]\nwhere $\\mathbf{p}_i$ is the probability distribution on $E$ after $i$ steps of the walk, $\\mathbf{u}$ is the uniform distribution on $E$ and $\\|\\mathbf{x}\\|_2 = (\\sum_i x_i^2)^{1\/2}$ for $\\mathbf{x} = (x_1, x_2, \\dots, x_n) \\in \\mathbb{R}^n$. As a corollary of Theorem~\\ref{thm:main}, we can show that under appropriate conditions on Cay$(\\mathbb{Z}_2^t, S)$ the hypergraph $H(\\mathbb{Z}_2^t, S)$ is $\\alpha$-rapidly mixing for some $\\alpha < 1$. \n \n\\begin{corollary} \\label{cor:mixing}\nSuppose that $S \\subseteq \\mathbb{Z}_2^t$ contains no non-trivial solutions to the equation $s_1 + s_2 = s'_1 + s'_2$ and $\\lambda(Cay(\\mathbb{Z}_2^t, S)) \\leq (1 - \\epsilon)|S|$.\nThen $H(\\mathbb{Z}_2^t, S)$ is $\\alpha$-rapidly mixing with $\\alpha = 1 - \\Omega(\\epsilon^4)$.\n\\end{corollary}\n\nThe hypergraph $H(\\mathbb{Z}_2^t, S)$ also satisfies several other properties, one notable example being the following pseudorandomness condition.\n\n\\begin{theorem} \\label{thm:pseudo}\nSuppose that $S_t \\subseteq \\mathbb{Z}_2^t$ is a sequence of sets with $t \\rightarrow \\infty$, where $S_t$ contains no non-trivial solutions to the equation $s_1 + s_2 = s'_1 + s'_2$ and $\\lambda(Cay(\\mathbb{Z}_2^{t}, S_t)) = o(|S_t|)$. Then, for any sets $A, B, C \\subseteq \\mathbb{Z}_2^{t}$ with $|A| = \\alpha |\\mathbb{Z}_2^{t}|$, $|B| = \\beta |\\mathbb{Z}_2^{t}|$ and $|C| = \\gamma |\\mathbb{Z}_2^{t}|$, where $\\alpha, \\beta$ and $\\gamma$ are fixed positive constants, the number of triples $T(A, B, C)$ in $H(\\mathbb{Z}_2^{t}, S_t)$ with one vertex in each of $A, B$ and $C$ is $(1 + o(1)) \\alpha \\beta \\gamma |S_t|^3 |\\mathbb{Z}_2^{t}|$.\n\\end{theorem}\n\nThat is, provided $\\lambda(Cay(\\mathbb{Z}_2^{t}, S))$ is small, the density of edges between any three large vertex subsets $A$, $B$ and $C$ in $H(\\mathbb{Z}_2^t, S)$ is asymptotic to the expected value, as it would be in a random hypergraph of the same density. It is worth noting that this pseudorandomness property is not known to hold in Ramanujan triangle complexes. Though a result of Parzanchevski~\\cite{P17} (see also~\\cite{PRT16}) says that a sequence of hypergraphs with good enough spectral properties will satisfy this condition, the spectrum of Ramanujan complexes~\\cite{GP14} is not sufficiently well-behaved for this result to apply.\n\nAs noted in~\\cite{P17, PRT16}, any sequence of hypergraphs satisfying the conclusion of Theorem~\\ref{thm:pseudo} will also satisfy Gromov's geometric overlap property~\\cite{G10}. 
We say that a $3$-uniform hypergraph $H$ has the {\\it $c$-geometric overlap property} if, for every embedding $\\varphi: V(H) \\rightarrow \\mathbb{R}^2$ of the vertices of $H$ in the plane, there is a point $x \\in \\mathbb{R}^2$ which is contained in the convex hull of at least a $c$-proportion of the triples of $H$. A result of Boros and F\\\"uredi~\\cite{BF84} (which was generalised to higher dimensions by B\\'ar\\'any~\\cite{B82}) says that the family of complete $3$-uniform hypergraphs has the geometric overlap property with $c = \\frac{2}{9} - o(1)$ and this constant is known to be sharp~\\cite{BMN10}. Much more recently, answering a question of Gromov~\\cite{G10}, Fox, Gromov, Lafforgue, Naor and Pach~\\cite{FGLNP12} found a number of families of bounded-degree hypergraphs with the geometric overlap property. Indeed, one of their constructions~\\cite[Section 4.1]{FGLNP12} is a close relative of ours and satisfies a version of Theorem~\\ref{thm:pseudo}, but lacks the intersection property between the sets $C_x$ which is necessary to guarantee the properties encapsulated in Theorem~\\ref{thm:main}. \n\nTo see how the geometric overlap property follows from the conclusion of Theorem~\\ref{thm:pseudo}, we recall Pach's selection theorem~\\cite{P98}, which gives a stronger guarantee than the Boros--F\\\"uredi result, saying that for any set of $n$ points in the plane there exist three sets $A$, $B$ and $C$, each of order at least $cn$, and a point $x \\in \\mathbb{R}^2$ such that every triangle $abc$ with $a \\in A$, $b \\in B$ and $c \\in C$ has $x$ in its convex hull. Suppose now that $\\varphi : V(H) \\rightarrow \\mathbb{R}^2$ is an embedding of the vertices of $H(\\mathbb{Z}_2^t, S_t)$ in the plane and let $A$, $B$ and $C$ be the sets and $x$ the point guaranteed by applying Pach's theorem to this point set. By Theorem~\\ref{thm:pseudo}, the number of triples of $H$ with one vertex in each of $A$, $B$ and $C$ is itself a positive proportion of the total number of triples in $H$. Since each one of these triples has $x$ in its convex hull, we have deduced the following corollary.\n\n\\begin{corollary} \\label{cor:geom}\nSuppose that $S_t \\subseteq \\mathbb{Z}_2^t$ is a sequence of sets with $t \\rightarrow \\infty$, where $S_t$ contains no non-trivial solutions to the equation $s_1 + s_2 = s'_1 + s'_2$ and $\\lambda(Cay(\\mathbb{Z}_2^{t}, S_t)) = o(|S_t|)$. Then the family of $3$-uniform hypergraphs $H(\\mathbb{Z}_2^t, S_t)$ has the $(c - o(1))$-geometric overlap property for some $c > 0$.\n\\end{corollary}\n\nAs in~\\cite{FGLNP12}, we can also recover the sharp constant $c = \\frac{2}{9}$ by following Bukh's proof~\\cite{B06} of the Boros--F\\\"uredi result. However, we omit the proof of this result, referring the reader instead to~\\cite{FGLNP12}. \n\nIt remains an open problem to determine whether an analogue of Corollary~\\ref{cor:geom} holds for the stronger topological overlap property. We say that a $3$-uniform hypergraph $H$ has the {\\it $c$-topological overlap property} if, for every continuous map $\\varphi: X \\rightarrow \\mathbb{R}^2$ from the simplicial complex $X = (V, E, T)$ of the hypergraph $H$ to the plane, there is a point $x \\in \\mathbb{R}^2$ which is contained in the image of at least a $c$-proportion of the triples of $H$. Gromov~\\cite{G10} generalised the Boros--F\\\"uredi result (and B\\'ar\\'any's result), showing that the family of complete $3$-uniform hypergraphs has the topological overlap property with $c = \\frac{2}{9} - o(1)$. 
The difficult problem of constructing $3$-uniform hypergraphs of bounded degree with the topological overlap property was only solved recently by Kaufman, Kazhdan and Lubotzky~\\cite{KKL16} and then extended to higher uniformities by Evra and Kaufman~\\cite{EK17}. Their work relies on the properties of Ramanujan complexes, but we conjecture that our construction gives another simpler example, albeit one with polylogarithmic rather than constant degree.\n\n\\section{Proofs}\n\nWe begin with a simple lemma relating the eigenvalues of Cay$(\\mathbb{Z}_2^t, S')$ to those of Cay$(\\mathbb{Z}_2^t, S)$. In particular, this means that the skeleton Cay$(\\mathbb{Z}_2^t, S')$ of our hypergraph $H(\\mathbb{Z}_2^t, S)$ is an expander whenever Cay$(\\mathbb{Z}_2^t, S)$ is. Here and throughout this section, we will use the shorthands $n = |\\mathbb{Z}_2^t|$ and $d = |S|$.\n\n\\begin{lemma} \\label{lem:eig}\nSuppose that $S \\subseteq \\mathbb{Z}_2^t$ contains no non-trivial solutions to the equation $s_1 + s_2 = s'_1 + s'_2$ and that the eigenvalues of the Cayley graph Cay$(\\mathbb{Z}_2^t, S)$ are $\\lambda_i$ for $i = 1, \\dots, n$. Then the eigenvalues of the Cayley graph Cay$(\\mathbb{Z}_2^t, S')$, where $S' = \\{s_1 + s_2 : s_1, s_2 \\in S, s_1 \\neq s_2\\}$, are $\\frac{1}{2}(\\lambda_i^2 - d)$ for $i = 1, \\dots, n$.\n\\end{lemma}\n\n\\begin{proof}\nLet $A$ be the adjacency matrix of Cay$(\\mathbb{Z}_2^t, S)$. It will suffice to show that the adjacency matrix of Cay$(\\mathbb{Z}_2^t, S')$ is $\\frac{1}{2}(A^2 - d I)$. To see this, note that $A^2_{xy}$ is the number of solutions to $x + y = s_1 + s_2$ with $s_1, s_2 \\in S$. When $x \\neq y$, the assumption that there are no non-trivial solutions to $s_1 + s_2 = s'_1 + s'_2$ tells us that $A^2_{xy} = 2$ or $0$ depending on whether or not $x + y$ are joined in Cay$(\\mathbb{Z}_2^t, S')$ or not. When $x = y$, $A_{xx}^2 = d$, corresponding to the $d$ solutions $x + x = s + s = 0$ for all $s \\in S$. The result follows. \n\\end{proof}\n\nGiven two multisets $V$ and $W$ taken from the vertex set of a graph with edge set $E$, we write $e(V,W)$ to denote $\\sum_{v \\in V, w \\in W} 1_E(vw)$. We will also use $v_x$ and $w_y$ to denote the multiplicity of $x$ in $V$ and $y$ in $W$, respectively. In what follows, we will need a slight variant of the expander mixing lemma which applies to multisets. Since the proof is identical to the usual expander mixing lemma, we omit it.\n\n\\begin{lemma}[Expander mixing lemma] \\label{lem:eml}\nSuppose that $G$ is an $(n, d, \\lambda)$-graph, that is, $G$ has $n$ vertices of degree $d$ and all eigenvalues, save the largest, have absolute value at most $\\lambda$. Then, for any two multisets $V, W \\subseteq V(G)$,\n\\[\\left|e(V, W) - \\frac{d}{n} |V||W|\\right| \\leq \\lambda \\sqrt{\\left(\\sum_{x \\in V} v_x^2 - \\frac{|V|^2}{n}\\right) \\left(\\sum_{y \\in W} w_y^2 - \\frac{|W|^2}{n}\\right)}.\\]\n\\end{lemma}\n\nWe are already in a position to show that $H(\\mathbb{Z}_2^t, S)$ satisfies the pseudorandomness property encapsulated in Theorem~\\ref{thm:pseudo}. This is a simple corollary of the following more precise result.\n\n\\begin{theorem} \\label{lem:comb}\nSuppose that $S \\subseteq \\mathbb{Z}_2^t$ contains no non-trivial solutions to the equation $s_1 + s_2 = s'_1 + s'_2$ and the eigenvalues of the Cayley graph Cay$(\\mathbb{Z}_2^t, S)$ satisfy $|\\lambda_i| \\leq \\lambda$ for all $i = 2, \\dots, n$. 
Then, for any sets $A, B, C \\subseteq \\mathbb{Z}_2^t$ with $|A| = \\alpha n$, $|B| = \\beta n$ and $|C| = \\gamma n$, the number of triples $e(A, B, C)$ in $H(\\mathbb{Z}_2^t, S)$ with one vertex in each of $A, B$ and $C$ is\n\\[(d^3 - d^2) \\alpha \\beta \\gamma n \\pm 2 \\mu d \\sqrt{\\alpha \\beta} \\gamma n \\pm \\lambda d^2 \\sqrt{\\alpha \\beta \\gamma} n \\pm \\lambda \\sqrt{\\mu} d (\\alpha \\beta)^{1\/4} \\sqrt{\\gamma} n,\\]\nwhere $\\mu = \\frac{1}{2} (\\lambda^2 + d)$.\n\\end{theorem}\n\n\\begin{proof}\nSince Cay$(G, S')$ is $\\binom{d}{2}$-regular and, by Lemma~\\ref{lem:eig}, has all eigenvalues, except the largest, at most $\\mu = \\frac{1}{2}(\\lambda^2 + d)$ in absolute value, the number of pairs in the skeleton of $H(\\mathbb{Z}_2^t, S)$ that have one vertex in $A$ and one vertex in $B$ is, by the expander mixing lemma,\n\\begin{equation} \\label{eqn:comexp1} \n\\binom{d}{2} \\alpha \\beta n \\pm \\mu \\sqrt{\\alpha \\beta} n.\n\\end{equation}\nGiven this set of edges $E(A, B)$, let $W(A, B)$ be the multiset of corresponding centres, that is, $w$ appears in $W(A, B)$ once for each edge $e \\in E(A, B)$ in the induced subgraph of Cay$(G, S')$ on $C_w$. We claim that $|W(A, B)| = 2 |E(A, B)|$. To see this, write $e = (u, v)$ and note that if $e \\subset C_{w_1}, C_{w_2}$, then $u = w_1 + s_1$, $v = w_1 + s_2$, $u = w_2 + s'_1$ and $v = w_2 + s'_2$ for $s_1, s_2, s'_1, s'_2 \\in S$. This implies that $s_1 + s_2 = s'_1 + s'_2$, but since there are no non-trivial solutions to this equation, we must have $s'_1 = s_2$ and $s'_2 = s_1$. Therefore, $e$ is contained in at most two sets of the form $C_w$. To see that it is exactly two, note that $e \\subset C_{u + s_1}, C_{u + s_2}$. \n\nThe number of triples containing a pair from $E(A, B)$ and a vertex from $C$ is now the number of edges between $W := W(A, B)$ and $C$ in the graph $S$. Therefore, by Lemma~\\ref{lem:eml},\n\\begin{equation} \\label{eqn:comexp2}\ne(A, B, C) = e(W, C) = \\frac{d}{n} |W||C| \\pm \\lambda \\sqrt{\\sum_{x \\in W} w_x^2 |C|} .\n\\end{equation}\nSince $w_x \\leq \\binom{d}{2}$ for all $x$, we have \n$$\\sum_{x \\in W} w_x^2 \\leq \\binom{d}{2} \\sum_x w_x \\leq \\binom{d}{2} |W| \\leq d^2 |E(A,B)|.$$ \nSubstituting \\eqref{eqn:comexp1} into \\eqref{eqn:comexp2} using this fact yields the required result.\n\\end{proof}\n\nWe now prove our main theorem, Theorem~\\ref{thm:main}, beginning with the triple expansion property. We will make use of the following result of de Caen~\\cite{dC98} putting an upper bound on the sum of the squares of the degrees of a graph.\n\n\\begin{lemma} \\label{lem:deC}\nFor any graph with $n$ vertices, $m$ edges and vertex degrees $d_1, d_2, \\dots, d_n$, \n\\[\\sum_{i=1}^n d_i^2 \\leq m \\left(\\frac{2m}{n-1} + (n-2)\\right).\\]\n\\end{lemma}\n\n\\begin{theorem} \\label{thm:tripexp}\nSuppose that $S \\subseteq \\mathbb{Z}_2^t$ contains no non-trivial solutions to the equation $s_1 + s_2 = s'_1 + s'_2$ and $\\lambda(\\textrm{Cay}(\\mathbb{Z}_2^t, S)) \\leq (1 - \\epsilon)d$. Then, for all subsets $F$ of the skeleton $E$ of $H(\\mathbb{Z}_2^t, S)$ with $|F| \\leq |E|\/2$, $|N_T(F)| \\geq \\frac{\\epsilon^2}{64} d|F|$.\n\\end{theorem}\n\n\\begin{proof}\nFor each $x$, let $F_x = \\{e \\in F : e \\subset C_x\\}$. As in the proof of Lemma~\\ref{lem:comb}, the assumption that $S$ contains no non-trivial solutions to the equation $s_1 + s_2 = s'_1 + s'_2$ implies that each edge in $F$ is contained in precisely two sets $C_x$. Therefore, $\\sum_{x \\in \\mathbb{Z}_2^t} |F_x| = 2 |F|$. 
Suppose now that $X = \\{x \\in \\mathbb{Z}_2^t : |F_x| \\geq (1 - \\delta) \\binom{d}{2}\\}$, where $\\delta = \\frac{\\epsilon}{8}$. We claim that $\\sum_{x \\in X^c} |F_x| \\geq \\frac{\\epsilon}{4}|F|$.\n\nTo prove the claim, we may clearly assume that $\\sum_{x \\in X} |F_x| \\geq |F|$, for otherwise, \n$$\\sum_{x \\in X^c} |F_x| \\geq 2|F| - \\sum_{x \\in X} |F_x| \\geq |F|.$$ \nConsider now the number of edges $N(X, X^c)$ in Cay$(G, S')$ between $X$ and its complement $X^c$. By Lemma~\\ref{lem:eig} and the expander mixing lemma, this number is at least\n\\[\\binom{d}{2}\\frac{|X||X^c|}{n} - \\frac{1}{2} (\\lambda^2 - d) \\frac{|X||X^c|}{n} = \\frac{1}{2} (d^2 - \\lambda^2) \\frac{|X||X^c|}{n}\\]\nfor some $\\lambda \\leq (1 - \\epsilon)d$. Suppose now that $x \\in X$ and $x + s_1 + s_2 \\in X^c$. If $e = (x + s_1, x + s_2)$ is in $F$, we see that $e \\in F_{x + s_1 + s_2}$, contributing one to $\\sum_{x \\in X^c} |F_x|$. If it is not in $F$, it contributes one to $\\sum_{x \\in X} |F_x^c|$. Therefore,\n\\[\\sum_{x \\in X^c} |F_x| + \\sum_{x \\in X} |F_x^c| \\geq N(X, X^c)\\]\nand, since $|F_x^c| \\leq \\delta \\binom{d}{2}$ for each $x \\in X$,\n\\[\\sum_{x \\in X^c} |F_x| \\geq N(X, X^c) - \\delta \\binom{d}{2}|X| \\geq \\frac{1}{2} (d^2 - \\lambda^2) \\frac{|X||X^c|}{n} - \\delta \\binom{d}{2}|X|.\\]\nNote now, since $|F| \\leq \\frac{1}{2} |E| \\leq \\frac{1}{4} \\binom{d}{2} n$, that \n\\[|X| \\leq \\frac{2|F|}{(1 - \\delta) \\binom{d}{2}} \\leq \\frac{n}{2(1 - \\delta)} \\leq (1 + 2\\delta)\\frac{n}{2},\\]\nso $|X^c| \\geq (1 - 2\\delta)\\frac{n}{2}$. Therefore, since $\\lambda^2 \\leq (1 - \\epsilon) d^2$ and $\\delta = \\frac{\\epsilon}{8}$,\n\\begin{align*}\n\\sum_{x \\in X^c} |F_x| & \\geq \\frac{1}{2} (d^2 - \\lambda^2) \\frac{|X||X^c|}{n} - \\delta \\binom{d}{2}|X| \\\\\n& \\geq \\frac{1}{4} \\left((d^2 - \\lambda^2) (1 - 2\\delta) - 2 \\delta d^2\\right) |X| \\\\\n& \\geq \\frac{1}{4} \\left(\\epsilon (1 - 2 \\delta) - 2 \\delta\\right) d^2 |X| \\geq \\frac{\\epsilon}{8} d^2 |X|.\n\\end{align*}\nSince $\\binom{d}{2}|X| \\geq \\sum_{x \\in X}|F_x| \\geq |F|$, the required claim, that $\\sum_{x \\in X^c} |F_x| \\geq \\frac{\\epsilon}{4}|F|$, now follows.\n\nGiven $x \\in X^c$, let $N_T(F_x)$ denote the set of triples $t \\subset C_x$ such that $f \\subset t$ for some $f \\in F_x$ and $t \\notin \\Delta(F_x)$. Then\n\\[|N_T(F_x)| = \\frac{1}{2} \\sum_{y \\in C_x} d(y) (d - 1 - d(y)),\\]\nwhere $d(y)$ is the degree of $y$ in the graph whose edges are $F_x$. This is because $N_T(F_x)$ includes every triple which contains an edge of both $F_x$ and $E_x \\setminus F_x$ and the factor of $1\/2$ accounts for the fact that we will include any admissible triple twice in this count. Now, by Lemma~\\ref{lem:deC},\n\\[\\sum_{y \\in C_x} d^2(y) \\leq |F_x| \\left( \\frac{2 |F_x|}{d-1} + d - 2\\right),\\]\nso that, since $\\sum_{y \\in C_x} d(y) = 2|F_x|$,\n\\[|N_T(F_x)| \\geq (d-1) |F_x| - \\frac{|F_x|^2}{d-1} - \\frac{1}{2} (d-2) |F_x| = \\frac{d}{2} |F_x| - \\frac{|F_x|^2}{d-1}.\\]\nTherefore, since $|F_x| \\leq (1 - \\delta) \\binom{d}{2}$ for $x \\in X^c$, $|N_T(F_x)| \\geq \\frac{\\delta}{2} d |F_x|$. 
Since no triple appears in more than one $C_x$, it follows that\n\\[|N_T(F)| \\geq \\sum_{x \\in X^c} |N_T(F_x)| \\geq \\frac{\\delta}{2} d \\sum_{x \\in X^c} |F_x| \\geq \\frac{\\epsilon^2}{64} d |F|,\\]\nas required.\n\\end{proof}\n\nThe edge expansion property from Theorem~\\ref{thm:main} now follows as a simple corollary.\n\n\\begin{corollary} \\label{cor:edgeexp}\nSuppose that $S \\subseteq \\mathbb{Z}_2^t$ contains no non-trivial solutions to the equation $s_1 + s_2 = s'_1 + s'_2$ and $\\lambda(\\textrm{Cay}(\\mathbb{Z}_2^t, S)) \\leq (1 - \\epsilon)d$. Then, for all subsets $F$ of the skeleton $E$ of $H(\\mathbb{Z}_2^t, S)$ with $|F| \\leq |E|\/2$, $|N_E(F)| \\geq \\frac{\\epsilon^2}{128} |F|$.\n\\end{corollary}\n\n\\begin{proof}\nBy Theorem~\\ref{thm:tripexp}, there are at least $\\frac{\\epsilon^2}{64} d|F|$ triples which contain an edge from both $F$ and $E \\setminus F$. Since each edge in $E \\setminus F$ is contained in at most $2d$ of these triples, the result follows by division.\n\\end{proof}\n\nWe will prove Corollary~\\ref{cor:mixing} on the rapid mixing of the random walk on the edges of $H(\\mathbb{Z}_2^t, S)$ by constructing an auxiliary graph $G$ and then appealing to the following result~\\cite[Theorem 3.3]{HLW06}.\n\n\\begin{lemma} \\label{lem:mix}\nLet $G$ be an $N$-vertex $D$-regular graph with $\\lambda = \\max_{i \\neq 1} |\\lambda_i(G)|$ and $\\mathbf{p}_0$ a probability distribution on $V(G)$. Then the random walk on $G$ starting from the initial distribution $\\mathbf{p}_0$ satisfies\n\\[\\|\\mathbf{p}_i - \\mathbf{u}\\|_2 \\leq \\left(\\frac{\\lambda}{D}\\right)^i,\\]\nwhere $\\mathbf{p}_i$ is the probability distribution on $V(G)$ after $i$ steps of the walk, $\\mathbf{u}$ is the uniform distribution on $V(G)$ and $\\|\\mathbf{x}\\|_2 = (\\sum_i x_i^2)^{1\/2}$ for $\\mathbf{x} = (x_1, x_2, \\dots, x_N) \\in \\mathbb{R}^N$.\n\\end{lemma}\n\nTo apply this lemma, we need to estimate $\\lambda$. Recall that the \\emph{edge expansion ratio} of a graph $G$ is \n\\[h(G) = \\min_{\\{U : |U| \\leq |V|\/2\\}} \\frac{e(U, U^c)}{|U|}.\\]\nThe following discrete analogue of the Cheeger inequality, due to Dodziuk~\\cite{D84} and Alon and Milman~\\cite{AM85}, places an upper bound on the second eigenvalue $\\lambda_2$ of a graph $G$ in terms of its edge expansion ratio.\n\n\\begin{lemma} \\label{lem:Cheeger}\nIf $G$ is an $N$-vertex $D$-regular graph with eigenvalues $\\lambda_1 \\geq \\dots \\geq \\lambda_N$, then\n\\[\\lambda_2 \\leq D - \\frac{h(G)^2}{2D}.\\]\n\\end{lemma}\n\nTo estimate $\\lambda_N$, we use a result of Desai and Roy~\\cite{DR94}. To state their result, for any subset $U$ of the vertex set of a graph $G$, we define $b(U)$ to be the minimum number of edges that need to be removed from the induced subgraph $G[U]$ to make the graph bipartite.\n\n\\begin{lemma} \\label{lem:DR}\nIf $G$ is an $N$-vertex $D$-regular graph with eigenvalues $\\lambda_1 \\geq \\dots \\geq \\lambda_N$, then\n\\[\\lambda_N \\geq -D + \\frac{\\Psi^2}{4D},\\]\nwhere\n\\[\\Psi = \\min_{U \\neq \\emptyset} \\frac{b(U) + e(U, U^c)}{|U|}.\\]\n\\end{lemma}\n\n{\\bf Proof of Corollary~\\ref{cor:mixing}:}\nConsider the auxiliary graph $G$ whose vertices $V$ are the edges of the skeleton $E$ and where two vertices are joined if the union of the corresponding edges $e_1, e_2 \\in E$ is in $T$. Note that $G$ is an $N$-vertex $D$-regular with $N = \\frac{1}{2} \\binom{d}{2} n$ and $D = 2d-4$. 
The random walk on $G$ is in one-to-one correspondence with the random walk on the original hypergraph $H(\\mathbb{Z}_2^t, S)$ and the fact that $H$ is an $\\frac{\\epsilon^2}{64}d$-triple-expander easily implies that for any $U \\subseteq V$ with $|U| \\leq |V|\/2$ the number of edges between $U$ and $U^c$ is at least $\\frac{\\epsilon^2}{64}d |U|$. Therefore, $h(G) \\geq \\frac{\\epsilon^2}{64} d \\geq \\frac{\\epsilon^2}{128}D$. By Lemma~\\ref{lem:Cheeger}, this implies that \n\\[\\lambda_2(G) \\leq D - \\frac{h(G)^2}{2D} \\leq \\left(1 - \\frac{\\epsilon^4}{2^{15}}\\right)D.\\]\nTo estimate $\\Psi$, we split into two cases. If $|U| < \\frac{15}{16}n$, we use the fact that \n\\[e(U, U^c) \\geq \\frac{\\epsilon^2}{64}d \\min(|U|, |U^c|) \\geq \\frac{\\epsilon^2}{2^{10}} d |U|\\] \nto conclude that $e(U, U^c)\/|U| \\geq \\frac{\\epsilon^2}{2^{10}} d$. On the other hand, if $|U| \\geq \\frac{15}{16} N$, the corresponding edge set $F$ in the skeleton $E$ of $H$ has at least $\\frac{3}{4} \\binom{d}{2}$ edges in at least $\\frac{3}{4} n$ of the sets $C_x$. By supersaturation, there exists a constant $c > 0$ such that each $C_x$ with at least $\\frac{3}{4} \\binom{d}{2}$ edges has at least $cd^3$ triangles with all edges in $F$. As there are at least $\\frac{3}{4} n$ sets $C_x$ with this property, $F$ contains at least $\\frac{3}{4} c d^3 n$ triangles, which in turn implies that $G[U]$ contains at least $\\frac{3}{4} c d^3 n$ triangles. Since $G$ is an edge-disjoint union of triangles, the number of edges which must be removed to make $G[U]$ bipartite is at least the number of triangles in $G[U]$. That is, $b(U)$ is at least $\\frac{3}{4} c d^3 n$, so $b(U)\/|U| \\geq c d$. In either case, we see that $\\Psi = \\Omega(\\epsilon^2 D)$ and, hence, by Lemma~\\ref{lem:DR}, there is a positive constant $c$, which we may assume is at most $2^{-15}$, such that \n\\[\\lambda_N(G) \\geq -D + \\frac{\\Psi^2}{4D} \\geq (-1 + c \\epsilon^4) D.\\]\nPutting everything together, we see that $\\lambda = \\max(|\\lambda_2(G)|, |\\lambda_N(G)|) \\leq (1 - c \\epsilon^4) D$. Therefore, applying Lemma~\\ref{lem:mix},\n\\[\\|\\mathbf{p}_i - \\mathbf{u}\\|_2 \\leq \\left(\\frac{\\lambda}{D}\\right)^i \\leq (1 - c \\epsilon^4)^i,\\]\nas required.\n\\ifvmode\\mbox{ }\\else\\unskip\\fi\\hskip 1em plus 10fill$\\Box$\n\n\\section{Further remarks}\n\n{\\bf Generalised constructions.}\n\nFor simplicity, we have worked throughout with the group $\\mathbb{Z}_2^t$. However, a similar construction works over any abelian group $G$. Indeed, given a subset $S$ of $G$, we can let $H(G, S)$ be the $3$-uniform hypergraph with vertex set $G$ and edge set consisting of all triples of the form $(x + s_1, x+ s_2, x+ s_3)$, where $x \\in G$ and $s_1, s_2, s_3 \\in S \\cup (-S)$ with $s_i \\neq \\pm s_j$ for $i \\neq j$. Alternatively, $H$ is the $3$-uniform hypergraph on the same vertex set whose triples are the triangles of Cay$(G, S')$, where\n\\[S' = \\{s_1 + s_2 : s_1, s_2 \\in S \\cup (-S), s_1 \\neq \\pm s_2\\}.\\]\nIt is worth noting that we omit edges of the form $(x, x + s + s)$, where they exist, since they will typically only be contained in one set of the form $C_x = \\{x + s: s \\in S \\cup (-S)\\}$. Over $\\mathbb{Z}_2^t$, such edges do not exist, so this issue does not arise.\n\nIt is also possible to define a variant of our construction by using longer sums. 
For instance, given a subset $S$ of $\\mathbb{Z}_2^t$, we may let\n\\[S' = \\{s_1 + \\dots + s_{2\\ell} : s_i \\in S, s_i \\neq s_j\\}\\]\nand take $H$ to again be the $3$-uniform hypergraph whose triples are the triangles of Cay$(\\mathbb{Z}_2^t, S')$. However, this generalised construction seems to have few tangible benefits over the $\\ell = 1$ case, so we have not pursued it further.\n\n\\vspace{3mm}\n\n{\\bf Vertex expansion.}\n\nGiven a subset $F$ of the skeleton $E$ of a hypergraph $H$, one may also define its \\emph{vertex neighbourhood}\n\\[N_V(F) = \\{v \\in V(H) : v \\cup f \\in T \\textrm{ for some } f \\in F \\textrm{ and } v \\notin V(F)\\},\\]\nwhere $V(F)$ is the set of vertices of $H$ which are contained in some edge of $F$. Assuming $H$ has $n$ vertices, we then let\n\\[h_V(H) = \\min_{\\{F: |V(F)| \\leq n\/2\\}} \\frac{|N_V(F)|}{|V(F)|}\\]\nand say that $H$ is an \\emph{$\\epsilon$-vertex-expander} if $h_V(H) \\geq \\epsilon$. One might now ask if our construction $H(\\mathbb{Z}_2^t, S)$ has this vertex expansion property for a random choice of $S$. This problem seems surprisingly delicate and we were unable to decide in which direction the truth lies. A positive solution would be of substantial interest and is likely to facilitate applications to extremal combinatorics, such as to the determination of the size Ramsey number of tight paths (see~\\cite{DLMR17} for the current status of this problem). It would also be of great interest to find alternative constructions, preferably of bounded degree, with this vertex expansion property. \n\n\\vspace{3mm}\n\n{\\bf Coboundary and cosystolic expansion.}\n\nThe progress by Evra, Kaufman, Kazhdan and Lubotzky~\\cite{EK17, KKL16} on constructing bounded-degree hypergraphs with the topological overlap property stems from a connection to another, more combinatorial, expansion property known as coboundary expansion~\\cite{DK12, LM06}. We will not attempt to describe this property here, but suffice to say that coboundary expansion and a slightly weaker notion known as cosystolic expansion are both known to imply the topological overlap property~\\cite{DKW16, G10}. The papers~\\cite{EK17, KKL16} (and the related work in~\\cite{LLR15, LM15}) then proceed by showing that the constructions under consideration are cosystolic expanders, from which the desired topological overlap property follows.\n\nIn assessing whether our construction gives cosystolic expanders, it is tempting to appeal to a criterion established by Evra and Kaufman~\\cite{EK17}. In the $3$-uniform case, this roughly says that if a hypergraph $H$ has the property that the skeleton graph and the link of each vertex are good expander graphs, then $H$ is a cosystolic expander. Unfortunately, this result does not apply in our situation, since the links in our construction are not good expanders. Nevertheless, we are still willing to conjecture that our construction yields cosystolic, and perhaps even coboundary, expanders.\n\n\\vspace{3mm}\n\n{\\bf Higher uniformities.}\n\nThe construction given in this paper does not seem to generalise to higher uniformities. To see why, recall that, given a set $S$ and $x \\in \\mathbb{Z}_2^t$, we define $C_x = \\{x + s : s \\in S\\}$. The principal reason our $3$-uniform construction goes through is that any edge $(x+s_1, x+s_2)$ in $C_x$ also appears in $C_{x+s_1+s_2}$ as $((x+s_1+s_2) + s_2, (x+s_1+s_2) + s_1)$. 
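Indeed, since every element of $\\mathbb{Z}_2^t$ is its own inverse, setting $y = x+s_1+s_2$ gives\n\\[\ny + s_2 = x + s_1 + (s_2 + s_2) = x+s_1, \\qquad y + s_1 = x+s_2,\n\\]\nso the pair $(x+s_1, x+s_2)$ does lie in $C_y$, and the assumption that $S$ contains no non-trivial solutions to $s_1 + s_2 = s'_1 + s'_2$ rules out any third set $C_z$ containing it, as was used repeatedly in the proofs above. 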
The natural analogue of this observation in the $4$-uniform case is to consider a triple $(x+s_1, x+s_2, x+s_3)$ in $C_x$ and to note that this triple can be rewritten as $((x+s_1+s_2 + s_3) + s_2 + s_3, (x+s_1+s_2 + s_3) + s_3 + s_1, (x+s_1+s_2 + s_3) + s_1 + s_2)$. Therefore, if $s_i + s_j$ is in $S$ for all $i \\neq j$, we see that the triple $(x+s_1, x+s_2, x+s_3)$ is also in $C_{x + s_1 + s_2 + s_3}$. However, the requirement that $s + s'$ is in $S$ for any distinct $s, s' \\in S$ is a very strong one, implying that $S$ contains every non-zero element in its span. Since $S$ needs to span all of $\\mathbb{Z}_2^t$ for Cay$(\\mathbb{Z}_2^t, S)$ to even be connected, it would need to contain all non-zero elements of $\\mathbb{Z}_2^t$. The construction would then reduce to taking the complete $4$-uniform hypergraph on $\\mathbb{Z}_2^t$. However, despite the failure of this particular mechanism, it remains a very interesting problem to find simple constructions of sparse expanders in higher uniformities.\n\n\\vspace{5mm}\n\\noindent\n{\\bf Acknowledgements.} The author gratefully acknowledges the support of the Simons Institute for the Theory of Computing during part of the period when this paper was written. The author is also indebted to Noga Alon, who brought the problem of constructing high-dimensional expanders to his attention, and to Rajko Nenadov for several valuable discussions.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\n\nGeometric Langlands correspondence was proposed by V.~G.~Drinfeld [1] as a geometric analog\nof the Langlands conjecture relating Galois representations of a global field with\nrepresentations of adelic algebraic group. In the geometric Langlands correspondence\nthe global field is replaced by the field of functions on a complete nonsingular\nalgebraic curve $C$ defined over the field of complex numbers $\\mathbb C$; Galois representations\nare replaced by local systems on the curve $C$;\nfinally, representations of adelic algebraic group\nare replaced by $\\mathcal D$-modules (or constructible sheaves) on the moduli space of bundles on $C$.\nThus, the conjectural geometric Langlands correspondence is a correspondence between\n(certain) $G^\\vee$-local systems on $C$ (here $G^\\vee$ is a semisimple algebraic group over\n$\\mathbb C$) and between (certain) $\\mathcal D$-modules on the moduli space $\\mathcal Bun_G$ of principal $G$-bundles\non $C$ (here $G$ is the group Langlands dual to $G^\\vee$).\n\nThe goal of this paper is to state a conjecture on deformation of the geometric Langlands\ncorrespondence, and to relate it with constructions of algebraic conformal field theory [2].\nThis conjecture, originally proposed also by V.~G.~Drinfeld, looks more simple and\nsymmetric than the conjecture on the geometric Langlands correspondence itself. Its\nrelation with conformal field theory found by the author is a refinement and an argument\nin favour of these conjectures, because it unifies many notions into a self-consistent picture.\n\nFor a more detailed exposition of the material of this paper, see [3].\n\nThe author is deeply grateful to B.~L.~Feigin and E.~V.~Frenkel; the main constructions of the\npaper arose from discussions with them. 
The author is also grateful to \nA.~A.~Beilinson and V.~G.~Drinfeld for permission to cite their \nunpublished conjectures.\n\n\n\n\\section{The main conjectures}\n\n\\subsection{} Let $G$ be a simple algebraic group over $\\mathbb C$, let $C$ be a complete smooth\nalgebraic curve of genus $g$, and let $\\mathcal Bun_G$ be the moduli space of principal $G$-bundles\non the curve $C$. It is known [3] that for any affine algebraic group $A$ the moduli space\n$\\mathcal Bun_A$ of principal $A$-bundles on $C$ is a smooth algebraic stack [4]. Moreover, this\nstack can be covered by open substacks of the form $X_n\/G_n$, where $X_n$ is a smooth\nalgebraic variety, and $G_n$ is an affine algebraic group acting on $X_n$. Hence various\nobjects (functions, $\\mathcal D$-modules, sheaves, etc.) on the stack $\\mathcal Bun_A$ can be\nobtained by glueing corresponding $G_n$-equivariant objects on the varieties $X_n$.\n\nDenote by $G^\\vee$ the group Langlands dual to $G$. For a principal $G^\\vee$-bundle\n$P^\\vee$ on the curve $C$, the space of (algebraic) connections on $P^\\vee$ is an affine space,\nwhose associated vector space is the cotangent space\n$$\nT_{P^\\vee}^*\\mathcal Bun_{G^\\vee}\\simeq\\Gamma(C,\\Omega^1_C\\otimes\\mathop{\\rm ad}\\nolimits P^\\vee)\n$$\nto the stack $\\mathcal Bun_{G^\\vee}$ at the point $P^\\vee$.\nHence we call the moduli space of $G^\\vee$-local systems on $C$,\ni.~e. principal $G^\\vee$-bundles with a connection, by the {\\it twisted cotangent bundle}\nto the stack $\\mathcal Bun_{G^\\vee}$, and denote it by $\\widetilde T^*\\mathcal Bun_{G^\\vee}$. It is known [3]\nthat the cocycle of the affine bundle $\\widetilde T^*\\mathcal Bun_{G^\\vee}$ is obtained from the\ncocycle of the canonical line bundle $\\omega_{\\mathcal Bun_{G^\\vee}}$ on the stack $\\mathcal Bun_{G^\\vee}$\nvia the homomorphism\n$$\nd\\log:H^1(\\mathcal Bun_{G^\\vee}, \\mathcal O^*_{\\mathcal Bun_{G^\\vee}})\\to H^1(\\mathcal Bun_{G^\\vee}, \n\\Omega^1_{\\mathcal Bun_{G^\\vee}}).\n$$\n\n\n\\subsection{} {\\bf Conjecture 1.} [6] The derived category of $\\mathcal D_{\\mathcal Bun_G}$-modules\nis equivalent to the derived category of quasicoherent\n$\\mathcal O_{\\widetilde T^*\\mathcal Bun_{G^\\vee}}$-modules.\n\nThis conjecture (as well as definition of the derived categories [5]) is due to Beilinson\nand Drinfeld. They refine it in the following way. The required equivalence of derived\ncategories should be given by the ``kernel'' $\\mathcal L_0$, an object of derived \ncategory of\n$\\mathcal D_{\\mathcal Bun_G}\\boxtimes\\mathcal O_{\\widetilde T^*\\mathcal Bun_{G^\\vee}}$-modules. The \nrestriction of this object\nto the open substack $\\widetilde T^*\\mathcal Bun_{G^\\vee}^\\circ$ of irreducible $G^\\vee$-local\nsystems should be a $\\mathcal D_{\\mathcal Bun_G}\\boxtimes\\mathcal O_{\\widetilde \nT^*\\mathcal Bun_{G^\\vee}^\\circ}$-module,\nflat as an $\\mathcal O_{\\widetilde T^*\\mathcal Bun_{G^\\vee}^\\circ}$-module, whose fiber \nover the local\nsystem\n$$\n\\P^\\vee=(P^\\vee,\\nabla)\\in\\widetilde T^*\\mathcal Bun_{G^\\vee}^\\circ\n$$\nshould be the unique, up to isomorphism, holonomic $\\mathcal D_{\\mathcal Bun_G}$-module $\\mathcal F_{\\P^\\vee}$,\nwhich is a Hecke eigen-$\\mathcal D_{\\mathcal Bun_G}$-module with eigenvalue $\\P^\\vee$. (For a definition of\nthis notion, see [5].) 
The correspondence $\\P^\\vee\\to\\mathcal F_{\\P^\\vee}$ is called the geometric\nLanglands correspondence.\n\n\\subsection{} Let us now proceed to the conjecture on deformation of the geometric\nLanglands correspondence. Denote\n$$\n\\xi=\\omega_{\\mathcal Bun_G}^{\\otimes\\left(-\\frac1{2h^\\vee}\\right)}\\in\\mathop{\\rm Pic}\\nolimits\\mathcal Bun_G\\otimes_\\mathbb Z\\mathbb Q\n$$\n($\\mathop{\\rm Pic}\\nolimits$ is the Picard group), where $h^\\vee$ is the dual Coxeter number of the group $G$.\nIn the case when $G$ is simply connected, it is known [5] that $\\xi$ is the positive generator\nof the group $\\mathop{\\rm Pic}\\nolimits\\mathcal Bun_G\\simeq\\mathbb Z$. Similarly, introduce the element\n$$\n\\xi^\\vee=\\omega_{\\mathcal Bun_{G^\\vee}}^{\\otimes\\left(-\\frac1{2h}\\right)}\\in\\mathop{\\rm Pic}\\nolimits\\mathcal Bun_{G^\\vee}\n\\otimes_\\mathbb Z\\mathbb Q.\n$$\n\n{\\bf Conjecture 2.} [6] The derived category of twisted\n$\\mathcal D_{\\mathcal Bun_G}(\\xi^{\\otimes\\kappa})$-modules is equivalent to the derived category of\ntwisted $\\mathcal D_{\\mathcal Bun_{G^\\vee}}(\\xi^{\\vee\\otimes\\kappa^\\vee})$-modules\nfor any $\\kappa\\in\\mathbb C$, $\\kappa\\ne0$, where $\\kappa^\\vee=1\/(r\\kappa)$, and $r$ is the maximal\nmultiplicity of an edge in the Dynkin diagram of the group $G$ (or $G^\\vee$);\n$r=1$, $2$, or $3$.\n\nThe idea of this conjecture is due to V.~G.~Drinfeld. This conjecture also has several\nrefinements. We are going to state them in the rest of this paper.\n\nThe required equivalence of derived categories should be given by a kernel\n$\\mathcal L_\\kappa$ which is an object of derived category of\n$\\mathcal D_{\\mathcal Bun_G}(\\xi^{\\otimes\\kappa})\n\\boxtimes\\mathcal D_{\\mathcal Bun_{G^\\vee}}(\\xi^{\\vee\\otimes\\kappa^\\vee})$-modules. All refinements of\nConjecture~2 stated below deal with the properties of this kernel.\n\n\\subsection{} {\\bf Property 1. Dependence on the parameter $\\kappa$; classical limits.}\n\nLet us first define the notion of asymptotic twisted $\\mathcal D$-module, or\n$\\mathcal D_X^{\\mathop{\\rm asym}\\nolimits}(\\xi^{\\otimes t})$-module, on a smooth variety $X$ with a line bundle $\\xi$.\nThis notion is introduced in [3]; let us briefly discuss it.\n\nThere exists a natural sheaf $\\mathcal D_X^{\\mathop{\\rm asym}\\nolimits}(\\xi^{\\otimes t})$ of quasicoherent\n$\\mathcal O_{\\mathbb P^1}$-algebras on the product $\\mathbb P^1\\times X$, flat as an \n$\\mathcal O_{\\mathbb P^1}$-module,\nwhose fiber at the point $\\kappa\\in\\mathbb P^1$, $\\kappa\\ne\\infty$, is isomorphic to the sheaf\n$\\mathcal D_X(\\xi^{\\otimes\\kappa})$ of twisted differential operators, and the fiber over the point\n$\\infty\\in\\mathbb P^1$ is isomorphic to the sheaf $\\pi_*\\mathcal O_{\\widetilde T^*X}$, \nwhere\n$\\pi:\\widetilde T^*X\\to X$ is the twisted cotangent affine bundle over $X$, whose\ncocycle coresponds to the cocycle of the bundle $\\xi$ under the homomorphism\n$$\nd\\log:H^1(X,\\mathcal O_X^*)\\to H^1(X,\\Omega^1_X).\n$$\nSections of the sheaf $\\mathcal D_X^{\\mathop{\\rm asym}\\nolimits}(\\xi^{\\otimes t})$ are called twisted asymptotic\ndifferential operators, cf. [7]. 
On a sufficiently small open subset $U\\subset X$,\non which the bundle $\\xi$ is trivial, we have\n$$\n\\Gamma((\\mathbb P^1\\setminus\\{0\\})\\times U, \\mathcal D_X^{\\mathop{\\rm asym}\\nolimits}(\\xi^{\\otimes t}))\\simeq\n\\mathcal O_U[t^{-1},t^{-1}\\partial_1,\\ldots,t^{-1}\\partial_n],\n$$\nwhere $\\partial_1$, $\\ldots$, $\\partial_n$ is a basis of vector fields on $U$, $t$ is the\nparameter on the line $\\mathbb P^1$.\n\nBy definition, a $\\mathcal D_X^{\\mathop{\\rm asym}\\nolimits}(\\xi^{\\otimes t})$-module is a sheaf of\n$\\mathcal D_X^{\\mathop{\\rm asym}\\nolimits}(\\xi^{\\otimes t})$-modules on the product $\\mathbb P^1\\times X$ quasicoherent as an\n$\\mathcal O_{\\mathbb P^1\\times X}$-module.\n\nLet us return to the kernel $\\mathcal L_\\kappa$. The property of dependence of \n$\\mathcal L_\\kappa$ on the\nparameter $\\kappa$ states that the objects $\\mathcal L_0$, $\\mathcal L_\\kappa$ are the \nfibers of an object\n$\\mathcal L_t$ of the derived category of\n$\\mathcal D_{\\mathcal Bun_G}^{\\mathop{\\rm asym}\\nolimits}(\\xi^{\\otimes t})\n\\boxtimes_{\\mathcal O_{\\mathbb P^1}}\\mathcal D_{\\mathcal Bun_{G^\\vee}}^{\\mathop{\\rm asym}\\nolimits}(\\xi^{\\vee\\otimes \nt^\\vee})$-modules\non the product $\\mathbb P^1\\times\\mathcal Bun_G\\times\\mathcal Bun_{G^\\vee}$, flat as an \n$\\mathcal O_{\\mathbb P^1}$-module.\nHere $t^\\vee=1\/(rt)$. The fiber of this object at $t=\\kappa$ is the object \n$\\mathcal L_\\kappa$,\nthe fiber at $t=0$ is the object $\\mathcal L_0$ which is the kernel of the \ngeometric Langlands\ncorrespondence defined in 1.2 above, and the fiber at $t=\\infty$ is the \nobject $\\mathcal L_\\infty$\nwhich is the kernel of the geometric Langlands correspondence for the group $G^\\vee$, i.~e.,\nthe object $\\mathcal L_\\infty$ is obtained from $\\mathcal L_0$ by exchanging the roles \nof the groups $G$ and\n$G^\\vee$.\n\n\\subsection{} {\\bf Property 2: singular support of the kernel \n$\\mathcal L_\\kappa$}.\nOne has the Hitchin map [5]\n$$\n\\chi_G: T^*\\mathcal Bun_G\\to\\oplus_{i=1}^{\\mathop{\\rm rk}\\nolimits G}\\Gamma(C, \\omega_C^{\\otimes d_i}),\n$$\nwhere $d_i$ are the exponents of the group $G$, $\\mathop{\\rm rk}\\nolimits G$ is the rank of $G$. After restriction\nto certain open dense subset $U\\subset\\mathcal Bun_G\\times\\mathcal Bun_{G^\\vee}$, the \nkernel $\\mathcal L_\\kappa$\nshould be a coherent $\\mathcal D_{\\mathcal Bun_G}(\\xi^{\\otimes\\kappa})\n\\boxtimes\\mathcal D_{\\mathcal Bun_{G^\\vee}}(\\xi^{\\vee\\otimes\\kappa^\\vee})$-module whose singular support\ncoincides with the preimage of the diagonal\n$$\n\\Delta_\\kappa=\\{v_i,v_i^\\vee\\in\\Gamma(C,\\omega_C^{\\otimes d_i}):\nv_i^\\vee=\\kappa^{d_i}v_i\\}_{i=1}^{\\mathop{\\rm rk}\\nolimits G}\n$$\nunder the product of Hitchin maps\n$$\n\\chi_G\\times\\chi_{G^\\vee}: T^*\\mathcal Bun_G\\times T^*\\mathcal Bun_{G^\\vee}\n\\to\\left(\\oplus_{i=1}^{\\mathop{\\rm rk}\\nolimits G}\\Gamma(C, \\omega_C^{\\otimes d_i})\\right)^{\\oplus2}.\n$$\nThe classical limit of this property as $\\kappa\\to0$ yields the conjecture that\nthe singular support of $\\mathcal D_{\\mathcal Bun_G}$-modules $\\mathcal F_{\\P^\\vee}$ from the geometric Langlands\ncorrespondence is contained in the global nilpotent cone, which is the preimage of zero\nunder the Hitchin map.\n\n\n\\section{Relation with conformal field theory}\n\n\\subsection{} For simplicity assume in this Section that the group $G$ is adjoint, and the\ngroup $G^\\vee$ is simply connected. 
For the general case, see [3].\n\nLet us fix a Borel subgroup $B\\subset G$ with the unipotent radical $N$; let $H=B\/N$ be the\nCartan group. Consider the diagram\n$$\n\\leqno{(*)}\n\\xymatrix{\n & \\mathcal Bun_B^{>0}\\ar[dl]_{\\sigma}\\ar[dr]^{\\rho} & & & \\\\\n\\mathcal Bun_G & & \\mathcal Bun^{>0}_{B\/[N,N]}\\ar[dr]^{\\beta} & & \n\\mathcal Bun_{\\omega,H}\\stackrel{\\iota}{\\hookleftarrow}\\mathcal Conf_{G^\\vee}\\ar[dl]_{\\alpha} \n\\\\\n & & & \\mathcal Bun_H^{>0} & \\\\\n}\n$$\nThe only thing to be explained in this diagram is what are the spaces $\\mathcal Bun_{\\omega,H}$,\n$\\mathcal Conf_{G^\\vee}$, and what does the sign $>0$ mean. To explain this, note that\n$$\nB\/[N,N]\\simeq\\prod_{i=1}^{\\mathop{\\rm rk}\\nolimits G}B_i,\n$$\nwhere $B_i$ is a copy of the upper triangular Borel subgroup in $PGL(2)$. Hence\n$\\mathcal Bun_{B\/[N,N]}$ is identified with the moduli space of sets of exact triples\n$$\n\\leqno{(**)}\\qquad\\qquad\\qquad\\qquad 0\\to\\mathcal O_C\\to E_i\\to L_i\\to0,\n$$\nwhere $L_i$ is a line bundle on the curve $C$, $1\\le i\\le\\mathop{\\rm rk}\\nolimits G$. The projection\n$\\mathcal Bun_{B\/[N,N]}\\to\\mathcal Bun_H$ takes a set of triples $(**)$ to the set of line bundles $(L_i)\\in\\mathcal Bun_H$.\nBy definition,\n$$\n\\mathcal Bun_H^{>0}=\\{(L_i), \\deg L_i>0, 1\\le i\\le\\mathop{\\rm rk}\\nolimits G\\};\n$$\n$\\mathcal Bun_{B\/[N,N]}^{>0}$ and $\\mathcal Bun_B^{>0}$ are the preimages of $\\mathcal Bun_H^{>0}$ under the natural\nprojections. The projection\n$$\n\\beta:\\mathcal Bun_{B\/[N,N]}^{>0}\\to\\mathcal Bun_H^{>0}\n$$\nis a vector bundle whose fiber over a point $(L_i)\\in\\mathcal Bun_H^{>0}$ is the vector space\n$$\n\\oplus\\mathop{\\rm Ext}\\nolimits_C^1(L_i,\\mathcal O)\\simeq\\oplus H^1(C,L_i^{-1}).\n$$\nBy definition, $Bun_{\\omega,H}$ is the vector bundle dual to the vector bundle $\\beta$, and\n$\\mathcal Conf_{G^\\vee}$ is an open substack of this vector bundle defined as follows:\n$$\n\\begin{aligned}{}\n\\mathcal Bun_{\\omega,H}&=\\{(L_i,s_i),\\deg L_i>0,s_i\\in\\Gamma(C,\\omega_C\\otimes L_i)\\},\\\\\n\\mathcal Conf_{G^\\vee}&=\\{(L_i,s_i),s_i\\ne0\\}\\simeq\\{D_i: \\deg D_i>2g-2\\},\n\\end{aligned}\n$$\ni.~e., $\\mathcal Conf_{G^\\vee}$ is isomorphic to the space of sets of effective divisors $(D_i)$\non the curve $C$, i.~e. to the space of divisors with values in the semigroup $\\Gamma_+$\nof dominant weights of the group $G^\\vee$. Let us call these sets of divisors by\n{\\it coloured divisors}. As a variety $\\mathcal Conf_{G^\\vee}$ is the disjoint union of products of\nsymmetric powers of the curve $C$. The fact that an open dense substack of a\nvector bundle is a disjoint union of projective varieties, is due to the fact that each\npoint of the stack $\\mathcal Bun_H$ has the group of automorphisms $(\\mathbb C^*)^{\\mathop{\\rm rk}\\nolimits G}$.\n\nThe space $\\mathcal Conf_{G^\\vee}$ is naturally stratified:\n$$\n\\mathcal Conf_{G^\\vee}=\\sqcup_{\\lambda^\\vee}\\mathcal Conf_{G^\\vee}^{\\lambda^\\vee},\n$$\nwhere $\\lambda^\\vee=(\\lambda^\\vee_1,\\ldots,\\lambda^\\vee_N)$, $\\lambda^\\vee_i\\in\\Gamma_+$, and\n$$\n\\mathcal Conf_{G^\\vee}^{\\lambda^\\vee}=\\{\\lambda_1^\\vee x_1+\\ldots+\\lambda_N^\\vee x_N, x_i\\in C,\nx_i\\ne x_j\\text{ for }i\\ne j\\}\/\\mathfrak G_{\\lambda^\\vee},\n$$\nwhere $\\mathfrak G_{\\lambda^\\vee}$ is the group of permutations of indices $i$ preserving the weights\n$\\lambda_i^\\vee$. 
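For instance, in the simplest case $G = PGL(2)$, so that $G^\\vee = SL(2)$ and $\\mathop{\\rm rk}\\nolimits G = 1$, dominant weights of $G^\\vee$ are identified with non-negative integers, a coloured divisor is just an effective divisor $D$ on $C$ with $\\deg D > 2g-2$, and\n\\[\n\\mathcal Conf_{G^\\vee}\\simeq\\sqcup_{d>2g-2}\\mathop{\\rm Sym}\\nolimits^d C,\n\\]\nthe stratum $\\mathcal Conf_{G^\\vee}^{\\lambda^\\vee}$, $\\lambda^\\vee=(\\lambda^\\vee_1,\\ldots,\\lambda^\\vee_N)$, consisting of the divisors $\\lambda^\\vee_1 x_1+\\ldots+\\lambda^\\vee_N x_N$ with pairwise distinct points $x_i$. 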
Denote the inclusion\n$\\mathcal Conf_{G^\\vee}^{\\lambda^\\vee}\\hookrightarrow\\mathcal Conf_{G^\\vee}$ by $j_{\\lambda^\\vee}$.\n\n\\subsection{} {\\bf Property 3 of the kernel $\\mathcal L_\\kappa$.} This property \ndescribes the object of\nderived category of twisted $\\mathcal D_{\\mathcal Conf_{G^\\vee}^{\\lambda^\\vee}\\times\\mathcal Bun_{G^\\vee}}$-modules\n$$\n\\leqno(***)\\qquad\\qquad\\qquad\\qquad \nj_{\\lambda^\\vee}^!\\iota^!F\\rho_*\\sigma^!\\mathcal L_\\kappa,\n$$\nwhere all the spaces in the diagram $(*)$ are multiplied by $\\mathcal Bun_{G^\\vee}$; $F$ denotes the\nFourier--Laplace transform of a $\\mathcal D$-module on the vector bundle $\\mathcal Bun_{B\/[N,N]}^{>0}$.\nThe object $(***)$ should be isomorphic to the twisted\n$\\mathcal D_{\\mathcal Conf_{G^\\vee}^{\\lambda^\\vee}\\times\\mathcal Bun_{G^\\vee}}$-module\n$KZ_{G^\\vee,\\kappa^\\vee}^{\\lambda^\\vee}$ constructed in conformal field theory.\nAs an\n$\\mathcal O_{\\mathcal Conf_{G^\\vee}^{\\lambda^\\vee}}\n\\boxtimes\\mathcal D_{\\mathcal Bun_{G^\\vee}}(\\xi^{\\vee\\otimes\\kappa^\\vee})$-module,\n$KZ_{G^\\vee,\\kappa^\\vee}^{\\lambda^\\vee}$ coincides with the induced module from the\n$\\mathcal O_{\\mathcal Conf_{G^\\vee}^{\\lambda^\\vee}\\times\\mathcal Bun_{G^\\vee}}$-module whose \nfiber at the point\n$$\n(\\lambda^\\vee_1x_1+\\ldots+\\lambda_N^\\vee x_N, P^\\vee)\n\\in\\mathcal Conf_{G^\\vee}^{\\lambda^\\vee}\\times\\mathcal Bun_{G^\\vee}\n$$\nis the tensor product\n$(V_{P^\\vee}^{\\lambda^\\vee_1})_{x_1}\\otimes\\ldots\\otimes (V_{P^\\vee}^{\\lambda^\\vee_N})_{x_N}$,\nwhere $V_{P^\\vee}^{\\lambda^\\vee_i}$ is the vector bundle on $C$ associated with the\nprincipal $G^\\vee$-bundle $P^\\vee$ and with the $G^\\vee$-module $V^{\\lambda^\\vee_i}$\nwith highest weight $\\lambda_i^\\vee$. Further, this\n$\\mathcal O_{\\mathcal Conf_{G^\\vee}^{\\lambda^\\vee}}\n\\boxtimes\\mathcal D_{\\mathcal Bun_{G^\\vee}}(\\xi^{\\vee\\otimes\\kappa^\\vee})$-module has a natural structure\nof a twisted $\\mathcal D_{\\mathcal Conf_{G^\\vee}^{\\lambda^\\vee}}$-module [2], which is a direct\ngeneralization of the Knizh\\-nik--Za\\-mo\\-lod\\-chi\\-kov connection to curves of genus $g$.\nA direct check with the use of Riemann-Roch theorem shows that the object $(***)$ has the same\ntwist. The property~3 of the kernel $\\mathcal L_\\kappa$ states that these two \nobjects are isomorphic.\n\n\\subsection{} {\\bf Classical limits of the property 3.}\n\na) {\\it The limit $\\kappa\\to0$} amounts to the geometric analog of the\nCasselman--Shalika--Shintani formula for the Whittaker function of an automorphic form. 
This\nstatement is that the $\\mathcal D_{\\mathcal Conf_{G^\\vee}^{\\lambda^\\vee}}$-module\n$$\nj_{\\lambda^\\vee}^!\\iota^!F\\rho_*\\sigma^!\\mathcal F_{\\P^\\vee}\n$$\nis isomorphic to the local system whose fiber over the point\n$\\lambda^\\vee_1x_1+\\ldots+\\lambda^\\vee_Nx_N\\in\\mathcal Conf_{G^\\vee}^{\\lambda^\\vee}$ equals\n$(V_{\\P^\\vee}^{\\lambda^\\vee_1})_{x_1}\\otimes\\ldots\\otimes (V_{\\P^\\vee}^{\\lambda^\\vee_N})_{x_N}$.\n\nb) {\\it The limit $\\kappa\\to\\infty$} amounts to the construction of the\n$\\mathcal D_{\\mathcal Bun_{G^\\vee}}$-module $\\mathcal F_\\P$, for $\\P\\in\\widetilde T^*\\mathcal Bun_G^\\circ$,\nby means of a $G$-oper with regular singularities with trivial monodromy [1].\n\nFor details, see [3].\n\n\\subsection{} The following natural question arises: what is the result of applying the same\noperation $j_{\\lambda^\\vee}^!\\iota^!F\\rho_*\\sigma^!$ to the twisted $\\mathcal D$-module\n$KZ_{G,\\kappa}^\\lambda$ on the product $\\mathcal Conf_G^\\lambda\\times\\mathcal Bun_G$? The arising twisted\n$\\mathcal D$-module on the product $\\mathcal Conf_G^\\lambda\\times\\mathcal Conf_{G^\\vee}^{\\lambda^\\vee}$ should\nbe related with the $W$-algebra $W_G^\\kappa\\simeq W_{G^\\vee}^{\\kappa^\\vee}$ [8]. In the case\n$G=SL(2)$ it is the Virasoro algebra.\n\nFor a statement of this kind for a curve $C$ of genus $g=0$ with marked points and for\n$G=SL(2)$, see [9].\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nRecognising people by the way they walk (also known as gait recognition or gait-based person identification) is a relatively new field of research.\nMost of previously studied methods work in the visual domain, where this topic is an active field of research since the last decade~\\cite{lee2002gait}.\nHowever, acoustic information can also be used for gait recognition.\nEven though the focus on this modality has so far been significantly less, results are promising.\nWhile in the visual domain, identification systems can rely on analysing the silhouette~\\cite{wang2003silhouette},\nthe task is much more difficult for systems working only with audio information.\nThe relevant information which can be exploited by such systems consists not only of the sounds of the steps,\nbut also adjacent sounds produced by the clothes of moving arms and legs.\nThese sounds are influenced by the gait pattern of the walking person,\nmaking them suitable to be used for person identification.\nFurthermore, the sounds produced during walking are highly dependent on factors such as the floor type, type of shoes and clothes.\n\nIn a user study~\\cite{makela2003use}, the potential of humans to recognise others by their walking sounds was evaluated.\nAfter a training phase, twelve subjects were able to identify their co-workers by their walking sounds with an accuracy of 66\\,\\%.\nThis result shows that sounds produced by walking persons convey characteristic information about the subject and can thus be used for person identification.\n\nPotential applications of gait-based person identification using audio information are smart homes for ambient assisted living, indoor surveillance scenarios, or access control systems.\nSuch an audio-based system can be used to enhance visual surveillance and facilitate multimodal approaches.\nAs compared to video-based person identification, acoustic systems will also work in the darkness, require less expensive hardware and often lower sensor density and are less 
obtrusive.\nAcoustic gait-based person identification is also known as \\textit{acoustic gait recognition}.\n\n\n\n\n\\vspace{-0.1cm}\n\\subsection{Contribution}\n\nThe contribution of this paper is a system for acoustic gait-based person identification that is based on hidden Markov models (HMMs).\nTo our knowledge, this is the first time that HMMs are applied for this task.\nWe use Mel-frequency cepstral coefficients (MFCCs) as audio features and HMMs with a cyclic topology for dynamic classification, in order to model the dynamics of gait patterns.\nWith the cyclic topology, one pass through the model corresponds to a half gait cycle containing one step.\nThus, the system is capable of detecting the individual steps in a recording and using them for person identification. \nExperiments are conducted using the TUM GAID corpus, which contains 3\\,050 recordings of 305 subjects in three walking variations in a realistic setup.\nThe recognition system is trained with normal walking style recordings and evaluated on other recordings of normal walking style as well as variations including a backpack and shoe covers.\nOur experimental results show that the developed system is capable of achieving excellent recognition rates compared to previous work.\n\n\n\n\n\\vspace{-0.1cm}\n\\subsection{Related Work}\n\nThe most-widespread approach for video-based gait recognition is the Gait Energy Image (GEI)~\\cite{han2006}, which is a simple silhouette-based approach.\nIt can be combined with face recognition~\\cite{hofmann2012combined} or with depth information~\\cite{hofmann2012gait}.\nFurthermore, model-based approaches have been proposed for visual gait recognition~\\cite{yam2004}.\nBesides using video or audio information, other methods to identify walking persons include using acoustic Doppler sonar~\\cite{kalgaonkar2007acoustic} or pressure sensors in the floor~\\cite{yun2003user}.\n\n\nUsing audio information for the task of gait-based person identification is a relatively new research field.\nIn~\\cite{she2004framework}, footstep sounds were detected in a corpus of various environmental sounds.\nA system for person identification using footstep detection was introduced in~\\cite{shoji2004personal}.\nThe system was tested with a database of five persons.\nThis work was extended in~\\cite{itai2006footstep} by adding psychoacoustic features such as loudness, sharpness, fluctuation strength and roughness.\nFinally, in~\\cite{itai2008footstep}, dynamic time warping was used for classification and the database was extended to contain ten persons. 
\nThe system achieves almost 100\\,\\% perfect classification rates (using ten persons).\nHowever, the task is simplified by reducing it to classification of pre-segmented footsteps.\nA similar task is addressed in the recently published study by Altaf et al.~\\cite{Altaf13-PIU}.\nThere, a database of segmented footstep sounds from ten persons is used.\nInstead of extracting spectral features, \nthe shape and properties of a footstep sound are examined in a temporal energy domain.\nAs a result, an identification accuracy above 90\\,\\% is achieved by using a large number of footsteps during testing.\nWhen using only three consecutive footsteps, which is more comparable to our work, an accuracy of 45\\,\\% is obtained.\nOther studies on acoustic gait-based person identification were presented in ~\\cite{deCarvalho2010identification,alpert2010acoustic}.\nThe weakness of all previous studies about acoustic gait-based person identification that are mentioned here is the fact that only small databases (mostly no more than ten subjects) that are overly prototypical have been employed.\nIn addition, very often, classification is performed using pre-segmented footsteps.\nIn our previous work~\\cite{13hof1}, we investigated the potential of spectral, cepstral and energy-related audio features in combination with support vector machines (SVM) for acoustic gait-based person identification.\nThis work was continued in~\\cite{Geiger13GBP}, where a feature analysis method was used to select relevant audio features.\nIn~\\cite{11wen2}, we had also employed cyclic HMMs, for animal sound classification.\nThe cyclic model topology proved to be efficient to model the repetitive structure of these sounds.\n\n\n\nThe remainder of this paper is structured as follows:\nIn Section~\\ref{sec:database}, we introduce the TUM GAID database which is used in the experiments.\nThe employed system is described in Section~\\ref{sec:system}, followed by the experimental setup and results in Section~\\ref{sec:experiments}.\nSome concluding remarks are given in Section~\\ref{sec:conclusions}.\n\n\\section{The TUM GAID Database}\n\\label{sec:database}\n\nFor our experiments, we use our freely available\\footnote{\\url{www.mmk.ei.tum.de\/tumgaid}} TUM Gait from Audio, Image and Depth (GAID) database~\\cite{13hof1}.\nThe motivation behind the TUM GAID database is to foster research in multimodal gait recognition.\nTherefore, data was recorded with an RGB-D sensor, as well as with a four-channel microphone array.\nThus, a typical colour video stream, a depth stream and an audio stream are simultaneously available.\nThe database contains recordings of 305 subjects walking perpendicular to the recording device in a 3.5\\,m wide hallway corridor with a solid floor.\nIn each recorded sequence, the subject walks for roughly 4\\,m, typically performing between 1.5 and 2.5 gait cycles (each of them consisting of two steps).\nMost of the sequences have a length of approximately 2 -- 3\\,s.\nThree variations are recorded for each subject: Normal walking ($\\mathcal{N}$), walking with a backpack ($\\mathcal{B}$), and walking with shoe covers ($\\mathcal{S}$).\nFor each subject, all recordings of the $\\mathcal{N}$ condition were recorded directly after each other.\nThis means that the same shoes and clothes are used, which corresponds more to a re-identification scenario.\nThe backpack constitutes a significant variation in gait pattern and sound, and the shoe covers pose a considerable change in acoustic 
condition.\nFigure~\\ref{fig:screenshots} shows screenshots of the three different walking conditions for one subject.\n\\begin{figure}[htb]\n\\vspace{-0.3cm}\n\\center{\n \\subfloat[Normal recording]{\\includegraphics[width=0.17\\linewidth]{Bild1.png}}\\hspace{0.1cm}\n \\subfloat[Backpack recording]{\\includegraphics[width=0.17\\linewidth]{Bild2.png}}\\hspace{0.1cm}\n \\subfloat[Shoe cover recording]{\\includegraphics[width=0.17\\linewidth]{Bild3.png}\n\n \n \n\n \\caption{Screenshots of three recordings in the TUM GAID database}\n\t\\label{fig:screenshots}\n\\vspace{-0.2cm}\n\\end{figure}\nFor each subject, there are six recordings of the $\\mathcal{N}$ setup, and two each of the $\\mathcal{B}$ and $\\mathcal{S}$ setups.\nThis sums to a total number of 3\\,050 recordings.\nThe metadata distribution of the database is well-balanced with a female proportion of 39\\,\\% and ages ranging from 18 to 55 years (average 24.8 years and standard deviation 6.3 years).\nMore than half of the subjects are wearing sneakers while other commonly-used types of shoes are boots and loafers.\n\nTo allow for a proper scientific evaluation and to prevent overfitting on the test data, the database is divided into a \\textit{development set} and a \\textit{test set}.\nThe two sets are person-disjunct and contain 150 and 155 subjects, respectively.\nBoth for the development and for the test set, the first four $\\mathcal{N}$ recordings of each subject are used for the enrollment process.\nThe other two $\\mathcal{N}$ recordings as well as the $\\mathcal{B}$ and $\\mathcal{S}$ recordings are used to perform the identification experiments.\nThis means that models are learnt only using the $\\mathcal{N}$ recordings, while the $\\mathcal{B}$ and $\\mathcal{S}$ conditions constitute previously unseen variations during the identification experiments and will therefore deteriorate the identification performance.\nThe partition of the database is shown in Table~\\ref{Tab:databaseid}.\n\\begin{table}[t]\n\\centering\n\\caption{Partition of the TUM GAID database}\n\\vspace{-0.2cm}\n\\begin{tabular}{lcc}\n& \\textbf{Development} & \\textbf{Test} \\\\\n& (150 subj.) 
&(155 subj.)\\\\\n\\midrule\n$\\mathcal{N}$1 -- $\\mathcal{N}$4 & Enrollment & Enrollment\\\\\n$\\mathcal{N}$5 -- $\\mathcal{N}$6 & Identification & Identification\\\\\n$\\mathcal{B}$1 -- $\\mathcal{B}$2 & Identification & Identification\\\\\n$\\mathcal{S}$1 -- $\\mathcal{S}$2 & Identification & Identification\\\\\n\\end{tabular}\n\\label{Tab:databaseid}\n\\end{table}\n\n\n\n\n\n\\section{System Description}\n\\label{sec:system}\n\n\n\n\nWe use an HMM system for classification.\nEach individual subject is modelled by one HMM.\nWhile we started with using system settings from a simple word-based speech recognition system, we modified and improved the system properties to fit to the problem of acoustic gait recognition.\n\n\n\n\\subsection{Audio Features}\n\n\nIn our previous work we focussed on exploring the suitability of different audio features for the problem of acoustic gait-based person identification~\\cite{Geiger13GBP}.\nUsing SVMs for classification, we evaluated different feature sets containing MFCCs and other spectral or energy-related features.\nSince SVMs are relatively robust (in contrast to HMMs) with regard to the number of employed features, we were able to improve the average identification accuracy (on the test set of the TUM GAID database) from 23.9\\,\\% (only MFCCs) to 28.2\\,\\% by adding and selecting relevant features.\nIn the present work, the focus is not on the front-end processing but rather on the back-end recognition system.\nTherefore we keep the front-end fixed to using only MFCCs.\nWe use MFCC features in the standard configuration: MFCCs 0--12 including their delta and acceleration coefficients, computed every 10 $ms$ from a 25 $ms$ Hamming window, resulting in 39 features in total.\nWhile the database provides four-channel audio recordings, we extract features from monaural recordings, which are obtained by averaging over the four channels.\nIn addition, we obtained slight improvements by processing the audio features with principal component analysis (PCA), without reducing the number of components.\nHere, the transformations are computed only on the enrollment data, and applied on both the enrollment and identification data.\n\nFigure~\\ref{fig:feat_spec} shows the spectrograms and corresponding first MFCC coefficients for two exemplary recordings ($\\mathcal{N}$ setup) of two different subjects.\n\\begin{figure}\n\t\\centering\n\n\t\\subfloat{\n\t\t\\includegraphics[trim = 11mm 80mm 30mm 80mm, clip, scale=0.35]{feature_spec_n01_p010}\n\t\t\\label{fig:feat_spec10}\n\t}\\\\\n\t\\vspace{-0.8cm}\n\n\t\\subfloat{\n\t\t\\includegraphics[trim = 11mm 80mm 30mm 80mm, clip, scale=0.35]{feature_spec_n01_p020}\n\t\t\\label{fig:feat_spec20}\n\t}\n\t\\caption{Spectrograms (top) and corresponding first MFCC coefficients (bottom), each, for a normal-type recording of two different subjects. 
Temporal position of footsteps is marked with a vertical line.}\n\t\n\t\\label{fig:feat_spec}\n\t\\vspace{-0.2cm}\n\\end{figure}\nThe spectrograms reveal a considerable static background noise, which is due to the recording environment.\nSeveral spectral peaks can be identified which correspond to the footsteps and the sounds between the steps, which are mostly made by the legs of the trousers or skirts rubbing against each other.\nIn the plot of the MFCCs, the temporal position of the steps are marked.\nThe behaviour of the MFCC features indicates that they are useful to detect the position of the steps and to distinguish between different persons.\n\n\\subsection{HMM System}\n\\label{ssec:basichmm}\n\nOur starting point is a simple HMM system that can be compared to a whole-word recognition system (each person representing one \\textit{word}) in speech recognition.\nEach subject in the dataset is represented by an HMM.\nThe models are equipped with a linear left-right topology.\nWith such a model topology, the HMM has to pass through all of its states sequentially without skipping a state.\nBefore introducing an appropriate step modelling method, which will be described in the next subsection,\nwe apply an approach where each recording containing several steps is modelled by one pass through an HMM.\nAs a result, rather large numbers of states (generally more than ten) are required to be able to model the dynamic sequence of sounds during walking.\n\nIn a standard HMM system, the observations are modelled with a mixture of Gaussians.\nHowever, our first experiments showed that the best results are obtained by using HMMs with a single Gaussian state model,\nas the amount of training data is very small and hence probably not sufficient to train a more fine-grained distribution of the features.\nAnother reason could be that a higher number of components leads to overfitting, modelling also the noise in the recordings.\n\nDuring decoding, a grammar controls the possible recognition output.\nOur most simple employed grammar follows the basic HMM system setup where exactly one pass through a model is allowed for each recording.\nA multi-step grammar is then introduced to let the system automatically segment the recording:\nAny number of repetitions of the same model (subject) is allowed.\nIn order to train the HMMs to model the separate steps, an approach using a cyclic HMM topology is employed as described in the following.\n\n\\subsection{Step Modelling}\n\\label{sec:stepmodeling}\n\nTo be able to model the individual steps in each recording, we use cyclic HMMs.\nIn our basic HMM system, each recording (containing several gait cycles) is modelled by one pass through the HMM.\nThe strategy of representing each gait cycle separately by one pass through all states of the HMM is better suited to model the observations.\nWe consider the two halves of each gait cycle to be equivalent (although in fact, there is a person-dependent assymetry~\\cite{nixon2006human}),\nand therefore the system is designed to model half a gait cycle (containing one step) by each HMM.\nIn this way, one pass through the HMM models the sounds of one step and adjacent sounds (produced by the moving arms and legs). 
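\n\nAs a minimal illustration of the resulting cyclic topology, the following Python snippet constructs a left-right transition matrix for a 15-state model in which the last state may jump back to the first one, so that the model can cycle through several steps within a recording. The numerical values are ad hoc initial choices rather than those of the actual system, whose transition probabilities are re-estimated during training as described next.\n\n\\begin{verbatim}\nimport numpy as np\n\nn_states = 15\nstay, advance = 0.6, 0.4      # ad hoc initial probabilities\n\nA = np.zeros((n_states, n_states))\nfor i in range(n_states - 1):\n    A[i, i] = stay            # remain in the current state\n    A[i, i + 1] = advance     # move to the next state (left-right)\nA[-1, -1] = stay\nA[-1, 0] = advance            # cyclic jump: last state back to first\n\nassert np.allclose(A.sum(axis=1), 1.0)\n\\end{verbatim}\n\n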
This method of step modelling is implemented in the system configuration and training in the following way:\nThe state transition matrix of each HMM has a left-right topology, and jumps from the last state to the first state are allowed.\nModels are trained with embedded re-estimation, where the number of steps is known (as determined by simple video processing methods).\nAs a result, the position of the steps in the training data is automatically estimated during model training.\nTogether with the introduced multi-step decoding grammar, the developed system is then capable of detecting, segmenting and recognising the steps occurring in the recordings.\n\n\\section{Experiments}\n\\label{sec:experiments}\n\nExperiments are performed with the TUM GAID database described in Section~\\ref{sec:database}, using the development set for system design and tuning.\nFinally, we use the test set to evaluate our best system configuration.\nFor all systems evaluated on the development set, 15 HMM states appeared to be the optimal configuration.\nIn addition, the best results were obtained with six training iterations.\nFor each system setup, we report experimental results (identification accuracy) separately for the three different recording conditions (normal, backpack, shoe covers).\nIn addition, the average accuracy over these three conditions is included.\n\n\\vspace{-0.2cm}\n\\subsection{Development set}\n\nTable~\\ref{tab:resultsdev} shows the results on the development set for different system configurations.\n\\begin{table}[t!]\n\\begin{center}\n\\caption{\\em Development set (150 subjects) evaluation of different system configurations, for the normal ($\\mathcal{N}$), backpack ($\\mathcal{B}$) and shoe cover ($\\mathcal{S}$) recording conditions.}\n\\begin{tabular}{c|ccc|c}\n & \\multicolumn{3}{|c|}{\\textbf{Condition}} & \\\\\n{\\textbf{Accuracy [\\%]}} & $\\mathcal{N}$ & $\\mathcal{B}$ & $\\mathcal{S}$ & \\bf average \\\\\n\\toprule\nbasic HMM & 53.3 & 30.7 & 7.0 & 28.2 \\\\\n+ multi-step decoding & 56.3 & 31.3 & 7.3 & 31.6 \\\\\n+ PCA & 57.7 & 34.3 & 9.7 & 33.9 \\\\\n+ step modelling & 69.7 & 44.7 & 9.3 & 41.2 \\\\\n\\end{tabular}\n\\label{tab:resultsdev}\n\\end{center}\n\\vspace{-0.2cm}\n\\end{table}\nThe basic HMM system without explicit modelling of separate steps (cf. Section~\\ref{ssec:basichmm}) is the first system evaluated.\nIn the normal recording condition, slightly more than half of the testing samples are classified correctly.\nAveraging over the three different conditions, an accuracy of 28.2\\,\\% is obtained, which serves as a baseline for further experiments.\nThe first step towards the improved recognition system is the introduction of a decoding grammar which allows the system to recognise multiple sequential instances of the same subject in a recording.\nThis modification improves the average accuracy to 31.6\\,\\% (mostly due to improvements in the $\\mathcal{N}$ setup).\nApplying PCA to the features improves the accuracy for all three recording conditions.\nTraining the system to model each step by one pass through an HMM (cf. 
Section~\\ref{sec:stepmodeling}) leads to the largest improvement in accuracy.\nIn the normal walking condition, more than two thirds of the samples are now identified correctly.\nThe accuracy in the backpack walking condition is also greatly improved, whereas the performance in the shoe cover condition remains largely unaffected.\nWhile the improvements obtained with the multi-step grammar and PCA are not significant, improved step modelling leads to a significant improvement in the $\\mathcal{N}$ and $\\mathcal{B}$ conditions and for the average accuracy (evaluated with a one-tailed t-test with a significance level of $\\alpha=0.05$).\n\nIn a simple analysis, we examined the system's ability to correctly detect the individual steps.\nTo this end, we use the best-performing system (the fourth row in Table~\\ref{tab:resultsdev}).\nFor the test samples of the normal walking condition, we examine the number of steps detected by the system.\nThe average number of steps in these test recordings is 5.3, while the system predicts 4.3 steps on average.\nFor correctly identified \\textit{subjects}, the average number of predicted \\textit{steps} is 5.0, while for incorrectly identified subjects it is 3.5.\nThis shows that when the subjects are identified correctly, the step segmentation works very well.\n\n\\vspace{-0.1cm}\n\\subsection{Test set}\n\nIn Table~\\ref{tab:resultstest}, we show the results on the test set for our baseline system and the best system configuration.\nFor comparison, we include our previously published results on the same dataset.\n\\begin{table}[t!]\n\\begin{center}\n\\caption{\\em Test set (155 subjects) evaluation of our system compared to our previously published results, for the normal ($\\mathcal{N}$), backpack ($\\mathcal{B}$) and shoe cover ($\\mathcal{S}$) recording conditions.}\n\\begin{tabular}{c|ccc|c}\n & \\multicolumn{3}{|c|}{\\textbf{Condition}} & \\\\\n{\\textbf{Accuracy [\\%]}} & $\\mathcal{N}$ & $\\mathcal{B}$ & $\\mathcal{S}$ & \\bf average \\\\\n\\toprule\nvideo (GEI) \\cite{13hof1} & 99.4 & 27.1 & 52.6 & 59.7 \\\\\nbaseline SVM \\cite{13hof1} & 44.5 & 27.4 & 4.8 & 25.6 \\\\\nSVM + feat. sel. 
\\cite{Geiger13GBP} & 51.9 & 28.4 & 4.2 & 28.2 \\\\\n\\midrule\nbasic HMM & 41.0 & 24.2 & 7.1 & 24.1 \\\\\nimproved HMM & 65.5 & 36.5 & 9.0 & 37.0 \\\\\n\\end{tabular}\n\\label{tab:resultstest}\n\\end{center}\n\\vspace{-0.2cm}\n\\end{table}\nThe first row shows the results of a state-of-the-art gait recognition method working with video data, namely the GEI~\\cite{13hof1}.\nThis method achieves almost perfect results in the normal walking condition, while the backpack variation and, to a lesser extent, the shoe covers pose a real difficulty for the system (59.7\\,\\% on average).\nHowever, these results have to be interpreted carefully, since the GEI utilises mainly the appearance (the silhouette of a person) and not the behaviour (the gait pattern).\nUsing a large set of different audio features (1\\,625 static features per recording) and SVMs for classification (second row) was our first audio-domain baseline system~\\cite{13hof1}.\nNaturally, the addressed task is much more difficult when dealing only with audio data (average accuracy 25.6\\,\\%).\nHowever, this system can compete with the GEI in the backpack recording variation.\nIn~\\cite{Geiger13GBP}, we improved the SVM system by employing a feature-selection technique to choose relevant features for the task, obtaining an average identification accuracy of 28.2\\,\\%.\nWith our basic HMM setup, the resulting accuracy of 24.1\\,\\% is comparable to that of the baseline SVM system.\nThe methods introduced in this work (primarily modelling each step separately during model training and decoding) bring a large improvement, reaching 37.0\\,\\%.\nIn the $\\mathcal{N}$ and $\\mathcal{B}$ recording conditions, the accuracy is improved significantly, by more than one third compared to the basic HMM system.\nThe accuracy of the video-based method (GEI) in the backpack recording condition is surpassed by 26\\,\\% in relative terms.\nCompared to the previous best-performing audio system (the SVM system including feature selection), the average accuracy is improved by 24\\,\\% in relative terms (significant in all recording conditions).\n\n\\vspace{-0.2cm}\n\n\\section{Conclusions}\n\\label{sec:conclusions}\n\nWe developed a model-based system for recognising people from walking sounds.\nThe system uses HMMs in a cyclic topology to automatically segment the recordings into separate steps.\nExperiments were conducted using the TUM GAID database containing recordings of 305 subjects (150 in the development set and 155 in the test set) in three different recording conditions: normal walking, walking with a backpack, and walking with shoe covers.\nThe results show that a basic HMM system (without explicit modelling of separate steps) achieves performance similar to the SVM system presented in our previous work.\nImproving the system with the methods introduced in this work results in large gains in identification accuracy.\nWith this system, each half gait cycle is modelled by one pass through a cyclic HMM.\nThis covers the sound of one step and adjacent sounds, which are mainly produced by the moving arms and legs.\nSince the backpack and the shoe covers alter exactly these sounds, it is clear that these variations negatively influence the identification performance.\nHowever, when identification experiments are carried out with the same walking style and shoe type as the model was trained with (normal walking condition), almost two thirds of the subjects are identified correctly from the test set containing 155 individuals.\n\nGiven the challenging but application-friendly 
enrollment of only four examples per walking subject, adopting approaches from speaker recognition, such as creating subject models through adaptation from a background model~\\cite{reynolds2000speaker}, could be a promising strategy to improve the robustness of the system in the future.\nFurthermore, we will work on improving the system's robustness to variations.\nThis includes better coping with the backpack and shoe cover recording conditions.\nIn addition, the TUM GAID database contains a set of subjects with recordings made on two different dates, three months apart.\nThis allows evaluating the influence of changing shoes and clothes, as well as possibly larger variations in walking style, on the system performance.\nTo improve the system in this direction, we want to test approaches for addressing session variability known from speaker recognition (such as joint factor analysis~\\cite{kenny2007joint}), as well as methods for model adaptation or feature transformation adopted from speech recognition systems.\n\n\\vfill\\pagebreak\n\n\\balance\n\\bibliographystyle{IEEEtran}\n