diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzlcom" "b/data_all_eng_slimpj/shuffled/split2/finalzzlcom" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzlcom" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\begin{figure*}\n\\includegraphics{smearing-explanation.pdf}\n\\caption{The transition to the double fringes packet for a binary. \\rev{Gray} curves show the fringes of the individual components \\rev{with vertical dotted lines indicating the position of their centres}. The \\rev{black} curve is the detected fringe packet (sum). In comic strip order: (i) An unresolved binary superimposes two fringe systems and achieve maximum fringe contrast; (ii) Contrast loss arises when the binary is resolved because individual fringe packets do not overlap exactly; (iii) The packet is elongated and loses its original shape when the binary is sufficiently separated, this is the transition to (iv) two separate fringe packets are clearly seen. Cases (i), (ii) are standard in interferometry. Case (iv) is easily analysed, but helps to understand why smearing occurs: each fringe packet doesn't seem impacted, but the power in its fringes is diluted by the incoherent flux from the other one; the resulting visibility will be a constant, strictly smaller than one, independent \\rev{of} binary separation. This paper focuses on case (iii) that has not been thoroughly studied in the optical. Some of the notations of the paper are also reported: $\\beta^{ij}$, the decentering of the fringe packet of an on-axis source due to instrumental effects; $\\xobj[ij]{o}$ the OPD position shift of the fringe packet for an off-axis source; $\\phiobj[ij]{o}$ the same as the latter expressed in terms of a phase shift.}\n\\label{fig:smearing-explanation}\n\\end{figure*}\n\nLong-baseline interferometry is an observational technique used from the optical \\citep{MIC20} to the radio domain \\citep{RYL46,PAW46} that allows to overcome the resolution limit of single-dish telescopes, as ultimately set by diffraction. To achieve such a goal an ideal interferometer measures the complex degree of coherence and relates this so-called complex visibility to the object intensity distribution through a Fourier transform \\citep{VCI34,ZER38}. Practically speaking, interference fringes are formed and their contrast and shift will be used to retrieve partial or total information on the complex visibilities.\n\nThere are numerous sources of error and biases that have to be evaluated and as much as possible corrected in order to provide a proper estimation of the interferometric observables. Among them, \\emph{bandwidth smearing} occurs in finite bandwidth for \\rev{objects spanning an extended field of view}. The interferogram corresponding to a point-like source has a coherence length of the order of $R\\lambda$ where $R$ is the spectral resolution and $\\lambda$ the central wavelength. For two points of an extended sourc\\rev{e} separated by a distance $\\theta$ along the projected baseline length $B$ corresponding to two telescopes of the array, individual fringe packets are shifted with \\rev{respect to} each other by an optical path difference $\\theta\/B$. When the OPD shift $\\theta\/B$ becomes of the order of, or greater \\rev{than}, the fringe packet width, i.e. when $\\theta \\approx R\\lambda\/B$, the fringe packets of these points do not overlap correctly and bandwidth smearing of the interferogram occurs (see bottom left panel of Fig.~\\ref{fig:smearing-explanation}). 
In other words, one can consider that the coherence length of an interferogram $R\lambda$ corresponds to an angular extension on the sky $\theta \approx R\lambda/B$: it is called the interferometric field of view. Objects composed of multiple \rev{incoherent} sources, either discrete or continuous, are affected by the smearing when their extent becomes of the order of the interferometric field of view.

Figure~\ref{fig:smearing-explanation} shows an illustration of that effect applied to the case of a binary system. Each of the sources \rev{contributes} a fringe packet; the observed interferogram is their sum. The distance between the interferograms is proportional to the angular separation. We can distinguish four separation regimes: 1) the unresolved case; 2) the resolved case, where the separation is a small fraction of the interferogram envelope; 3) the smeared regime, where \rev{the separation} is no longer a small fraction and the interferometric estimators are altered; 4) the ``double packet'' regime, where two fringe packets are well separated.

While this effect \rev{has been} known for decades \citep{THO73}, it cannot be remedied by calibration as other biases can. It was analysed in a review by \citet{BRI89} in the radio-astronomy context, in which the observer had no other choice but to define, somewhat heuristically, the best compromise between observing performance and limiting the bandwidth smearing. However, modern radio techniques, using \textit{a posteriori} software recombination, can overcome the problem in many situations by using several phase centres, around which smearing does not occur. In the optical and the infrared, software recombination is not technically feasible and bandwidth smearing must be dealt with. \citet{ZHA07} \rev{recommend limiting} the field of view $\theta$ to $1/5$ of \rev{the} theoretical value \rev{of the interferometric field of view}, i.e. $\theta\approx R\lambda/(5B)$, to remain in the resolved regime. For an interferometer working in the near-IR with 100\,m baselines, this corresponds to 5--10 milliarcseconds of separation when a \rev{wide-band filter ($\lambda/\Delta\lambda \sim \rev{5}$)} is used without spectral resolution. The main leverage to increase the interferometric field of view is adapting the spectral resolution or the baseline length. However, this very often comes at a prohibitive cost in sensitivity (spectral resolution) or a loss of spatial \rev{resolution} (baseline length).

In this paper, we present the first analytic calculation of the bandwidth smearing effect on the two main optical interferometric observables, namely the squared visibility and the closure phase. We restrict the calculation to temporally encoded interferograms, including the so-called Fourier mode \rev{(a full scan of the fringe packet)} and the \rev{temporal} ABCD \rev{(a 4-point scan of a central fringe)}, which are among the most popular optical schemes. Fourier mode has been or is being used at COAST \citep{COAST}, IOTA with IONIC \citep{IONIC} and FLUOR \citep{IOTAFLUOR}, CHARA with FLUOR \citep{CHARAFLUOR} and CLIMB \citep{CLIMB}, and VLTI with VINCI \citep{VINCI}, PIONIER \citep{PIONIER2,PIONIER}, and MIDI \citep{MIDI}. \rev{Temporal} ABCD is the choice at PTI \citep{PTI} and the Keck Interferometer \citep{KI}.
It should be stressed that a similar line of reasoning can be applied with very little adaptation to the 8-point time-encoded interferograms of NPOI \citep{NPOI}, and, with more effort, to spatially encoded interferograms such as in VLTI/AMBER \citep{AMBER} and static ABCD systems such as VLTI/PRIMA \citep{PRIMA}.
The derived formula\newrev{e} can \rev{be} applied to correct \emph{any} squared visibility and closure phase analytic formula describing the object angular intensity distribution. We apply this corrective technique to the study of binary stellar systems. Indeed, optical long-baseline interferometry is a unique tool to study the astrometry of close binary systems with milli-arcsecond accuracy and to provide direct means to measure accurate masses. Moreover, several recent attempts at searching for substellar companions \citep{ABS11,ZHA11} are pushing the technique down to dynamical ranges where no adverse effects can be neglected. Since \rev{most studies forgo} bandwidth smearing correction \rev{without assessing the biases that may arise from such approximation}, we felt a proper treatment had become mandatory and would be useful in the future. For practical purposes we used the PIONIER instrument characteristics to provide an application of this work. PIONIER is currently being used at the Very Large Telescope Interferometer \citep[VLTI,][]{VLTI} to combine four beams in the H band ($1.5$ to $1.8\,\mu\mathrm{m}$).

Sect.~\ref{sec:hypnot} gives the analytic expression of the observables in the absence of atmospheric turbulence for an instrument working in fringe-scanning mode. Section~\ref{sec:bin} is an application of these formulas to a binary star, which allows us to analyse the bias that smearing produces on the interferometric observables and the model-fitted parameters of the binary. We also show there how simulated fringes of PIONIER are much better fitted with the smeared model than with the standard expression. Finally, Sect.~\ref{sec:atm} studies the impact of atmospheric turbulence on the observables, indicating that a moderate spectral resolution is enough to alleviate most of its effects.

\section{Modelling the bandwidth smearing: turbulence-free case}
\label{sec:hypnot}
\label{sec:ana}

\begin{figure*}
\centering
\includegraphics[width=\textwidth]{PIONIER-transmission.pdf}
\caption{Spectral transmission and fringe packet envelope for PIONIER, as measured on an internal calibration with the 3-channel spectral setting of the H band. The left column display\newrev{s} the spectral transmission and instrumental phase for a telescope triplet, as contained in $\textsflux[ij]{\text{lamp}}(\wavenum - \wavenumzero)$. The right column shows the envelope and phase of the fringe packet, given by the Fourier transform of the latter. The slope of the instrumental phase translates into a fringe packet decentering, known as group delay.
\rev{Phases are expressed in radians.}}
\label{fig:PT}
\end{figure*}


\begin{table}
\caption{Principal notations of this paper.}
\label{tab:notations}
\begin{tabular}{ll}
 \hline\hline
 \multicolumn{2}{l}{Indexing}\\
 $o$, $p$, $q$ & Source number (index)\\
 $i$, $j$, $k$ & Station number (superscript)\\
 \hline
 \multicolumn{2}{l}{Space and spatial frequency variables}\\
 $\wavenum$, $\wavenumzero$ & Wavenumber, central wavenumber\\
 $\xi = \wavenum - \wavenumzero$ & Reduced wavenumber\\
 $\opdvar$, $\textopd[ij]$ & Optical path difference\\
 \textxobj[ij]{o} & Fringe packet position in a perfect instrument\\
 $\textphiobj[ij]{o} = 2\pi\wavenumzero\textxobj[ij]{o}$
 & Fringe packet phase in a perfect instrument\\
 $\textphishift[ij]{}$ & Instrumental group delay (see Fig.~\ref{fig:smearing-explanation})\\
 & $\rightarrow$ Packet position is \smash{$\xobj[ij]{o} + \phishift[ij]{}/2\pi\wavenumzero$}\\
 \hline
 \multicolumn{2}{l}{Functions of wavenumber $\wavenum$ or reduced wavenumber $\xi$}\\
 $\sfluxstar{o}(\wavenum)$ & Spectrum of a point source\\
 $\texttrans[i]{}(\xi)$ & Transmission through an arm\\
 $\loss[ij]{}(\xi)$ & Instrumental contrast\\
 $\insphi[ij]{}(\xi)$ & Instrumental differential phase\\
 $\textsflux[ij]{o}(\xi)$ & The equivalent of $\newrev{N_iN_j}$\\
 \hline
 \multicolumn{2}{l}{Functions of OPD $\opdvar$ or phase $\alpha = 2\pi\wavenumzero\opdvar$}\\
 $\textphasor[ij]{}(\opdvar)$ & Complex coherent flux\\
 $\textsmearing{}(\alpha)$ & Complex smearing\\
 $\textenvband{}(\alpha)$ & Smearing amplitude\\
 $\phiband{}(\alpha)$ & Smearing phase\\
\hline
 \multicolumn{2}{l}{Fluxes}\\
 $\textflux{o}$ & Flux of a point source\\
 $\textflux{}$ & Total flux\\
 $\textflux[ij]{op}$ & Flux product equivalent\\
\hline
 \multicolumn{2}{l}{Other}\\
 $\dopd[ij]{}$ & OPD scanning speed\\
\hline
\end{tabular}
\end{table}

In order to introduce the basic concepts of the data processing for fringe-scanning mode instruments, we remind \rev{the reader} here how observables are derived \rev{in monochromatic light}.

Ignoring the atmosphere and instrumental signatures, the interferogram of a binary on baseline $ij$ can be written as
\begin{equation}
 N^{ij}(\opdvar) = \rev{N}_1 \big[1 + \cos 2\pi\wavenum\opdvar\big] 
 + \rev{N}_2\big[ 1 + \cos(2\pi\wavenum\opdvar + \phiobj[ij]{}) \big]\newrev{,}
\end{equation}
where $\rev{N}_1$ and $\rev{N}_2$ are the fluxes of each component, $\opdvar$ is the OPD \rev{between the arms of the interferometer}, and $\textphiobj[ij]{} = (2\pi\wavenum\textbase[ij]\cdot\textpos{})$ is proportional to the binary separation $\textpos{}$, the projected baseline $\textbase[ij]$, and the wavenumber $\wavenum$.

It is convenient to use the coherent flux, a complex quantity representing the interferogram, from which the continuum $\rev{N}_1 + \rev{N}_2$ is removed and the negative frequencies are filtered out. In practice, one can take the Fourier transform of the interferogram, remove all frequencies but a small interval centred on the frequency of the fringes, and take the inverse Fourier transform.
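For illustration, this extraction can be sketched numerically as follows (a minimal Python sketch; the sampling and binary parameters are arbitrary illustrative values):
\begin{verbatim}
import numpy as np

# Monochromatic interferogram of a binary, sampled in OPD.
sigma = 1 / 1.65e-6                        # wavenumber [1/m] (H band)
delta = np.linspace(-30e-6, 30e-6, 4096)   # OPD samples [m]
N1, N2, phi = 1.0, 0.6, 1.2                # fluxes and binary phase
interferogram = (N1 * (1 + np.cos(2*np.pi*sigma*delta))
                 + N2 * (1 + np.cos(2*np.pi*sigma*delta + phi)))

# Fourier filtering: keep a narrow band around the fringe frequency,
# discarding the continuum (zero frequency) and negative frequencies.
spectrum = np.fft.fft(interferogram)
freq = np.fft.fftfreq(delta.size, d=delta[1] - delta[0])
keep = (freq > 0.5 * sigma) & (freq < 1.5 * sigma)
coherent_flux = np.fft.ifft(np.where(keep, spectrum, 0))
# coherent_flux ~ 0.5 exp(2j pi sigma delta) (N1 + N2 exp(j phi)),
# i.e. the coherent flux below up to a factor 1/2 that comes from
# discarding the negative frequencies and is absorbed by normalisation.
\end{verbatim}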
\rev{The coherent flux can be written as}
\begin{equation}
 \phasor[ij]{}(\opdvar) = \exp{2\pi\j\wavenum\opdvar} 
 \big[ \rev{N}_1 + \rev{N}_2\, \exp{\j\phiobj[ij]{}} \big].
 \label{eq:Mmono}
\end{equation}

The \rev{square visibility amplitude} is obtained by dividing the power contained in the coherent flux by that in the continuum: 
\begin{equation}
\begin{split}
 \vsqPS[ij]{} &= \frac{<|\phasor[ij]{}(\opdvar)|^2>_\opdvar}{(\rev{N}_1+\rev{N}_2)^2}\\
 &= 1 - \frac{4 \rev{N}_1 \rev{N}_2}{(\rev{N}_1+\rev{N}_2)^2} \sin^2 \frac{\phiobj[ij]{}}2,
\end{split}
 \label{eq:Vmono}
\end{equation}
\rev{where $<x>_\opdvar$ denotes the average of the quantity $x$ over the OPD.} In practice, the power may be computed using the Fourier \rev{transform} of the coherent flux, which is strictly equivalent (Parseval's identity).

When a triplet of telescopes $ijk$ is used, the closure phase is used to obtain partial information on the phase because it is independent \rev{of} atmospheric turbulence. It is the argument of the bispectrum, given by:
\begin{equation}
\begin{split}
 \bisp[ijk]{} &=\ <\phasor[ij]{}(\opd[ij](t))
 \phasor[jk]{}(\opd[jk](t))
 \phasor[ki]{}(\opd[ki](t))>_\opdvar\\
 &= (N_1-N_2)^2 
 + 4\rev{N}_1 \rev{N}_2
 \cos\frac{\phiobj[ij]{}}2 
 \cos\frac{\phiobj[jk]{}}2 
 \cos\frac{\phiobj[ki]{}}2 
 \\
 &\quad -4\j \, \rev{N}_1 \rev{N}_2(\rev{N}_1-\rev{N}_2) 
 \sin\frac{\phiobj[ij]{}}2 
 \sin\frac{\phiobj[jk]{}}2
 \sin\frac{\phiobj[ki]{}}2 ,
\end{split}
\label{eq:Bmono}
\end{equation}
where $\textopd[ij]$, $\textopd[jk]$, and $\textopd[ki]$ are the time-modulated OPDs on the three baselines, meeting the closure relation $\textopd[ij] + \textopd[jk] + \textopd[ki] = 0$. (Eq.~\ref{eq:Bmono} gives \rev{a compact, generic expression for the bispectrum in the same way \citet{LEB12} did for the specific case of high-contrast binaries.})

The goal of this section is to describe the coherent flux, squared visibility, and closure phase of time-encoded interferograms processed by means of Fourier analysis, when observing a source of arbitrary geometry in finite bandwidth. In other words, we seek to generalise Eqs.~(\ref{eq:Mmono}, \ref{eq:Vmono},~\& \ref{eq:Bmono}) and provide ready-to-use formulas to fit object models to smeared data. For the sake of simplicity we use a discrete formalism valid for a collection of point-like sources. The results presented here are easily generalised to systems of resolved, compact sources (Appendix~\ref{ap:syscomp}) and to any system, with our summations over a finite number of point-like sources replaced by integrations over the plane of the sky.


The most \rev{frequently used} notations and symbols in this section are given in Table~\ref{tab:notations}. 


\subsection{Interferogram}
\label{sec:interferogram}

We consider an interferometer with stations $i$, $j$, etc. separated by a baseline $\base[ij]$ operating in a spectral channel centred on wavenumber $\wavenumzero$. In the following developments we shall use $\wavenum$, the wavenumber, and $\xi = \wavenum - \wavenumzero$ as the ``reduced'' wavenumber. Without loss of generality, we assume that we observe an object made of several point sources $o$, $p$, etc. with positions $\textpos{o}$, $\textpos{p}$, etc.
\rev{in} the plane of the sky and spectra $\textsfluxstar{o}(\wavenum)$, $\textsfluxstar{p}(\wavenum)$, etc.

The interferometer measures the complex coherent flux of the electromagnetic field by forming dispersed fringes on a detector. In our case, fringes are obtained by a temporal modulation of the optical path difference (OPD) $\opdvar$ around an ideal position $\xobj[ij]{o}$. This position is related to the angular position of the source in the sky $\pos{o}$ through the relation $\xobj[ij]{o} = \base[ij]\ensuremath{\!\cdot\!}\pos{o}$. Each of the point sources contributes a quasi-monochromatic interferogram per instrument spectral channel. Once the incoherent photometric contribution of the two telescopes has been removed and the negative frequencies have been filtered out in Fourier space, the complex coherent flux of one source reads: 

\begin{equation}
 \phasor[ij]{o}(\xi,\opdvar) = 
 2\sflux[ij]{o} (\xi)
 \exp{
 2\jpi(\wavenumzero+\xi)(\xobj[ij]{o} + \opdvar) 
 }
 \label{eq:phasormono}
\end{equation}

where $\sflux[ij]{o} (\xi)$ is the \rev{``instrumental'' coherent flux density}, \rev{primarily} due to the wavelength-dependent instrumental effects\rev{, but also to some extent to the spectrum of the source.} We can define this coherent flux density as:

\begin{equation}
\sflux[ij]{o}(\xi) = \loss[ij]{}(\xi)\sqrt{\trans[i]{}(\xi)\trans[j]{}(\xi)}
 \,\exp{\j \insphi[ij]{}(\xi)}
 \,\sfluxstar{o}(\wavenumzero + \xi)
 \label{eq:cohernorm}
\end{equation}
where:
\begin{itemize}
 \item $\loss[ij]{}(\xi)$ is the instrumental visibility, or instrumental contrast loss, \newrev{and} has different origins, such as differential polarisation or wavefront aberrations; 
 \item $\insphi[ij]{}(\xi)$ is the instrumental differential phase, \newrev{and} arises from a difference of optical path lengths between the arms of the interferometer that depends on the wavelength. For example, this can be the case when light travels through glasses (e.g. waveguides, dichroics) that do not have the same refractive index dependence as a function of wavelength;
 \item $\trans[i]{}(\xi)$ is the spectral transmission through an arm, including the detector efficiency.
\end{itemize}
We assume that these instrumental signatures do not depend on the \newrev{OPD position in the interferogram}, which is a good approximation in fringe-scanning mode\newrev{, since the OPD modulation is obtained through a few micrometres of air or vacuum, with negligible dispersion. 
In other words, we assume that the instrumental differential phase is a static term that is not impacted by the movement of the differential delay lines.} However, this is usually not true for spatially dispersed fringes \citep[see][for a generic expression for the fringes]{TAT06}, so that our approach needs adaptation for instruments like AMBER \citep{AMBER}.


It is now possible to describe the coherent flux for an arbitrary number of sources and across a wider spectral bandpass:

\begin{equation}
 \phasor[ij]{}(\opdvar) = 
 \intinf \sum_o \phasor[ij]{o}(\xi, \opdvar) \idiff\xi. 
 \label{eq:phasorgen}
\end{equation}

For practical purposes we use the Fourier transform
\begin{equation}
 \IFT{f}(\opdvar) = \intinf f(\xi) \exp{2i\pi\xi\opdvar} \idiff\xi,
\end{equation}
substitute Eq.~(\ref{eq:phasormono}) into Eq.~(\ref{eq:phasorgen}), and
obtain 
\begin{align}
 \phasor[ij]{} (\opdvar) = 
 \sum_o
 2\IFT{
 \sflux[ij]{o}
 }(\xobj[ij]{o} + \opdvar)
 \,\exp{2\jpi\wavenumzero\opdvar + \j\phiobj[ij]{o}},
 \label{eq:def:phasor}
\end{align}
where $\textphiobj[ij]{o} = 2\pi\wavenumzero\textxobj[ij]{o}$. In the following, we will use the coherent flux expression (Eq.~\ref{eq:def:phasor}) to compute the most \rev{commonly used} interferometric observables, i.e. the square visibility and the closure phase. In practice, $\textsflux[ij]{o}$ is not known a priori. However, it can be inferred from fringes obtained on an internal lamp. The coherent flux of the lamp fringes yields $\textsflux[ij]{\text{lamp}}$ (see Eq.~\ref{eq:def:phasor}). \rev{If both the spectrum of the source $\textsflux[\star]{o}$ and that of the lamp $\textsflux{\text{lamp}}$ are known, $\textsflux[ij]{o} = \textsflux[ij]{\text{lamp}} \textstrans[ij]{\text{int}} \, (\textsflux[\star]{o}/\textsflux{\text{lamp}})$ (see Eq.~\ref{eq:cohernorm}), where $\textstrans[ij]{\text{int}}$ is the transmission of the interferometer before the calibration lamp. The amplitude of the VLTI transmission is a smooth function of wavelength that can be considered constant. Its phase results from dispersive elements in the optical path. The optical elements of the VLTI before PIONIER are all in reflection and the most dispersive ones (the M9 dichroics) have been designed to display the least differential dispersion, so that the dispersion is dominated by the air in the non-evacuated delay lines. In the rest of this paper, we have considered near-zenithal observations, for which the interferometric delay is small, so that the air dispersion can be ignored, as Appendix~\ref{ap:gd} shows. While the presence of dispersion in non-zenithal observations has a significant impact on the amount of smearing, it changes neither its order of magnitude nor the general conclusions of this paper. When the atmospheric dispersion must be tackled, it can be done either explicitly (Appendix~\ref{ap:gd} explains how) or implicitly by letting the parameters of Sect.~\ref{sec:isr} free in model fits, as \citet{ZHA07} do for the spectral resolution.}

As an illustration, we show in the left panels of Fig.~\ref{fig:PT} the spectral coherence transmission \textsflux[ij]{\text{lamp}} (amplitude and phase) that we measured on the internal source of PIONIER using three spectral channels across the H band on three baselines. The right panels correspond to the coherent flux of the fringes \textphasor[ij]{\text{lamp}} (amplitude and phase).
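As a numerical cross-check, the band integral of Eq.~(\ref{eq:phasorgen}) can be evaluated by direct summation over a wavenumber grid. The following sketch (Python; Gaussian bandpass, unit instrumental contrast, no instrumental phase, and illustrative values throughout) reproduces the elongated double packet of Fig.~\ref{fig:smearing-explanation}:
\begin{verbatim}
import numpy as np

sigma0 = 1 / 1.65e-6              # central wavenumber [1/m]
R = 18                            # spectral resolution
dsigma = sigma0 / R               # FWHM of the spectral channel [1/m]
xi = np.linspace(-2*dsigma, 2*dsigma, 512)       # reduced wavenumber
trans = np.exp(-4*np.log(2) * (xi / dsigma)**2)  # Gaussian bandpass
dxi = xi[1] - xi[0]

delta = np.linspace(-60e-6, 60e-6, 2000)  # scanned OPD [m]
fluxes = [1.0, 0.6]               # fluxes of the two components
positions = [0.0, 25e-6]          # packet positions B.s_o [m]

# Sum the quasi-monochromatic coherent fluxes over the band: the
# fringe packet of each source is the Fourier transform of the
# bandpass, centred on its own OPD position.
coherent_flux = sum(
    No * (trans[:, None]
          * np.exp(2j*np.pi*(sigma0 + xi)[:, None]
                   * (xo + delta))).sum(0) * dxi
    for No, xo in zip(fluxes, positions))
envelope = np.abs(coherent_flux)  # smeared double fringe packet
\end{verbatim}
With these numbers, the coherence length $R\lambda \approx 30\,\mu$m is comparable to the $25\,\mu$m packet separation, i.e. the transition regime (iii) of Fig.~\ref{fig:smearing-explanation}.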


\subsection{Instrumental spectral response}
\label{sec:isr}
In this paper, after providing generic formulas using the Fourier formalism, we will also give closed-form expressions for direct use. To do so, we need an analytic description of the instrumental transmission ($\texttrans[i]{}$) and differential phase ($\textinsphi[ij]{}$). PIONIER's \rev{instrumental coherent flux density} is obtained on a calibration lamp (Fig.~\ref{fig:PT}, left panels)\newrev{. It} displays a near-quadratic behaviour of the differential phase and a spectral transmission intermediate between top-hat and Gaussian functions.

We therefore describe the instrumental differential phase as
\begin{equation}
 \insphi[ij]{} (\xi) = \insphi[ij]{}(0) + \phishift[ij]{} (\xi/\wavenumzero) + \disp[ij]{} (\xi/\wavenumzero)^2. 
 \label{eq:hyp:insphi}
\end{equation}

The linear term $\textphishift[ij]{}$ in the instrumental differential phase $\textinsphi[ij]{}(\xi)$ translates into a fringe packet shift of $\textphishift[ij]{}/2\pi\wavenumzero$ with respect to the nominal zero OPD (see Fig.~\ref{fig:smearing-explanation}, bottom right panel). It is called the group delay. In a single-spectral-channel interferometer it is possible to zero it by means of fringe tracking. When several spectral channels are observed at the same time, it is no longer possible to do so in all channels simultaneously. \rev{For instance, if a central \rev{spectral} channel is centred at zero OPD, adjacent channels may be shifted with respect to it if there is a differential behaviour of the dispersive elements (such as waveguides, dichroics, or air, whose refractive index depends on wavelength) in the beam paths before the recombiner. In the bottom panels of Fig.~\ref{fig:PT} (baseline 1-3), the central \rev{spectral} channel is approximately centred at zero OPD (the solid line in the right panel \newrev{shows the envelope of the fringe packet, i.e. the amplitude of the coherent flux}) with a slope of the phase averaging to $\approx 0$ (same line in the left panel). The adjacent channels feature some shift (dashed lines in the right panels) and a non-zero phase slope (same lines in the left panels). Appendix~\ref{ap:gd} gives a further description of the group delay and its correction through fringe-tracking.} 

The quadratic term $\disp[ij]{}$ in the instrumental differential phase has a less visible impact on the fringe packet.

We will give results both for Gaussian and top-hat transmissions of FWHM $\dwavenum{}$:
\begin{align}
 \transG[i]{}(\xi) &= \wideexp{-\frac{4 \log 2}{\dwavenum{}^2} \xi^2},
 \label{eq:hyp:bandpass}\\
 \transH[i]{}(\xi) &=
 \begin{cases}
 1 \quad &\text{if $|\xi| \le \dwavenum{}/2$},\\
 0 \quad &\text{otherwise}.
 \end{cases}
\end{align}


\subsection{Square visibility amplitude}
The square visibility amplitude is obtained from the coherent flux using:
\begin{equation}
 \vsqPS[ij]{} 
 = \frac1{4\normtot[ij]{}}
 \intinf \phasor[ij]{}(\opdvar)
 \!\cdot\! \conj{\phasor[ij]{}(\opdvar)} \idiff\opdvar,
 \label{eq:def:vsqPS}
\end{equation}
where \textnormtot[ij]{} is a normalisation factor that relates to the total flux of the target ($\propto \textfluxtot{}^2$) and \rev{$\conj{x}$ stands for the complex conjugate of $x$}.
In the previous equation, we substitute Eq.~(\ref{eq:def:phasor}) and expand the product into a double sum to find\rev{:}
\begin{equation}
 \vsqPS[ij]{}
 = \frac1{\normtot[ij]{}} \sum_{o,p} 
 \exp{\j(\phiobj[ij]{o} - \phiobj[ij]{p})}
 \intinf
 \IFT{\sflux[ij]{o}}(\xobj[ij]{o} + \opdvar) 
 \IFT{\sflux[ij]{p}}(-\xobj[ij]{p} - \opdvar) 
 \idiff\opdvar.
\end{equation}
Using the change of variables $\opdvar \rightarrow u = \opdvar + \textxobj[ij]{o}$, a correlation of Fourier transforms is recognised and simplified into the Fourier transform of a product. Thus,
\begin{equation}
 \vsqPS[ij]{} = \frac1{\normtot[ij]{}} \sum_{o, p} 
 \IFT{\sflux[ij]{o}\sflux[ji]{p}}(\xobj[ij]{o} - \xobj[ij]{p})
 \exp{\j(\phiobj[ij]{o} - \phiobj[ij]{p})}.
\end{equation}

The bandwidth smearing is contained in $\IFT{\sflux[ij]{o}\sflux[ji]{p}}$. It
can be made clearer by introducing the complex smearing
\begin{equation}
 \smearing[ij]{op}(\alpha) = \frac{
 \IFT{\sflux[ij]{o}\sflux[ji]{p}}(\alpha / 2\pi\wavenumzero) 
 }{ \IFT{\sflux[ij]{o}\sflux[ji]{p}}(0)},
 \label{eq:gen:V2smearing}
\end{equation}
\rev{where $\alpha$ is an angular variable that is linked to the OPD by the relation $\alpha = 2\pi\wavenumzero\opdvar$.} It \rev{is} convenient to use the amplitude and phase \rev{of the smearing}: $\textenvband[ij]{op} = |\textsmearing[ij]{op}|$ is the contrast loss due to smearing and $\textphiband[ij]{op} = \arg \textsmearing[ij]{op}$ is a phase shift induced by it. We also define the flux product equivalent---the equivalent to $\flux{o}\flux{p}$ in the monochromatic case---as
\begin{equation}
 \norm[ij]{op} = \intinf \sflux[ij]{o}(\xi)\sflux[ji]{p}(\xi)\idiff\xi.
 \label{eq:def:norm}
\end{equation}
With these definitions, we can rearrange the square visibility amplitude:
\begin{equation}
\begin{split}
 \vsqPS[ij]{} =
 \sum_o & \frac{\norm[ij]{oo}}{\normtot[ij]{}} 
 + \sum_{o < p}
 \Bigg[\frac{2\norm[ij]{op}}{\normtot[ij]{}}
 \envband[ij]{op}(\phiobj[ij]{o}-\phiobj[ij]{p})\\
 &\times \cos \left(\phiobj[ij]{o}-\phiobj[ij]{p} 
 + \phiband[ij]{op}(\phiobj[ij]{o}-\phiobj[ij]{p})
 \right) \Bigg]. 
\end{split}
\label{eq:gen:vsqPS}
\end{equation}
These results are independent of the instrumental phase $\insphi[ij]{}$. If $\textenvband[ij]{op} = 1$ and $\textphiband[ij]{op} = 0$ (no smearing), this formula is equivalent to the monochromatic case (Eq.~\ref{eq:Vmono} in the case of a binary). In practice, model fitting of square visibility amplitudes by multiple stellar systems uses Eqs.~(\ref{eq:gen:V2smearing}, \ref{eq:def:norm},~\& \ref{eq:gen:vsqPS}). Knowledge of $\textsflux[ij]{o}$, needed in Eqs.~(\ref{eq:gen:V2smearing} \& \ref{eq:def:norm}), can be inferred from fringes obtained on a calibration lamp (or a calibrator) if the spectra of both the lamp and the source $o$ are known, as we discussed in Sect.~\ref{sec:interferogram}. 

When the different sources share the same spectrum, i.e. $\sfluxstar{o}(\xi) \propto \sfluxstar{p}(\xi)$, we may express the visibility as a function of the individual fluxes \textflux{o} and the total flux \textfluxtot{}. In Eq.~\ref{eq:gen:vsqPS}, we then use the flux products in lieu of the flux product equivalents, i.e.
$\textflux[ij]{op} = \ensuremath{V_\text{ins}}\textflux{o}\textflux{p}$ and $\textflux[ij]{} = \textflux{}^2$, where 
\begin{equation}
 \ensuremath{V_\text{ins}}^2 = \intinf \loss[ij]{}(\xi)^2\trans[i]{}(\xi)\trans[j]{}(\xi)
 \sfluxstar{}(\xi)^2 \idiff\xi\, \Big/ \intinf \sfluxstar{}(\xi)^2 \idiff\xi 
\end{equation}
is the ``instrumental'' square visibility amplitude. Note that $\ensuremath{V_\text{ins}}$ also depends on the spectral profile. It only disappears in the calibration if the calibrator has the same spectral profile as the source. 

In the cases of the Gaussian and top-hat transmissions with FWHM $\dwavenum{}$ around the central wavelength $\wavenumzero$ and a constant contrast loss $\loss[ij]{}$ in the spectral channel, the smearing is purely real
($\phiband[]{} = 0$) and
\begin{subequations}
\label{eq:easy:vsqPS}
\begin{align}
 \envbandH{}(\alpha) 
 &= \sinc\left(\frac{\alpha}{2\resol{}}\right),
 \label{eq:C:vsqPS}
 \\
 \envbandG{}(\alpha) 
 &= \wideexp{\left(
 -\frac{\alpha^2}{32\resol{}^2\log 2} 
 \right)},
 \label{eq:gauss:vsqPS}
\end{align}
\end{subequations}
where $\resol{} = \wavenumzero / \dwavenum{}$ is the spectral resolution. For small enough baselines, we show in Appendix~\ref{ap:smallsmearing} that an exponential formula can be used by properly choosing the value of $\resol{}$. On real data, $\resol{}$ will need to be set to a value that differs from the spectral resolution in order to account for the departure from a Gaussian profile and the wavelength dependence of the contrast. In practice, a model fit of smeared data may leave it as a free parameter. If high precision is needed, the asymmetry of the spectral band and the slope of $\loss[ij]{}$ give a non-zero smearing phase $\textphiband[]{}$. Cubic developments for the smearing terms $\textenvband[]{}$ and $\textphiband[]{}$ are given in Appendix~\ref{ap:smallsmearing}.


\subsection{Closure phase}
\label{sec:ana:clo}
A triple correlation or its Fourier transform, the bispectrum, or an equivalent method, is generally used to determine the closure phase \citep{LOH83,ROD86}. The determination of the closure phase in direct space uses the phase of the bispectrum, given by:
\begin{equation}
\bispDS[ijk]{} = \intinf 
 \phasor[ij]{}(\opd[ij](t)) 
 \phasor[jk]{}(\opd[jk](t)) 
 \phasor[ki]{}(\opd[\rev{ki}](t)) 
 \idiff t,
\label{eq:bispDS:1}
\end{equation}
where $t$ is time in the case of linear OPD variations.
By substituting Eq.~(\ref{eq:phasorgen}) into Eq.~(\ref{eq:bispDS:1}) and writing $\textopd[ij](t) = \textdopd[ij] t$, we obtain
\begin{equation}
 \bispDS[ijk]{} =
 \sum_{o,p,q} 
 \intinf
 \phasor[ij]{o}(\dopd[ij] t) 
 \phasor[jk]{p}(\dopd[jk] t) 
 \phasor[ki]{q}(\dopd[ki] t)
 \idiff t.
\label{eq:def:bispDS}
\end{equation}
It follows from Eqs.~\newrev{(\ref{eq:def:phasor} \& \ref{eq:def:bispDS})} and the closure relation $\textdopd[ij] + \textdopd[jk] + \textdopd[ki] = 0$ that
\begin{equation}
\begin{split}
 \bispDS[ijk]{} &\propto
 \sum_{o, p, q}
 \Bigg[
 \exp{i(\phiobj[ij]{o} + \phiobj[jk]{p} + \phiobj[ki]{q})}\\
 &\times
 \intinf 
 \IFTsflux[ij]{o}(\xobj[ij]{o} + \dopd[ij] t)
 \IFTsflux[jk]{p}(\xobj[jk]{p} + \dopd[jk] t)
 \IFTsflux[ki]{q}(\xobj[ki]{q} + \dopd[ki] t)
 \idiff t
 \Bigg].
\end{split}
\label{eq:int:bispDS}
\end{equation}
Using the change of variables $t \rightarrow u = \xobj[ij]{o}/\textopd[ij] + t$, a triple cross-correlation of Fourier transforms can be recognised and expressed as the two-dimensional Fourier transform 
\begin{equation}
 \IFTtd{\ f \ }(\opdvar_1, \opdvar_2) 
 = \iintinf f(\xi_1, \xi_2) 
 \exp{2\j\pi(\xi_1\opdvar_1 + \xi_2\opdvar_2)} 
 \idiff\xi_1\idiff\xi_2
\end{equation}
of the triple product
\begin{equation}
 \striple[ijk]{opq}(\xi_1, \xi_2) = 
 \sflux[ij]{o}(\xi_1) \sflux[jk]{p}(\xi_2) \sflux[ki]{q} 
 \Big(
 - \frac{\dopd[ij]}{\dopd[ki]} \xi_1 
 - \frac{\dopd[jk]}{\dopd[ki]} \xi_2
 \Big).
 \label{eq:def:striple}
\end{equation}
The bispectrum therefore reads
\begin{equation}
\begin{split}
 \bispDS[ijk]{} \propto 
 \sum_{o, p, q} \Bigg[
 \IFTstriple[ijk]{opq} \Big(
 \phiobj[ij]{o} - \frac{\dopd[ij]}{\dopd[ki]} \phiobj[ki]{q},& 
 \frac{\dopd[jk]}{\dopd[ki]} \phiobj[ki]{q}
 - \phiobj[jk]{p}
 \Big)\\
 &\times \exp{\j\left(\phiobj[ij]{o} + \phiobj[jk]{p} + \phiobj[ki]{q}\right)}
 \Bigg].
\end{split}
\end{equation}
The bandwidth smearing is contained in $\IFTstriple[ijk]{opq}$. In order to make it clearer we need to introduce several terms. The triple flux product equivalent (corresponding to $\flux{o}\flux{p}\flux{q}$ in the monochromatic case) is given by 
\begin{equation}
 \triple[ijk]{opq} = \left| \IFTstriple[ijk]{opq}(0, 0) \right|,
 \label{eq:gen:triple}
\end{equation}
the ``instrumental'' closure phase by
\begin{equation}
 \insphi[ijk]{opq} = \arg \IFTstriple[ijk]{opq}(0, 0), 
 \label{eq:gen:insphi}
\end{equation}
and the smearing by
\begin{equation}
 \smearing[ijk]{opq}(\phivar_1, \phivar_2) = 
 \IFTstriple[ijk]{opq}( \phivar_1 / 2\pi\wavenumzero, 
 -\phivar_2 / 2\pi\wavenumzero) 
 \,/\,\IFTstriple[ijk]{opq}(0, 0).
 \label{eq:gen:smearing}
\end{equation}
The ``instrumental'' closure phase is a flux-weighted mean over the spectral channel and thus also depends on the spectrum of the source. The triple flux product equivalent can be simplified to the triple flux product ($\texttriple[ijk]{opq} \propto \textflux{o}\textflux{p}\textflux{q}$) when the sources have the same spectrum, i.e. $\textsfluxstar{o}(\xi) \propto \textsfluxstar{p}(\xi)$.
Note that the instrumental closure phase cancels out in the calibration only if the sources $o$, $p$, $q$ and the calibrator all share the same spectrum.

With these notations, the bispectrum reads
\begin{equation}
\begin{split}
 \bispDS[ijk]{} \propto \sum_{o, p, q} 
 \Bigg[
 \smearing[ijk]{opq}
 \Big(
 \phiobj[ij]{o} - &\frac{\dopd[ij]}{\dopd[ki]} \phiobj[ki]{q}, 
 \frac{\dopd[jk]}{\dopd[ki]} \phiobj[ki]{q}
 - \phiobj[jk]{p}
 \Big)\\
 &\times\triple[ijk]{opq} \exp{i\left( \phiobj[ij]{o} + \phiobj[jk]{p} + \phiobj[ki]{q}
 + \insphi[ijk]{opq}
 \right)}
 \Bigg].
\end{split}
\label{eq:gen:bispDS}
\end{equation}
If $\textsmearing[ijk]{opq} = 1$ (no smearing) and $\insphi[ijk]{opq} = 0$ (no bandwidth-related differential phase), the formula is equivalent to the monochromatic case (Eq.~\ref{eq:Bmono} for a binary). In practice, Eqs.~(\ref{eq:def:striple}, \ref{eq:gen:triple}, \ref{eq:gen:insphi}, \ref{eq:gen:smearing}, \& \ref{eq:gen:bispDS}) allow us to model fit multiple stellar systems to smeared interferometric data. The knowledge of $\textsflux[ij]{o}$ needed in Eq.~(\ref{eq:def:striple}) can be inferred from calibration fringes obtained on an internal lamp (or a calibrator), as discussed in Sect.~\ref{sec:interferogram}. 

\rev{This modelling} can be further simplified using an analytic description of the bandpass. In that case, Eqs.~(\ref{eq:gen:bispDS}~\& \ref{eq:bispDS}) can be used for the model fit of closure phases. In our cases of top-hat and Gaussian transmissions of FWHM \dwavenum{}, with a linear instrumental phase, we reorder the baselines so that $\textdopd[ki]$ has the largest absolute value, and we can assume it negative without loss of generality. Then, the smearing simplifies to
\begin{subequations}
\label{eq:bispDS}
\begin{align}
 \smearingH[ijk]{}(\phivar_1, \phivar_2) &\propto 
 \sinc\left(
 \frac{\phivar_1 + \phishift[ijk]{}}{2\resol{}}
 \right)
 \sinc\left(
 \frac{\phivar_2 + \phishift[ijk]{}}{2\resol{}}
 \right),
\label{eq:gate:bispDS}
\\
 \smearingG[ijk]{}(\phivar_1, \phivar_2) &\propto
 \exp{ 
 - \frac{
 (\phishift[ijk]{} + \phivar_1)^2
 + (\phishift[ijk]{} + \phivar_2)^2 + 
 \left( 
 \phishift[ijk]{}
 - \frac{\dopd[jk]\phivar_1 
 + \dopd[ij]\phivar_2} 
 {\dopd[ki]} 
 \right)^2 
 }{
 16\resol{}^2\log 2 
 \left(1
 + \big(\frac{\dopd[ij]}{\dopd[ki]}\big)^2 
 + \big(\frac{\dopd[jk]}{\dopd[ki]}\big)^2\right) 
 }
 }.
\label{eq:gauss:bispDS}
\end{align}
\end{subequations}
In the equations above, the ``group delay closure'' is expressed as
\begin{equation}
 \phishift[ijk]{} = 
 \frac{\dopd[ki]\phishift[ij]{} - \dopd[ij] \phishift[ki]{}}
 {\dopd[ki]}
 .
\label{eq:bisp:gd}
\end{equation}
The group delay closure is the consequence of the incorrect centering of the three fringe packets on the three baselines of the telescope triplet. Because of this de-centering, the centres of these packets are not scanned at the same time. In order to yield a usable closure phase, there should still be an overlap between the time intervals when the high-contrast parts of the packets are scanned. This means that the individual group delays \textphishift[ij]{}, \textphishift[jk]{}, and \textphishift[ki]{}, and thus the group delay closure, should be of the order of a few times the spectral resolution or less ($\textphishift[ijk]{} \lesssim 2\pi\resol{}$).
Since this overlap in time depends on the relative scanning speeds along the baselines, the group delay closure depends on $\dopd[ij]$, $\dopd[jk]$, and $\dopd[ki]$. 

In our analytic approach to the spectral transmission, the instrumental closure phase reduces to a constant term, independent of \newrev{the} source\newrev{s}:
\begin{equation}
 \insphi[ijk]{} = \insphi[ij]{}(0) + \insphi[jk]{}(0) + \insphi[ki]{}(0).
\end{equation}

Appendix~\ref{ap:disp} explains how to use the Gaussian formula if the quadratic chromatic dispersion term $\textdisp[ij]{}$ is non-zero. 


\section{Consequence on companion search}
\label{sec:bin}

\subsection{Bias on the interferometric observables}
\label{sec:bias:observables}

\begin{table}
 \caption{Test case used in \rev{Figs.~\ref{fig:ideal}~\& \ref{fig:phi:jitt}}. For the square visibility amplitude, the first baseline is used. The spectral resolution is, by definition, the major source of smearing. In addition, the visibility is slightly impacted by the spectral dispersion $\disp[ij]{}$. The closure phase is strongly impacted by the group delay closure $\phishift[123]{}$ (indirectly by the group delays and OPD modulation speeds) and moderately by the dispersion $\disp[ij]{}$.}
\label{tab:testcase}
\begin{tabular}{ll}
\hline\hline
Binary flux ratio & 0.6\\
Effective bandpass & Gaussian\\
Spectral resolution & lines: 7, 18, 42; contours: 3--100\\
Projected telescope positions & $(0, B, 0.4B)$\\
\textit{Corresponding baselines} & $(B, -0.6B, -0.4B)$\\
OPD modulation along baselines & $\dopd[ij] = (\dopd[12], -2\dopd[12], \dopd[12])$\\
OPD bounds & $(\pm 25\lambda, \mp 50\lambda, \pm 25\lambda)$\\
Group delays & $\phishift[ij]{} = (5, 0, -5)\times2\pi$\\
\textit{Corresponding group delay closure} 
 & $\phishift[123]{} = 10\times 2\pi$\\
Spectral dispersion & $\disp[ij]{} = 0$\\
\hline
\end{tabular}
\end{table}
\begin{figure*}[p]
\subfigure[Square visibility amplitude]{\includegraphics[width=\linewidth]{ideal-visibility.pdf}\label{fig:ideal:Vsq}}
\subfigure[Closure phase]{\includegraphics[width=\linewidth]{ideal-closure.pdf}\label{fig:ideal:phi3}}
\caption{Square visibility amplitude (top) and closure phase (bottom) of a binary with flux ratio 0.6 (test case of Table~\ref{tab:testcase}) observed with an interferometer with a Gaussian bandpass under ideal atmospheric conditions and baselines $B$, $-0.6B$, $-0.4B$. In both figures, \emph{top panel:} interferometric observable as a function of binary separation (milliarcseconds at one micron wavelength for a 100\,m baseline) for an infinite resolution and three spectral resolutions approximately matching those of PIONIER. \emph{bottom panel:} deviation of the smeared observable with respect to the infinite spectral resolution case, shown as contours in the separation--spectral resolution plane.
In the lowest panel, the behaviour change around spectral resolution $\resol{} = 8$ is explained by the transition from the single spectral channel mode (group-delay free in ideal fringe tracking conditions, \rev{since a single fringe packet can be centred around zero OPD, see Appendix~\ref{ap:gd}}) to the multiple channel observation (where \rev{the fringe packets of the different spectral channels are shifted with respect to each other and therefore cannot be simultaneously positioned at zero OPD by the fringe-tracker, see Appendix~\ref{ap:gd}}).}
\label{fig:ideal}
\end{figure*}


The first impact of the smearing is a tapering of the peak-to-peak amplitude of the oscillation of the visibility with baseline, hour angle, or spectral channel, due to the smearing amplitude $\envband{}$. The second \newrev{impact} only concerns the closure phase in multi-channel observations\rev{. I}t originates from the imperfect alignment of the fringe packets on baseline triplets, as measured by $\phishift[ijk]{}$. In order to make these influences clearer, we give in Fig.~\ref{fig:ideal} the interferometric observables of a binary with a high flux ratio of 0.6, whose characteristics are given in Table~\ref{tab:testcase}.

\paragraph{Square visibility amplitude.}
Figure~\ref{fig:ideal:Vsq}, top panel, shows the theoretical smearing of the visibility amplitude of a binary as a function of the reduced separation $\theta B/\lambda$ (in $\mathrm{mas}\cdot\mathrm{hm}\cdot\mu\text{m}\smash{^{-1}}$) for three different spectral resolutions ($\approx 7, 18, 42$) corresponding to the observing modes available on PIONIER at the VLTI. The lower panel of the figure displays the error on the square visibility arising from not taking smearing into account, as a function of separation and spectral resolution. The result is easily generalised to binaries of different flux ratios, as the relative error on the visibility $\Delta|V^2| / |V^2|$ remains unchanged.

\paragraph{Closure phase.}
Figure~\ref{fig:ideal:phi3}, top panel, shows the theoretical closure phase of a binary for three different spectral resolutions ($\approx 7, 18, 42$) corresponding to the observing modes available on PIONIER at the VLTI. It can be seen at small separations (5--10\,$\mathrm{mas}\cdot\mathrm{hm}\cdot\mu\text{m}\smash{^{-1}}$) that the intermediate spectral resolution ($\approx 18$) shows more smearing than expected for these separations, in particular more than the broad-band $\approx 7$ observing mode. The reason lies in \rev{the dispersive elements in the light beams of the interferometer and instrument, which decentre the fringe packets more in some spectral channels than in others, thus making it impossible to centre all fringe packets at the same time (see the imperfect centering of some spectral channels of PIONIER in Fig.~\ref{fig:PT} and a description of the group-delay tracking in Appendix~\ref{ap:gd})}. This effect is not seen in the broad band, where \rev{the single fringe packet of each baseline can be centred with a fringe tracker, thus eliminating the group delay}. This low-separation smearing approximately scales linearly with separation, as $f\textphishift[ijk]{}\theta/\resol{}\smash{^2}$, where $f$ is the flux ratio of the binary, $\theta$ the separation, and $\textphishift[ijk]{}$ the group-delay closure. (This can be obtained analytically by linearising Eq.~\ref{eq:gauss:bispDS} and normalising by the bispectrum of a point-source calibrator.)
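As a numerical illustration, the group delay closure of Eq.~(\ref{eq:bisp:gd}) and the Gaussian smearing kernel of Eq.~(\ref{eq:gauss:bispDS}) can be evaluated for the test case of Table~\ref{tab:testcase}. The following minimal Python sketch keeps, for simplicity, the natural baseline ordering (12, 23, 31) rather than the reordering discussed in Sect.~\ref{sec:ana:clo}, so the kernel value should be taken as an order of magnitude only:
\begin{verbatim}
import numpy as np

two_pi = 2 * np.pi
dopd = np.array([1.0, -2.0, 1.0])         # OPD speeds on (12, 23, 31),
                                          # in units of the speed on 1-2
gd = two_pi * np.array([5.0, 0.0, -5.0])  # group delays (Table 2)

# Group delay closure:
gd_closure = (dopd[2] * gd[0] - dopd[0] * gd[2]) / dopd[2]
print(gd_closure / two_pi)                # -> 10, as quoted in Table 2

# Gaussian closure-phase smearing kernel at phase arguments (a1, a2),
# for spectral resolution R:
def smearing_gauss(a1, a2, R=18.0):
    norm = 1 + (dopd[0] / dopd[2])**2 + (dopd[1] / dopd[2])**2
    cross = (dopd[1] * a1 + dopd[0] * a2) / dopd[2]
    return np.exp(-((gd_closure + a1)**2 + (gd_closure + a2)**2
                    + (gd_closure - cross)**2)
                  / (16 * R**2 * np.log(2) * norm))

print(smearing_gauss(0.0, 0.0))   # ~ 0.58: contrast loss of the
                                  # bispectrum from the group delay
                                  # closure alone
\end{verbatim}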
At larger separations ($\gtrsim 10\,\mathrm{mas}\cdot\mathrm{hm}\cdot\mu\mathrm{m}\smash{^{-1}}$ in Fig.~\ref{fig:ideal:phi3}), the closure phase is impacted by a combination of the tapering of the oscillation of the visibility (a purely spectral resolution effect, as seen in the visibility in Fig.~\ref{fig:ideal:Vsq}) and the instrumental phase; the impact is relatively complex, and we can only recommend using Eq.~(\ref{eq:gauss:bispDS}) to model it. As an illustration, Fig.~\ref{fig:closim} of Appendix~\ref{ap:closim} compares the closure phase of the three spectral channels of PIONIER for a given configuration of the interferometer, and it is quite clear that the behaviour radically changes with channel and telescope triplet.

The lower panels display the error on the closure phase arising from not taking smearing into account, as a function of separation and spectral resolution. The figure shows a sharp discontinuity at resolution $\resol{} = 8$, where the transition occurs from a single spectral channel (where the single fringe packet of each baseline is positioned at zero OPD by an ideal fringe-tracker) to spectrally dispersed fringes (with the fringe packets \rev{of each baseline} that do not align well \rev{because they are shifted with respect to each other by the instrumental phase}). Even for moderately resolved sources, percent precision requires a good enough spectral resolution ($\resol{} \gtrsim 40$ or more), adequate modelling of \rev{bandwidth} smearing, or good fringe-tracking on a single spectral channel at moderate spectral resolutions ($\resol{} \gtrsim 10$). 


\subsection{Retrieving binary parameters}

\begin{figure}
\includegraphics[width=\linewidth]{binobs-uv.pdf}
\caption{$(u, v)$ coverage of a typical 100\,m baseline 4T observation (K0-A1-G1-I1) at the VLTI for an object close to the meridian, with 3 observations over a few hours.}
\label{fig:uv}
\end{figure}

We assess here the bias on the binary parameters that smearing produces. In order to model the data as \rev{realistically} as possible, we build synthetic binary fringes corresponding to a \rev{typical scenario}: a near-zenith object observed in a sequence of three sets of fringes separated by one hour using a large telescope quadruplet at the VLTI (see Fig.~\ref{fig:uv} for the $u$, $v$ coverage). They are derived from calibration fringes obtained by PIONIER on an internal calibration lamp, which can be considered as a point-source observation for our purpose. Then, we feed \rev{these} synthetic data to the PIONIER data reduction software and get visibility amplitudes and closure phases. They are calibrated using simulated fringes of a point-source calibrator. They are then fitted with a binary model to derive the parameters of the binary. In a first step, the model is that of an unsmeared binary (Eqs.~\ref{eq:Vmono}~\& \ref{eq:Bmono}); then we use the smeared model of Sect.~\ref{sec:ana} with a Gaussian bandpass (Eqs.~\ref{eq:gauss:vsqPS}~\& \ref{eq:gauss:bispDS}). \rev{Additional transmission effects of the VLTI from the telescope up to the internal calibration lamp, positioned after the delay lines, have been ignored: the near-zenith observations we consider here are dominated by PIONIER's instrumental effects (as we discuss in Sect.~\ref{sec:interferogram}). 
For non-zenithal observations, where the interferometric delay in the delay lines is several tens of metres, the air dispersion in the delay lines becomes a factor of the same order as PIONIER's instrumental phase and can be modelled using Appendix~\ref{ap:gd}.}

In our analysis, the separations in right ascension and declination are varied from $-30$ to 30\,mas, or approximately 10 times the angular resolution of the interferometer, and the magnitude differences from 0.1 to 3.3 (flux ratios from 0.05 to 0.95). For each triplet of parameters, the difference between the fitted values and the input gives us the bias on the binary position and magnitude difference. The reduced chi square was determined assuming a 2\% accuracy on visibilities and 20\,mrad on closure phases, typical of single-mode instrument performance on bright objects (like PIONIER). Figure~\ref{fig:binobs-bias} shows the \rev{absolute values of the errors} and the reduced chi square at each separation and position angle for the given magnitude difference of 0.55 (flux ratio of 0.6). In Fig.~\ref{fig:binobs-bias-2}, we \rev{consider possible biases and give} the median value of the \rev{error with} its confidence intervals for a given binary separation, considering all the position angles and flux ratios at that separation.

\begin{figure*}
\includegraphics{binobs-bias.pdf}
\caption{Quality of least-squares model fitting of binary parameters to smeared interferometric observables. These observables are derived from PIONIER synthetic fringes in the 3-channel spectral resolution ($\resol{} \approx 20$) using the data reduction pipeline. The contour plots give the \newrev{absolute value of the error in the model fits} for each position of the secondary, assuming a binary flux ratio of 0.6. \textit{Left:} the binary model assumes monochromatic light and absence of smearing. \textit{Right:} the binary model assumes a Gaussian bandpass and takes into account the smearing. \textit{Top:} \rev{absolute value of the} error on the binary separation. \textit{Middle:} \rev{absolute value of the} error on the magnitude difference. \textit{Bottom:} reduced chi squares assuming 2\% error on square\newrev{d} visibilities and 20~mrad on closure phases.}
\label{fig:binobs-bias}
\end{figure*}

\begin{figure*}
\includegraphics[width=\linewidth]{bias-smearing-allratios.pdf}
\caption{\newrev{The solid lines give the median value of the errors on the fitted binary parameters} as a function of binary separation. \newrev{If non-zero and systematically of one sign, the median indicates a bias. The grayed areas are the} confidence intervals for the errors (dark gray 1-$\sigma$, light gray 2-$\sigma$). At a given separation, all binary orientations and flux ratios were considered. \textit{Left:} the binary model assumes monochromatic light and absence of smearing. \textit{Right:} the binary model assumes a Gaussian bandpass and takes into account the smearing. \textit{Top:} \newrev{error} on the binary separation. \textit{Middle:} \newrev{error} on the magnitude difference. \textit{Bottom:} reduced chi square.}

\label{fig:binobs-bias-2}
\end{figure*}

\paragraph{Smearing-free binary model.} A binary model with the classical expression for the visibility amplitude and closure phase (Eqs.~\ref{eq:Vmono}~\& \ref{eq:Bmono}) is fitted to synthetic PIONIER data with the three-channel spectral resolution.
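Such a fit can be sketched as follows (Python with SciPy; the data container, its field names, and the initial guesses are illustrative placeholders, not the actual PIONIER pipeline interface):
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def binary_v2(u, v, ra, dec, f):
    # Monochromatic squared visibility of a binary (Eq. 3), with
    # flux ratio f and separation (ra, dec) in radians; u, v are
    # spatial frequencies in cycles per radian.
    phi = 2 * np.pi * (u * ra + v * dec)
    return 1 - 4 * f / (1 + f)**2 * np.sin(phi / 2)**2

def binary_cp(u1, v1, u2, v2, ra, dec, f):
    # Monochromatic closure phase of a binary (argument of Eq. 4).
    p1 = 2 * np.pi * (u1 * ra + v1 * dec)
    p2 = 2 * np.pi * (u2 * ra + v2 * dec)
    p3 = -(p1 + p2)                      # phase closure for a binary
    bisp = ((1 - f)**2
            + 4 * f * np.cos(p1/2) * np.cos(p2/2) * np.cos(p3/2)
            - 4j * f * (1 - f)
              * np.sin(p1/2) * np.sin(p2/2) * np.sin(p3/2))
    return np.angle(bisp)

def residuals(p, d):
    ra, dec, f = p
    rv2 = (binary_v2(d['u'], d['v'], ra, dec, f) - d['v2']) / d['sv2']
    rcp = (binary_cp(d['u1'], d['v1'], d['u2'], d['v2uv'],
                     ra, dec, f) - d['cp']) / d['scp']
    return np.concatenate([rv2, rcp])    # phase wrapping omitted

# fit = least_squares(residuals, x0=[ra0, dec0, f0], args=(data,))
\end{verbatim}
The smeared variant of the fit simply replaces the two model functions with Eqs.~(\ref{eq:gauss:vsqPS})~\& (\ref{eq:gauss:bispDS}).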

The left panel of Fig.~\ref{fig:binobs-bias} displays from top to bottom the \newrev{absolute value of the error} on the secondary's position, the \newrev{absolute value of the error} on the magnitude difference, and the reduced chi square for errors of 2\% and 20\,mrad on individual measurements of square visibility amplitudes and closure phases, respectively. We checked that the results for other flux ratios are similar. The \newrev{errors (with median value and confidence intervals)} for the parameters are given in Fig.~\ref{fig:binobs-bias-2} (left panel) as a function of separation when the flux ratio is allowed to vary between detectable limits (0.05 to 0.95). \newrev{The median value of the error indicates a bias if it is non-zero and consistently of one sign.} 

The main impact of the smearing is a degradation of the goodness of fit at all separations, followed by errors on the flux ratio and separation at moderate separations, and a clear bias of both observables at larger separations. In our models, the secondary is dimmer than the input of the simulation \newrev{more often than not}, and the separation tends to be smaller. \newrev{(For instance, the confidence intervals on the errors of Fig.~\ref{fig:binobs-bias-2} show that the error on the separation is approximately 5 times more likely to be negative than positive at a separation of 30\,mas.)} The \newrev{apparent dimming of the secondary} is easily explained by the tapering of the fringe contrast that occurs due to smearing. The \newrev{bias on the separation} is independent of smearing, as we will see later on. 

Even at moderate separations (5--10\,mas) the reduced chi square is around 3. However, the errors on the flux ratio and positions become significant (50\,$\mu$as and 20\,mmag) only at larger separations ($\gtrsim 15$\,mas), as Fig.~\ref{fig:binobs-bias} shows. At first sight, this seems to contradict the trend of Sect.~\ref{sec:bias:observables}. In that section, we found a significant smearing of the closure phase at small separations, as a result of the imperfect centering of fringe packets in an observation with multiple spectral channels. We easily reconcile these findings by noting that, as an average over the spectral band, the group delay is zero, i.e. both ends of the band have group delays of the same magnitude but opposite signs; thus their respective impacts on the observables approximately cancel out in the fit. The deviation of the individual spectral channels from the average over the band still explains the larger chi square. (Fig.~\ref{fig:closim} in Appendix~\ref{ap:closim} shows how the closure phases are impacted differently for the three spectral channels of PIONIER in low resolution mode.) 

\paragraph{Smeared binary model.} We performed similar fits to synthetic smeared fringes of a binary \rev{by} using the Gaussian formulas for the smearing (see Sect.~\ref{sec:ana}). The \newrev{absolute values of the errors} on the position and flux ratio are given for a binary with a flux ratio of 0.6 in the right panel of Fig.~\ref{fig:binobs-bias}. The \rev{errors} on the position and magnitude difference, and the quality of the fit, are given in the right panel of Fig.~\ref{fig:binobs-bias-2} for a wide range of flux ratios. \newrev{In Fig.~\ref{fig:binobs-bias-2}, the median value of the error indicates a bias if it is non-zero and consistently of one sign.}

Taking the smearing into account eliminates most of the errors and bias on the flux ratio.
It also largely improves the quality of the fit, with a reduced chi square of 3 found at significant separations ($\gtrsim 15$\,mas) in \rev{most cases}. The errors on the separation are improved at all separations, but \rev{the bias remains at larger separations}. We \rev{have found that the bias is related} to the uncertainty on the effective wavelength of the interferometer, which varies by $\approx 0.1$\% across baselines on PIONIER; this phenomenon is independent \rev{of} our adequate modelling of the smearing. \rev{It is difficult to calibrate in the first place, because a deviation of the pie\newrev{z}o scan speed from its nominal value has exactly the same observable consequence. (We note that including a proper spectral calibration in the instrument would solve this problem.)} At 30\,mas of separation, \rev{a 0.1\% bias translates into 30$\,\mu$as}, which is what we indeed find: \rev{the solid lines in the top panels of Fig.~\ref{fig:binobs-bias-2} show this bias both in the monochromatic model and the smeared one.} At specific binary parameters, seen as islands of high \rev{error} values in Fig.~\ref{fig:binobs-bias}, \rev{the discrepancy} originates from the difference between the smeared visibility and the Gaussian model: this happens close to smearing-induced phase jumps (see Fig.~\ref{fig:closim} of Appendix~\ref{ap:closim} for a comparison between the Gaussian smearing and simulated values). High-contrast binaries \rev{do not feature these phase jumps} and are not impacted. For precision work \rev{on high to moderate flux ratio binaries, we strongly recommend discarding closure phases} close to predicted jumps. 


\section{Modelling the atmosphere}
\label{sec:atm}
\label{sec:atm:temp}

The estimators of the interferometric observables have been chosen to be mostly immune to atmospheric biases in the typical interferometric regime of a moderately resolved source, \rev{i.e. when bandwidth smearing can be ignored}. In this section, we investigate possible biases when \rev{bandwidth smearing becomes significant}, as \citet{ZHA07} did for IOTA's closure phase estimator.

For temporal scanning, it is possible to write the differential piston---the variable differential phase induced by the atmosphere---as a function of OPD, since time and OPD are linked \citep[see for instance][]{jitter}. The jittered coherent flux can be expressed as a function of the ideal coherent flux
\begin{equation}
 \phasorjitt[ij]{}(\opdvar) = 
 \phasor[ij]{}(\opdvar + \piston[ij](\opdvar))
 \wideexp{\left[
 -\frac16 \left(
 \pi\wavenumzero 
 \pderiv{\piston[ij]}{\opdvar}(\opdvar)
 \right)^2\right],
 }
 \label{eq:coherjitt}
\end{equation}
\rev{where $\textpiston[ij]$ is the atmospheric differential piston on baseline $ij$.} The exponential term is the contrast loss due to the piston variation during the integration, which is of the order of one millisecond for one OPD step of a temporal scan. It bears the assumption that the spectral envelope of the fringes does not have features as sharp as the fringe frequency and that the integration during one OPD step is fast enough (of the order of \rev{a} millisecond in practice) to allow for a linearisation of the piston. 

\subsection{Orders of magnitude}
\label{sec:atm:om}
An analytic approach to the atmospheric turbulence can be taken, using the assumption that scanning is fast enough for the piston to vary linearly during a sub-second scan, i.e.
$\\textpist[ij]{} = \\textpist[ij]{0} + \n\\textpist[ij]{1} \\textopd[ij]$, where $\\textpist[ij]{0}$\nis the group-delay tracking error and $\\textpist[ij]{1}$ a rate of piston \nvariation during the scan. $\\textpist[]{0}$ and $\\textpist[]{1}$ are random variables when statistics over a large number of scans are derived. Using this approach, the coherent flux is:\n\\begin{align}\n\\begin{split}\n \\phasorjitt[ij]{} (\\opd[ij]) &= \n \\sum_o\n 2 \\IFTsflux{o}(\\xobj[ij]{o} + (1 + \\pist[ij]{1})\\opd[ij] + \\pist[ij]{0}) \n \\\\&\\qquad \\times \\exp{\n i\\phiobj[ij]{o} \n + 2i\\pi\\wavenumzero[(1+\\pist[ij]{1})\\opd[ij] + \\pist[ij]{0}]\n - \\frac16 (\\pi\\wavenumzero\\pist[ij]{1})^2 \n }.\n\\end{split}\n\\label{eq:atm:phasor}\n\\end{align}\nThis approach can be used to determine the orders of magnitude of the atmospheric effects.\n\n\\paragraph{Visibility.} \nThe piston variation term $1 + \\textpist[ij]{1}$ multiplies the OPD variable in Eq.~(\\ref{eq:atm:phasor}), so we recognise it as a scaling factor. $\\textpist[ij]{0}$ is a mere shift of the central OPD and has no impact---the square visibility does not depend on centering. Therefore, we can link the jittered visibility to the ideal case: \n\\begin{equation}\n \\vsqPS[ij]{\\text{jit}} = \\frac{1}{1+\\pist[ij]{1}} \\vsqPS[ij]{\\text{ideal}}\n \\wideexp{-\\frac13 (\\pi\\wavenumzero\\pist[ij]{1})^2}.\n\\end{equation}\nThe impact of atmospheric jitter is independent \\rev{of} the geometry of the source and, thus, of smearing. For all separations it can be calibrated out if science target and calibrators are observed under similar atmospheric conditions.\n\n\\paragraph{Closure phase.} The group-delay tracking term $\\textpist[ij]{0}$ can be seen as a fringe shift that adds to the predicted fringe position $\\textphishift[ij]{} \\rightarrow \\textphishift[ij]{} + 2\\pi\\wavenumzero\\textpist[ij]{0}$ and the linear variation of the piston can be seen as a scanning velocity change $\\textdopd[ij] \\rightarrow \\textdopd[ij](1 + \\textpist[ij]{1})$. With these substitutions, the formulas of Sect.~\\ref{sec:ana:clo} can be used directly to determine the jittered closure phase. As we have seen, the predominant impact of the bandwidth smearing on the closure phase is the fringe decentering $\\textphishift[ij]{}$, so we expect the group-delay tracking errors to be the main source of bias. \n\n\\subsection{Numerical modelling}\n\n\\begin{figure}\n\\includegraphics[width=\\linewidth]{pdf\/jittered-interferogram.pdf}\n\\caption{One of the simulated temporal scans. The deformation of the envelope is correlated with the piston slope, and the accordion-like features with variations of its slope. \\textit{Top:} piston; \\textit{Bottom:} simulated fringes.}\n\\label{fig:interf:jitt}\n\\end{figure}\nIn the high frequency regime the pistons at the different stations can be considered uncorrelated when the baselines are larger than the outer scale of turbulence $\\mathcal{L}_0$ \\citep{KEL07}. With a median value of $\\mathcal{L}_0 = 22$\\,m at Paranal \\citep{MAR00}, the baselines of the medium and large telescope quadruplets used with PIONIER normally fulfil this criterion. At other sites, for smaller baselines, or under relatively uncommon atmospheric conditions at Paranal, the pistons can be correlated. This correlation decreases the amount of atmospheric jitter for given coherence time and seeing, which in turn tends to decrease the bias on the interferometric observables. 
Therefore, we model the random piston $\\piston[i](t)$ using its spectral density\n\\begin{equation}\n \\DFT{\\piston[i]}(\\nu) = A\\nu^{-B} \\exp{\\j\\Phi^i(\\nu)},\n\\end{equation}\nwhere $A$ and $B$ are constants and $\\Phi^i(\\nu)$ is chosen randomly for each sampled temporal frequency $\\nu$. For Kolmogorov turbulence, the fast scan ($\\ll 1$\\,s) regime has $B = 17\/6$ \\citep{CON95} but there is \\rev{experimental evidence \\citep{DIF03}} that the slope is not as steep at VLTI, \\rev{with simulations by \\citet{ABS06} explaining it in terms of the piston induced at high frequency by the (imperfect) adaptive optics correction \\citep[``bimorph piston'', see][]{VER01} and wavefront errors produced by the injection into single-mode waveguides \\citep[``coupled piston'', see][]{RUI01}. \\citet{LIN99} have also measured a deviation from the Kolmogorov behaviour at PTI.} We used $B = 2$, which experimentally reproduces well the accordion features of temporal scans obtained under below-average atmospheric conditions (see Fig.~\\ref{fig:interf:jitt}). We normalise $A$ to match the group-delay tracking rms in the differential piston $\\piston[ij] = \\piston[j] - \\piston[i]$.\n\nBy substituting into Eq.~(\\ref{eq:atm:phasor}), we perform a numerical integration of Eqs.~(\\ref{eq:def:vsqPS}~\\& \\ref{eq:def:bispDS}) and obtain the jittered\nvisibility amplitude and closure phase.\n\n\\begin{figure*}\n\\includegraphics{jittered-closure.pdf}\n\\caption{Bias on the closure phase resulting from atmospheric piston in temporal scans, assuming that the static smearing is correctly modelled. The x-axis shows the reduced binary separation in milliarcseconds-hectometres of baselines per micron of wavelength (below) or the OPD between binary components (above). \\textit{Top:} bias and statistical errors for three spectral resolutions corresponding to PIONIER at the VLTI. \\textit{Bottom:} bias in the spatial resolution-spectral resolution plane. The bias decreases quickly with spectral resolution.}\n\\label{fig:phi:jitt}\n\\end{figure*}\n\n\n\\subsection{Bias on the observables} \nAs we have seen in Sect.~\\ref{sec:atm:om} there is little bias of the atmosphere on the square visibility amplitude, and we could confirm it numerically. However, the bias can be substantial on the closure phase. Figure~\\ref{fig:phi:jitt} displays in its top panel the bias on the closure phase of our test-case binary as a function of separation, for the three spectral resolutions $\\resol{} = 7$, 18, 42 corresponding to PIONIER's modes. \\rev{For each separation, baseline, and spectral resolution considered in the simulation}, 100 random scans with a \\rev{remaining scatter of the fringe tracking of $6\\lambda$ (typical value under average conditions) have been generated. The closure phase on the telescope triplet is the average closure phase of the scans. To better identify the biases}, the closure phase of a \\rev{jitter-free observation} has been subtracted from the results. In the lower panel of the figure, the bias on the phase is given in the separation-spectral resolution plane. As one can see, the impact of the atmosphere is very strong at low resolution but quickly vanishes for $\\resol{} \\gtrsim 20$. For three spectral channels across a typical IR band, the error on the phase is at most a few degrees. 
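\n\nFor reproducibility, we note that the piston synthesis described above amounts to only a few lines of code. The following minimal sketch (in Python; the sampling, slope, and rms values are illustrative assumptions, not the exact parameters of our simulations) generates a differential piston sequence from the power-law spectral density with random phases:\n\\begin{verbatim}\nimport numpy as np\n\ndef random_piston(n, dt, B=2.0, seed=None):\n    # Fourier synthesis: amplitude ~ nu^-B with a random\n    # phase for each sampled temporal frequency nu\n    rng = np.random.default_rng(seed)\n    nu = np.fft.rfftfreq(n, dt)\n    amp = np.zeros(nu.size)\n    amp[1:] = nu[1:] ** (-B)\n    phase = rng.uniform(0.0, 2.0 * np.pi, nu.size)\n    return np.fft.irfft(amp * np.exp(1j * phase), n=n)\n\n# differential piston on baseline ij, rescaled to a target\n# group-delay tracking rms (here in units of wavelengths)\nn, dt, rms = 1024, 1e-3, 6.0\np_ij = random_piston(n, dt, seed=1) - random_piston(n, dt, seed=2)\np_ij *= rms \/ p_ij.std()\n\\end{verbatim}\nThe jittered observables then follow by inserting such realisations into Eq.~(\\ref{eq:atm:phasor}) and integrating numerically, as described above.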
\n\n\\section{Discussion \\& Conclusion}\n\n\\subsection{Impact of the instrument and visibility estimator}\n\nAs already discussed by \\citet{PER05}, the square visibility amplitude is impacted differently for different estimators that otherwise would be equivalent in the absence of smearing. Not only is the amount of smearing different, but the behaviour can change. Because it is a popular recombination method and it illustrates this argument, we have given the formulas for the smeared complex visibility of a time-modulated ABCD recombiner in Appendix~\\ref{ap:ABCD}. In Sect.~\\ref{sec:ana}, we have seen that the square visibility amplitude is not impacted by the fringe centering in full scans processed by Fourier analysis: in Eq.~(\\ref{eq:gen:V2smearing}), smearing is independent \\rev{of} the absolute source position and of the group delay $\\textphishift[ij]{}$---it depends only on the source separations $\\textphiobj[ij]{o} - \\textphiobj[ij]{p}$. Conversely, the ABCD visibility estimator shows explicit dependence on $\\textphiobj[ij]{o}$ and $\\textphishift[ij]{}$ (see for instance Eq.~\\ref{eq:ABCD:gauss:smearing}), and this propagates to the square visibility estimator. \n\nAlso, we have clearly demonstrated that instrumental features such as the OPD modulation scheme \\rev{(ABCD or Fourier mode, stroke speeds on the different baselines)} or the chromatic dispersion have a strong impact on the closure phase. In particular, the smearing behaviour of the closure phase of PIONIER (Fig.~\\ref{fig:closim}) shows different trends on different triplets or different spectral channels: on one hand, different telescope triplets are impacted differently because of the different OPD modulations; on the other hand, different spectral channels of the same triplet behave in different ways, as a consequence of different chromatic signatures. While the square visibility amplitude did not show a strong dependence on instrumental signature for full scans processed by Fourier analysis (Sect.~\\ref{sec:ana}), this is not necessarily the case in general. For instance, a time-modulated ABCD method shows an impact on both visibility and phase (see Eq.~\\ref{eq:ABCD:gauss:smearing} in Appendix~\\ref{ap:ABCD}).\n\n\\rev{We therefore stress} that each data reduction pipeline and each instrument requires its own modelling of the smearing. In this paper, we have provided a generic formalism which can be used as is for VLTI\/PIONIER and probably with little adaptation to other instruments that temporally scan most of the fringe packet. \n\n\\subsection{When only part of the fringe packet is sensed}\n\nOur developments make the implicit assumption that most of the flux of \\newrev{the} fringe packet is measured, i.e. that the OPD range is significantly larger than the FWHM of the fringe envelope. Actually, our developments still hold if the centres of the fringe packets originating from the different parts of the source are scanned but the extremities of the fringe packet are cropped, provided that the cropping is not too aggressive. \\rev{In the case of PIONIER, the partial cropping on some baselines does not prevent a good agreement between simulated fringes and our analytic development, as Fig.~\\ref{fig:closim} shows.} \n\nHowever, this is clearly not the case in the ABCD method when a fringe-tracker locks the recombiner on the ``central'' fringe \\citep[e.g.][]{SHA80}. 
While the smearing can be derived theoretically for this method (see Appendix~\\ref{ap:ABCD}), \\rev{its magnitude will depend on the location of the fringe (i.e. the OPD) onto which the fringe tracker locks. In the aforementioned Appendix it is shown that the visibility depends on the position of a source, which in turn depends on the value of the group delay \\textphishift[ij]{} (see Eq.~\\ref{eq:ABCD:beta}). For relatively compact objects, the fringe tracker locks onto the brighter fringe or a local zero of the group delay, and possible biases are calibrated out when observing an (almost) point-like calibrator under similar conditions. When a source is smeared, the fringe tracker does not necessarily behave in the same manner on source and calibrator, since there is no longer an obvious location of a central fringe (e.g. in the extreme case of a double fringe packet, it may lock on either packet). Therefore,} it is quite likely that instruments sensing the central fringe of sources more resolved than a few beam widths \\rev{(i.e. a few times the resolving power of the interferometer) will yield altered measurements}, unless \\rev{(a)} a high spectral resolution \\rev{is used ($\\resol{} \\gg \\textphishift[ij]{}$ in Eq.~\\ref{eq:ABCD:beta})} or \\rev{(b) the fringe tracking scheme can be modelled with enough detail to know on which part of a given smeared fringe packet it locks}. In particular, instruments that target high-accuracy astrometry with the ABCD method, like GRAVITY \\citep{GRAVITY} and PRIMA \\citep{PRIMA}, will require that both the tracking reference and the science target are not very resolved.\n\n\\subsection{Image reconstruction}\nOur approach clearly targets parametric analysis, by providing formulas to model fit interferometric data by systems of compact sources. Image reconstruction, however, usually relies on the Fourier relation between visibility and image, a relation which is broken in finite bandwidth. Thus, image reconstruction is made difficult, as \\citet{BRI89} already noted in radio interferometry.\n\n\n\\subsection{Dealing with bandwidth smearing in practice}\nThe angle of attack of radio astronomers to limit bandwidth smearing\n(see e.g. \\citet{BRI89}) is to restrict its effects either by\nincreasing the spectral resolution to optimise the interferometric\nfield of view or by centering the phase tracking delay optimally to\nreduce the radial spread. Optical interferometry users do not\nnecessarily have such flexibility. One of the important differences\nbetween the wavelength regimes is that, in the optical, because the\narrays have \\rev{many fewer} telescopes, most of the users do not actually\nreconstruct images but rather model the interferometric\nobservables directly. This \\rev{has} been done to an extreme level of precision\nwhere visibilities are measured to a fraction of a percent\n\\citep[e.g.][]{Absil:2008} and closure phases to a fraction of a degree\n\\citep[see e.g.][]{Zhao:2011}. 
The particularly large impact of the\nsmearing, even for moderately resolved sources, undermines the idea\nthat the parameters for a large number of objects might be derived\neffortlessly using the traditional techniques.\n\nIt therefore appears reasonable to adopt a two-step strategy to deal with\nbandwidth smearing: first, by \\emph{limiting the static instrumental smearing\nby design}; and secondly, by \\emph{operating the instrument under\nconditions that allow a proper modelling of the induced biases}.\n\n\\emph{Limiting the instrumental smearing.} We have seen that the ``group delay\nclosure'' is the major contributor to a static smearing effect in the closure\nphase \\rev{for instruments that operate in Fourier mode}; it depends on the\ngroup delays and the OPD modulation scheme. The scanning speed scheme can be\nchosen so as to minimise the average group delay closures. For the\n\\rev{temporal ABCD, visibility amplitudes and closure phases are directly\nimpacted by the group delay, and this mitigation can no longer be used. Since the\ngroup delay is mostly produced by a static chromatic dispersion in the instrument (waveguides, optical elements), an} integrated approach\nto differential dispersion and birefringence compensation can be attempted, as\ndiscussed by \\citet{LAZ12}. Solutions exist that can provide guided- or free-space-optics\ninstruments with dispersion compensation \\citep{Vergnole:2005}.\n\\rev{Correcting the air dispersion in the delay lines in real time may prove\nmore difficult to implement than static correction of the dispersion in the optical elements, so that evacuated delay lines are probably part of the solution for larger baseline lengths ($\\gg 100$\\,m) \\newrev{and at shorter wavelengths where the air dispersion is larger}.}\n\n\\emph{Modelling the biases.} We have shown that bandwidth smearing can be\nmodelled provided that a moderate spectral resolution is used (the first\nobvious step) \\rev{and} the \\rev{estimators of the observables are properly\ncalculated}. In very low spectral resolution or in full-band ($\\resol{} \\sim\n5$) observations, atmospheric effects must also be decently constrained. For the\nlatter, initial studies \\citep[e.g.][]{LIN99,DIF03} have shown the correlation\nbetween atmospheric turbulence and low-frequency statistics of the piston, but these\nare not necessarily well adapted to the sub-second exposures\n\\citep[e.g.][]{ABS06}. Further dedicated characterisation of piston statistics\n\\rev{vs. monitored atmospheric properties} would be needed. In summary, the\nultimate tool to obtain a smeared source's \\rev{properties} will simulate the\ninstrumental visibility numerically, taking into account the instrumental signatures\n(in particular a dedicated spectral calibration) and the atmosphere.\n\n\\subsection{Concluding remarks}\n\n\\beginrevision\nOptical interferometry is increasingly used for precise measurements of high flux ratios and\/or separations. Applications of these precision techniques range from the detection of hot dust components around debris-disc host stars or the search for direct detection of hot Jupiters to the accurate astrometry of binary systems in search of precise mass determinations. \n\nWe have focused our work on a rarely studied effect that can significantly alter these astrophysical measurements, the so-called bandwidth smearing. This bias-inducing phenomenon arises from the wavelength dependence of the characteristics of the instrument, the atmosphere, and the source. 
We have modelled its impact by analysing its influence on the instrumental fringe contrast and determined how it alters the visibility amplitudes and closure phases. The magnitude of this effect will depend, for a given instrument, on the spectral resolution and the extension of the observed field of view, and in some cases on the atmospheric piston.\n\nWe have demonstrated analytically how to calibrate for this degradation in the context of popular temporal fringe scanning instruments and applied this analysis to the specific case of binary systems by computing the errors or biases induced on the separation vector and flux ratio.\n\nWe have further discussed ``real-life'' constraints such as the influence of the atmospheric piston, the use of different fringe encoding schemes, or the imperfections of the fringe tracking. We believe that the current analysis can be used with little effort to correct for potential bandwidth smearing biases in almost any astrophysical case.\n\\endrevision\n\n\\section*{Acknowledgements}\n\\rev{We would like to thank an anonymous referee and Chris Haniff who helped us to improve this paper. This research has made use of NASA's Astrophysics Data System, the free software packages Maxima, Yorick, and Python. It has been supported by Comit\\'e Mixto ESO-Chile and Basal-CATA (PFB-06\/2007).}\n\n{\\footnotesize\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Methods}\n\\noindent\n$\\textbf{Crystal growth and magnetic characterizations.}$ $\\rm{(MnBi_2Te_4)(Bi_2Te_3)_{\\emph{n}}(\\emph{n} = 1, 2,...)}$ single crystals were grown by the flux method \\cite{N33}. Mn powder, Bi lump and Te lump were weighed with the ratio of Mn: Bi: Te = 1: 8: 13 (MnTe: $\\rm{Bi_2Te_3}$ = 1: 4). The mixture was loaded into a corundum crucible sealed into a quartz tube. The tube was then placed into a furnace and heated to 1100 \u00b0C for 20 h to allow sufficient homogenization. After a rapid cooling to 600 \u00b0C at 5 \u00b0C\/h, the mixture was cooled slowly to 585 \u00b0C (581 \u00b0C) at 0.5 \u00b0C\/h for $\\rm{MnBi_4Te_7}$ ($\\rm{MnBi_6Te_{10}}$) and kept at this temperature for 2 days. Finally, the single crystals were obtained after centrifuging. The centimeter-scale plate-like $\\rm{MnBi_4Te_7}$ and $\\rm{MnBi_6Te_{10}}$ single crystals can be easily exfoliated. Magnetic measurements of $\\rm{MnBi_4Te_7}$ and $\\rm{MnBi_6Te_{10}}$ single crystals were performed with the vibrating sample magnetometer (VSM) option in a Quantum Design Physical Properties Measurement System (PPMS-9 T). The temperature-dependent magnetization measurements are described in detail in the Supplementary Materials.\n\\bigskip\n\n\\noindent\n$\\textbf{Preparation of the ultra-thin samples.}$ The $\\rm{MnBi_4Te_7}$ and $\\rm{MnBi_6Te_{10}}$ flakes with different thicknesses were first mechanically exfoliated on a polydimethylsiloxane (PDMS) substrate by the Scotch tape method. The exfoliated samples on PDMS substrates were then dry-transferred onto 285 nm $\\rm{SiO_2}$\/Si substrates with evaporated gold films. Then, a layer of PMMA was spin-coated on the thin flakes for protection.\n\\bigskip\n\n\\noindent\n$\\textbf{AFM characterization.}$ The thickness of the ultra-thin samples was verified by atomic force microscopy characterization using the Oxford Cypher S system in tapping mode. 
According to the height line profiles, the $\\rm{MnBi_4Te_7}$ and $\\rm{MnBi_6Te_{10}}$ were confirmed to possess an alternating lattice structure of BT (1 nm) + MBT (1.4 nm) and BT (1 nm) + BT (1 nm) + MBT (1.4 nm), respectively. See more details in the Supplementary Materials.\n\\bigskip\n\n\\noindent\n$\\textbf{RMCD measurements.}$ The RMCD measurements were performed based on the Attocube closed-cycle cryostat (attoDRY2100) down to 1.6 K and up to 9 T in the out-of-plane direction. The linearly polarized light of a 633 nm HeNe laser was modulated between left and right circular polarization by a photoelastic modulator (PEM) and focused on the sample through a high numerical aperture (0.82) objective. The reflected light was detected by a photomultiplier tube (THORLABS PMT1001\/M). The magnetic reversal under an external magnetic field was detected by the RMCD signal, determined by the ratio of the a.c. component of the PEM at 50.052 kHz and the a.c. component of the chopper at 779 Hz (processed by a two-channel lock-in amplifier, Zurich HF2LI). The errors in the ratio of FM and AFM components are determined by the instability of the data acquired during the RMCD measurements.\n\\bigskip\n\n\\noindent\n$\\textbf{STEM characterization.}$ Samples for cross-sectional investigations were prepared by standard lift-out procedures using an FEI Helios NanoLab G3 CX focused ion beam system. To minimize sidewall damage and make the samples sufficiently thin to be electron transparent, final milling was carried out at a voltage of 5 kV, followed by fine milling at 2 kV. Aberration-corrected STEM imaging was performed using a Nion HERMES-100 operating at an acceleration voltage of 60 kV and a probe-forming semi-angle of 32 mrad. HAADF images were acquired using an annular detector with a collection semi-angle of 75-210 mrad. EELS measurements were performed using a collection semi-angle of 75 mrad, an energy dispersion of 0.3 eV per channel, and a probe current of $\\sim$20 pA. The Mn-$L$ (640 eV) and Te-$M$ (572 eV) absorption edges were integrated for elemental mapping after background subtraction. The original spectrum images were processed to reduce random noise using a principal component analysis (PCA) tool. HAADF image simulations were computed using the STEM\\_CELL software simulation package, matching the microscope experimental settings described above and using a supercell with a thickness of $\\sim$20 nm.\n\n\n\\section{\\label{sec:level1}DATA AVAILABILITY}\nThe data that support the findings of this study will be available at an open-access repository with a doi link when accepted for publishing.\n\n\\section{\\label{sec:level3}ACKNOWLEDGEMENT}\nThis work was supported by the National Key R\\&D Program of China (Grants No. 2018YFA0306900, No. 2017YFA0206301, 2019YFA0308602, No. 2019YFA0308000, and 2018YFA0305800), the National Natural Science Foundation of China (Grants No. 62022089, No. 12074425, and No. 11874422), Strategic Priority Research Program (B) of the Chinese Academy of Sciences (Grant No. XDB33000000), Beijing Natural Science Foundation (Grant No. JQ21018 and BJJWZYJH01201914430039), and the fundamental Research Funds for the Central Universities (E1E40209).\n\n\\section{Author contributions}\nY.Y., S.Y., and X.X. conceived the project, designed the experiments, analyzed the results and wrote the manuscript. S.Y. and X.X. conducted the RMCD measurements. H.W. and T.X. grew the $\\rm{MnBi_4Te_7}$ and $\\rm{MnBi_6Te_{10}}$ bulk crystals. M.X., S.T., and H.L. grew the $\\rm{MnSb_2Te_4}$ bulk crystal. Y.H. 
prepared the few-layer samples. Y.P. and J.Y. performed the magnetic characterizations of the bulk crystals. R.G. performed the STEM characterization under the supervision of W.Z. Y.Z. and Z.L. helped with the analysis of the results. All authors discussed the results and contributed to the manuscript.\n\n\\section{ADDITIONAL INFORMATION}\nCompeting interests: The authors declare no competing financial interests.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIt is well known that Einstein's General Relativity (GR) is plagued by short-scale divergences, be it in the context of the curvature singularities inside black holes, or the big bang and related singularities in cosmology. In the context of point masses, the gravitational field close to the mass becomes singular, and curvature invariants diverge. These singularities ``survive'' the Newtonian limit, where they resurface as unbounded tidal forces. Seemingly, general covariance is not the guiding principle to ameliorate the situation. On the other hand, these singularities are unphysical and have to be dealt with. What to do?\n\nOne may hope that these issues will be resolved once a consistent UV completion of GR (``quantum gravity'') is known. In the meantime, we can instead attempt to find modifications of GR that feature finite potentials, regarding these modifications as the effective field theory limit of some more fundamental concepts, hitherto unknown. Before jumping into technical details, let us consider some modifications of Newtonian gravity for the field of a $\\delta$-like mass distribution\n\\begin{align}\n\\label{eq:delta-density}\n\\rho = 4\\pi G m\\delta(r) \\, .\n\\end{align}\nThen, the Poisson equation gives the well-known Newtonian potential of a point mass,\n\\begin{align}\n\\bigtriangleup \\phi = \\rho \\, , \\quad \\phi(r) = -\\frac{Gm}{r} \\, .\n\\end{align}\nThis potential is singular, and it gives rise to infinite tidal forces. In this essay, we shall uphold Eq.~\\eqref{eq:delta-density}, that is, the notion of sharply concentrated densities. Then, a simple Pauli--Villars-type regularization scheme of the Poisson equation can ameliorate the situation:\n\\begin{align}\n\\label{eq:modified-poisson}\n\\bigtriangleup (1 + M^{-2}\\bigtriangleup)\\phi = \\rho \\, , \\quad \\phi(r) = -\\frac{Gm}{r} \\left( 1 - e^{-M r} \\right)\\, ,\n\\end{align}\nand $M$ is a large mass scale such that for $M \\rightarrow \\infty$ one recovers the original Poisson equation. For finite $M$, however, the potential is now finite at the origin, $\\phi(0) = -GMm$. Note, however, that it is not regular since its derivative does not vanish: $\\phi'(0) = GmM^2\/2$. If we think of the Newtonian limit of GR, this would imply that the corresponding spacetime has a conical singularity at the origin. There is another problem, however, which is generic to higher-derivative modifications:\n\nTypically, they bring along massive ghost degrees of freedom, which in turn lead to unstable vacua upon quantization. In the above example, the Green function is\n\\begin{align}\nD(r) = \\frac{1}{\\bigtriangleup(1 + M^{-2}\\bigtriangleup)} = \\frac{1}{\\bigtriangleup} - \\frac{1}{M^2 + \\bigtriangleup} \\, ,\n\\end{align}\nthe second term of which corresponds to a ghost of mass $M$. 
The particle spectrum of this theory, then, will not only feature a massless graviton of helicity 2, but also a massive ghost mode \\cite{VanNieuwenhuizen:1973fi,Stelle:1977ry}.\n\nA recent approach \\cite{Modesto:2011kw,Biswas:2011ar} that avoids the excitation of ghost modes at tree level \\cite{Shapiro:2015uxa} is aptly called \\emph{ghost-free gravity}. To see how this works, let us consider the following modification of the Poisson equation:\n\\begin{align}\ne^{f(\\bigtriangleup)} \\bigtriangleup \\phi = \\rho \\, ,\n\\end{align}\nwhere $f(\\bigtriangleup)$ is some polynomial of the Laplace operator defined as a formal power series (we are forced to introduce again at least one scale $M$ such that we can build the dimensionless combination $M^{-2}\\bigtriangleup$ that enters the power series). Then, the propagator is\n\\begin{align}\nD(r) = \\frac{1}{e^{f(\\bigtriangleup)} \\bigtriangleup} \\, , \n\\end{align}\nbut by construction there are no new poles since the exponential function is nowhere zero on the real axis. This can be made more rigorous in a mathematical sense by noticing that the exponential of a polynomial is a so-called \\emph{entire function} which does not have any poles at finite distance from the origin. Then, one can apply Picard's little theorem for entire functions to ensure that the denominator of the propagator never goes to zero (except at the graviton pole), which is a fundamental mathematical statement that surpasses simple plausibility arguments.\n\nAs it turns out (and as we shall see below), the gravitational potential for some generic choices of the function $f(\\bigtriangleup)$ is regular at the origin. However, as a theory with an infinite number of derivatives, this ghost-free gravity is necessarily non-local at some characteristic scale $\\ell \\equiv M^{-1}$. What is the nature of this non-locality? In this essay we shall treat it as classical, that is, $\\ell \\gg \\ell_\\text{Planck}$. This is consistent with regarding ghost-free gravity as an effective description of gravity, obtained by coarse-graining some underlying, more fundamental quantum description. In that sense, $\\ell > 0$ may be regarded as a reminder that ghost-free gravity may have a non-classical origin.\n\nLet us summarize: in both higher-derivative and infinite-derivative theories of gravity, one can attain a Newtonian potential that is (i) regular at the origin, and (ii) decays like $\\sim 1\/r$ at large distances. On the classical side there are various attempts to understand the non-linear regime of non-local gravity, whereas on the quantum side the perturbative structures are still to be fully understood.\n\nIn the remainder of this essay, we would like to venture in a somewhat orthogonal direction and study the gravitational field of point sources at mesoscopic distances, that is, not directly at the origin, but also not so far away that the gravitational field approaches the standard Newtonian $1\/r$-behavior.\n\n\\section{A framework for linearized higher-derivative gravity}\nLet us now briefly sketch a more rigorous framework that we can employ to study the linearized gravitational field of matter distributions in both higher-derivative and infinite-derivative gravity on a flat Minkowski background. 
Writing the metric as $g{}_{\\mu\\nu} = \\eta{}_{\\mu\\nu} + h_{\\mu\\nu}$, the most general action quadratic in $h_{\\mu\\nu}$ in $D=d+1$ spacetime dimensions can be written as\n\\begin{align}\n\\begin{split}\nS &= \\frac{1}{2\\kappa} \\int \\mbox{d}^D x \\Big( \\frac12 h^{\\mu\\nu}\\,a(\\Box)\\Box\\,h_{\\mu\\nu}-h^{\\mu\\nu}\\,a(\\Box)\\partial_{\\mu}\\partial_{\\alpha}\\,h^{\\alpha}{}_{\\nu} +h^{\\mu\\nu}\\, c(\\Box)\\partial_{\\mu}\\partial_{\\nu} h \\\\\n&\\hspace{75pt} - \\frac12 h\\,c(\\Box)\\Box h \n+ \\frac12 h^{\\mu\\nu}\\,\\frac{a(\\Box)-c(\\Box)}{\\Box}\\partial_{\\mu}\\partial_{\\nu}\\partial_{\\alpha}\\partial_{\\beta}\\,h^{\\alpha\\beta}\\Big) \\, ,\n\\end{split}\n\\end{align}\nwhere ``$\\Box$'' denotes the d'Alembert operator. The two functions $a(\\Box)$ and $c(\\Box)$ are non-local \\emph{form factors} that satisfy $a(0)=c(0)=1$ in order to reproduce linearized GR at large scales. The above action is indeed the most general one since the Bianchi identities relate the possible choices of functions $f(\\Box)$ to just two independent functions.\n\nFor the sake of simplicity, let us consider the simple case where $a(\\Box)=c(\\Box)$. Then, for a stress-energy tensor of a point mass, $T{}_{\\mu\\nu} = m \\delta^t_\\mu \\delta^t_\\nu \\delta^{(d)}(\\vec{r}\\,)$, and the metric ansatz\n\\begin{align}\n\\label{eq:metric-ansatz}\n\\mbox{d} s^2 = -\\left[1-2(d-2)\\phi\\right]\\mbox{d} t^2 + (1+2\\phi)\\mbox{d} \\vec{r}^{\\,2} \\, ,\n\\end{align}\nwhere $\\mbox{d} \\vec{r}^{\\,2} = \\mbox{d} x_1^2 + \\dots + \\mbox{d} x_d^2$ is the metric of flat space in Euclidean coordinates $x_i$ ($i=1,\\dots,d$), one obtains the field equations\n\\begin{align}\na(\\bigtriangleup)\\bigtriangleup \\phi = \\frac{\\kappa m}{d-1} \\delta^{(d)}(\\vec{r}\\,) \\, .\n\\end{align}\nThe Green function for this static problem takes the form\n\\begin{align}\n\\label{eq:green-function}\nD(r) = \\frac{1}{(2\\pi)^{\\frac d2} r^{d-2}} \\int\\limits_0^\\infty \\mbox{d} \\zeta \\frac{\\zeta^{\\frac{d-4}{2}}}{a(-\\zeta^2\/r^2)} J_{\\frac d2 - 1}(\\zeta) = \\frac{1}{2\\pi^2 r} \\int\\limits_0^\\infty \\mbox{d} \\zeta \\frac{\\sin \\zeta}{\\zeta} \\frac{1}{a(-\\zeta^2\/r^2)} \\, ,\n\\end{align}\nwhere $J_n$ denotes the Bessel function of the first kind, and in the second equality we inserted $d=3$ (which we shall concern ourselves with in what follows). The gravitational potential is then given by\n\\begin{align}\n\\label{eq:potential-master}\n\\phi(r) = -\\frac{\\kappa m}{d-1} D(r) \\, ,\n\\end{align}\nand it is easy to see that for $a=1$ one obtains the well-known result $\\phi(r) = -Gm\/r$ as obtained in the Newtonian limit of GR in four spacetime dimensions ($d=3$).\n\n\\section{Gravitational Friedel oscillations}\nGiven the general solution of the potential, Eq.~\\eqref{eq:potential-master}, we can now study its shape in various higher-derivative as well as infinite-derivative theories of gravity. 
The Green function \\eqref{eq:green-function} can either be evaluated analytically or numerically; for the sake of this essay, let us focus on analytical results that can easily be written down.\n\nTo that end we shall consider higher-derivative theories of the following class, call them $\\mathrm{HD_N}$,\n\\begin{align}\na(\\Box) = 1 + (-\\Box\/M^2)^N \\, , \\quad N \\in \\mathbb{N} \\, ,\n\\end{align}\nas well as a class of infinite-derivative ``ghost-free'' theories, call them $\\mathrm{GF_N}$:\n\\begin{align}\na(\\Box) = \\exp\\left[ (-\\Box\/M^2)^N \\right] \\, , \\quad N \\in \\mathbb{N} \\, .\n\\end{align}\nClearly these theories satisfy $a=1$ for $M \\rightarrow \\infty$, so for large scales they will reproduce linearized GR. The Green functions can be calculated analytically, and one obtains (see also \\cite{Frolov:2015usa})\n\\begin{align}\n\\begin{split}\n\\label{eq:green-functions}\n\\mathrm{GR} : \\quad & D(r) = \\frac{1}{4\\pi r} \\, , \\\\\n\\mathrm{HD_1}: \\quad & D(r) = \\frac{1}{4\\pi r} \\left( 1 - e^{-M r} \\right) \\, , \\\\\n\\mathrm{HD_2}: \\quad & D(r) = \\frac{1}{4\\pi r} \\left[ 1 - e^{-Mr\/\\sqrt{2}} \\cos\\left( Mr\/\\sqrt{2} \\right) \\right] \\, , \\\\\n\\mathrm{HD_3}: \\quad & D(r) = \\frac{1}{4\\pi r} \\left[ 1 - \\tfrac13 e^{-Mr} - \\tfrac23 e^{-Mr\/2} \\cos\\left( \\sqrt{3}Mr\/2 \\right) \\right] \\, , \\\\\n\\mathrm{GF_1}: \\quad & D(r) = \\frac{\\text{erf}(M r\/2)}{4\\pi r} \\, , \\\\\n\\mathrm{GF_2}: \\quad & D(r) = \\frac{M}{6\\pi^2}\\Big[ 3 \\Gamma\\!\\left(\\tfrac54\\right) {}_1\\!F\\!{}_3\\left( \\tfrac14;~ \\tfrac12,\\tfrac34,\\tfrac54;~ y^2 \\right) -2y\\Gamma\\!\\left(\\tfrac34\\right) {}_1\\!F\\!{}_3\\left( \\tfrac34;~ \\tfrac54, \\tfrac32, \\tfrac74;~ y^2 \\right) \\Big] \\, , \\\\\n\\mathrm{GF_3}: \\quad & D(r) = \\frac{M}{\\pi}\\Big[ -\\frac{1}{\\Gamma\\left(-\\tfrac16\\right)}{}_{1\\!}F{}_{\\!5}\\left( \\tfrac16;~ \\tfrac13, \\tfrac 12, \\tfrac 23, \\tfrac56, \\tfrac76;\\,-z^3 \\right) - \\frac{z}{2\\sqrt{\\pi}} {}_{1\\!}F{}_{\\!5}\\left( \\tfrac12;~ \\tfrac23,\\tfrac56,\\tfrac76,\\tfrac43,\\tfrac32;\\,-z^3\\right) \\\\\n&\\hspace{62pt} + \\frac{3z^2}{10\\Gamma\\left(\\tfrac76\\right)} {}_{1\\!}F{}_{\\!5}\\left( \\tfrac56;~ \\tfrac76,\\tfrac43,\\tfrac32,\\tfrac53,\\tfrac{11}{6};\\,-z^3\\right) \\Big] \\, ,\n\\end{split}\n\\end{align}\nwhere ${}_{p\\!}F{}_{\\!q}(a_1,\\dots,a_p;\\,b_1,\\dots,b_q;\\,z)$ denotes the generalized hypergeometric function and we defined the dimensionless radial variables $y \\equiv M^2 r^2\/16$ and $z \\equiv M^2r^2\/36$. See Fig.~\\ref{fig:potentials} for a visualization of the Green functions: in both higher-derivative and infinite-derivative gravity they are finite at $r=0$, whereas for GR the Green function diverges at the origin.\n\n\\begin{figure}[!htb]\n\\centering\n\\subfloat[Higher derivative theories for $N=1,2,3$.]\n{\n \\includegraphics[width=0.5\\textwidth]{potentials-hd-log-log.pdf}\n}\n\\subfloat[Infinite-derivative theories for $N=1,2,3$.]\n{\n \\includegraphics[width=0.5\\textwidth]{potentials-gf-log-log.pdf}\n}\n\\caption{The Green functions of $\\mathrm{HD_N}$ and $\\mathrm{GF_N}$ theories visualized for $N=1,2,3$. Whereas $N=1$ approaches the $1\/r$ power law directly, there are oscillations in cases of $N=2,3$.}\n\\label{fig:potentials}\n\\end{figure}\n\nThis constitutes a major insight of these calculations in the literature, and, at the linear level, one can easily extend these studies to $p$-branes in higher-dimensional Minkowski space \\cite{Boos:2018bxf}. 
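\n\nThese closed forms can be cross-checked by direct numerical quadrature of Eq.~\\eqref{eq:green-function}. The following short Python sketch (ours, purely for illustration; it assumes scipy is available, and the values of $M$ and $r$ are arbitrary) compares the $\\mathrm{HD_1}$ and $\\mathrm{GF_1}$ integrals with the analytic expressions quoted above:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\nfrom scipy.special import erf\n\ndef green(r, M, inv_a):\n    # D(r) = 1\/(2 pi^2 r) * int_0^oo dz sin(z)\/z * 1\/a(-z^2\/r^2) for d = 3;\n    # the upper limit is truncated, which is harmless for decaying 1\/a\n    integrand = lambda z: np.sinc(z \/ np.pi) * inv_a(z * z \/ (M * M * r * r))\n    val, _ = quad(integrand, 0.0, 100.0, limit=500)\n    return val \/ (2.0 * np.pi ** 2 * r)\n\nM, r = 1.0, 2.5\nhd1 = green(r, M, lambda x: 1.0 \/ (1.0 + x))  # HD_1: 1\/a = 1\/(1 + k^2\/M^2)\ngf1 = green(r, M, lambda x: np.exp(-x))       # GF_1: 1\/a = exp(-k^2\/M^2)\nprint(hd1, (1.0 - np.exp(-M * r)) \/ (4.0 * np.pi * r))\nprint(gf1, erf(M * r \/ 2.0) \/ (4.0 * np.pi * r))\n\\end{verbatim}\nBoth printed pairs should agree to quadrature accuracy, and the same routine applies to the $N=2,3$ cases, where the hypergeometric expressions are less transparent.\n\n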
Let us, however, examine the particular shape of the Green functions a bit more closely. There appears to be a substructure: whereas the $N=1$ Green functions decay like $1\/r$ for large values of the dimensionless radial distance $M r$, there exist noticeable oscillations in the potentials for the cases $N=2,3$ \\cite{Modesto:2011kw,Modesto:2016ofr,Edholm:2016hbt,Conroy:2017nkc,Boos:2018bxf}; for a visualization, see Fig.~\\ref{fig:potentials}.\n\nThese oscillations have direct consequences for the local energy density $\\rho$ perceived by a static observer whose 4-velocity is tangential to $\\ts{\\xi} = \\partial_t$. For the metric ansatz \\eqref{eq:metric-ansatz} one obtains\n\\begin{align}\n\\rho \\equiv G{}_{\\mu\\nu} \\xi{}^\\mu \\xi{}^\\nu = (1-d)\\bigtriangleup \\phi \\, ,\n\\end{align}\nwhere $G{}_{\\mu\\nu}$ denotes the linearized Einstein tensor. For $N=1$ theories, this quantity is positive definite, whereas for $N=2,3$ it undergoes oscillations around zero that decay in strength with distance $M r$. See a visualization of this behavior in Fig.~\\ref{fig:energy-densities}.\n\n\\begin{figure}[!htb]\n\\centering\n\\subfloat[Higher-derivative theories for $N=1,2,3$.]\n{\n \\includegraphics[width=0.5\\textwidth]{energy-densities-hd-log.pdf}\n}\n\\subfloat[Infinite-derivative theories for $N=1,2,3$.]\n{\n \\includegraphics[width=0.5\\textwidth]{energy-densities-gf-log.pdf}\n}\n\\caption{The absolute value of the local energy density $\\rho \\equiv G{}_{\\mu\\nu}\\xi{}^\\mu\\xi{}^\\nu$, $\\ts{\\xi} = \\partial_t$, undergoes oscillatory behavior in the cases $N=2,3$, whereas there are no oscillations in the case $N=1$ (both for higher-derivative and infinite-derivative theories). Close to the origin, $M r \\approx 0$, one has $\\rho > 0$; at the points of diverging slope, the energy density vanishes. Between these points, it switches its overall sign.}\n\\label{fig:energy-densities}\n\\end{figure}\n\nUsing these diagrams, we can extract some typical wavelengths: from Eq.~\\eqref{eq:green-functions} it is clear that the wavelengths of oscillation are constant in the cases $N=2,3$ in higher-derivative gravity due to the explicit appearance of trigonometric functions. For the infinite-derivative theories the behavior is more involved. The oscillations still scale with $M{}^{-1}$, but their wavelength decreases with increasing distance $M r$. A rather qualitative fit gives\n\\begin{align}\n\\mathrm{GF_2} : \\quad M\\delta_2 \\sim 9.68 \\, (M r)^{-0.28} \\, , \\qquad\n\\mathrm{GF_3} : \\quad M\\delta_3 \\sim 8.28 \\, (M r)^{-0.16} \\, ,\n\\end{align}\nbut a closer inspection reveals that the precise wavelengths oscillate over and under these curves; see Fig.~\\ref{fig:wavelengths} for more details.\n\n\\begin{figure}[!htb]\n\\centering\n\\subfloat\n{\n \\bgroup\n\t\\def\\arraystretch{1.2}\n\t\\footnotesize\n\t\\begin{tabular}{rccrcc}\n\t$M r$ & $M\\delta_2\/2$ & ~~ & $M r$ & $M\\delta_3\/2$ \\\\ \\hline\n\t 4.59 & 3.16 && 4.65 & 3.23 \\\\\n\t 7.75 & 2.76 && 7.88 & 3.02 \\\\\n\t10.51 & 2.53 && 10.90 & 2.86 \\\\\n\t13.04 & 2.38 && 13.76 & 2.75 \\\\\n\t15.42 & 2.26 && 16.51 & 2.66 \\\\\n\t17.68 & 2.17 && 19.17 & 2.58 \\\\\n\t19.85 & --- && 21.75 & ---\n\t\\end{tabular}\n\t\\egroup\n}\n\\qquad\n\\subfloat{\\adjustbox{raise=-6pc}{\\includegraphics[width=0.5\\textwidth]{wavelength-fit.pdf}}}\n\\caption{We evaluated the zeroes of the energy density for $\\mathrm{GF_2}$ theory and $\\mathrm{GF_3}$ theory numerically, from which we can read off (half the) wavelength $M\\delta_N$. 
Contrary to the higher-derivative theories, in infinite-derivative theories the wavelengths are not constant, but decrease with increasing $M r$. To first approximation, this can be described by simple power laws.}\n\\label{fig:wavelengths}\n\\end{figure}\n\nIt seems that these oscillations occur irrespective of the precise modification method of GR. While they are absent for $N=1$, we should remark that the case $N=1$ is somewhat degenerate for both classes of theories considered: in the higher-derivative framework the potential is indeed finite at the origin, but its first derivative is non-zero, leading to a conical singularity (as well as a diverging energy density, see Fig.~\\ref{fig:energy-densities}). As it turns out, the scalar case of $N=1$ infinite-derivative theory has time-dependent instabilities \\cite{Frolov:2016xhq}.\n\nIn other words: for all regular versions of both higher-derivative and infinite-derivative gravity, these oscillations do occur at distances where $M r \\sim \\mathcal{O}(1)$ before they decay roughly like a power law. Since these theories are classical, and hence $M \\ll m_\\text{Planck}$, the typical distance $r \\sim M^{-1} \\mathcal{O}(1)$ might be accessible to experiment at some point in time.\n\nOscillations of energy density, somewhat similar to the ones we described here, are well known in condensed matter physics where they are called \\emph{Friedel oscillations} \\cite{Friedel:1952,Friedel:1954,Friedel:1958}: upon insertion of a positively charged impurity into a cold metal the overall charge density around this impurity exhibits spatial oscillations. This effect is usually calculated at 1-loop using the random phase approximation wherein the photon propagator picks up a fermion loop as a correction term. In other words, the screening of an electric charge inside a cold metal is to be treated as a scattering problem \\cite{Altland:2006}.\n\nThere is also a physically intuitive explanation: in the Jellium model, electrons in a metal at low temperature fill up the Fermi sphere up to a maximum momentum of $k_\\ind{F}$ while the positive ions form a rigid background structure. Electrons close to the Fermi momentum $k_\\ind{F}$ are most prone to interact with the impurity, and since these electrons are non-local objects (scale of non-locality $\\sim k_\\ind{F}^{-1}$), they cannot compensate the positive charge exactly: they overcompensate the charge, and thereby induce a spatially oscillating charge distribution.\n\n\\section{Discussion and conclusions}\n\nIn the recent literature, there has been a lot of focus on (i) the classical non-linear behavior and (ii) the perturbative quantum structure of infinite-derivative gravity. In this essay, we pursued a somewhat different direction by focussing on the linearized theory at mesoscopic distances, $M r \\sim \\mathcal{O}(1)$, where both the gravitational potential and the local energy density exhibit fluctuations, the latter assuming negative values in some regions. 
For values of $M \\ll m_\\text{Planck}$ these oscillations might become observable at some point in the future \\cite{Accioly:2016qeb,Accioly:2016etf,Perivolaropoulos:2016ucs}.\n\nIn analogy to condensed matter physics and Friedel oscillations in cold metals, we think it may be appropriate to call the oscillations described in this essay \\emph{gravitational Friedel oscillations.} Since they not only appear in higher-derivative theories (wherein they can be interpreted as spurious effects occurring at the Pauli--Villars regularization scale due to the presence of complex poles \\cite{Accioly:2016qeb,Giacchini:2016xns}) but also survive the ghost-free limit, we think that these oscillations are of some physical relevance.\n\nAt the present stage, the perturbative structure of infinite-derivative ``ghost-free'' gravity is not fully understood \\cite{Shapiro:2015uxa,Giacchini:2016xns,Biswas:2013kla,Talaganis:2016ovm} (see, however, the recent work \\cite{Calcagni:2018gke,Buoninfante:2018mre}), and it is also not clear whether ghost-ridden higher-derivative theories can be considered physically viable classical theories at distances close to the Pauli--Villars regularization scale $M$. It would also be interesting to study the linearized gravitational field in other non-local modifications of gravity \\cite{Hehl:2009es} arising from non-local constitutive relations rather than from a quadratic tensor action with somewhat ad hoc non-local form factors.\n\nSince the oscillations occur in both higher-derivative and infinite-derivative classes of theories, quite independently from one another, we hope that the above observations may prove helpful in extracting observational criteria on the gravitational potential at mesoscopic distances.\n\n\\section*{Acknowledgements}\nThe author benefited from discussions with Valeri P.\\ Frolov, Hennadii Yerzhakov, and Andrei Zelnikov (all Edmonton), as well as Breno Loureiro Giacchini (Rio de Janeiro), and is moreover grateful for a Vanier Canada Graduate Scholarship administered by the Natural Sciences and Engineering Research Council of Canada as well as for the Golden Bell Jar Graduate Scholarship in Physics by the University of Alberta.\n\n\\begin{singlespace}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nDifferent natural landforms, such as valleys, forests, and other areas, have a significant impact on the propagation of radio waves\\cite{b1}. Forest areas are covered by dense trees, which make multipath fading and the non-line-of-sight channel very complex, with effects that differ from those of urban buildings\\cite{b7}. Some main factors affecting radio wave propagation, such as the distance between the transmitting and receiving antennas, the height of the antenna, and the type of ground objects, are reflected in the path loss formula as variable functions\\cite{b8}. However, in different geographical environments, topographic relief, vegetation height and density, climate, and other factors have various degrees of influence on propagation\\cite{b9}. Therefore, when these propagation models are applied in specific environments, the corresponding variable functions should be different, and it is necessary to identify a reasonable channel model. The accuracy of the channel model is very important for network deployment, because inappropriate channel models will lead to a significant reduction of network coverage\\cite{b10}. 
\n\nDifferent typical propagation models have different characteristics, and they are applicable to different environments. Typical propagation models include the Okumura--Hata model\\cite{b18}, the COST-231 Hata model\\cite{b19}, and the SPM model\\cite{b20}. These models are suitable for cities, suburbs, and villages, but not for forest areas. In \\cite{b16}, the authors proposed the Erceg model, which is based on extensive experimental data collected by AT\\&T Wireless Services across the United States in 95 existing macro cells at 1.9 GHz. The terrains are classified into three categories. Category A is hilly terrain with moderate-to-heavy tree density, category B is hilly terrain with light tree density or flat terrain with moderate-to-heavy tree density, and category C is mostly flat terrain with light tree density. Soon after, in \\cite{b17}, Stanford University proposed the Stanford University Interim (SUI) channel model, a set of six channel models representing three terrain types and a variety of Doppler spreads, delay spreads, and line-of-sight\/non-line-of-sight conditions that are typical of the continental US. The terrain types A, B, and C are the same as those defined earlier for the Erceg model. However, these models were proposed relatively early and are mainly applicable to the North American environment.\n\nOther scholars have proposed channel models for specific forest environments, in addition to the typical channel models. In \\cite{b11}, the authors studied the oblique leaf path of roadside woodland, including three vegetation types, and obtained attenuation loss results for the oblique path at C-band, yielding a 0.9 dB overall improvement and up to 20 dB regional improvement in root-mean-square errors. In \\cite{b12}, the authors investigated the propagation behavior of 28-GHz millimeter waves in coniferous forests, modeled their basic transmission loss, and proposed novel fully automated site-specific models. The root-mean-square deviations between model predictions and simulation results are 11.3 dB for an ITU woodland model and 6.8 dB for the site-specific model published in that paper. In \\cite{b13}, the authors characterized the wireless channel for a short-range, temperate, medium-density forest environment at the 5-GHz band. In \\cite{b14}, the authors presented measurement results and proposed empirical models of ultra-wideband signal propagation in forest environments after recording more than 22000 measurements at 165 sites in four different forest environments in Virginia and Maryland. However, these works each focus on only a single scenario.\n\nIn this paper, we carry out propagation measurement campaigns in two different types of forest areas, and record the measured values of signal propagation loss in those areas. Owing to the large signal attenuation in forest areas, lower frequency bands propagate better; moreover, in order to study communication in the emergency frequency band, we adopted the 605 MHz frequency band for the measurements. Then we use three classical large-scale path loss models and forest excess attenuation models to characterize the measured data, providing a comprehensive model comparison. 
Through the analysis of the results, we develop a new forest-specific path loss model, which has better performance than representative existing models.\n\n\\section{Description of measurements}\n\n\n\nIn order to study the impact of different forest areas on signal propagation, we selected two different types of areas, Jiaozi snow mountain and Pudu-river dry-hot valley, where channel propagation measurement campaigns were conducted in March 2022.\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=8.5cm]{xueshan.pdf}}\n\\caption{Geographical environment of Jiaozi snow mountain.}\n\\label{fig}\n\\end{figure}\n\nJiaozi snow mountain is located at the junction of Luquan County and Dongchuan District in Yunnan Province, China, with a maximum altitude of 4344.1 meters (m), a minimum altitude of 2300 m, and a relative height difference of more than 2000 m. Jiaozi snow mountain is a seasonal snow mountain, the lowest snow mountain in the northern hemisphere. There are 15000 mu of primary and secondary Abies lanceolata forest and Rhododendron forest, which are typical dense forest scenes. Fig. 1 shows the geographical scenario of Jiaozi snow mountain.\n\nPudu-river dry-hot valley is located in the Pudu river area in Yunnan Province as well. There are seven vegetation types, 11 vegetation subtypes, 17 formation groups, and 28 formations in the reserve, including dry-hot valley hard leaf evergreen oak forest, semi-humid evergreen broad-leaved forest, mountaintop bryophyte dwarf forest, cold temperate shrub, and cold temperate meadow. Fig. 2 shows the geographical scenario of Pudu-river dry-hot valley.\n\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=8.5cm]{hegu.pdf}}\n\\caption{Geographical environment of Pudu-river dry-hot valley.}\n\\label{fig}\n\\end{figure}\n\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=8.5cm]{equip.pdf}}\n\\caption{Transmitter and receiver equipment. The backpack base station on the left is the transmitter and the handheld device on the right is the receiver.}\n\\label{fig}\n\\end{figure}\n\n\n\n\\begin{table}[htbp]\n\\caption{MEASUREMENT INFORMATION}\n\\begin{center}\n\\begin{tabular}{|l|l|}\n\\hline\nItem & Value or information \\\\ \\hline\nTransmitting antenna height & 1.5 m \\\\ \\hline\nCoordinate of Jiaozi snow mountain & (102.848226, 26.0845327) \\\\ \\hline\nCoordinate of Pudu-river dry-hot valley & (102.7342136, 26.02112129) \\\\ \\hline\nBase station transmission power & 43 dBm \\\\ \\hline\nBase station antenna type & Omnidirectional antennas \\\\ \\hline\nTransmit antenna gain & 5 dBi \\\\ \\hline\nReceiving antenna gain & 0 dBi \\\\ \\hline\nTransmission band & 605 MHz \\\\ \\hline\nCell reference signal power & 15.2 dBm \\\\ \\hline \n\\end{tabular}\n\\end{center}\n\\end{table}\n\nIn the propagation measurement campaigns, we used a backpack base station as the signal transmitter (Tx) at a fixed position, equipped with an omni-directional antenna, while a handheld device was used as the receiver (Rx), with an omni-directional receiving antenna inside. Tx and Rx devices are shown in Fig. 3. The transmit power of the base station is 43 dBm, the transmit antenna gain is 2.5 dBi, the receive antenna gain is 0 dBi, and the carrier frequency is 605 MHz. The relevant information of the measurements is provided in Table \\uppercase\\expandafter{\\romannumeral1}. 
We recorded the longitude and latitude of every transmission and reception position, adopted a continuous wave as the signal source, and carried out on-board tests along the preset routes. \n\nWe collected and recorded the pilot signal received power with the on-board test cell phone. Since the test was aimed at obtaining path loss data in the actual network, the test data truly reflect the propagation of broadband signals in the local wireless environment. As there is no need to set up a dedicated base station, this test scheme is simple and convenient. Fig. 4 illustrates the measurement trajectories, where different colors indicate different reference signal received power values, with green representing the minimum and red the maximum. The measurement data in Jiaozi snow mountain and Pudu-river dry-hot valley are shown in Fig. 5.\n\n\n\n\n\\begin{figure}[t]\n\\centering \n\\subfigure[]{\n\\label{Fig.sub.1}\n\\includegraphics[width=8cm,height = 5cm]{xueshan_luxian.pdf}}\\\\\n\\subfigure[]{\n\\label{Fig.sub.2}\n\\includegraphics[width=8cm,height = 5cm]{hegu_luxian.pdf}}\n\\caption{Measurement trajectories in Jiaozi snow mountain and Pudu-river dry-hot valley. The colors in the graphs indicate reference signal received power values, where green and red represent the lowest and highest power, respectively.}\n\\label{1}\n\\end{figure}\n\n\n\\begin{figure}[t]\n\\centering \n\\subfigure[]{\n\\label{Fig.sub.1}\n\\includegraphics[width=9cm]{tu4.eps}}\\\\\n\\subfigure[]{\n\\label{Fig.sub.2}\n\\includegraphics[width=9cm]{tu3.eps}}\n\\caption{Measurement data set in Jiaozi snow mountain and Pudu-river dry-hot valley.}\n\\label{1}\n\\end{figure}\n\n\n\n\n\n\\section{Experimental results}\n\nAfter obtaining the data set, we consider three path loss models as baseline generic models: the alpha-beta-gamma (ABG) model\\cite{b2}\\cite{b6}, the close-in free-space reference distance (CI) model\\cite{b2}\\cite{b6}\\cite{b3}, and the free-space path loss (FSPL) model. The expressions of the CI, ABG, and FSPL models are as follows:\n\n\\begin{small}\n\\begin{equation}\n\\mathrm{P L}_{\\mathrm{CI} }=10 n \\log_{10}{\\frac{d}{d_{0} } }+20\\log_{10}{\\left ( \\frac{4\\pi\\times 10^{9}}{c}\\right )}+20 \\log_{10}{f} \n\\end{equation}\n\\end{small}\n\\begin{small}\n\\begin{equation}\n\\mathrm{P L}_{\\mathrm{ABG} }=10 \\alpha\\log_{10}{d} +\\beta +10 \\gamma\\log_{10}{f} \n\\end{equation}\n\\end{small}\n\\begin{small}\n\\begin{equation}\n\\mathrm{P L}_{\\mathrm{FSPL} }=20\\log_{10}{(\\frac{4\\pi fd\\times 10^{9}}{c} ) }\n\\end{equation}\n\\end{small}\n\n\\noindent where $n$ denotes the path loss exponent (PLE), $d_{0}$ is the close-in free-space reference distance and is set to 1 m\\cite{b2}, $d$ is the 3-D T-R separation distance in meters, $\\alpha$ and $\\gamma$ are coefficients showing the dependence of path loss on distance and frequency, respectively, $\\beta$ is an optimized offset value for path loss in decibels, $f$ is the carrier frequency in GHz, and $c$ is the speed of light.\n\nNote that the CI model has a very similar form compared with the ABG model, but has fewer model parameters and a more solid physical basis\\cite{b2}\\cite{b6}. 
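\n\nFor concreteness, the three baseline models of Eqs. (1)--(3) can be evaluated directly; the following minimal Python sketch (our own illustration, with frequencies in GHz and distances in meters as defined above) implements them:\n\\begin{verbatim}\nimport numpy as np\n\nC = 299792458.0  # speed of light (m\/s)\n\ndef pl_fspl(d, f):\n    # Eq. (3): free-space path loss\n    return 20.0 * np.log10(4.0 * np.pi * f * d * 1e9 \/ C)\n\ndef pl_ci(d, f, n, d0=1.0):\n    # Eq. (1): close-in free-space reference distance model\n    return (10.0 * n * np.log10(d \/ d0)\n            + 20.0 * np.log10(4.0 * np.pi * 1e9 \/ C)\n            + 20.0 * np.log10(f))\n\ndef pl_abg(d, f, alpha, beta, gamma=2.0):\n    # Eq. (2): alpha-beta-gamma model\n    return 10.0 * alpha * np.log10(d) + beta + 10.0 * gamma * np.log10(f)\n\n# e.g., CI prediction at 100 m and 605 MHz with the PLE\n# fitted for Jiaozi snow mountain (n = 3.8, see Table II)\nprint(pl_ci(100.0, 0.605, n=3.8))\n\\end{verbatim}\n\n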
Since additional attenuation is caused by the occlusion of vegetation in the forest area, we use the ITU horizontal forest model\\cite{b4} as the excess path loss model. The expression of the ITU horizontal forest model is as follows:\n\n\\begin{small}\n\\begin{equation}\n\\mathrm{P L}_{\\mathrm{ITU-H} }=A_m\\left[ 1-e^ {\\left( -d\\mu\/A_m \\right)} \\right]\n\\end{equation}\n\\end{small}\n\n\\noindent where $\\mu$ denotes the specific attenuation for very short vegetative paths (dB\/m) and $A_m$ denotes the maximum attenuation for one terminal within a specific type and depth of vegetation (dB). Next, we combine the FSPL model and the ITU horizontal forest model to fit the measured data\\cite{b5}. The expression of FSPL-H is given by:\n\n\\begin{small}\n\\begin{equation}\n\\mathrm{P L}_{\\mathrm{FSPL-H} }=20\\log_{10}{ (\\frac{4\\pi fd\\times 10^{9}}{c} )} +A_m\\left[ 1-e^ {\\left( -d\\mu\/A_m \\right)} \\right]\n\\end{equation}\n\\end{small}\n\n\\begin{figure}[t]\n\\centering \n\\subfigure[]{\n\\label{Fig.sub.1}\n\\includegraphics[width=8.7cm]{tu1.eps}}\\\\\n\\subfigure[]{\n\\label{Fig.sub.2}\n\\includegraphics[width=8.5cm]{tu2.eps}}\n\\caption{Measured path loss data and fitting results of the two classical models, the ITU forest excess attenuation model, and the proposed BHF model.}\n\\label{1}\n\\end{figure}\n\n\nFor short-distance specific forest scenes, we build a simple but powerful scene-specific model by more carefully characterizing forest-specific propagation loss, which can simplify the expression and parameters compared with directly combining the two types of models presented above. We name this model the Beijing University of Posts and Telecommunications horizontal forest (BHF) model. The expression of BHF is as follows:\n\n\\begin{small}\n\\begin{equation}\n\\mathrm{P L}_{\\mathrm{BHF}}=10\\alpha \\log_{10}{d}+\\beta+\\zeta \\tanh (d \/20)+20 \\log_{10}{f} \n\\end{equation}\n\\end{small}\n\n\\noindent where $\\alpha$ is a coefficient showing the dependence of path loss on the conventional log-scaled distance, $\\beta$ is an optimized offset value for path loss in decibels, and $\\zeta$ is a coefficient characterizing the path loss caused by vegetation attenuation.\n\n\n\\begin{table}[htbp]\n\\caption{Optimized Model Parameters in the Baseline and Proposed Path Loss Models}\n\\begin{center}\n\\begin{tabular}{|l|l|l|l|l|}\n\\hline\nSite & Model & \\begin{tabular}[c]{@{}l@{}}$n$(CI)\\\\$\\alpha$(ABG)\\\\$A_m$(FSPL-H) \\\\ $\\alpha$(BHF)\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}$\\beta$(ABG)\\\\$\\mu$(FSPL-H) \\\\$\\beta$(BHF)\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}$\\gamma$(ABG)\\\\ $\\zeta$(BHF)\\end{tabular} \\\\ \\hline\nJiaozi snow & CI & 3.8 & - & - \\\\ \\cline{2-5} \nmountain & ABG & 2.9 & 31.8 & 2.0 \\\\ \\cline{2-5} \n\\multicolumn{1}{|c|}{} & FSPL-H & 40.0 & 1.2 & - \\\\ \\cline{2-5} \n & BHF & 1.6 & -1305.2 & 1407.0 \\\\ \\hline\nPudu-river & CI & 4.0 & - & - \\\\ \\cline{2-5} \ndry-hot valley & ABG & 1.9 & 57.7 & 2.0 \\\\ \\cline{2-5} \n & FSPL-H & 43.8 & 4.6 & - \\\\ \\cline{2-5} \n & BHF & 0.8 & 48.3 & 64.2 \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\nThe BHF model captures the saturating excess attenuation caused by vegetation: it modifies the ITU horizontal model by describing the additional attenuation with a hyperbolic tangent function. Compared with the ITU horizontal model in (4), the function describing the additional vegetation attenuation varies more gently in the BHF model. 
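\n\nThe fits reported below amount to standard non-linear least-squares regression. As an illustration of the procedure for the BHF model, a minimal Python sketch (assuming scipy; the data arrays here are placeholders standing in for the measured distance--path loss pairs, not our actual measurements) reads:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\nF_GHZ = 0.605  # carrier frequency in GHz\n\ndef pl_bhf(d, alpha, beta, zeta):\n    # Eq. (6): BHF model at a fixed frequency\n    return (10.0 * alpha * np.log10(d) + beta\n            + zeta * np.tanh(d \/ 20.0)\n            + 20.0 * np.log10(F_GHZ))\n\n# placeholder data: distances (m) and measured path loss (dB)\nd_m = np.array([20.0, 50.0, 100.0, 200.0, 400.0])\npl_db = np.array([68.0, 80.0, 95.0, 104.0, 112.0])\n\npopt, _ = curve_fit(pl_bhf, d_m, pl_db, p0=[2.0, 30.0, 10.0])\nrmse = np.sqrt(np.mean((pl_bhf(d_m, *popt) - pl_db) ** 2))\nprint(popt, rmse)\n\\end{verbatim}\nThe baseline models are fitted in the same way with their respective model functions.\n\n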
We fit the BHF model and the three baseline models by using the least-squares method to find the optimal model parameters. The optimized model parameters are shown in Table \uppercase\expandafter{\romannumeral2}. It can be seen from Table \uppercase\expandafter{\romannumeral2} that the coefficients of the CI and FSPL-H models are relatively stable, with little change across environments, because the free-space reference term acts as an anchor point. In contrast, the coefficients of the ABG and BHF models are strongly influenced by the environment.

Fig. 6 shows the fitting results of the CI, ABG, FSPL-H, and BHF models. It can be observed from Fig. 6 that the CI and ABG models appear as two straight lines: because the model parameter multiplies the log-scaled 3-D T-R separation distance in their expressions, the path loss is a linear function of distance on a logarithmic scale once the other coefficients are given. Since the adaptation ability of straight lines is relatively limited, the fitting errors of the CI and ABG models are large. The CI model has fewer parameter variables than the ABG model, which yields larger errors. The relationships between $d$ and the parameters of the other two models are more complex, so their fits are better. It can be seen in Fig. 6(b) that the BHF model has a stronger capability of adapting to the trend of the measured data.

\begin{table}[htbp]
\caption{Overall Model Performance: RMSE (dB) and Number of Model Parameters}
\begin{center}
\begin{tabular}{|l|lll|l|}
\hline
- & \multicolumn{3}{l|}{Traditional} & Proposed \\ \hline
Model & \multicolumn{1}{l|}{CI} & \multicolumn{1}{l|}{ABG} & FSPL-H & BHF \\ \hline
Jiaozi snow mountain & \multicolumn{1}{l|}{4.6} & \multicolumn{1}{l|}{4.1} & 3.6 & 3.0 \\ \hline
Pudu-river dry-hot valley & \multicolumn{1}{l|}{13.1} & \multicolumn{1}{l|}{9.7} & 9.3 & 8.3 \\ \hline
Number of model parameters & \multicolumn{1}{l|}{1} & \multicolumn{1}{l|}{2} & 2 & 3 \\ \hline
\end{tabular}
\end{center}
\end{table}

We use the root-mean-square error (RMSE), together with the number of model parameters, to quantify the fitting quality of the models; both are given in Table \uppercase\expandafter{\romannumeral3}. Note that for single frequencies, $\gamma$ in the ABG model is set to 2; thus there are actually two free parameters in the ABG model.

It can be seen from Table \uppercase\expandafter{\romannumeral3} that the fitting error for Jiaozi snow mountain is in general smaller than that for Pudu-river dry-hot valley, because the different types and densities of vegetation and the geographical environment have an impact on the transmission of signals. The terrain of Pudu-river dry-hot valley is steeper than that of Jiaozi snow mountain. According to Table \uppercase\expandafter{\romannumeral3}, the RMSEs of the BHF model are 3.0 dB and 8.3 dB for Jiaozi snow mountain and Pudu-river dry-hot valley, respectively. This shows that the BHF model proposed in this paper has the best fitting performance overall and is more suitable for the forest environment.

\section{Conclusion}

In this paper, we have provided results from real-world measurement campaigns to assess channel characteristics for the forest environment. The signal measurement data near Jiaozi snow mountain and Pudu-river dry-hot valley are used to compare the attenuation of vegetation with comprehensive large-scale path loss models. Inspired by these results, we have developed a new site-specific path loss model.
Compared with typical traditional models, the model proposed in this paper yields significantly smaller fitting errors with acceptable computational complexity and is thus more suitable for the forest environment.

\section{Introduction}
\label{sec:intro}
A problem arising throughout both nuclear theory---from {\it ab-initio} nuclear theory \cite{Ekstrom2019,Piarulli2016} to density functional theory (DFT) \cite{Bender2003,schunck2019energy}---and supervised machine learning is the fitting of a model to data. Formally, given a computer model $m$ evaluated at inputs $\bm{\nu}_1, \ldots, \bm{\nu}_{\nd}$, one seeks a parameter vector $\xb \in \R^{\nx}$ so that the outputs $m\left(\bm{\nu}_1;\xb\right), \ldots, m\left(\bm{\nu}_{\nd};\xb\right)$ agree with data $\db=[d_1, \ldots, d_{\nd}]$ within the assumed uncertainties. For example, the inputs $\bm{\nu}$ might characterize a particular configuration of the atomic nucleus (defined by the number of its proton and neutron constituents). For such a case, the data might include observables such as experimentally measured binding energies and charge radii. Or, generally, the inputs could correspond to 64-pixel by 64-pixel images, and the data could represent labels such as ``cat'' or ``banana.''

Although fitting can result in many different formulations of optimization problems, the most common form in the physical sciences follows a $\chi^2$-convention wherein independence of errors is assumed and one seeks to solve
\begin{equation}
\label{eq:function}
\min_{\xb \in \R^{\nx}} f(\xb), \qquad \mbox{where } f(\xb) = \sum_{i=1}^{\nd} \left(\frac{m\left(\bm{\nu}_i;\xb\right) - d_i}{\sigma_i}\right)^2,
\end{equation}
with $\sigma_1, \ldots, \sigma_{\nd}>0$ often interpreted as experimental and/or model error bars \cite{DNR14}.
More general types of such objective functions (also called ``loss functions'' or ``penalty functions'') include those that take into account correlations, such as
\begin{equation*}
\label{eq:correlated}
f(\xb) = \sum_{i=1}^{\nd} \sum_{j=1}^{\nd} w_{i,j} \left(m\left(\bm{\nu}_i;\xb\right) - d_i\right) \left(m\left(\bm{\nu}_j;\xb\right) - d_j\right).
\end{equation*}

In this paper we address the squared-loss formulation in \cref{eq:function}, which we generalize as the finite sum of squares of nonlinear functions of the parameter vector $\xb$; that is,
\begin{equation}
\label{eq:genfun}
f(\xb) = \sum_{i=1}^{\nd} F_i(\xb)^2.
\end{equation}
Throughout the text, we refer to these general functions, $F_i$, as \emph{component functions}. Since objective functions of the form in \cref{eq:genfun} are found throughout supervised learning, many optimization methods used for training machine learning models are applicable here. In contrast to standard fitting problems that arise in nuclear theory, however, the number of data, $\nd$, used when training machine learning models tends to be massive. For example, as of August 2020, the open images dataset \cite{openimages20} contained nearly 60 million image labels. When fitting nuclear models, the value of $\nd$ is typically many orders of magnitude smaller; this is the case in the study conducted in this paper.
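As an aside for readers who think in code, the structure in \cref{eq:genfun} can be sketched in a few lines (illustrative Python; the callables and names are ours, not part of any code used in this paper):

\begin{verbatim}
def objective(x, components):
    # f(x) = sum_i F_i(x)^2 for a list of callables F_i; in the chi^2
    # convention above, F_i(x) = (m(nu_i; x) - d_i) / sigma_i.
    return sum(F(x)**2 for F in components)
\end{verbatim}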
A natural question is thus whether the algorithms used to train machine learning models can benefit the physicist who has a computer model and desires to solve fitting problems. Here we investigate the strengths and limitations of different optimization algorithms for minimizing \cref{eq:genfun} through a case study from nuclear theory. We focus on the ``derivative-free'' case where gradients $\nablab f(\xb), \nablab F_1(\xb), \ldots, \nablab F_{\nd}(\xb)$ and higher-order derivatives are unavailable for use in the optimization. This is often the setting when the computer models are composed of iterative components \cite{more2014nd,WSS15} or when dependencies on legacy computer codes pose obstacles to algorithmic differentiation \cite{Berz1996}, which is a key enabling technology for deep learning \cite{2015arXiv150205767G}.

In \cref{sec:algorithms} we summarize the set of optimization algorithms tested. We focus on methods for local optimization (i.e., those that do not seek to asymptotically cover the entire parameter space) since we are interested in assessing performance within a budget of function evaluations. Such a budget limits the applicability of global optimization algorithms. Our case study, involving the fitting of the Fayans energy density functional (EDF) to data across the nuclear chart, is described in \cref{sec:fayans}. This problem was selected in part because it shares characteristics with many fitting problems. For this problem, there are $\nd=198$ data, $\nx=13$ parameters to be optimized, and correlations among the errors $m\left(\bm{\nu}_i;\xb\right) - d_i$ are evident. Numerical results are presented in \cref{sec:numerical}, and we summarize both the consistency and efficiency of the tested algorithms. Our performance measures emphasize how the efficiency of optimization methods, as measured in function evaluations, can change depending on one's ability to evaluate components $F_i(\xb)$ concurrently.

Although our focus is on optimization-based approaches for training, these could also be used in a larger framework of statistical calibration (e.g., as discussed in \cite{HigdonJPG15}).

\section{Derivative-free optimization/training algorithms}
\label{sec:algorithms}

We consider five algorithmic families of iterative methods for local, unconstrained derivative-free optimization. The first two algorithms are deterministic, and the latter three are randomized (sometimes called ``stochastic''). For randomized algorithms, the sequence of points in parameter space at which the component functions will be evaluated is generated stochastically/nondeterministically by the method.

The randomized algorithms considered in this study are designed to have the ability to vary the number of component functions evaluated in any one iteration. Throughout this paper, we refer to this number as the \emph{batch size}, denoted $\nb$. In our experiments, as is typical in such batch-sampling-based algorithms, a batch of size $\nb$ is generated by sampling uniformly $\nb$ many times without replacement from the integers $\{1, \ldots, \nd\}$. Hence, the maximum batch size $\nb=\nd$ corresponds to evaluating all of the component functions.

We now briefly describe each of the algorithm types, along with their hyperparameters and our implementation.
For additional details on these and other derivative-free optimization methods, we refer the reader to \cite{Conn2009a,LMW2019AN}.

\subsection{Deterministic algorithms}

In general, deterministic methods have the property that, given a starting point and hyperparameter values, the sequence of points in parameter space generated by the method will be the same every time the optimization is repeated. The deterministic methods considered here also assume that all of the $\nd$ component functions in \cref{eq:genfun} are evaluated before the next point in parameter space to be evaluated is determined. That is, we address a batch-synchronous, rather than asynchronous, environment; the latter may be appropriate when the component function evaluation time varies significantly and/or individual component functions depend on relatively few parameters \cite{Recht2011hogwild}.

\subsubsection{Direct search algorithm.}
\label{sec:nelder}

The Nelder-Mead simplex algorithm \cite{NelderMead} is a popular direct search algorithm for general derivative-free optimization \cite{PhysRevLett.67.1334,pres07}. The version tested here is from the MATLAB routine \texttt{fminsearch} based on \cite{JCL98}.

The Nelder-Mead algorithm determines a new point for evaluation by performing operations on a simplex defined by $\nx+1$ previously evaluated affinely independent points \cite{Conn2009a,Wright2012,AudetHare2017}. The particular choice of operations is dictated by the $f$ values associated with each of the simplex's vertices. Since the algorithm bases its decision on the complete evaluation of $f$, accepted points $\xb_k$ monotonically decrease the objective: $f(\xb_0) > f(\xb_1) > \ldots$. Multiple complete evaluations of $f$ may be required before an acceptable point is found, and each of these evaluations corresponds to $\nd$ component function evaluations.

The sole hyperparameter in our Nelder-Mead implementation is the initial simplex size. This size can be interpreted as defining the size of the neighborhood within which Nelder-Mead begins its search.

\subsubsection{Model-based trust-region algorithm.}
\label{sec:pounders}

POUNDERS \cite{SWCHAP14} is a deterministic method that exploits the structural form in \cref{eq:genfun} by constructing a local surrogate model of each component function $F_i$. POUNDERS was used in the optimization of the UNEDF family of energy density functionals \cite{UNEDF0,UNEDF1,UNEDF2} and chiral nucleon-nucleon interactions \cite{Ekstrom13,EksJPG15}.

The surrogate models in POUNDERS are updated with each new function evaluation, and the algorithm assumes that all $\nd$ component functions are evaluated at each point. A new point to evaluate is obtained by locally minimizing an aggregation of the component surrogate models. Thus, unlike the Nelder-Mead method, POUNDERS requires and exploits full knowledge of the individual component function values $F_1(\xb_k), \ldots, F_{\nd}(\xb_k)$.
Similar to Nelder-Mead, since POUNDERS evaluates all component functions, accepted points monotonically decrease the objective, and multiple such evaluations of $\nd$ components may be required before a decrease in the function value is found.

The primary hyperparameter in POUNDERS is the radius used to define the initial neighborhood within which the surrogate models are constructed and over which they are optimized.

\subsection{Derivative-free stochastic approximation}
\label{sec:sgd}

In supervised learning tasks of machine learning---a class of optimization problems containing \cref{eq:genfun}---the workhorse optimization method for obtaining (approximate) solutions to \cref{eq:genfun} has been the stochastic gradient method \cite{RobbinsMonro1951}. For an excellent contemporary survey of stochastic gradient methods, see \cite{BottCurtNoce16}. Stochastic gradient methods as applied to \cref{eq:genfun} resemble traditional gradient descent methods, the basic iteration of which takes the form
\begin{equation}\label{eq:gd_iteration}
 \xb_{k+1}= \xb_k-\alpha_k\nablab f(\xb_k).
\end{equation}
When $\nd$ is large, however, it may become computationally prohibitive to evaluate, or even numerically estimate, the gradient of \cref{eq:genfun}:
\begin{equation}
\label{eq:gradsum}
\nablab f(\xb_k)=\sum_{i=1}^{\nd} \nablab F_i^2(\xb_k).
\end{equation}

Thus, stochastic gradient methods compute approximations to the gradient \cref{eq:gradsum} by including in the sum only a sampled batch of the component function indices $\{1,\dots,\nd\}$. In its simplest form, a single $i(k)\in\{1,\dots,\nd\}$ is chosen at random from a discrete (often uniform) distribution, and the basic iteration in \cref{eq:gd_iteration} is replaced with
\begin{equation}\label{eq:sgd_basic_iteration}
 \xb_{k+1}= \xb_k-\alpha_k\nd\nablab F_{i(k)}^2(\xb_k).
\end{equation}
Effectively, performing \cref{eq:sgd_basic_iteration} is a factor of $\nd$ cheaper than performing \cref{eq:gd_iteration}. This represents significant computational savings in performing a single iteration when $\nd$ is large, at the expense of using an inaccurate gradient approximation. More generally, one can consider sampling a batch of component function indices $\B_k\subseteq\{1,\dots,\nd\}$ of size $\nb(k)\leq\nd$, and replacing \cref{eq:gd_iteration} with
\begin{equation}\label{eq:sgd_batch_iteration}
 \xb_{k+1} = \xb_k-\alpha_k\displaystyle\frac{\nd}{\nb(k)}\displaystyle\sum_{i\in B_k}\nablab F_{i}^2(\xb_k).
\end{equation}
The rationale behind the random sampling approach in \cref{eq:sgd_batch_iteration} is that the expected value (with respect to the stochastic sampling of component function indices) of $\displaystyle\frac{\nd}{\nb(k)}\displaystyle\sum_{i\in B_k}\nablab F_{i}^2(\xb_k)$ is exactly \cref{eq:gradsum}. We note that when $\nb(k) < \nd$, the step from $\xb_k$ to $\xb_{k+1}$ will be based on incomplete information; however, since the sampled batches will be independent from one iteration to the next, these methods probabilistically find a zero of the full gradient \cref{eq:gradsum} when the step sizes decay fast enough.

Dating almost as far back as the earliest stochastic gradient methods \cite{RobbinsMonro1951}, derivative-free variants of the iterations in \cref{eq:sgd_basic_iteration} and in \cref{eq:sgd_batch_iteration} have been proposed \cite{KieferWolfowitz}.
All these methods perform an analogous iteration
\begin{equation}
\label{eq:basic_update}
\xb_{k+1} = \xb_k - \alpha_k \db_k, \qquad k=0,1,2,\ldots,
\end{equation}
with $\db_k \in \R^{\nx}$ serving as an estimate of a gradient quantity. These algorithms differ in their selection of the step size $\alpha_k>0$ and, most distinctively, the choice of direction $\db_k$.

\subsubsection{Kiefer-Wolfowitz method.}
Iterations of type \cref{eq:basic_update} are found in the Kiefer-Wolfowitz (KW) method \cite{KieferWolfowitz}. The KW method computes \emph{finite differences of sampled functions} to approximate directional derivatives in each of the $\nx$ coordinate directions. Although other variants exist \cite{LEcuyerYin1998,Kleinman1999}, in this paper we use the most common sampling described in the following.

In the $k$th iteration, we uniformly sample a batch $B_k\subseteq\{1,\dots,\nd\}$ of size $|B_k|=\nb(k)$. Given a fixed finite-difference parameter $h$, forward differences are used to approximate the partial derivatives needed to estimate the gradients of the $\nb(k)$ squared component functions associated with the batch. Specifically, we compute
\begin{equation}
\label{eq:kw-fs}
\db_k = \frac{\nd}{\nb(k)} \displaystyle \sum_{i\in\B_k} 
\gb_i(\xb_k;h), 
\end{equation}
where
\begin{equation}
\label{eq:FD}
\gb_i(\xb_k;h) = \frac{1}{h}
\left[\begin{array}{c}
F_i\left(\xb_k+h\mathbf{e}_1\right)^2-F_i\left(\xb_k\right)^2 \\
\vdots \\
F_i\left(\xb_k+h\mathbf{e}_{\nx}\right)^2-F_i\left(\xb_k\right)^2 
\end{array}\right].
\end{equation}
In our experiments, we refer to the algorithm that uses \cref{eq:kw-fs} as $\db_k$ in \cref{eq:basic_update} as ``KW'' (an illustrative code sketch of this estimator, and of the Bandit estimator below, is given at the end of this section).

Observe that in KW, $\nb(\nx+1)$ component function evaluations are performed in a single iteration. As with any method using \cref{eq:basic_update}, both a sequence of step sizes $\{\alpha_k\}$ and a sequence of batch sizes $\{\nb(k)\}$ must be selected. Additionally, the finite-difference parameter $h>0$ must be selected. For the sake of simplicity in presentation, we have chosen to keep $h$ fixed, with the immediate consequence that $\db_k$ is a biased estimator of $\nablab f(\xb_k)$ for all $k$, even when $\nb(k)=\nd$.

\subsubsection{Bandit method.}
\label{sec:bandit}
We now consider members of a class of derivative-free methods that have become increasingly attractive in supervised learning over the past decade, the so-called (two-point) bandit methods \cite{Agarwal2010,Ghadimi2013,Duchi2015,Gasnikov2017,Shamir2017}. Similar to KW, bandit methods can employ a batch $B_k\subseteq\{1,\dots,\nd\}$ of component function indices in the computation of $\db_k$, which is computed based on finite differences and employed in iterations of the type \cref{eq:basic_update}. In each iteration of a bandit method, however, only \emph{one} directional derivative is numerically approximated per element in the batch; in contrast, KW uses $\nx$ partial derivatives per element. For the basic iteration of what we refer to as the ``Bandit'' method in the following, the direction vector $\db_k$ becomes
\begin{equation}
\label{eq:bandit-fs}
\db_k = \frac{\nd}{\nb(k)} \sum_{i\in\B_k} 
\left(
\frac{F_i\left(\xb_k+h\mathbf{u}_k\right)^2-F_i\left(\xb_k\right)^2}{h} 
\right)\mathbf{u}_k,
\end{equation}
where $\mathbf{u}_k$ is a \emph{randomized} direction.
In particular, we sample $\mathbf{u}_k$ uniformly from the surface of an $\nx$-dimensional sphere centered at the origin and of unit radius. Once again, the quantity $h$ in \cref{eq:bandit-fs} denotes a fixed finite-difference parameter. In the case where $\nb(k)=\nd$ for all $k$, the Bandit method is related to the iteration used in the Gaussian smoothing method \cite{Nesterov2015}. We remark that even when $\nb(k)=\nd$, the Bandit method is still randomized because of the random directions $\mathbf{u}_k$.

Whereas a KW method involves $\nb(\nx+1)$ component function evaluations in a single iteration, the Bandit method entails only $2\nb$ component function evaluations in a single iteration. In common with a KW method, however, the Bandit method requires a selection of the finite-difference parameter $h$, a sequence of step sizes $\{\alpha_k\}$, and a sequence of batch sizes $\{\nb(k)\}$.

\subsection{Adaptive sampling quasi-Newton method}
\label{sec:aqn}

We now consider adaptive sampling quasi-Newton (AdaQN) methods \cite{BollapragadaICML18, RBSWshort19,RBSWlong20}, which iteratively construct a local quadratic surrogate model according to the sampled component functions and select search directions $\db_k$ as approximate minimizers of the quadratic surrogate model. The quadratic surrogate model is updated at every iteration using the differences between current and previously evaluated forward-difference gradient approximations. Whereas the KW and Bandit methods considered here use a prescribed sequence of batch sizes $\{\nb(k)\}$, AdaQN adaptively increases the batch size. Different adaptive rules \cite{RBSWshort19,RBSWlong20} increase the batch sizes differently; we consider one such rule, called the \emph{norm test}, in this study.

AdaQN computes a direction $\db_k$ of a form similar to \cref{eq:kw-fs}:
\begin{equation}
\label{eq:adaqn-fs}
\db_k = \frac{\nd}{\nb(k)} \Hb_k\displaystyle \sum_{i\in\B_k} 
\gb_i(\xb_k;h),
\end{equation}
where $\gb_i$ is defined in \cref{eq:FD} and $\Hb_k$ is a quasi-Newton matrix defining the quadratic surrogate model, updated such that $\Hb_{k+1} \mathbf{v}_k = \xb_{k+1} - \xb_k$, where
\begin{equation}
\label{eq:adqn-fs-hk}
\mathbf{v}_k =
\displaystyle \sum_{i\in\B_k} \Big(\gb_i(\xb_{k+1};h) - \gb_i(\xb_k;h)\Big).
\end{equation}
Unlike KW, however, the step size $\alpha_k$ in AdaQN is adaptively determined in each iteration via a \emph{stochastic backtracking line search} \cite{BollapragadaICML18, RBSWshort19}, an automatic procedure that ensures sufficient decrease in a sampled function. This procedure requires evaluating the currently sampled component functions along the direction $\db_k$, with the associated number of such evaluations, $l_k$, varying at each iteration; typically, $l_k$ is less than $5$. Observe that in AdaQN, $2|B_k|(\nx+1)$ component function evaluations are required to compute $\db_k$ (and, for free, $\mathbf{v}_k$) and $|B_k|l_k$ component function evaluations are required to compute $\alpha_k$.

The primary hyperparameter in AdaQN is the initial batch size. All the other hyperparameters associated with AdaQN are set to their default values as specified in \cite{RBSWlong20}.
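Before moving to the case study, the KW and Bandit gradient estimators in \cref{eq:kw-fs} and \cref{eq:bandit-fs} can be summarized in code. The sketch below is illustrative Python that assumes a component-evaluation interface \texttt{F(i, x)} returning $F_i(\xb)$; it is not the implementation used in our experiments.

\begin{verbatim}
import numpy as np

def sample_batch(nd, nb, rng):
    # Uniform batch of nb component indices, drawn without replacement
    return rng.choice(nd, size=nb, replace=False)

def kw_direction(x, batch, F, nd, h):
    # KW estimator (cf. eq:kw-fs and eq:FD): coordinate-wise forward
    # differences of squared components; nb*(nx+1) component evaluations
    d = np.zeros(x.size)
    for i in batch:
        base = F(i, x)**2
        for j in range(x.size):
            xh = x.copy()
            xh[j] += h
            d[j] += (F(i, xh)**2 - base) / h
    return (nd / len(batch)) * d

def bandit_direction(x, batch, F, nd, h, rng):
    # Bandit estimator (cf. eq:bandit-fs): one random unit direction u_k
    # shared by the whole batch; 2*nb component evaluations
    u = rng.standard_normal(x.size)
    u /= np.linalg.norm(u)
    s = sum((F(i, x + h*u)**2 - F(i, x)**2) / h for i in batch)
    return (nd / len(batch)) * s * u
\end{verbatim}

Either estimator is then plugged into the update $\xb_{k+1} = \xb_k - \alpha_k \db_k$ of \cref{eq:basic_update}.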
\n\n\n\n\\section{Case Study: Optimizing Fayans energy density functional}\n\\label{sec:fayans}\n\nA key pursuit in the understanding of atomic nuclei is a global (i.e., across the nuclear chart) description of nuclei. For such a task, the microscopic tool of choice\nis nuclear DFT rooted in the\nmean-field approach \\cite{Bender2003}. An effective interaction in DFT is given by the EDF,\nwhose parameters are adjusted to experimental data.\nOver the past decade, increasingly refined EDFs have been developed, with increasingly complex and computationally expensive computer models; see, for example, \\cite{UNEDF0,UNEDF1,UNEDF2}. Because of the expense of these computer models, their calibration has focused largely on point-\/optimization-based estimation. Bayesian approaches have been demonstrated with limited (e.g., 200 in \\cite{McDonnellPRL15}) model evaluations, with nontrivial failure (associated with convergence at insufficient fidelity levels) rates of roughly 9\\% of the designs present even in relatively narrow (in terms of posterior probability) regions of parameter space; see \\cite{HigdonJPG15}. \n\nWe focus on calibration of a Fayans EDF\n\\cite{Fayans1998}, for which computer models have recently been developed and\ndemonstrated to correct some systematic effects of other state-of-the-art\nfunctionals \\cite{Reinhard2017}. This functional form has recently sparked\nsignificant interest, especially in the context of charge radii measurements \\cite{Hammen2018,Miller2019,Gorges2019,deGroote2020}. \n\n\n\\begin{table}[htb]\n\\caption{\\label{tab:observableClasses} Nine classes of physical\nobservables that constitute all observables included in the study\n\\cite{Reinhard2017}.}\n\\begin{indented}\n\\lineup\n\\item[]\\begin{tabular}{lcc}\n\\br\nClass & Symbol & Number of Observables\\cr\n\\mr\nBinding Energy & $E_B$ & 63 \\cr\nCharge Radius & $r_{\\textrm{ch}}$ & 52 \\cr\nDiffraction Radius & $R_{\\textrm{diffr}}$ & 28 \\cr\nSurface Thickness & $\\sigma$ & 26 \\cr\nNeutron Single-Level Energy & $\\epsilon_{ls,n}$ & 5 \\cr\nProton Single-Level Energy & $\\epsilon_{ls,p}$ & 5 \\cr\nDifferential Radii & $\\delta\\langle r^2 \\rangle$ & 3 \\cr\nNeutron Pairing Gap & $\\Delta E_n$ & 5 \\cr\nProton Pairing Gap & $\\Delta E_p$ & 11 \\cr\n\\mr\n& & $\\nd = 198$ \\cr\n\\br\n\\end{tabular}\n\\end{indented}\n\\end{table}\n\n\\subsection{Problem specification}\n\nThe computer model $m\\left(\\bm{\\nu};\\xb\\right)$ for the currently used Fayans EDF has $\\nx=13$ free model\nparameters and employs a pool of fit data for spherical nuclei that primarily comprises\nbulk properties of the nuclear ground state (energy, radii, surface thickness),\nthree-point binding energy differences to calibrate pairing strengths, and some\nisotopic differences of root mean square radii in calcium isotopes. Specifically, the pool\nused for this study is that used to fit the new Fayans EDF\nFy($\\Delta r$) reported in \\cite{Reinhard2017} but with the even-odd staggering\nof binding energies replaced by the even-even data. \nThe total dataset\nconsists of $\\nd=198$ observables of different classes (see\nTable~\\ref{tab:observableClasses}) that are associated with 72 different spherical, ground-state,\neven-even nucleus configurations (with these configurations encapsulated by $\\bm{\\nu}$). \nThe weights ($\\sigma_i$) associated with each observable in the pool are related to those in \\cite{Reinhard2017} and are detailed in the \nsupplemental material \\cite{suppl}. 
The data, weights, and model outputs together define the collection of component functions $F_1(\xb), \ldots, F_{198}(\xb)$ used in \cref{eq:genfun}.

To ensure that our optimizations solve the specified problem, we identified transient platform errors, transient software faults outside of our control, user error, and reproducible software faults in the Fayans model evaluation software as the classes of failures that can occur during an optimization run and that must be understood and handled sensibly throughout the study. We developed scripts to scan over all optimization results and their associated metadata so that possible failures could be flagged and manually inspected. When a transient failure was positively identified and was determined to affect the data quality, the associated optimization run could simply be rerun and the results manually verified as acceptable. Since the failures associated with the Fayans model software, which are discussed further in \cref{sec:FayansSW}, are reproducible, rerunning a failed optimization is not an option. As a result, schemes for handling this class of errors were developed and implemented. A detailed discussion of this handling is given in \cref{sec:mods}.

\subsection{Fayans model evaluation software}
\label{sec:FayansSW}
The code that is used in our study to evaluate the Fayans EDF is derived from a code solving nonrelativistic nuclear Hartree-Fock equations for spherically symmetric nuclei \cite{Rei91aR}, which is under continuous development. We have identified two classes of reproducible software faults within the Fayans model evaluation software. The numerical methods used internally by the code are iterative, and therefore the first class of failures is the inability of the methods to satisfy a specified stopping criterion within a given maximum iteration budget. While a single computation that does not satisfy this criterion would normally be deemed a failed result, for this study and informed by the experience and knowledge of the developer, we implemented a secondary stopping criterion. This criterion, which is a relaxation of the primary criterion, is employed as follows. If a computation has failed to achieve the primary stopping criterion within the budget but does achieve the secondary criterion within the budget, then the result is flagged as \emph{marginally convergent}. If, however, a computation does not satisfy either criterion within the budget, the associated model evaluation is flagged as \emph{nonconvergent}.

The second class of failures contains those computations that could not be run to completion because at runtime the code encountered a situation that was incompatible with its computations. Such failures, which are referred to as \emph{runtime failures}, could arise because of exceptional conditions that cause internal algorithmic failures or because the computation is being evaluated in a region of the parameter space for which the functional is unstable \cite{Hellemans2013,Pastore2015}. When runtime failures occur, the Fayans model code reports an error, execution is aborted, and the associated model evaluation result is flagged as failed. To avoid such severe failures as much as possible, we have established empirical, ``reasonable'' bounds for the model parameters, where reasonable means that we want to avoid instabilities as well as unphysical results (e.g., unbound matter).
For details regarding the region of assumed stability of the Fayans EDF that is characterized by these bounds, see the supplemental material \cite{suppl}.

Knowledge of this region has not been programmed into the optimization methods, and therefore any optimization can evaluate the model at points outside the region of stability. We expect that some methods, such as the randomized methods, might have a greater propensity for choosing points outside the region of stability and that the various methods might also differ in their ability to recover from such bad evaluations. Our means for managing such potential difficulties are detailed next.

\subsection{Modifications to minimize and address possible failures}
\label{sec:mods}

A necessary step in facilitating the automatic training of any simulation-based model is to ensure that error handling is adequately addressed. All of the methods of \cref{sec:algorithms} use some form of the output $\left \{F_i(\xb) ; \; i \in B_k \right\}$ at a queried point $\xb$ to inform their internal decision-making. Consequently, it is necessary to address what occurs if the evaluation $F_i(\xb)$ fails for one or more components $i \in B_k$. In this paper, we seek to make minimal changes to the methods stated in \cref{sec:algorithms} and instead modify the objective function to account for the variety of situations that can be encountered, as discussed in \cref{sec:FayansSW}. Before detailing each of these modifications, we stress that throughout this article, information about specific points in parameter space is reported in the original unscaled space used by physicists. However, the optimization methods used in this study were implemented to work on a scaled version of the parameter space; hence, it is understood that the domain of the objective function is the scaled space. Unless otherwise stated, the points in parameter space discussed in the remainder of this section should be assumed to be with respect to the scaled parameter space used for optimization. For more information regarding the choice of scaling, we refer the reader to the supplemental material \cite{suppl}.

\subsubsection{Projection.}
\label{sec:projection}

The tested optimization methods were all intended primarily for unconstrained optimization. We operate in such a setting here and do not provide any method with prior knowledge of valid combinations of parameters. We observed that, depending on the quality of gradient estimators obtained by the randomized methods, the directions $\db_k$ could become so large as to generate steps into physically meaningless or unstable regions of parameter space.
To help such methods avoid divergence, we alter the objective function to include a projection onto an $\ell_1$-ball centered around the point $\bar{\xb}$. The unscaled version of this point is given in the supplemental material \cite{suppl}. Because of the scaling, it is appropriate to use an isotropic $\ell_1$-ball for defining a reasonable region; that is, we compute the projection
\begin{equation}
\label{eq:project}
\xb_{\mathbf{P}} = \displaystyle\arg\min_{\yb\in\mathbf{P}} \|\yb-\xb\|_2, 
\quad \textrm{where } 
\mathbf{P}= \left\{\yb\in \R^{\nx}: \|\yb-\bar{\xb}\|_1\leq 2\right\}.
\end{equation}
Our choice of using the $\ell_1$-norm to define $\mathbf{P}$ is motivated by our observation that failures are more likely to occur when many parameter components deviate significantly from $\bar{\xb}$.

We then pass the projected point $\xb_{\mathbf{P}}$ to the Fayans model simulation for evaluation. We modify the objective function \cref{eq:genfun} by applying a multiplicative penalty to each residual $F_i(\xb)$ based on the distance between $\xb_{\mathbf{P}}$ and $\xb$; that is,
\begin{equation}
\label{eq:projection_modification}
\tilde{F_i}(\xb) = F_i(\xb_{\mathbf{P}})\left(1+ \left\|\xb-\xb_{\mathbf{P}}\right\|_2^2\right).
\end{equation}
We acknowledge that the replacement of each $F_i$ with $\tilde{F_i}$ can introduce nonsmoothness at the boundary of $\mathbf{P}$, even when we assume each $F_i$ is smooth in a neighborhood of the boundary of $\mathbf{P}$.

\subsubsection{Observable convergence.}
\label{sec:convergence}
To account for marginally convergent and nonconvergent results as well as occasional runtime failures, and informed by the belief that convergent computations are more likely to indicate physically meaningful points in parameter space, we further modified the observable data $\tilde{F_i}(\xb)$ in \cref{eq:projection_modification} by computing
\begin{equation*}
\hspace{-55pt}
\label{eq:convergence_modification}
\hat{F_i}(\xb) = 
\left\{ 
\begin{array}{ll}
\tilde{F_i}(\xb) &\mbox{if the computation of }F_i\left(\xb_{\mathbf{P}}\right)\mbox{ succeeded} \\
(1 + \lambda_m^2)\tilde{F_i}(\xb) & \mbox{if the computation of }F_i\left(\xb_{\mathbf{P}}\right)\mbox{ was marginally convergent} \\
(1 + \lambda_n^2)\tilde{F_i}(\xb) & \mbox{if the computation of }F_i\left(\xb_{\mathbf{P}}\right)\mbox{ was nonconvergent} \\
(1 + \sigmar^2)\tilde{F_i}(\xb) & \mbox{if the computation of }F_i\left(\xb_{\mathbf{P}}\right)\mbox{ had a runtime failure}, 
\end{array}
\right.
\end{equation*}
where $\sigmar \ge \lambda_n \ge \lambda_m \ge 0$ denote penalty parameters. In our study, we set $\lambda_m=2, \lambda_n=5$, and $\sigmar=100$. With these considerations, our modified objective function, seen by all of the optimization algorithms, is
\begin{equation}
\label{eq:modfun}
\hat{f}(\xb) = \sum_{i=1}^{\nd} \hat{F_i}(\xb)^2.
\end{equation}
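To make these modifications concrete, the sketch below (illustrative Python; the $\ell_1$-ball projection follows the standard sorting-based algorithm of Duchi et al., and the component-evaluation interface returning a residual together with a convergence status is our own assumption) combines the projection \cref{eq:project}, the distance penalty \cref{eq:projection_modification}, and the convergence-based penalties defining \cref{eq:modfun}.

\begin{verbatim}
import numpy as np

def project_l1(x, center, radius=2.0):
    # Projection of x onto P = {y : ||y - center||_1 <= radius}, cf. eq:project
    z = x - center
    if np.abs(z).sum() <= radius:
        return x
    u = np.sort(np.abs(z))[::-1]                  # magnitudes, descending
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, z.size + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1.0)
    return center + np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

# Multiplicative penalties with lambda_m = 2, lambda_n = 5, sigma_r = 100
PENALTY = {"converged": 1.0,
           "marginal": 1.0 + 2.0**2,
           "nonconvergent": 1.0 + 5.0**2,
           "runtime": 1.0 + 100.0**2}

def modified_objective(x, center, evaluate):
    # evaluate(x_p) is assumed to yield (F_i(x_p), status) pairs; the return
    # value is hat f(x), the sum of squares of the hatted residuals
    x_p = project_l1(x, center)
    dist = 1.0 + np.sum((x - x_p)**2)
    return sum((PENALTY[s] * Fi * dist)**2 for Fi, s in evaluate(x_p))
\end{verbatim}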
\subsubsection{Recovering from failed simulations.}
\label{sec:failures}
In our study, not even the use of the modified objective function $\hat{f}(\xb)$ can cover every possible failure case. When the Fayans simulation returned no output $\hat{f}(\xb)$ whatsoever---a situation that we refer to as a \emph{hard failure}---none of the methods that we tested could continue. We thus slightly modified the methods to handle hard failures. The deterministic methods (POUNDERS and Nelder-Mead) were modified to terminate gracefully when a hard failure occurred, returning the point in parameter space corresponding to the best-found objective value in the run up until the hard failure occurred. The randomized methods were augmented with a simple backtracking procedure, like the one employed in AdaQN. After a direction $\db$ was computed, if the function evaluation at the next suggested point $\xb+\db$ resulted in hard failure, then the direction $\db$ was replaced by $0.1\db$, and we reevaluated at $\xb+\db$. This process was repeated until the evaluation of $\xb+\db$ did not result in hard failure. As we will see in the numerical results, the deterministic methods and AdaQN never suggested a point that resulted in hard failure; but KW and Bandit did encounter hard failures, depending on the selection of hyperparameters.

\section{Numerical Results}
\label{sec:numerical}

We now study the performance of the algorithms from \cref{sec:algorithms} on the function in \cref{eq:modfun}. We first tune the identified hyperparameters to obtain hyperparameter values that maximize the performance of each algorithm. Since computational budgets may limit one's ability to perform comprehensive hyperparameter tuning, the insensitivity to hyperparameter selection (as well as the variability overall) may be a key consideration in selecting an optimization algorithm. We report on this sensitivity and perform a thorough study of each algorithm using the best hyperparameter values found.

For this study, the results for all $\nd=198$ component functions were stored at each evaluated point, even when an optimization method with $\nb<198$ was not provided (or charged for) this full set of component function evaluations. Storing this information allowed us to reevaluate, during postprocessing, the true function (i.e., with all 198 component functions) for every point queried.

The randomized algorithms of \cref{sec:algorithms} require a forward-difference parameter $h>0$. In our computations we use $h=5\cdot 10^{-7}$. This value was obtained by estimating the noise level in each $F_i^2$ following \cite{more2011ecn}. These noise estimates were then used to determine $h$ following the procedure in \cite{more2011edn}. Although variation was seen across different component functions $i$ and different directions in $\R^{\nx}$, the effect of this variation turned out to be mild, and hence we used a fixed difference parameter for all component functions.

\subsection{Tuning of hyperparameters}
\label{sec:tuning}

For our hyperparameter tuning procedure, we randomly selected 5 starting points from the same $\ell_1$-ball as in \cref{eq:project}. We ran each method with a budget of $700\nd$ component function evaluations from each starting point. Each randomized method was run with three different seeds from each starting point, while deterministic methods were run once from each starting point.

The three main classes of hyperparameters, and the ways we chose to vary them, are defined below.

\subsubsection{Step-size hyperparameters.}
Every method that we tested, with the exception of AdaQN, requires some kind of (initial) step-size parameter.
While POUNDERS and Nelder-Mead require a single radius parameter, the stochastic approximation methods KW and Bandit require a \emph{sequence} of step-size parameters $\{\alpha_k\}$, as seen in \cref{eq:gd_iteration}. For all four methods, we chose to use a common set of step-size hyperparameters based on the set 
$J\equiv \{3,4,5,6,7\}.$

In the case of POUNDERS, the hyperparameter value $\alpha=2^{-j}, j\in J$ sets the initial trust-region radius; and in the case of Nelder-Mead, the hyperparameter value $\alpha=2^{-j}, j\in J$ sets the initial simplex radius. For the stochastic approximation methods, we opted to use a schedule of decaying step sizes $\alpha_k = 2^{-j}/(k+1), j\in J$. Employing such a harmonic sequence as the step-size schedule for stochastic approximation methods is in line with standard convergence theory for those methods. We remark again that AdaQN employs adaptive step sizes and hence does not require a step-size hyperparameter.

\subsubsection{Batch-size hyperparameters.}
Each of the stochastic methods requires the specification of a batch-size parameter. Recall from \cref{sec:sgd} that a batch is drawn uniformly from the $\nd$ component functions and that such draws are independent from one draw to the next. In each iteration, KW and Bandit methods require a batch size $\nb$ of component function evaluations to compute a gradient estimator; recall \cref{eq:gradsum}. Following standard practice, we chose to hold $\nb$ constant for these methods. While AdaQN adaptively increases $\nb$ during the course of an algorithm, it still requires an initial $\nb$. For all three methods, we used a set of 4 common batch sizes
$$\nb\in\{11,33,99,198\}.$$
We interpreted $\nb$ as the constant batch size for KW and Bandit methods and as the initial batch size for AdaQN. Observe that all of our tested $\nb$ divide $\nd=198$, which is helpful for comparing the stochastic methods with the full-batch deterministic methods. Moreover, when $\nb=\nd$, KW and AdaQN are deterministic methods while Bandit is still a randomized method since it employs a random direction $\mathbf{u}_k$ in each iteration.

\subsection{Performance metrics}
\label{sec:metrics}
We now discuss various measures of effort in order to compare the performance of the methods. We label by $f_{s,*}$ the minimum function value evaluated over all runs instantiated from the $s$th starting point $\xb_{s,0}$, regardless of method, seed, and all relevant hyperparameters. We say that a point $\xb_{s,k}$ is \emph{$\tau$-optimal for starting point $\xb_{s,0}$} provided
\begin{equation}
 \label{eq:tau-optimality}
 \displaystyle\frac{f(\xb_{s,k})-f_{s,*}}{f(\xb_{s,0})-f_{s,*}}\leq\tau.
\end{equation}
A point $\xb_{s,k}$ satisfying \cref{eq:tau-optimality} has achieved a fraction $\tau$ of the best-known decrease from starting point $\xb_{s,0}$.

Our primary measure for any run is the best function value, $\min_{k\leq K} f(\xb_{s,k})$, as a function of the number of points, $K$, evaluated during that run. We often report these results in terms of the number of component functions evaluated, that is, $\sum_{k\leq K} \nb(k)$.
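As an illustration of this accounting, the helper below (a minimal Python sketch; the function and argument names are ours) computes the number of component function evaluations a run needs to attain $\tau$-optimality in the sense of \cref{eq:tau-optimality}.

\begin{verbatim}
def evals_to_tau_optimality(f_vals, batch_sizes, f_star, tau):
    # f_vals[k]: true objective at the k-th point evaluated by a run;
    # batch_sizes[k]: component function evaluations charged for that point;
    # f_star: best value seen from this starting point over all runs.
    # Returns the total charged evaluations when tau-optimality first holds,
    # or everything charged (the run's budget) if it is never attained.
    target = f_star + tau * (f_vals[0] - f_star)
    total = 0
    attained = None
    for f_k, nb_k in zip(f_vals, batch_sizes):
        total += nb_k
        if attained is None and f_k <= target:
            attained = total
    return attained if attained is not None else total
\end{verbatim}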
Tracking evaluations in this way also allows us to report the number of component function evaluations needed to achieve $\tau$-optimality, for a specified value of $\tau\in (0,1)$. Note that for some values of $\tau$, not all runs may achieve $\tau$-optimality; when a run fails to do so, we define the number of component function evaluations it required to attain $\tau$-optimality as the budget of component function evaluations it was given.

\subsection{Results of hyperparameter tuning}
\label{sec:empirical}
We now show the results of hyperparameter tuning to search for ``best'' step sizes and/or batch sizes, where appropriate. In \Cref{fig:pounders_nmead} we look at summary hyperparameter tuning results for POUNDERS and Nelder-Mead, and in \Cref{fig:adaqn} we look at summary results for AdaQN. For AdaQN, we chose to tune only initial batch sizes.

\begin{figure}[htb]
\centering
\includegraphics[width=0.92\linewidth]{Fig1}
\caption{Tuning the step-size parameter for POUNDERS (left) and Nelder-Mead (right). Throughout such plots in this paper, the vertical axis shows the value of $\hat{f}$, which is defined in \cref{eq:modfun}, that is best among those seen within the specified number (horizontal axis) of evaluations. Solid lines denote median performance over all starting points (and stochastic replications), while the translucent bands denote the $25$th and $75$th percentiles of performance. The number in parentheses in the legend denotes the \emph{average} number of hard failures produced by the Fayans-model simulation during the run of the algorithm. The solid black horizontal line denotes the value of $\hat{f}(\xb_1)$ in Table~\ref{tab:BestResults}.\label{fig:pounders_nmead}}
\end{figure}

\begin{figure}[htb]
\centering
\includegraphics[width=.55\linewidth]{Fig2}
\caption{Tuning the batch-size parameter for AdaQN.\label{fig:adaqn}}
\end{figure}

Based on the results for AdaQN in \Cref{fig:adaqn}, we chose to initialize $\nb=11$, which finds the same quality of median solutions in terms of $\hat{f}$ values as do other batch sizes toward the end of its budget but identifies better solutions earlier on (in terms of the $25$th, $50$th, and $75$th percentiles).

\begin{figure}[htb]
\centering
\includegraphics[width=.7\linewidth]{Fig3}
\caption{Median number of component function evaluations needed by POUNDERS and Nelder-Mead to attain $\tau$-optimality, where $\tau=0.01$.\label{fig:pounders_nmead2}}
\end{figure}

We see in \Cref{fig:pounders_nmead} that the performance of POUNDERS is extremely robust to the selection of the initial trust-region radius. For Nelder-Mead, we observe that its performance is not as independent of the simplex radius as POUNDERS is of its initial trust-region radius. This is summarized in \Cref{fig:pounders_nmead2}, which follows the example set in \cite{asi2019importance} and shows the \emph{median} number of component function evaluations required by a method to attain $0.01$-optimality.

Because the POUNDERS performance was so similar for all initial trust-region radius values, we selected $\alpha=2^{-4}$. For Nelder-Mead, because the best final median function value occurred for $\alpha=2^{-4}$, and because the median performance of $\alpha=2^{-4}$ was nearly as fast as the median performance of $\alpha=2^{-3}$ in finding $0.01$-optimal solutions, we selected $\alpha=2^{-4}$.
For the remaining stochastic approximation methods, we first fix batch-size parameters and compare the median performance of step-size parameters. These results are shown, respectively, in \Cref{fig:bandit-batchsizes} and \Cref{fig:kw-batchsizes}. We remark that in the case of $\nb=11$, we did not test all the step-size parameters because of the extreme computational cost of running these two methods with $\nb=11$ in our computational setup. Instead, for $\nb=11$ we tested the two step sizes that resulted in the fewest average hard failures from running only one seed per starting point. In \Cref{fig:kw-batchsizes}, we see that for KW, this selection of step sizes matches the selection of step sizes that performed best for $\nb=33$.

\begin{figure}[htb]
\centering
\includegraphics[width=0.9\linewidth]{Fig4.pdf}
\caption{Hyperparameter tuning results for Bandit; from left to right, and then top to bottom, are batch sizes 198, 99, 33, and 11.\label{fig:bandit-batchsizes}}
\end{figure}

\begin{figure}[htb]
\centering
\includegraphics[width=0.9\linewidth]{Fig5.pdf}
\caption{Hyperparameter tuning results for KW; from left to right, and then top to bottom, are batch sizes 198, 99, 33, and 11.\label{fig:kw-batchsizes}}
\end{figure}

In \Cref{fig:bandit-batchsizes}, we observe that for each fixed batch size, there is a step size that provides a clear best median performance. Unlike in the comparisons made for POUNDERS, Nelder-Mead, and AdaQN, however, both the differences in percentile performance among the Bandit variants and the average number of hard failures encountered across different runs should also be taken into account. With these three considerations in mind, for $\nb=198$, we selected $\alpha=2^{-3}$. For $\nb=99$, balancing the significantly lower number of hard failures encountered by $\alpha=2^{-4}$ compared with $\alpha=2^{-3}$, as well as the better $75$th percentile performance of $\alpha=2^{-4}$, we selected $\alpha=2^{-4}$. For $\nb=33$, because of the similar median final performance of $\alpha=2^{-5}$ and $\alpha=2^{-3}$, coupled with the better $75$th percentile performance of $\alpha=2^{-5}$ and the lower number of hard failures encountered by $\alpha=2^{-5}$, we selected $\alpha=2^{-5}$. For $\nb=11$, the choice of $\alpha=2^{-5}$ was clear.

In \Cref{fig:kw-batchsizes}, the choice for $\nb=198$ and $\nb=99$ was fairly easy to make, at $\alpha=2^{-4}$ and $\alpha=2^{-5}$, respectively. The choice for $\nb=33$ was less clear; the median performance of $\alpha=2^{-5}$ was not much worse than the median performance of $\alpha=2^{-4}$; however, because the $75$th percentile performance of $\alpha=2^{-5}$ was better than that of $\alpha=2^{-4}$, and because $\alpha=2^{-5}$ had fewer average hard failures, we selected $\alpha=2^{-5}$. For $\nb=11$, we selected $\alpha=2^{-5}$ because of its failure-free performance.

\begin{figure}[htb]
\centering
\includegraphics[width=0.9\linewidth]{Fig6.pdf}
\caption{Best step sizes of each batch size for Bandit (left) and KW (right).\label{fig:best-stepsizes-compared}}
\end{figure}

Having downselected the step-size parameters per batch size, we now compare the methods across different batch sizes in \Cref{fig:best-stepsizes-compared}. We see that for KW, using the smallest tested batch size $\nb=11$ with a step size of $\alpha=2^{-5}$ is the best setting of hyperparameters. The situation was less clear for Bandit methods.
While the median performance within the budget of component function evaluations was best with batch size 11, this parameter combination exhibited many hard failures; as a tradeoff between a smaller number of hard failures and a reasonable median performance, we selected step size $\alpha=2^{-4}$ and $\nb=99$.

\subsection{Comparing tuned methods on additional starting points}
Having performed the hyperparameter tuning in the preceding section to select appropriate hyperparameters for each of the five methods, we then ran the selected variant of each method on a larger set of problems. In particular, we randomly generated twenty starting points (instead of five) from the same $\ell_1$-ball as in \cref{eq:project} and again ran three seeds for each starting point for each of the randomized methods. The budget for each method was extended to $1500\nd(\nx+1)$ component function evaluations, more than double the budget provided in the hyperparameter tuning runs. These results are presented in \Cref{fig:compare-all}.
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{Fig7.pdf}
\caption{Comparing the best variants of all methods.\label{fig:compare-all}}
\end{figure}

The results seen in \Cref{fig:compare-all} are remarkable. Even in the full run, POUNDERS continues to exhibit an interesting phenomenon: after a fixed number of full-batch component function evaluations, the best objective function value found suddenly drops and then exhibits very low variation in the $25$th to $75$th percentile band while decreasing to a final solution. This sort of robustness and consistency in performance is certainly desirable.

In terms of final median performance, AdaQN and Nelder-Mead find solutions of quality similar to those found by POUNDERS. One could argue that the performance of Nelder-Mead, in terms of overall median and other percentile trajectories, is dominated by the performance of POUNDERS. The performance of AdaQN is interesting in that it is not strictly dominated by the performance of POUNDERS. In fact, if one were interested only in generating reasonable solutions (say $\tau$-optimal solutions, where $\tau\approx 0.25$) in as few component function evaluations as possible, AdaQN is a better choice than POUNDERS. If one were simultaneously interested in the robustness of the final solution, then AdaQN remains a strong choice. This is in contrast with KW, which also achieves gains fairly quickly but does not have the same final robustness as exhibited by AdaQN. For all but a few $\tau$ values, the Bandit method is bested by KW, which may be attributed partly to the nontrivial failure rate experienced by Bandit.

Our comparisons thus far have measured computational expense in terms of the number of component function evaluations. Such metrics are fully justified in computing environments where a single component function can be evaluated at a time. For sufficiently large parallel computing environments, resources are available to perform full-batch (here $\nb=\nd=198$) evaluations simultaneously. We now examine the cases between the extremes of a single component function evaluated at a time and all $\nd$ component functions evaluated at a time. Methods capable of subsampling (i.e., using batch sizes $\nb<\nd$) are potentially promising in such intermediate regimes.

Our \emph{resource utilization plots} illustrate these considerations.
By resource size we denote the number of component function evaluations that can be computed simultaneously. Given a resource size, the number of \emph{rounds} is the number of times such a whole resource must be used to perform the component function evaluations needed to achieve a performance metric (e.g., $\tau$-optimality as in \cref{eq:tau-optimality}). For example, if the resource size is 11, then a method with $\nb=198$ will use at least 198/11=18 rounds to make an optimization step, while a method with $\nb=11$ will potentially use only one such round.

\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Fig8}
\caption{Resource utilization plots with respect to $\tau$-optimality (a: $\tau=0.5$; b: $\tau=0.1$) for the tuned methods shown in \Cref{fig:compare-all}. The construction of these figures assumes that component functions can only be evaluated in parallel at a single point in parameter space at a time. In addition, some choices of resource size result in poor performance when the size is incompatible with $\nb$. For example, evaluating 33 component functions with a resource size of 22 is charged two rounds (44 possible evaluations). Similarly, a method with nonadaptive batch size $\nb$ will not be able to benefit, absent failures, from resource sizes larger than $\nb$.\label{fig:rups}}
\end{figure}

\begin{table}
\caption{\label{tab:initialevals} Minimum number of initial component function evaluations to evaluate the first (noninitialization) optimization step.}
\centering
\begin{tabular}[t]{lr}
\br
 Method & $F_i$ evaluations\\
\mr
AdaQN & $(\nx+2)\nb(0)$ \\
Bandit & $3 \nb $ \\
KW & $(\nx+2)\nb$ \\
Nelder-Mead & $(\nx+2)\nd$ \\
POUNDERS & $(\nx+2)\nd$ \\
\br
\end{tabular}
\end{table}

We highlight computational environments where POUNDERS is not the most obvious choice by showing select values of $\tau$ in the resource utilization plots in \Cref{fig:rups}. For low demands on solution quality ($\tau=0.5$), Bandit methods are exceptionally good, identifying $0.5$-optimal solutions remarkably quickly. This is because Bandit requires very few evaluations to get started; \Cref{tab:initialevals} shows that Bandit (with $\nb=33$) will have evaluated its first step (to $\xb_1$) after 99 component function evaluations. The left plot in \Cref{fig:rups} shows that on over half the runs, this number (3 rounds at a resource level of 33) is sufficient for satisfying this coarse accuracy level. Nelder-Mead and POUNDERS show a similar behavior after their first step is performed, but this step requires 15 rounds at a resource level of 198 (90 rounds at a resource level of 33). AdaQN's smaller batch size allows it to outperform POUNDERS and Nelder-Mead at lower resource levels, but it is insufficient for catching Bandit at the coarse accuracy $\tau=0.5$.

When we tighten the accuracy demands to $\tau=0.1$, we see that the deterministic methods (i.e., those with a full batch $\nb=198$) are again best, even for resource sizes as small as 11. This plot also shows that AdaQN's adaptive batch size allows it to remain competitive even at this tighter accuracy for resource sizes up to 99.

\section{Discussion}
\label{sec:discussion}

Our results show that the deterministic methods tested were insensitive to starting point in terms of finding a good objective function value with a limited number of $\hat{f}$ evaluations.
\n\n\\section{Discussion}\n\\label{sec:discussion}\n\nOur results show that the deterministic methods tested were insensitive to the starting point in terms of finding a good objective function value within a limited number of $\\hat{f}$ evaluations. Furthermore, these methods were generally insensitive to hyperparameters, did not evaluate points that resulted in hard failures, and are attractive even if the expense of evaluating the Fayans model allowed for computing only a fraction (e.g., 11\/198=1\/18) of the component functions at a time. For problems where even smaller fractions are possible, or when less accurate solutions are desired within even smaller computational budgets than those tested here, AdaQN appears especially promising. We expect that methods that can use smaller batch sampling will become more attractive as the amount of fit data increases significantly (as in traditional supervised learning applications).\n\nAs part of understanding the quality of the results achieved in this study, we identified the best run, in terms of the lowest $\\hat{f}$ value, for each of the 20 starting points. Eleven of these 20 best results were found by POUNDERS (which also found the overall best result), seven by AdaQN, and two by Nelder-Mead. All 20 of these best points are contained in $\\mathbf{P}$ and resulted in fully converged Fayans model evaluations. The parameter values of two of these best points are presented in unscaled form in Table~\\ref{tab:BestResults}. To give an impression of typical parameters from previous fits, we also show the parameters for Fy($\\Delta r$) \\cite{Reinhard2017} and Fy($\\Delta r$,HFB) \\cite{Miller2019,Reinhard2020}.\n\n\\begin{table}\n\\caption{\\label{tab:BestResults} Unscaled parameter values for two of the 20 best optimization results in the study. The parameters are given up to six digits, which suffices to reproduce the output values shown in \\Cref{tab:Chi2Comparison}. The point $\\xb_1$ had the lowest objective function value in the study and is chosen as the representative of the group of the four best runs; the point $\\xb_{5}$ had the best result in the second grouping of the remaining 16 best runs. For the definition of the Fayans EDF parameters, see \\cite{suppl}. $\\rho_\\mathrm{eq}$ is in fm$^{-3}$; $E\/A, K, J, L$ are in MeV; other parameters are dimensionless.
As a guideline for typical model parameters, the values for Fy($\\Delta r$) \\cite{Reinhard2017} and Fy($\\Delta r$,HFB) \\cite{Miller2019,Reinhard2020} EDFs are also given.\n}\n\\begin{indented}\n\\lineup\n\\item[]\n\\begin{tabular}[t]{lll|ll}\n\\br\n Parameter & $\\xb_1$ & $\\xb_{5}$ & \\multicolumn{1}{c}{Fy($\\Delta r$)} & \\multicolumn{1}{c}{Fy($\\Delta r$,HFB)} \\\\\n\\mr\n$\\rho_\\mathrm{eq}$ \t\\cmmnt{RHO\\_NM} & \\0\\00.165755 & \\0\\00.166182 & \\0\\00.160 & \\0\\00.164\\\\\n$E\/A$ \\cmmnt{EOVERA} & \\0\\-15.8715 & \\0\\-15.8780 & \\0\\-16.11 & \\0\\-15.86 \\\\\n$K$ \t \\cmmnt{COMPR} & 192.686 & 185.156 & 219 & 210.3\\\\\n$J$ \t \\cmmnt{ASYMM} & \\028.8018 & \\028.8467 & \\029 & \\028.1 \\\\\n$L$ \t \\cmmnt{DASYM} & \\035.6545 & \\031.5877 & \\030 & \\037.5\\\\\n${h_{2-}^\\mathrm{v}}$ \\cmmnt{H2VM} & \\0\\07.08066 & \\0\\04.71124 & \\0\\01.2150 & \\022.8090 \\\\\n${a_+^\\mathrm{s}}$\t \\cmmnt{ASP} & \\0\\00.594920 & \\0\\00.620893 & \\0\\00.6047 & \\0\\00.56548 \\\\\n${h_{\\nabla}^\\mathrm{s}}$\t \\cmmnt{HGRADP}& \\0\\00.510148 & \\0\\00.613192 & \\0\\00.6656 & \\0\\00.45795\\\\\n${\\kappa}$\t \\cmmnt{C0NABJ} & \\0\\00.192851 & \\0\\00.191370 & \\0\\00.18792 & \\0\\00.19833 \\\\\n${\\kappa'}$\t \\cmmnt{C1NABJ} & \\0\\00.0383998 & \\0\\00.0532395 & \\0\\0\\-0.0237 & \\0\\00.44008 \\\\\n${f_{\\mathrm{ex}}^\\xi}$\t \\cmmnt{FXI} & \\0\\0\\-3.70050 & \\0\\0\\-3.63760 & \\0\\0\\-4.4720 & \\0\\0\\-4.4556 \\\\\n${h_\\nabla^\\xi}$\t \\cmmnt{HGRADXI} & \\0\\03.17494 & \\0\\03.48559 & \\0\\03.227 & \\0\\03.113 \\\\\n${h_{+}^\\xi}$\t \\cmmnt{H1XI} & \\0\\03.22592 & \\0\\03.13267 & \\0\\04.229 & \\0\\04.2440\\\\[5pt]\n\\br\n\\end{tabular}\n\\end{indented}\n\\end{table}\n\n\\Cref{fig:MeanChi2PerClass} shows the outputs of these 20 points by observable class (see Table~\\ref{tab:observableClasses}). For each observable class, by $\\chi^2$ we denote the contribution to $\\hat{f}(\\xb)$ from that observable class (hence the sum over all observable classes is $\\hat{f}(\\xb)$). We normalized these $\\chi^2$ by the number of observables in the associated class to obtain the average $\\chi^2$ of each observable class, $\\overline{\\chi^2}$. \\Cref{fig:MeanChi2PerClass} suggests that the results can be partitioned into two groups. This partitioning is related not only to $\\overline{\\chi^2}$ but also to the values of $\\hat{f}$. The four results labeled Low $\\hat{f}$ correspond to those results with $\\hat{f}$ less than 49; the 16 other results, labeled High $\\hat{f}$, have a slightly higher $\\hat{f}$. The results with the lowest $\\hat{f}$ from each group are denoted by $\\xb_1$ and $\\xb_5$ in Table~\\ref{tab:BestResults}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=.5\\linewidth]{Fig9.pdf}\n\\caption{Average $\\chi^2$ by observable class ($\\overline{\\chi^2}$) plotted for each of the 20 best results obtained in the study. In terms of this quantity, the 20 results can clearly be partitioned into two groups. The results in one such group are colored blue and correspond to the four results with the lowest $\\hat{f}$ values in the study.\\label{fig:MeanChi2PerClass}}\n\\end{figure}\n\nThe $\\chi^2$ and $\\overline{\\chi^2}$ values are given for the same two points in Table~\\ref{tab:Chi2Comparison}. In general, the low-$\\hat{f}$ cluster appears to fit radius-based observables better than does the high-$\\hat{f}$ cluster, but at the expense of the quality of fit to energy-based observables. In particular, the fit to the isotopic differences of charge radii in Ca isotopes is better, but the fits to the two pairing gaps deteriorate.\n
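This per-class bookkeeping is straightforward; a minimal sketch, with placeholder arrays standing in for the actual residual contributions and class labels, would produce entries of the form reported in Table~\\ref{tab:Chi2Comparison} below.\n\\begin{verbatim}\nfrom collections import defaultdict\n\ndef class_breakdown(chi2_terms, classes):\n    # chi2_terms[i] is the contribution of observable i to the\n    # objective, and classes[i] is its observable class.  Returns,\n    # per class, the total chi2 (these totals sum to f-hat) and the\n    # average chi2 over the observables in that class.\n    totals, counts = defaultdict(float), defaultdict(int)\n    for term, cls in zip(chi2_terms, classes):\n        totals[cls] += term\n        counts[cls] += 1\n    return {c: (totals[c], totals[c] \/ counts[c]) for c in totals}\n\\end{verbatim}\n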
\n\n\\begin{table}\n\\caption{\\label{tab:Chi2Comparison} Breakdown of the $\\chi^2$ and average $\\chi^2$ ($\\overline{\\chi^2}$) by observable class (see Table~\\ref{tab:observableClasses}) for the two points in parameter space given in Table~\\ref{tab:BestResults}. In bold are those values with potentially significant differences between the two groups of best results.}\n\\begin{indented}\n\\lineup\n\\item[]\n\\begin{tabular}[t]{l||ll|ll}\n\\br\n & \\multicolumn{2}{c|}{$\\xb_1$} & \\multicolumn{2}{c}{$\\xb_5$} \\\\\nClass & $\\chi^2$ & $\\overline{\\chi^2}$ & $\\chi^2$ & $\\overline{\\chi^2}$\\\\\n\\mr\n$E_B$ & \\09.64 & 0.153 & \\09.06 & 0.144 \\\\\n$R_{\\rm diffr}$ & \\09.49 & 0.339 & \\09.81 & 0.351 \\\\\n$r_{\\rm ch}$ & 16.41 & 0.316 & 17.95 & 0.345 \\\\\n$\\sigma$ & \\02.48 & 0.095 & \\03.17 & 0.122 \\\\\n$\\epsilon_{ls,p}$ & \\00.78 & 0.156 & \\00.70 & 0.141 \\\\\n$\\epsilon_{ls,n}$ & \\03.64 & 0.728 & \\03.90 & 0.780 \\\\\n$\\delta\\langle r^2 \\rangle$ & \\00.25 & \\textbf{0.082} & \\01.31 & \\textbf{0.436} \\\\\n$\\Delta E_p$ & \\03.52 & \\textbf{0.320} & \\02.66 & \\textbf{0.242} \\\\\n$\\Delta E_n$ & \\02.12 & \\textbf{0.425} & \\01.12 & \\textbf{0.225} \\\\\n\\mr\n$\\hat{f}$\t & 48.33 & & 49.69 & \\\\\n\\br\n\\end{tabular}\n\\end{indented}\n\\end{table}\n\nThese results underscore the value of optimization methods that can train physics models with few model evaluations. Such efficiency allows one to perform several different optimizations (e.g., from different starting points or with different fit data) and thereby identify potentially distinct local minimizers. The subsequent study of distinct local minima can be valuable: the ability of a solution to model the desired physics often matters more than the final objective function value.\n\n\\section{Perspectives}\n\nIn this study, we addressed the calibration of the Fayans EDF nuclear physics model via $\\chi^2$ minimization, which can be viewed as a supervised machine learning problem. The model is somewhat computationally expensive, and derivative information with respect to the model parameters is not available. Accordingly, we investigated the strengths and limitations of five algorithmic families of iterative methods for local, unconstrained derivative-free optimization. We considered two deterministic and three randomized methods. We analyzed hyperparameter tuning considerations and the variability associated with the methods, and we illustrated considerations for tuning in different computational settings. In total, nearly half a million CPU core hours were expended for this study, an indication of the infeasibility of performing thorough hyperparameter tuning and comparison for many nuclear physics model training problems.\n\nFor the model considered, we conclude that the performance of POUNDERS, within a budget of function evaluations, is extremely robust. The Fayans EDF optimization results obtained in this work are generally consistent with those of the Fy($\\Delta r$) \\cite{Reinhard2017} and Fy($\\Delta r$,HFB) \\cite{Miller2019,Reinhard2020} models; see Table~\\ref{tab:BestResults}. In particular, the set $\\xb_1$, which performs very well on the $\\delta\\langle r^2 \\rangle$ class, appears to be fairly close to Fy($\\Delta r$,HFB).
The extension of the Fayans model to isovector pairing, suggested in \\cite{Reinhard2017}, will be carried out in forthcoming work, which will also contain a detailed discussion of the resulting quantified nuclear properties.\n\n\\section*{Acknowledgments}\nThe work at Argonne was supported by the U.S.\\ Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, applied mathematics and SciDAC programs under Contract No.\\ DE-AC02-06CH11357 and by the NUCLEI SciDAC-4 collaboration. This work was also supported by the U.S.\\ Department of Energy, Office of Science, Office of Nuclear Physics under award numbers DE-SC0013365 (Michigan State University) and DE-SC0018083 (NUCLEI SciDAC-4 collaboration).\nWe gratefully acknowledge the computing resources provided on Bebop, a high-performance computing cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory.\n\n\\section*{References}\n