\section{Introduction}

\indent \indent Cerenkov Ring Imaging devices pose specific pattern recognition problems.
Generally, devices are designed in such a way that one has to look for photons
projected onto a circle \cite{YPSILANTIS, SEGUINOT}, whose radius is simply connected
to the Cerenkov angle of the charged track radiating these photons. In RICH detectors (as for DELPHI), the main problem comes from
background photons, which arrive together with the real signal photons and make
pattern recognition difficult, basically because the number of independent constraints which can be used is
small. Traditionally, current methods in pattern recognition follow more or less
the ideas of Baillon \cite{BAILLON}: they assume knowledge of the charged track information
and perform a maximum likelihood fit to the number of background and/or signal photons, under the
various mass assumptions.

It is not common practice to attempt a full fit of the ring parameters, mainly because
background photons are expected to spoil the fit quality and, in any case, prevent one from defining a
reasonable $\chi^2$ to be minimised and hence a fit probability having the properties expected of
a probability distribution.
This is mainly due to the difficulties encountered when trying
to select a set of photons associated with a track, a set possibly still affected by
contamination from background photons.

Motivated by the DIRC device \cite{DIRC0,DIRC}, which will be used by the BaBar Collaboration \cite{BABAR}
at the SLAC PEP--II B--factory, we reexamine this problem with the goal of associating to a given charged
track a set of photons with a low level of contamination, keeping in mind the need to check, in each
photon sample, the effect of a residual background contamination. This raises several problems that must be
solved in order to have a procedure able to run under realistic conditions.

In a DIRC device, one can define the Cerenkov photon directions with a controlled accuracy.
This allows one to reconsider the problem of data representation, with the aim of having
a well defined geometrical figure to look for. We advocate here that the stereographic
projection provides such a suitable tool. Whatever the errors, it always allows one to look
for an image of the Cerenkov ring which is surely a circle arc, without any special
hardware requirement on the DIRC device considered. This is true even if the
charged track direction is poorly known, possibly even unknown. In real life, however, because
of background, one cannot completely do without some knowledge -- even approximate and/or subject
to systematic errors -- of the charged track direction. In this representation,
the radius of this circle is connected in a simple way with the Cerenkov angle, and the
circle center with the charged track direction. The stereographic projection has also
been proposed as a tool for pattern
recognition{\footnote{We thank T. Ypsilantis (INFN, Bologna) for this information
and for having drawn our attention to the corresponding references.}}
with a RICH detector which will use a 27 kton
water target and radiator in order to detect neutrino oscillations at Gran Sasso, using
a long baseline neutrino beam from CERN \cite{LBLRICH}.

In working conditions, one is faced with another problem, specific to the DIRC~: generally,
the part of the Cerenkov cone populated by observed photons represents a relatively small
azimuthal window compared with 360$^{\circ}$. In any representation of the data, this
corresponds to a small circle arc populated with measured points. Taking into account the
relative magnitude of the errors affecting these points and the circle radius, this generally
prevents a direct use of standard circle fit algorithms. Indeed, in such conditions, the fit
quality is poor, even in the absence of background, as the radius is systematically
underestimated. In order to circumvent this difficulty, further information has to be used,
and the charged track direction happens to be a good candidate for this purpose. This is
basically the idea of Ref. \cite{BAILLON}, but in a completely different context. Indeed, one
can always assume some knowledge of the charged track direction, provided by a tracking
device located in front of the DIRC. However, this implies a deep change in the circle fit
algorithm structure. The pattern recognition problem of the long baseline RICH of
Ref. \cite{LBLRICH}, even if not trivial, is much simpler than for the DIRC, as the circle arc
to reconstruct is generally quite large and the errors affecting the photon directions are much smaller.

In this way, one is led to introduce, as measurements to be fit, the angular distances
between each of the various photon directions and the charged track direction.
This introduces strong correlations which have to be
carefully accounted for when constructing the expression for the $\chi^2$ to be minimised.
These correlations are partly due to the measurement errors on the charged track direction
and partly due to the multiple scattering undergone by the charged track inside the radiator~;
this last kind of correlation can be important when identifying tracks of very low momentum
(typically below 1 GeV/c), and it survives even if the circle arc populated by photons
is large enough that one need not worry about circle center information.

Finally, having defined appropriate tools for pattern recognition and particle identification,
it remains to define a procedure able to provide a set of photons really associated with the track.
As the number of constraints available for photon recognition is small, tools allowing
one to control background contamination effects in this sample are needed.

\vspace{0.7cm}
The paper is organised as follows. In Section \ref{dircdet}, we briefly outline the DIRC structure
and properties, relying on the known literature \cite{DIRC0,DIRC}, in its aspects relevant to reconstruction
and particle identification. In Section \ref{stereo}, we recall the properties of the stereographic
projection and its connection with Cerenkov cone reconstruction. Section \ref{circlefit} is devoted
to describing the circle fit algorithm and the recognition procedure when the circle arc populated by
measured points is small~; we also
sketch here how to deal with photon contamination control and background removal. In Section \ref{simdirc1},
we describe the Monte Carlo we have coded in order to check the full procedure~; all effects affecting
the recognition procedure are included and varied from a minimum (to check the basic model properties)
to a maximum (photons produced by several tracks in the same bar, with or without additional flat background).
The procedure of photon selection and background removal is described in detail in Section \ref{photsel}~; it is
more specific to the DIRC problem than the fit itself. This procedure
relies on first using a clean subsample of photons (unambiguous photons) to start an iterative
recognition procedure. In Section \ref{MCresults} we describe the working of the procedure and show
that it behaves well, even under very large background conditions~; the effects of
correlations are specifically illustrated and it is shown that they cannot be neglected.
Of course, photon selection relies on cutting out photon candidates~; Section \ref{cutadjust} therefore illustrates
how cut levels can be adjusted and tuned in such a way that the probability distributions are not
affected too much. We thus show that pull distributions and probability distributions allow
one to perform the background removal with a controlled quality.

Finally, two appendices contain a full treatment of the multiple scattering effects and correlations.
We also describe there our modelling, where all effects are taken into account at first order only.
Monte Carlo results show that the effects of this approximation are small in the region where specific
particle identification is expected to be reliable with a DIRC device, {\it i.e.} above 500 MeV/c.


\section{Outline of a DIRC Device and Properties}
\label{dircdet}

\indent \indent
The DIRC (Detection of Internally Reflected Cerenkov light) is a new type of detector which will be
used as the main particle identification system in the BaBar experiment \cite{BABAR} at the PEP--II collider (SLAC).
A prototype of the DIRC \cite{DIRC} has been extensively tested in the CERN PS beam in 1995/1996,
demonstrating the validity of the DIRC concept.
It will not be described here in full detail.
We will rather briefly outline the general principles
involved in its conception and operation~; more information can be found in the recent literature on the
BaBar DIRC (see Refs. \cite{DIRC}, \cite{DIRC0} and \cite{DIRCNEW}). We mainly limit
ourselves to the aspects relevant to the procedure we develop and to the Monte Carlo we will
use in order to test it.

\vspace{1.cm}

The originality of a DIRC device is that, contrary to most other Cerenkov ring imaging detectors, it
makes use of the Cerenkov light generated in the radiator medium by trapping photons (through total internal
reflection) inside the radiator itself and guiding them toward a set of photomultipliers (PMTs) for detection~;
this also allows the detector to be quite thin in the direction of the incoming tracks, because the Cerenkov cone
expands outside the main sensitive area of the detector.

In the DIRC, the radiator is made of long rectilinear fused silica bars of modest rectangular section, a material
chosen mainly because of its high refractive index (1.474) \cite{DIRC,DIRCNEW} compared to air or nitrogen and its long
absorption length in the UV region.
As sketched in Fig. \ref{dircscheme}, the quartz/air interfaces of the bars act as perfect
mirrors for a wide range of Cerenkov photon incidence angles and, for a sufficient optical and geometrical
quality, they are able to transport the photons to the bar exit with unperturbed directions, except for
reflection symmetries with respect to the bar faces.

The number of photons produced in the quartz bars depends on several parameters.
Interestingly, it tends
to increase with the incidence angle (since the path length inside the radiator
medium grows), a behavior rather opposite to that of more traditional ring imaging detectors, where a large
part of the Cerenkov light is lost {\it because} of the internal reflection inside the radiator.

To avoid losing too much light at the bar exit, the array of photomultiplier windows
and the quartz bar exits are immersed in pure water, whose refractive index (1.34) is
close to that of quartz (1.474), so that photons crossing the quartz/water interface have a
low probability of being reflected back into the bar.

There is also a reflecting device, parallel to the bottom bar surface and located at the bar end (in the water),
which redirects toward the PMTs the photons emerging from the bar with either downward or too steeply upward
going directions~; the other bar end is equipped with a small mirror in optical contact with the quartz
for the same purpose.

Once the Cerenkov image is detected on the PMT array, two pieces of information are available~: the
location of the hit PMT and the photon detection time. This image (see Fig. \ref{dircimage} for an example
from the DIRC prototype tested at CERN \cite{DIRC}) is in fact a superposition of different reflections of the
original Cerenkov cone by the bar reflecting surfaces. So, a given hit can belong to the Cerenkov cone
centered on the original track direction, or to any of its images with respect to the reflecting
surfaces of the bar.
In the case of the prototype of Ref. \cite{DIRC}, 8 reflected photon directions (due to 3 possible symmetry
reflection planes) are to be taken into account. This ambiguity problem is specific to the DIRC~: given a
definite photon direction, there are as many track--photon associations as reflection planes, and it may be
that several combinations are physically acceptable.
Fortunately, it may also happen that only one solution is
acceptable~; this defines unambiguous solutions.
Fig. \ref{dircimage} also illustrates that the arc populated by the hits can be relatively
small (of the order of 60 degrees).

Using the spatial information, one is able to reconstruct uniquely the original Cerenkov angle of
a photon (which is emitted on the Cerenkov cone), provided the ambiguities related to the various
{\it possible} reflections that could affect the photon during its propagation can be disentangled.
The timing information is mainly used for a preliminary rejection of background photons, generally out of
time for a given track, as the information provided by
the individual photon detection time is very poor compared to the spatial information given by the PMTs.

\vspace{1.cm}
Thus, a possible strategy for pattern recognition in the DIRC is to try discriminating spatially
between photon ambiguities in order to determine the correct symmetry assignment, and then use the resulting
set of unambiguous photon Cerenkov angles (from the whole image) in order to compute the relevant quantities we are
interested in~: the track Cerenkov angle and its error. This is achieved through a fitting procedure, and this
information allows, in a second step, refining the choice among the ambiguous photon solutions left
provisionally aside and taking part of them into account in a final fit.
Among the difficulties to be met, the smallness of the arc length populated by photons should be noticed
and has to be especially addressed.
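The enumeration of reflection ambiguities described above can be sketched as follows. This is a minimal illustration, not the DIRC reconstruction code, and it assumes (hypothetically) a bar frame in which each of the three symmetry planes coincides with a coordinate plane, so that each reflection history amounts to a sign choice per component:

```python
from itertools import product

def reflection_candidates(photon_dir):
    """Enumerate the 8 candidate original directions of a detected photon:
    with 3 reflection planes, each possible reflection history flips the
    sign of one component (hypothetical bar frame along the planes)."""
    x, y, z = photon_dir
    return [(sx * x, sy * y, sz * z)
            for sx, sy, sz in product((+1, -1), repeat=3)]

# All candidates share the norm of the measured direction; a later step
# keeps only those giving a physically acceptable Cerenkov angle.
candidates = reflection_candidates((0.3, 0.4, 0.866))
```

In a real geometry the reflections act on the direction components normal to the actual bar faces, but the combinatorics (2 choices per plane, hence 8 candidates) is the same.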

In the following sections, we detail the different parts of the algorithm implemented in order to achieve this goal.


\section{Pattern Recognition Using the Stereographic Projection}
\label{stereo}

\indent \indent In a transparent medium of index $n$,
Cerenkov photons are emitted by a charged particle at an angle $\theta_C$ with respect to the charged track
direction, and this angle is given by~:

\begin{equation}
\cos{\theta_C}=\frac{1}{n \beta}
\label{basic1}
\end{equation}

\noindent where $\beta$ is the speed of the particle. As the photon
wavelength{\footnote{We postpone to Appendix A comments
on the influence of chromaticity fluctuations (the dependence of $n$ on the photon
wavelength $\lambda$) which affect each photon direction.}} is generally not measured, in practice this amounts to
assuming that the refractive index $n$ is known with a random error $\delta n$,
independently for each photon.

Let us associate to each of the charged track and photon directions
a $unit$ vector, and draw all of them from
a common origin denoted $(0,0,0)$. All end points of these vectors lie
on a unit sphere, and all photon directions generate
a cone of half aperture $\theta_C$ around the charged track direction.
From now on we shall always refer to photon
and charged track directions only in this representation.

\subsection{The Stereographic Projection}

\indent \indent
The intersections of the photon directions
with a plane perpendicular to the charged track direction, at unit distance
from the origin, define a set of points
distributed along a circle of radius $\tan{\theta_C}$, centered at the intersection
of the charged track direction with this plane.
However, if the $actual$ charged track direction is not the one chosen
in order to define the plane, the figure formed by the intersections of the
photon directions with this plane is no longer a circle but an ellipse, and the departure
from a circle may become large if the charged track direction is poorly known \cite{LHCB},
because of systematic errors in the measurement of the charged track direction,
or misalignment effects.

A way to circumvent this problem (or at least minimize it) is to use the
stereographic projection{\footnote{Basically, it is a standard conformal mapping
of the sphere onto a plane, {\it i.e.} angles on the sphere are conserved
in the projection.}} \cite{JULIA}.
Let us briefly recall it. Let us choose on the
unit sphere defined above the pole axis along the measured charged track direction.
The stereographic projection of points on the sphere is the intersection with the
equatorial plane of the lines joining the south pole $(0,0,-1)$ to these points.
In this transform, a circle drawn on the sphere ({\it i.e.} the intersection
of the Cerenkov cone with the sphere) is projected out as a circle. Then
the Cerenkov cone centered along the charged track direction becomes a circle
of radius $\tan{\theta_C/2}$, centered at the origin. This origin is simply
the image of the charged track direction, {\it i.e.} the projection of the north
pole as seen from the south pole. This is sketched in Fig. \ref{sphere}.
Ref. \cite{LBLRICH} prefers performing the stereographic projection onto
a plane tangent to the sphere at the north pole, rather than onto the equatorial plane~;
correspondingly, the algebra is slightly modified with respect to what will be presented just below.

In practical applications, however, we only know approximately the direction of the
charged track, and therefore the pole axis as defined above coincides only
approximately with the $actual$ charged track direction.
In order to illustrate
what happens, let us assume that the $actual$, unknown, charged track direction
makes an angle $\alpha$, possibly large, with the pole axis ({\it i.e.} the
$reconstructed$ charged track direction).
Then, by means of the stereographic projection, the images of the photon directions in the
equatorial plane are still on a circle (see Fig. \ref{sphere}),
whose radius $R$ is~:

\begin{equation}
R =\displaystyle \tan{\frac{\theta_C}{2}}
\frac{1+ \displaystyle \tan^2{\frac{\alpha}{2}}}
{ 1- \displaystyle \tan^2{\frac{\theta_C}{2}} \displaystyle \tan^2{\frac{\alpha}{2}}}
\label{basic2}
\end{equation}

\noindent and the circle center is shifted from the origin to a point located
at distance $r_0$~:

\begin{equation}
r_0 =\displaystyle \tan{\frac{\alpha}{2}}
\frac{1+ \displaystyle \tan^2{\frac{\theta_C}{2}}}
{ 1- \displaystyle \tan^2{\frac{\theta_C}{2}} \displaystyle \tan^2{\frac{\alpha}{2}}}
\label{basic3}
\end{equation}

\noindent close to the image of the actual charged track direction, which is located at $\tan{\alpha/2}$.


It is clear from Rels. (\ref{basic2}) and (\ref{basic3}) that the error on the circle center
is of first order in $\alpha$, while the corresponding error on the radius is only of second order.
Therefore, the robustness of the stereographic projection follows from the fact that the
analytical shape of the Cerenkov figure in the equatorial plane is always a circle, even
if the actual charged track direction is quite different from the measured one.
Moreover, the radius is affected only at second order by angular errors on the charged
track direction. Even if the angle $\alpha$ were large, it is clear from Rels.
(\ref{basic2}) and (\ref{basic3})
that having determined $R$ and $r_0$ by fit allows one anyway to reconstruct the correct Cerenkov angle.
Stated otherwise, the pole axis used in order to perform the stereographic projection
may be chosen independently of any assumption on the charged track direction. In general, we have~:

\begin{equation}
\tan{\theta_C}=\displaystyle \frac{2 R}{1-R^2+r_0^2}
\simeq \frac{2 R}{1-R^2}\left [ 1-\frac{r_0^2}{1-R^2} \right ]+{\cal{O}}(r_0^4)
\label{basicf}
\end{equation}

\noindent If $r_0$ is small enough, $R$ is negligibly affected{\footnote{If $\theta_C=500$ mr,
in order that $2 \arctan{R}$ provides an overestimate of at least 0.1 mr, the
error on the charged track direction should be at least $\alpha \simeq 30$ mr.}} but can easily be corrected.

The correspondence between the coordinates of a point on the unit sphere, $(X,Y,Z)$, and
those of its image $(x,y)$ in the equatorial plane (through the stereographic projection) is
defined by~:

\begin{equation}
x=\frac{X}{1+Z}~~~~~, ~~~~~y=\frac{Y}{1+Z}~~~~,~~~~(X^2+Y^2+Z^2=1)
\label{basic4}
\end{equation}

This transform is never singular in our case, as we always have $Z > 0$.


\subsection{Handling Measurements and Errors}
\label{errmeas}

\indent \indent
Let us assume that the above coordinates $X$, $Y$ and $Z$ (or any other quantity)
are measurable quantities with a known covariance error matrix, originating from random distributions
$\hat{X}$, $\hat{Y}$ and $\hat{Z}$~; assuming the measurement process
unbiased, we can write $\hat{X}=X_{true} +\delta X$, where $X_{true}$ is the true
(unknown) value of $X$ and $\delta X$ a random variable of standard deviation $\sigma_X$ and of zero
mean{\footnote{We shall always use the notation $<f>$ for the expectation value of any random variable $f$.}}~:

\begin{equation}
\left \{
\begin{array}{lll}
<\delta X>=0 \\[0.5cm]
<[\delta X]^2>=\sigma_X^2
\end{array}
\right.
\label{basic5}
\end{equation}

\noindent Correspondingly, for any other measurable quantity, we define
a centered error function carrying the corresponding standard deviation
($\sigma_Y$ or $\sigma_Z$, for instance). As we assume the measurement process unbiased,
we should indeed have $<\hat{X}>=X_{true}$. In this language, true and measured values
are quantities which differ by first order terms ${\cal{O}}(\delta X)$.

The error functions affecting the coordinates $x$ and $y$ (in the equatorial plane)
can be derived by differentiating Eqs. (\ref{basic4})~:

\begin{equation}
\left \{
\begin{array}{lll}
\delta x= \displaystyle \frac{1}{1+Z} \delta X - \frac{X}{(1+Z)^2} \delta Z \\[0.7cm]
\delta y=\displaystyle \frac{1}{1+Z} \delta Y - \frac{Y}{(1+Z)^2} \delta Z \\
\end{array}
\right.
\label{basic6}
\end{equation}

When estimating errors using these expressions, $X$, $Y$ and $Z$ should be the corresponding
true values. As they are unknown, one classically uses the measured central values instead.
In terms of differentials, this lack of information affects second order terms (like $(\delta X)^2$).
So, at first order, it is legitimate to use the measured values directly when estimating the coefficient functions.
It is clear that going beyond a first order development in the analytic expressions would raise a problem here,
as the additional terms would compete with the (uncontrolled) terms introduced by using the measured values
instead of the true ones.

If the measurement $(X,Y,Z)$ is unbiased, the point $(x,y)$ is unbiased too at leading order
({\it i.e.} $<\delta x>=<\delta y>=0$).
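The projection of Eqs. (\ref{basic4}) and the first-order propagation of Eqs. (\ref{basic6}) can be sketched numerically as follows; this is a minimal cross-check, with an arbitrary illustrative value for $\theta_C$, verifying that photons on a cone of half aperture $\theta_C$ around the pole axis project onto a circle of radius $\tan{\theta_C/2}$:

```python
import numpy as np

def stereo_project(v):
    # Stereographic projection from the south pole (0,0,-1) onto the
    # equatorial plane: (x, y) = (X, Y) / (1 + Z); never singular for Z > 0.
    X, Y, Z = v
    return np.array([X, Y]) / (1.0 + Z)

def stereo_jacobian(v):
    # First-order error propagation matrix of the projection:
    # (dx, dy) = J @ (dX, dY, dZ), evaluated at the measured point,
    # so that cov(x, y) = J @ cov(X, Y, Z) @ J.T.
    X, Y, Z = v
    return np.array([[1.0 / (1 + Z), 0.0, -X / (1 + Z) ** 2],
                     [0.0, 1.0 / (1 + Z), -Y / (1 + Z) ** 2]])

theta_c = 0.825  # illustrative Cerenkov angle, radians (an assumption)
phis = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
radii = [np.linalg.norm(stereo_project((np.sin(theta_c) * np.cos(p),
                                        np.sin(theta_c) * np.sin(p),
                                        np.cos(theta_c)))) for p in phis]
# every entry of radii equals tan(theta_c / 2) up to rounding
```

The identity $\sin{\theta_C}/(1+\cos{\theta_C})=\tan{\theta_C/2}$ is what the loop checks; the same Jacobian, fed with the $(X,Y,Z)$ covariance, yields the $<[\delta x]^2>$, $<[\delta y]^2>$ and $<\delta x \delta y>$ terms used below.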
The covariance terms ($<[\delta x]^2>$, $<[\delta y]^2>$ and
$<\delta x \delta y>$) can be computed in terms of $X$, $Y$ and $Z$ and of their errors and
correlations~; when computing them, one has to take into account that $X^2+Y^2+Z^2=1$ and that
their error functions are therefore not independent~: $X \delta X+ Y \delta Y+ Z \delta Z=0$.


\section{A Circle Fit Algorithm}
\label{circlefit}

\indent \indent
As explained in Section \ref{stereo}, using the stereographic projection, the
directions of the Cerenkov photons emitted by a charged track are represented
by points in a plane lying on a circle. Up to second order terms, the circle center is nothing but
the projection of the charged track direction onto the (equatorial) plane of the sphere.
Therefore, the problem of reconstructing the Cerenkov angle is replaced
by a problem of circle recognition in a plane.

It is a long standing problem to find the most suitable way to perform a circle
fit to a given set of points affected by measurement errors (see Refs.
\cite{CIRCLE1,CIRCLE2,CIRCLE3} for instance). The main problems addressed
in (necessarily) approximate procedures are~:

\begin{itemize}
\item the linearisation of the circle parametrisation~;

\item the non--gaussian character of the errors on the circle center coordinates and radius.
\end{itemize}

In addition to the above mentioned questions, we address two more issues, connected
with the BaBar DIRC, namely~:

\begin{itemize}
\item in any representation,
the measured points are not spread out over the whole circle, but along a relatively
small arc (about 60$^{\circ}$). Taking into account the relative magnitude
of the error on the points and of the circle radius value, this deeply affects
the circle fit quality if no additional information on the circle center is accounted for.
\n\n\\item there exist\ncorrelations among photons as a consequence of the multiple scattering undergone \nby the emitting charged track. Accounting for further constraints (charged track direction measurement)\nmay introduce further correlations (see Sections 2 and 3 in Appendix B for instance).\n\n\\end{itemize}\n \n\n\\subsection{The $\\chi^2$ for Fitting a Circle Arc}\n \n\\indent \\indent\nLet us assume that we have $n_{\\gamma}$ measured points $(x_i,y_i)$ with\nrandom errors $(\\delta x_i,\\delta y_i)$, not necessarily gaussian. \nAs we restrict our study to a DIRC device where the points are actually photons,\nwe shall use indifferently the words photon and point. \nWe do not state any assumption on these errors, except that the measurements are unbiased~:\n\n\n\\begin{equation\n<\\delta x_i>=<\\delta y_j>=0~~~,~~~ \\forall i,j=1, \\cdots ~n_{\\gamma}\n\\label{fit1}\n\\end{equation}\n\nStated otherwise, the\nexpectation values $<\\delta x_i~ \\delta x_j>$, $<\\delta x_i~ \\delta y_j>$ and\n$<\\delta y_i~ \\delta y_j>$ which define errors and correlations are not constrained\nand no further assumption is needed on higher order moments.\nIn the approach we have followed,\nthe effects of multiple scattering are not affected to the photons measurements but\nrather to the track direction. \n\n \nWe assume that we have, beside the circle points, also a $measurement$\n of the circle center coordinates and its error~; we define our reference frame\nin such a way that this measured center is located at the origin.\nThe circle parameters we fit are $a$ and $b$ (the center coordinates) and $R$ (its radius). 
\nThis parametrisation is examined in full details in Appendix B.\n\n\nThe $\\chi^2$ to minimise in order to get the best circle fitting $n_{\\gamma}$\npoints is defined by (see Appendix B 4)~:\n\n\\begin{equation\n\\chi^2= \\sum_{i,j} (d_i-R) (d_j-R) V^{-1}_{ij} + A^t \\Sigma^{-1} A\n\\label{fit2}\n\\end{equation}\n \n\\noindent with~:\n\n\\begin{equation\nd_i =\\sqrt{(x_i-a)^2+(y_i-b)^2}\n\\label{fit3}\n\\end{equation}\n\n\\noindent where $a$ and $b$ are the circle center coordinates parameters \n(measured as $(0,0)$, up to errors) to be fit.\nWe have denoted by $A$ the vector{\\footnote{Actually, the\nvector $(a,b)$ should be written $(a-a_{measured},b-b_{measured})$~; however we take\ninto account that the measurement has been conventionally set at $(0,0)$.}} $(a,b)$\nof the center coordinate and by $A^t$ its \ntransposed. The error matrix $\\Sigma$ depends on the reconstruction\nerrors of the track at the DIRC entrance ($\\Sigma_0$), on the number of photons\nwhich take part to the fit ($n_{\\gamma}$) and on the multiple scattering undergone\nby the charged track inside the radiator~; it is explicitly computed in Appendix A\nand in Sections 2 and 3 of Appendix B.\n\nIn usual approaches this second contribution to the $\\chi^2$ \nis not considered \\cite{CIRCLE1,CIRCLE2,CIRCLE3}~; however, when the circle center is constrained\nby an auxiliary measurement, it is legitimate to use it. Moreover, it is harmless \nto remove it only if the circle length populated by\nthe measured points is large enough (typically greater than 180$^\\circ$) and\/or if\n$\\sqrt{<[\\delta d_i]^2>}\/d_i$ is small enough. In the case of the DIRC, none of these conditions\nare practically met, and removing the constraint on the center represented by the second term in the RHS of\nRel. (\\ref{fit2}) may simply lead to a complete failure of the circle fit, even in absence of fake \nphotons. \n\n\\vspace{1.cm}\n\nThe matrix $V$ which appears in Rel. 
(\ref{fit2})
is the error covariance matrix. It is also the matrix
of the error function expectation values~:

\begin{equation}
V_{ij}=<\delta d_i ~ \delta d_j>~~~,~~~ \forall i,j=1, \cdots ~n_{\gamma}
\label{fit4}
\end{equation}

\noindent The error function $\delta d_i$ affecting the measurement $i$ is given by~:

\begin{equation}
\delta d_i =\displaystyle \frac{x_i(\delta x_i+\delta a_i)+y_i(\delta y_i +\delta b_i)}{d_i}
\label{fit5}
\end{equation}

\noindent (see Appendix B4), up to higher order terms. In this expression, we use the point
$(0,0)$ as central values for the circle center coordinates, while the error
functions on the circle center are estimated for each photon. This is the reason
why the error functions which appear in Rel. (\ref{fit5}) are $\delta a_i$ and $\delta b_i$,
referring to the charged track direction when it emits photon $i$, while $\delta x_i$
and $\delta y_i$ are the measurement errors of the direction of photon $i$.
The form of the error functions $\delta a_i$ and $\delta b_i$ is given
in Appendix B1, using preliminary results from Appendix A.
The form of the error function $\delta d_i$ is explained in Appendix B4~;
the elements of the matrix $V$ in Rel. (\ref{fit4})
are computed in Section B 5. One can see there how
correlation terms like $<\delta a_i \delta a_j>$, $<\delta a_i \delta b_j>$, \ldots
produce non zero correlation terms in $<\delta d_i \delta d_j>$.
Consequently, one can interpret $a$ and $b$ in Rel. (\ref{fit3}) as the mean
values (to be fit) of the sets $\{a_i\}$ and $\{b_i\}$. Actually, the only
information -- except for errors -- we have on these sets is the measured values
at the bar entrance~; how this is accounted for is explained in detail in Appendix B.


\subsection{Linearisation of the Circle Parameter Equation}

\indent \indent
It remains to linearise Eq. (\ref{fit2}).
The procedure is quite
usual \cite{CIRCLE1,CIRCLE2,CIRCLE3} and amounts to replacing Eq. (\ref{fit2})
by~:

\begin{equation}
\chi^2= \sum_{i,j} (d^2_i-R^2) (d^2_j-R^2) \displaystyle \frac{V^{-1}_{ij}}{4R_0^2}
+ A^t \Sigma^{-1} A
\label{fit6}
\end{equation}

\noindent where $R_0$ is an estimate of the radius (a weighted mean of the
$d_i$ at start and, in the forthcoming steps of the iteration, the fit value of $R$ at the
previous step). The matrix $V$ depends on the circle center
coordinates~; they are chosen at start as $(0,0)$, and can also be updated
in the forthcoming steps of the iteration procedure.
The final step toward linearisation is to define as
fitting parameter $c=R^2-a^2-b^2$ instead of $R$, together with $a$ and $b$.
Then Eq. (\ref{fit6}) becomes~:

\begin{equation}
\left \{
\begin{array}{lll}
\chi^2= \sum_{i,j} {\cal C}_i {\cal C}_j W^{-1}_{ij}
+ A^t \Sigma^{-1} A \\[0.7cm]
W_{ij}=4R_0^2 ~V_{ij}\\[0.5cm]
{\cal C}_i=x_i^2+y_i^2-2ax_i-2by_i-c\\[0.5cm]
\end{array}
\right.
\label{fit7}
\end{equation}

\noindent and this last expression for $\chi^2$ is quadratic in $a$, $b$ and $c$.
Eqs.
(\ref{fit6}) and (\ref{fit7}) simply follow from the fact that near the minimum we have~:

$$ (d_i-R) = \displaystyle \frac{(d^2_i-R^2)}{(d_i+R)} \simeq \frac{(d^2_i-R^2)}{2R_0} $$

\noindent This approximation improves rapidly when iterating, and only a few iterations are
needed in order to be at less than $10^{-2}$ from $\chi^2_{min}$ (generally 2 steps
are enough for the accuracy just quoted).

The conditions defining the minimum are~:

\begin{equation}
\begin{array}{llll}
\displaystyle \frac{\partial \chi^2}{\partial a}=0~~~,
&\displaystyle \frac{\partial \chi^2}{\partial b}=0~~~,
&\displaystyle \frac{\partial \chi^2}{\partial c}=0
\end{array}
\label{fit8}
\end{equation}

\noindent and provide a linear system of equations for $a$, $b$ and $c$ which gives an optimum
solution to the minimisation problem. On the other hand, denoting the variables $a$, $b$ and $c$
by $u_{\alpha}$ ($\alpha=1,2,3$), Eqs. (\ref{fit7}) can be written (summation over repeated indices
is understood)~:

\begin{equation}
\chi^2= T_{\alpha \beta}u_{\alpha}u_{\beta}+ Z_{\alpha}u_{\alpha}+K
\label{fit9}
\end{equation}

\noindent where the matrix $T$, the vector $Z$ and the scalar $K$ can easily be expressed
in terms of the (given) $W$ and $\Sigma$ matrices and of the moments of the photon coordinates
$([x_i,y_i], i=1,\cdots n_{\gamma})$. In addition, the matrix $T^{-1}$ is the error covariance matrix
for the fit parameters ($u_{\alpha}, \alpha=1,2,3$) \cite{PDG}. This
covariance matrix gives the error contour at $\chi^2_{min}+1$, the 1$\sigma$ contour.

The number of degrees of freedom associated with the $\chi^2$ in Rel.
(\\ref{fit7})\nis $n_{\\gamma}-1$, and the fit probability is then the value of the $\\chi^2$ probability function\n${\\rm Pr}(\\chi^2, n_{\\gamma}-1)$.\nOne can also define a consistency check of the set of photons under consideration with\nthe set of track parameters~: the charged track direction provides the expected\ncoordinates of the circle center, $a_0=0$ and $b_0=0$ (the measured values), and the five possible values\nof the circle radius correspond to the five possible mass assignments for\nthe charged track, $R_k$ ($k=1, \\cdots 5$). In this case, the $\\chi^2$\nsimplifies to~:\n\n\\begin{equation}\n\\left \\{\n\\begin{array}{lll}\n\\chi^2= \\sum_{i,j} {\\cal C}_i {\\cal C}_j W^{-1}_{ij}\\\\[0.7cm]\nW_{ij}=4R_k^2 ~V_{ij}\\\\[0.5cm]\n{\\cal C}_i=x_i^2+y_i^2-c_0\\\\[0.5cm]\n\\end{array}\n\\right.\n\\label{fit10}\n\\end{equation}\n\n\\noindent where $c_0$ takes five possible values $c_0=R_k^2$, each corresponding\nto one of the possible values of $R_k=\\tan{\\theta^k_C\/2}$.\nIn this way, one can check the consistency of the set of photons considered\nwith the measured charged track parameters ($a_0=0$, $b_0=0$, $R_k$) for each of the five possible\nmass assignments. The $\\chi^2$ just above\ncorresponds exactly to $n_{\\gamma}$ degrees of freedom. One can then decide to choose the\nbest assignment as being the one which corresponds to the lowest $\\chi^2$, provided\nit is above some significance threshold (a lower probability cut). We illustrate\nin the following that the corresponding probability distributions have all the expected\nproperties.\n\n\\subsection{Fit Likelihoods, Fake Photons and Contamination}\n \n\\indent \\indent Let us denote by $\\chi^2_{n_{\\gamma}-1}$ and $\\chi^2_{n_{\\gamma}}$ the\n$\\chi^2$ defined respectively by Rels. (\\ref{fit7}) and (\\ref{fit10}). 
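As an illustration of the linearized fit of Eqs. (\ref{fit7})--(\ref{fit9}), the following is a minimal sketch, assuming uncorrelated photon errors (diagonal $W$) and a gaussian constraint on the circle center playing the role of the $A^t \Sigma^{-1} A$ term; all function and variable names are ours, not part of the paper.

```python
import numpy as np

def fit_circle_linearized(x, y, sigma_d, a0=0.0, b0=0.0, sigma_c=None, n_iter=3):
    """Linearized circle fit in the parameters (a, b, c = R^2 - a^2 - b^2).

    Sketch of Eqs. (fit7)-(fit9): the photon errors sigma_d on the distances
    d_i are taken as uncorrelated (diagonal W), and the optional gaussian
    constraint (a0, b0, sigma_c) on the circle center stands in for the
    A^t Sigma^-1 A term.
    """
    x, y, sigma_d = (np.asarray(v, dtype=float) for v in (x, y, sigma_d))
    R0 = np.hypot(x - a0, y - b0).mean()            # starting radius estimate
    for _ in range(n_iter):
        w = 1.0 / (4.0 * R0**2 * sigma_d**2)        # diagonal of W^-1
        # chi^2 = sum_i w_i (x_i^2 + y_i^2 - 2 a x_i - 2 b y_i - c)^2 is
        # quadratic in u = (a, b, c): solve the linear system d(chi^2)/du = 0.
        G = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        r = x**2 + y**2
        T = G.T @ (w[:, None] * G)
        Z = G.T @ (w * r)
        if sigma_c is not None:                     # center measurement term
            T[0, 0] += 1.0 / sigma_c**2
            T[1, 1] += 1.0 / sigma_c**2
            Z[0] += a0 / sigma_c**2
            Z[1] += b0 / sigma_c**2
        a, b, c = np.linalg.solve(T, Z)
        R0 = np.sqrt(c + a**2 + b**2)               # update the expansion point
    return a, b, R0, np.linalg.inv(T)               # T^-1 = covariance of (a,b,c)
```

As in the text, two or three iterations on the expansion point $R_0$ are generally sufficient.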
Given a set of photons,\nit is clear that, using standard formulae, the former $\\chi^2$ allows one to define a maximum\nlikelihood including the center measurement, while the latter leads to the likelihoods of the\nfull measured track with the full photon set considered, for each possible mass assignment.\n\nIn practical applications, however, it should be noted that defining the {\\it true}\n(maximum) likelihood implies that~:\n\n\\begin{itemize}\n\\item the photons considered are indeed photons,\n\\item the photons are actually connected with the track considered,\n\\item the photon errors are correctly estimated.\n\\end{itemize}\n\nThis raises the problem of background photons. Indeed, in the DIRC,\nactual photons have errors which are well approximated by gaussians and\ncan be computed, more or less accurately, from known information\n(geometry, chromatic errors, \\ldots). If an observed hit\nis not an actual photon (noise), or if it is an actual photon but\nemitted by another track, possibly from another quartz bar, its error distribution\n(and its standard deviation) is completely unknown~;\ntherefore, its\nactual contribution to any $\\chi^2$ cannot be estimated with any\ncontrolled accuracy.\n\nStated otherwise, any procedure aiming at providing a reasonable estimate\nof the $\\chi^2$ probability (or of any likelihood) should remove background photons\nfrom the photon sample kept in the fit and the $\\chi^2$ estimates. On the other\nhand, it is clear that, in the presence of noise, any estimate of the $\\chi^2$\nis altered, unless one were able to remove the background from the photon set at the 100\\%\nlevel, which is generally hopeless.\n \nFortunately, the level of actual contamination in any photon set left\nby any cleaning up procedure can be checked statistically. Indeed, as can be seen\nfrom Rels. 
(\\ref{fit6}) -- (\\ref{fit9}), the fit solution found for $R$ (denoted here $R_{fit}$)\nand its error $\\sigma_R$ crucially depend on the photons used, their errors and their mutual\ncorrelations, all information which can be accessed with good accuracy.\nIf the photon set is too contaminated, the fit solution may depart\nsignificantly from the expected solution.\n\nThe pull $(R_{fit}-R_{true})\/\\sigma_R$ should follow a centered gaussian law of unit standard deviation.\nThis property provides a quantitative criterion to test the quality of background removal.\nIndeed, the most likely effect of background\nis to shift $R_{fit}$ from its expected value~; this is reflected in the pull\ndistribution by a shift of its mean value from zero and an increase of its rms\nwith respect to 1. The magnitude of observable departures from the standard pull expectations\nclearly signals a more or less acceptable level of contamination, as soon as a correct pull behavior has been\nascertained with noise--free samples. 
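The pull check described above is straightforward to implement; a minimal sketch over a sample of fitted tracks (names are illustrative):

```python
import numpy as np

def radius_pull(R_fit, sigma_R, R_true):
    """Pull of the fitted circle radius over a track sample.

    For a well cleaned photon selection, the pull should be a centered
    gaussian of unit standard deviation; a shifted mean or an inflated
    rms signals residual background contamination.
    """
    pull = (np.asarray(R_fit) - np.asarray(R_true)) / np.asarray(sigma_R)
    return pull.mean(), pull.std(ddof=1)
```

On noise-free samples the returned mean and rms should be compatible with 0 and 1 within their statistical uncertainties.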
Therefore, checking the model with noise--free\nsamples is an unavoidable step in constructing the procedure.\n\nWhen running with Monte Carlo data, all checks\ncan easily be performed~; when running with real data, several checks are still possible\nusing selected event samples like pions from $K^0_S$ decays or protons from $\\Lambda$\ndecays, which can be selected using kinematics only, {\\it i.e.} independently of the DIRC.\nAssuming that background conditions are not especially dependent on the existence\nof such particles in the events, the quoted pull can be plotted and the influence\nof background contamination inferred.\n\n\n\\subsection{Pattern Recognition Using a Circle Fit Algorithm}\n\n\\indent \\indent In practical applications we have\na collection of hits that are associated with Cerenkov photons emitted by a given track.\nThese hits are of three different kinds~:\n\n\\begin{itemize}\n\\item photons emitted by the track under consideration\n\\item photons emitted by tracks other than the one under consideration\n\\item background hits associated with electronic noise of the photomultipliers\nor with unidentified tracks (which, in practice, can be merged with noise).\n\\end{itemize}\n \nHits of the first kind generally have well behaved error functions (not far from gaussians,\nif not exactly gaussians), while the other two kinds of hits have practically unknown error\ndistributions and should be removed in order to give a statistical meaning to the circle fit.\nStated otherwise, the photon sample must be cleaned up. There is also another category of\nfake photons, specific to the DIRC device (the ambiguity problem), which will be addressed\nin Section \\ref{photsel}.\n\nBasically, the cleaning procedure of the photon sample relies on the fact that photons actually\nemitted by the track considered have error distributions which can be well approximated by\ngaussians. 
In this case, a suitable criterion for removing fake photons is to eliminate all hits\ngiving a contribution to the $\\chi^2$ greater than some maximum value.\n\nTherefore, the recognition procedure amounts to performing a fit with a starting\nsample of photons, in order to have an estimate of the circle parameters ($a$, $b$ and $R$),\nthen computing the $\\chi^2$ distance of each photon to this circle, removing those which are too\nfar, and restarting the procedure with the surviving photons~; the procedure is repeated\nup to convergence. At this point,\none can consider that the circle parameters are reliable and reexamine all possible\nphotons in order to keep those which are at an acceptable $\\chi^2$ distance from the expected\ncircle (typically less than about 9). In this way, one recovers $ambiguous$ photons\nwhich were put aside in order to start the reconstruction procedure.\nUsing this new enlarged sample, one can then perform a final circle fit and get the\noptimum circle parameter values.\nPractically, in the case of the DIRC, there are some subtleties which allow one to improve\nbackground rejection and photon recovery~; they will be described in more detail\nin Section \\ref{photsel}.\n\n\n\\section{Simulation of a simplified DIRC}\n\\label{simdirc1}\n\n\\indent \\indent\nA fast Monte Carlo program was written in order to test the reconstruction algorithm.\nWe have coded this program so that it outputs all the information needed\nto check the algorithm behavior in full detail, which is\nnot usually an easy task within a Monte Carlo simulating a complete experimental detector.\nThis program simulates only one quartz bar and a PMT array plane (3 cm diameter PMTs packed in a\nrectangular lattice, located at approximately 120 cm from the bar exit). The angle of the PMT plane with respect\nto the bar axis could be chosen. The bar itself was 5 meters long, which is approximately twice as long as the\nDIRC prototype bar of Ref. 
\\cite{DIRC}, but corresponds to the actual size of the bars\nof the BaBar DIRC \\cite{BABAR}. The distance between the bar exit face and the PMT plane has been chosen\nfollowing the BaBar DIRC setup.\n\n\\vspace{1.cm}\nThe simulation included most of the effects that could hamper the DIRC performance:\n\\begin{itemize}\n\\item Errors on track direction and momentum, intended to simulate the response imprecision of a tracker\nin front of the DIRC bar. In current running conditions we have chosen $\\sigma(p)\/p=3\\times 10^{-3}$\nand the generated angular errors have rms $\\sigma(\\theta)=\\sigma(\\phi)= 1$ mr.\n\n\\item Realistic detector geometry (geometrical uncertainties are important in the DIRC).\n\\item Chromaticity in the quartz radiator medium (dependence of the medium index $n_Q$ -- and hence of the Cerenkov angle --\non the photon wavelength)~; it corresponds to choosing $\\delta n_Q\/n_Q=6\\times 10^{-3}$.\n\n\\item The ratio $g$ of the water to quartz refractive indices was treated as independent of the photon wavelength.\nThis is close to the real situation, where $\\delta g \/g$ is typically 10$^{-3}$.\n\n\\item Track multiple scattering in the quartz bar.\n\n\\item A number of Cerenkov photons emitted along the track proportional to $\\sin^2{\\theta_C}$ and\nto the charged particle path inside the radiator medium. The number of generated photons per\ncentimeter has been chosen \\cite{DIRC,DIRCNEW} as $N_0=135$.\n\n\\item The bar end opposite to the water tank bar exit window\nis treated as a mirror.\n\n\\item Full account of photon reflection and refraction properties inside the bar, providing a correct\nsimulation of the bar acceptance.\n\n\\item The active part of the PMT plane has been truncated to half a plane, in order to prevent getting\nfully populated Cerenkov rings. 
\n\n\\end{itemize}\n\n\n\nIn this simulation, the geometry and detector structure are incomplete (compared to the final BaBar\nDIRC \\cite{BABAR}): there is no reflecting wedge ending the quartz bar at the water tank entrance\n(suppressing one reflection plane for photons) and geometrical imperfections of the bar\n(expected to be very small anyway) are not simulated. The PMT array here is a\nplane, while in the BaBar detector it is umbrella shaped \\cite{BABAR}. It is packed\nas a rectangular lattice (while the BaBar packing is hexagonal, which introduces\nadditional correlations among geometrical errors, however of limited\nmagnitude).\n\nIn addition, no interaction of the track with the bar is taken into account (except for\nmultiple scattering)~; in particular, no energy loss is implemented. The PMT spectral response\nfunction is also not used in the simulation, except for computing the mean water and quartz refractive\nindices (and their dispersion). No photon absorption inside the bar is accounted for.\nFinally, there is no magnetic field effect.\n\nTo summarize, there is no conceptually important difference between this simulated setup and that\nof BaBar, and all relevant features are accounted for.\n\n\\vspace{1.cm}\n\nThe main data sample used for this study is a set of 5000 single electron, muon, pion, kaon, and proton tracks covering a\nlarge range of bar incidence angles and momenta ($20^{\\circ} < \\theta_{inc} < 70^{\\circ}, 0.5 < P < 5$ GeV). This sample\ncould be used in several ways~: in normal mode, one event contained only a single track~; another mode of operation\nallowed one to superimpose several tracks in one event, for physics background studies~; a last possibility\nwas to add a random and (spatially) uniform noise to the event tracks, for other noise studies. These background\nconditions can thus be made quite severe compared to normal conditions. 
The quality checks of the\nwhole procedure have been performed accurately by varying at will the magnitude of all errors, the background kind and\nlevel, and the phase space window.\n\nThe momentum range where the algorithm has been fully tested goes down to 500 MeV, practically the kaon Cerenkov\nthreshold. At this momentum,\nthe angular error due to multiple scattering for a pion is about 10 mr (for a bar thickness of 1.7 cm)~;\nthis is quite comparable to the angular error due to the PMT window size (about 7 mr rms).\nGoing to lower momenta is possible~; however, in this case the angular error due to multiple scattering\nmay become dominant (for a 200 MeV pion, it is about 30 mr, far above the PMT geometrical error).\nIn this case the procedure still works, but our first order estimates of the errors might become only moderately\naccurate and higher order terms might become necessary{\\footnote{Actually, since below $\\simeq 700$ MeV the main PID\ndevice in BaBar is the drift chamber, by means of its dE\/dx measurements \\cite{DIRCNEW}, it does not seem useful to\ngo to such complications. An overlap region of about 200 to 300 MeV, where an optimum reconstruction can be performed\nwith the DIRC and using dE\/dx, already allows interesting cross--checks.}}.\n\n\n\\section{Photon Selection and Background Removal}\n\\label{photsel}\n\n\\subsection{Outline of the Procedure}\n\n\\indent \\indent\nThe linearized $\\chi^2$ fit we have described needs, as input, photons with\nan unambiguously defined direction with respect to the track momentum~: so, the\nmain step of the photon selection procedure is to lower the number of\nambiguities arising in the reconstructed photon direction due to the various\nreflection symmetries of our problem (bar surfaces, mirrors)~; ambiguities should be selected\naccording to criteria which guarantee that the symmetrized photon solutions correspond to\npossible or probable Cerenkov angles for the current track. 
Ambiguous photons cannot be\nused directly by the fit and so they are ignored during the first steps of the\nprocedure (part of them will be recovered by a dedicated algorithm, see below).\nThis ambiguity removal is actually performed in several steps using different criteria\n(detailed in Section \\ref{ambigid}).\n\n\nThe next step of the selection procedure is the removal of the possible\nbackground photons contaminating the unambiguous photon population~; an iterative cut procedure\ninvolving a median estimator is used for this purpose (Section \\ref{bkgrem}).\n\nThe algorithm requires a minimum number of\nunambiguous photons in order to go on (typically 3, but this number can be lowered to 2).\nIf there are enough unambiguous photons left, the parameters of interest (Cerenkov angle,\ncoordinates of the center of the Cerenkov circle in the equatorial plane) are fitted to this\nset of remaining unambiguous photons. This preliminary fit is made only in order to have a first\napproximation of these parameters and of their errors.\n\n\nA second and analogous fit is applied to a photon population built by adding, to the primary\nset of unambiguous photons, photons which were originally flagged as ambiguous\nbut which have parameters not too far from those resulting from the primary fit,\naccording to the $\\chi^2$ distance criterion. 
These additional unambiguous photons should also\nfulfill another condition which guarantees they are \"unambiguous\", at least to some extent~:\nfor each new photon, the two ambiguities closest to the central parameters should themselves be\nsufficiently separated in the relevant $\\chi^2$ distance (see Section \\ref{recovphot}).\n\n\nThis last fitting operation produces a new (and final) determination of the circle fit\nparameters and errors.\n\n\n\\subsection{Ambiguity Identification and Removal}\n\\label{ambigid}\n\n\\subsubsection{Ambiguous -- Unambiguous Cerenkov Photons}\n\\label{ambigid1}\n\n\\indent \\indent\nIn our problem, each hit recorded in the DIRC PMT array is associated with a reconstructed\ntrack. This primary association is made using straightforward criteria~: for each hit, the 8 symmetrized\nCerenkov angles are selected by requiring that they lie in a physically meaningful interval (typically,\nbounded by the Cerenkov angles corresponding to the extreme mass hypotheses).\n\nThis step is important, because it controls the total number of PMT hits\/tracks that will be examined by the full\nreconstruction algorithm, and hence significantly influences the time performance of the algorithm.\nAfter this simple primary association step, one PMT hit\/track pair usually still gives rise to several\nCerenkov angle solutions. These different solutions are called \"ambiguities\" hereafter~:\nassigning a unique Cerenkov angle to a photon for one track amounts to discriminating between these ambiguities.\n\nUsually, this primary association step leaves an average number of approximately 2 ambiguities per hit,\nso that a large part of the photons are still ambiguous.\nTo further select our sample of \"solutions\", we restrict even\nmore the allowed range of Cerenkov angles around the various mass hypotheses, through a cut in the\n$|\\delta\\theta|=|\\theta_{solution}-\\theta_{hypothesis}|$ variable\ncomputed for each solution Cerenkov angle. 
This cut is made typically at the level of 30 mr, which corresponds\nto a rather large angular range around each mass assignment~; Fig.\n\\ref{dthetacut1}a shows that this window can be naturally defined and checked on a track sample.\nSolutions found in the allowed range around at least one of the five mass hypotheses are\nconsidered. As $\\theta_{hypothesis}$ is computed from the track momentum and a mass assumption, it is\nalways in a valid range~; $\\theta_{solution}$, however, is computed from measured angles and the mean refractive index~;\nthen, even for the correct solution, it may be physically meaningless ({\\it i.e.} greater than the maximum Cerenkov angle).\nSuch values have nevertheless to be kept in order to prevent biasing distributions.\n\n\nSince other studies \\cite{BABAR} have shown that the main background in the DIRC consists of photons generated by\nother tracks in the same event, another cut is made, on the same grounds, to cope with this \"inter track\" noise\nin multitrack events~: each PMT hit\/track solution surviving this 30 mr cut\nis tested versus the other {\\it measured} tracks of the event to check if it can be associated with another\ntrack according to the same $|\\delta\\theta|$ criterion\n(at the more restrictive level of 10 mr in this case, see Figure \\ref{dthetacut1}b). The second\n$|\\delta\\theta|$ cut discards only a limited number of photons but an important number of background solutions.\n\nIn order to be classified as unambiguous, each photon should give one and only one reflection solution\nrelative to the track considered, such that $|\\delta\\theta|<30$ mr, and no solution\nsuch that $|\\delta\\theta|<10$ mr relative to any other track in the event.\n\nThe levels of the two $|\\delta\\theta|$ cuts (30 and 10 mr) have been adjusted so as to correspond\nto a 3 $\\sigma$ and a 1 $\\sigma$ cut; this means they have been kept at a quite loose level,\nleaving about 99\\% of the signal visible to the algorithm. 
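The classification by the two $|\delta\theta|$ cuts can be sketched for one PMT hit/track pair as follows; this is a schematic rendering of the selection logic, with illustrative names and the 30 mr / 10 mr levels quoted above.

```python
def classify_hit(solution_thetas, hypothesis_thetas, other_track_thetas,
                 signal_cut=0.030, veto_cut=0.010):
    """Classify one PMT hit/track pair as 'unambiguous', 'ambiguous' or 'rejected'.

    solution_thetas: Cerenkov angles (rad) of the reflection solutions for this track;
    hypothesis_thetas: expected angles for the five mass hypotheses of this track;
    other_track_thetas: expected angles for the other measured tracks of the event.
    Sketch of the two |delta theta| cuts (30 mr signal window, 10 mr veto).
    """
    kept = []
    for th in solution_thetas:
        in_window = any(abs(th - h) < signal_cut for h in hypothesis_thetas)
        near_other = any(abs(th - h) < veto_cut for h in other_track_thetas)
        if in_window and not near_other:
            kept.append(th)
    if len(kept) == 1:
        return "unambiguous", kept
    return ("ambiguous", kept) if kept else ("rejected", kept)
```

A hit is "unambiguous" only when exactly one reflection solution survives both cuts; hits with several surviving solutions stay "ambiguous" and are set aside for the recovery step.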
Obviously, these criteria can easily\nbe adjusted without a Monte Carlo, as illustrated by Fig. \\ref{dthetacut1}.\n \n\\subsubsection{Perpendicular Track Recovery}\n \n\\indent \\indent\nThe previous classification of photons as \"ambiguous\" or \"unambiguous\" is not well\nadapted to the case when the track direction is almost perpendicular or parallel to some surface of\nthe quartz bar~: in this case, trivial ambiguities appear systematically because of the intrinsic\nsymmetries of the bar + track system, {\\it i.e.} one allowed Cerenkov angle for this PMT hit\/track pair\nwill always generate two valid solutions per trivial symmetry plane, since these reflections have in fact\n(almost) the same Cerenkov angle.\n\nTo cure this problem, a narrow cut on the incidence angle of the track with respect to the bar\nis made which, for this kind of track, triggers a procedure that systematically discards these trivial\nadditional solutions, keeping one of them at random. The magnitude of this cut is mainly related\nto the geometry of the quartz bar + PMT system, and in particular it is chosen\nquite small compared to the angular size of one PMT as seen from the water side bar exit. In our case, the PMTs have\na diameter of 3 cm and are located approximately 120 cm from the bar exit~: the corresponding angular\naperture is around 25 mr (corresponding to about 7 mr rms)~; the cut is set at 7 mr.\n\nThis cut is slightly biasing but, as the trivial ambiguity kept is chosen at random, this bias is certainly\nlimited. 
Monte Carlo studies do not show any clear signal of bias related to this cut~; moreover,\nthe proportion of almost perpendicular tracks can be expected to be small, at least for phase space reasons.\n\n\\vspace{1.cm} \nAfter having applied the two $|\\delta\\theta|$ cuts and the \"perpendicular track\" recovery cut,\nphotons which still allow several Cerenkov angle solutions are declared \"ambiguous\", and photons\nadmitting one and only one solution are \"unambiguous\". The latter are used directly in the\nrest of the procedure. The three cuts already described are clearly independent of the particle kind.\n\n\n\\subsection{Cut on the Number of Unambiguous Photons}\n\n\\indent \\indent\nAfter the preliminary step described above\n(step \"A\"), a first cut on the number of unambiguous photons is made~: we require\nthe fit to run with at least 2 photons as input. In principle, one should require 3 photons to be sure\nto keep a non singular $3\\times3$ matrix in the minimisation procedure~; nevertheless,\nthe constraint put on the circle center allows one to lower this limit safely to 2 input photons.\nThe mean ratio of the number of unambiguous photons to real photon hits (having survived the primary\nassociation cuts) at this point of the reconstruction is around 55\\%~; this ratio obviously depends\non the cuts one is using.\n\nWhen the number of unambiguous photons for the current event is lower than 2, in order to avoid systematically\nlosing the track, the algorithm tries repeating step A with the\n$|\\delta\\theta|$ cut level lowered from 30 to 20 mr in several steps (a narrower window leaves fewer\nsolutions per hit, and hence more photons classified as unambiguous). This part of the algorithm\nconcerns only a few percent of the events, and no significant bias could be traced back to it.\nThe gain in efficiency of the algorithm after this \"smoothed\" $|\\delta\\theta|$ cut is around 5\\% of\nevents with two tracks. 
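The retry logic just described can be sketched as a simple loop over progressively lowered window levels; `select_unambiguous` is a placeholder for the step-A classification, not a function of the paper.

```python
def select_with_retry(hits, select_unambiguous, cuts=(0.030, 0.025, 0.020),
                      min_photons=2):
    """Repeat the step-A selection with a progressively lowered window.

    `select_unambiguous(hits, cut)` stands for the step-A classification at a
    given |delta theta| window (in radians); if fewer than `min_photons`
    unambiguous photons survive at 30 mr, the window is lowered toward 20 mr
    in steps, and the first level yielding enough photons is kept.
    """
    for cut in cuts:
        photons = select_unambiguous(hits, cut)
        if len(photons) >= min_photons:
            return photons, cut
    return [], cuts[-1]
```

The intermediate levels (here 25 mr) are an assumption; the text only specifies that the window goes from 30 to 20 mr "in several steps".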
\n\n\n\\subsection{Background Removal using a Median Cut}\n\\label{bkgrem}\n\n\\indent \\indent\nIn the previous step, we have already performed the removal of some background effects.\nIndeed, the ambiguous photons represent a kind of combinatorial background, which is identified and left\naside provisionally, awaiting improved information on the Cerenkov circle from the set of unambiguous photons.\nMoreover, the 10 mr cut on $|\\delta\\theta|$ already removes most of the contamination\nproduced by photons associated with other reconstructed tracks, which can affect the unambiguous photon\nset associated with the correct track.\n\n\n\\vspace{0.7cm}\n\\indent \\indent\nAfter the construction of the unambiguous photon sample, it is possible to improve the removal of background\nphotons or, at least, to remove outlier photons.\nFurther background may originate from several sources~:\n\n\\begin{itemize}\n\n\\item Photons created by {\\it unreconstructed} tracks (low energy tracks produced in the detector materials,\nbacksplashes from a nearby calorimeter, \\ldots).\n\n\\item Improperly accounted for photon solutions ({\\it i.e.} 
wrong reflection hypotheses considered as\ncorrect unambiguous solutions).\n\n\\item Background originating from the accelerator, PMT noise, cosmic rays,\n\\ldots, expected to be lower than the other types of background.\n \n\\end{itemize} \n\nThe first type of background is event dependent and\nmay have a complicated structure, whereas the second and third types\nare rather flat in Cerenkov angle space (at least over a large range around the nominal Cerenkov angle).\nHowever, we have found no significant difference in the reconstruction behaviour between them and thus they\nare treated alike.\nThe second kind of background has been found relatively easy to deal with, at the expense of losing,\nfor the fit, strongly ambiguous photons ({\\it i.e.} solutions which are consistent with more than one image of the\nCerenkov cone, even with the improved accuracy allowed by the fit estimation of the circle parameters).\nThese strongly ambiguous photons have not been used in the circle fit~;\nthey can nevertheless be counted in order to improve the information on the total number of photons\nassociated with the track.\n\n\\vspace{0.7cm}\nIn any case, it is possible to cut out noise outside a window centered at the median Cerenkov\nangle (ideally, around the correct Cerenkov angle) of the set of unambiguous photons\nassociated with a given track. 
The same window will be used afterwards as the area where the fit will be performed.\n\nThe median is used here instead of the mean as central location estimator as it is less sensitive to noise and\nallows a better window setting (the \"mean\" location estimator is known to be non robust with respect to\noutlier tails, see for example \\cite{MEDIAN1,MEDIAN2}).\n\nIn conditions where the background is low (and\/or mostly produced by other reconstructed tracks),\nthe cut to be performed around the median corresponds to a gaussian cut{\\footnote{$\\sigma$\nbeing defined by a theoretical computation of the gaussian errors.}} of about 3$\\sigma$~;\nin much harder background conditions this cut may have to be lowered to about 2$\\sigma$, affecting the probability\ndistribution shape only in the low reconstruction probability region.\n\nThe median cut procedure is iterated on the unambiguous photon sample until convergence is reached\n({\\it i.e.} until no new photon is removed), which usually happens after 2 or 3 iterations.\nThe ratio of the number of photons, unambiguous at this point of the procedure,\nto photons having survived the first association step cuts is approximately 50\\%.\n\n\\vspace{0.7cm}\n\nIn case of even higher background (like, for example, when many unreconstructed tracks are present in the\nevent) the median procedure itself behaves poorly, since background photons may accumulate at several locations\nin Cerenkov angle space, simulating signal for almost all mass hypotheses.\nIn this case, it is meaningless to rely on simple location estimators (like the median)~: in particular,\nthe preceding procedure has an identification performance which depends on the track momentum when the latter\nis close to the Cerenkov threshold of the various mass hypotheses. In such heavy background conditions, it is better\nto perform a fit only in restricted Cerenkov angle areas, and not over the full Cerenkov angle range~: in this way,\nthe response becomes more uniform. 
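The iterative median cut described above reduces to a short loop; a minimal sketch (names and the scalar error treatment are ours):

```python
import numpy as np

def median_cut(thetas, sigma, n_sigma=3.0):
    """Iterative median cut on the unambiguous-photon Cerenkov angles.

    Photons farther than n_sigma * sigma from the running median are dropped
    and the median is recomputed, until no photon is removed (usually after
    2 or 3 iterations).  Returns a boolean keep-mask over the input sample.
    """
    thetas = np.asarray(thetas, dtype=float)
    keep = np.ones(thetas.size, dtype=bool)
    while keep.any():
        med = np.median(thetas[keep])
        new_keep = keep & (np.abs(thetas - med) < n_sigma * sigma)
        if new_keep.sum() == keep.sum():
            break                      # converged: no photon removed
        keep = new_keep
    return keep
```

Lowering `n_sigma` from 3 to about 2 corresponds to the tighter cut used in harder background conditions.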
The choice between the two selection methods for the fit window\ndepends on the noise level, and can be made automatically.\n\n\\vspace{0.7cm} \n\nAfter the median step, the sample of surviving photons is supposed to be free, at least, of the influence of\nunwanted tails which could spoil the subsequent fit. This preliminary fit gives a first\nestimation of the Cerenkov angle and the circle center in the equatorial plane\nfor our sample of filtered unambiguous photons.\n\nA last \"cleaning\" cut is applied after this first fit in order to remove outlier photons which have\nindividual Cerenkov angles\nfar from the fitted Cerenkov angle~; such photons could still exist and could degrade the results of\nthe rest of the procedure. Typically, a fixed 3 $\\sigma$ cut is applied for this purpose.\n\nAfter this last cut, the unambiguous photons represent about 45\\%\nof the actual population of photons associated with the track. The corresponding spectrum is\nshown in Fig. \\ref{recov_ratio}a.\n\n\n\\subsection{Ambiguous Photon Recovery}\n\\label{recovphot}\n\n\\indent \\indent\nIt is possible to get more photons into the final fit by recovering\npart of the photons previously flagged as ambiguous, through a procedure using a $\\chi^2$ criterion~:\nthose among them which have a contribution to the $\\chi^2$ (estimated at the reconstructed\nCerenkov angle and circle center computed by the first fit) which\nis not too high are declared unambiguous and included in the fitting sample,\nwith the additional condition that their two best solutions are not too close to each other\nin $\\chi^2$ distance.\n\nPractically, this means that the photon solutions are not farther away from the first\nfit Cerenkov angle than typically 3 $\\sigma$ and that the closest among the other solutions\nis beyond, typically, 3.5 or 4 $\\sigma$.\n\nThis \"double $\\chi^2$\" criterion allows one to input into the second fitting step a sample of photons of the order of 65\\%\nof the original 
sample (under large background conditions) or about 80\\% (no additional background). The recovery procedure thus increases the photon\nsample by 50\\% to 100\\% compared to its size before the recovery step.\nThe proportion of finally used photons with respect to the detected photons associated with a track is plotted\nin Fig. \\ref{recov_ratio}b for the sample with no additional background.\n\nOne may imagine replacing this double $\\chi^2$ criterion by a single one~: keep the best solution provided\nit is below $\\simeq 3 \\sigma$, whatever the $\\chi^2$ of the second ambiguity solution. In\nconditions where the background is small, this significantly increases the number of photons --\nand hence the accuracy of the reconstructed angle -- and the reconstruction quality.\nIn very hard background conditions, however, one has to study carefully to what extent the\nsubsequent gain in photons does not degrade the reconstruction quality. This is not done here.\n\n\\section{Monte Carlo Results}\n\\label{MCresults}\n\n\\indent \\indent\nThe results presented in this section have been obtained using the set of cuts defined in Section\n\\ref{photsel}. The numerical values of these cuts have been tuned depending on the background\nconditions affecting each of the Monte Carlo samples.\nWe postpone to Section \\ref{cutadjust} the discussion on cut handling and tuning. 
\nHere we examine the results obtained from analysing these samples, in order to draw conclusions\non the fit quality and the various aspects of background removal.\n\nAnother important aspect is the algorithm performance as regards the separation power between the\nvarious mass hypotheses, more directly related to the use of the reconstruction for physics.\nMany ways to estimate this performance can be devised~; here we will discuss only simple criteria.\nIndeed, even if background effects are realistic in our Monte Carlo, and sometimes pessimistic,\nactual performances can be precisely defined only with a detailed simulation of a given experiment or with real data.\nMoreover, the \"performance\" requested depends strongly on the physics goals one is willing to achieve.\n\n\n\\subsection{One Track Event Display}\n\n\\indent \\indent \nIn order to substantiate the problem of pattern recognition and background removal when dealing with\nthe DIRC, it is useful to display some events. For this purpose, we show in Figs. \\ref{event_display1}\nand \\ref{event_display2} an event with one track (actually a kaon of 1.087 GeV\/c momentum)\nsuperimposed on a low flat background\n(an additional number of PMT hits at the level of about 20\\% of the signal PMT hits).\nThe stereographic projection has been performed with the polar axis along the bar axis and then\nthe Cerenkov circles are not centered at the origin.\nCircles corresponding to the original track are drawn thick (the outer one looks even thicker as the\n$e$, $\\mu$ and $\\pi$ assumptions give circles nearly superimposed), and their images with respect to all\nsymmetry planes are the shaded circles shown only in the upper part of Fig. \\ref{event_display1}.\nThe lower part of Fig. \\ref{event_display1} displays the same region with only the circles\nassociated with the original track direction under the various mass assumptions. 
The points
represented are the solutions surviving the primary association cut (see the Section above)~;
the shaded area is the region of zero acceptance.

Fig. \ref{event_display1} illustrates the task of the recognition and expresses as clearly as possible
the usefulness of the charged track information~; most of the background shown
is produced by ambiguities. This means that this event is relatively clean.
The algorithm described above extracts the unambiguous photon subsample (see upper Fig. \ref{event_display2})~;
in the present case all unambiguous photons are located along the circle corresponding to
the kaon assumption. This is the effect of the median cut referred to above~; indeed,
by looking at Fig. \ref{event_display1}, one clearly sees that unambiguous photons
belonging to the proton circle (the innermost one) have been removed by the procedure.

The subset of ambiguous photons is shown in the lower Fig. \ref{event_display2}~;
the photon solutions which are to be examined by the recovering procedure belong to all mass assumptions.
The extracted ambiguous photons which will be added to the unambiguous photon sample
belong only to the kaon assumption circle.

\subsection{One Track Events with no Background}

\indent \indent
We first analyse single track events generated with no external background.
This allows us to study the algorithm performance and to get quality checks under optimum
conditions. The track momentum range goes from 0.5 to 5 GeV/c and the sample contains equal numbers
of $e$, $\mu$, $\pi$, $K$ and $p$.

No (flat) random noise is superimposed on the event tracks and no additional track is embedded
in these events, but this does not mean that they are background free. Indeed, the ``combinatorial''
background represented by wrongly assigned reflection assumptions to hits pollutes the sets of
unambiguous photons.
Therefore, we can check simultaneously the behaviour of the algorithm and the removal
of wrongly assigned photons. In this case, the median cut for background removal can be put
at a very loose value ($\simeq 4 \sigma$) compared to harder background conditions.

Fig. \ref{poinref}a shows that the reconstruction probability is close to flat~: the mean value is
0.52 (expected 0.50), while its rms is 0.277 (expected 0.288), both parameters being close to
expectations for a flat distribution. The bias produced by slightly overcutting in order to
remove wrongly assigned (true) photons is thus negligible. The reconstruction quality is illustrated
in Fig. \ref{poinref}b, where the Cerenkov angle pull is represented~; it is very close to a
centered gaussian distribution of unit standard deviation. This means that errors are well understood
and correctly accounted for, including the handling of multiple scattering. This also means that
the identification is close to optimum{\footnote{Of course, as expected, the $\pi$--$\mu$--$e$ separation
is poor in the track momentum range explored.}} (pions were identified at the level of 95\% and
kaon contamination was about 4\%). Finally, Fig. \ref{poinref}c
shows the bias ($\theta_C^{true}-\theta_C^{fit}$)~; the mean bias found is about 0.3 mr, and the
spread is about 3 mr. The number of tracks sent to the procedure was 5000~; the fraction
which was found with at least 2 unambiguous photons, and thus reconstructed{\footnote{
Among the 12\% of events lost, about 2.2\% are protons below the Cerenkov threshold.}}, was 88\%.
Therefore the only significant effect of the combinatorial background is to reduce by about 10\%
the number of events with at least 2 unambiguous photons.
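As an aside, the flat--distribution benchmarks used above (mean 0.5, rms $1/\sqrt{12}\simeq 0.288$) are easy to reproduce numerically. The following short sketch (ours, standard library only, not part of the reconstruction code) draws uniformly distributed pseudo--probabilities and compares their sample mean and rms to these reference values:

```python
import math
import random

# Toy check of the flatness criteria used in the text: for a well
# reconstructed sample the fit probability should be uniform on [0, 1],
# hence mean 0.5 and rms 1/sqrt(12) ~ 0.288.
random.seed(1)
probs = [random.random() for _ in range(20000)]

mean = sum(probs) / len(probs)
rms = math.sqrt(sum((p - mean) ** 2 for p in probs) / len(probs))

print(f"mean = {mean:.3f} (uniform expectation 0.500)")
print(f"rms  = {rms:.3f} (uniform expectation {1 / math.sqrt(12):.3f})")
```

Any significant departure of a real probability histogram from these two numbers signals either a misunderstood error model or residual background.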
Nevertheless, if reconstruction quality
has to be considered, tracks with at most one unambiguous photon
(even when lowering the $|\delta\theta|$ cut to 20 mr) look somewhat suspicious.

\vspace{1.cm}
The reconstructed Cerenkov angle pulls are plotted
for a few bins of track momentum in Fig.
\ref{thcpulbias_vsp}a~: the rms values of these pulls are quite stable as a function of the track momentum,
and are close to 1.
This stability implies that error estimation is correct, even in relatively small track momentum
bins~: the non gaussian behaviour of the Cerenkov angle distribution, even if not smoothed by an averaging over the
track momentum, looks quite limited. There seems to be a systematic bias (at the 30\% level) at low
track momentum which decreases with increasing momenta~; this should be attributed to
stronger multiple scattering effects. Indeed, one can see in Fig. \ref{thcpulbias_vsp}a
that going to higher and higher momenta reduces the histogram bias
to smaller and smaller values, while the rms gets closer and closer to 1.

Absolute deviations from the simulated Cerenkov angle are also plotted in the same track momentum bins in
Fig. \ref{thcpulbias_vsp}b, showing the dependence of the errors on the track momentum, as noticed before.
These plots also show that the general shape of the error distributions in each momentum bin is correct,
{\it i.e.} not far from a true gaussian.

There is an analogous situation when one examines the dependence of the errors and biases of the reconstructed
Cerenkov angle on the track incidence angle on the quartz bar~: in this case,
Fig.
\ref{thetacpul_vsdip} shows clearly the existence of a systematic bias for tracks hitting the bar
with a high incidence angle{\footnote{Angles are expressed with respect to
the perpendicular to the bar face from which the particle enters the quartz.}}.
This seems to correspond to the effect already noticed for low momentum tracks, {\it i.e.}
a larger multiple scattering effect, here due to longer paths inside the quartz bar. Indeed, in this case,
tracks with high incidence angles may suffer stronger direction changes than tracks with low incidence angles.
The rms dispersion of the pulls also looks quite stable here when the track incidence angle varies,
as demonstrated in the same figure.

\vspace{1.cm}

Instead of fitting the circle parameters
on the photon sample extracted from the data, one can decide to compute the $\chi^2$ probability
(with $n_{\gamma}$ degrees of freedom) by completely fixing the parameters to the values inferred
from the charged track direction (see Rel. \ref{fit10}). The result of this exercise is shown
in Fig. \ref{prob_pik} for true pions and kaons. The probability distribution turns out to
be flat under the right mass assumption, while it becomes sharply peaked towards 0 under
the wrong mass assumptions. Then, in addition to inaccurately reconstructed tracks,
the very low probability bins may be enriched with tracks carrying a wrongly assigned mass. It is usual
to take this into account by a probability threshold~; in the present case, a cut at about 2\%
looks sufficient.


\subsection{Effect of Correlations}

\indent \indent
The previous sample, affected only by the minimum (combinatorial) background, allows the effects of correlations
to be studied at various levels. To be more precise, the data sample is intrinsically affected by correlations~; a
method to study their effects is to remove correlation terms in the analysis, {\it i.e.} in the algorithm.
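Before doing so on the DIRC sample itself, the expected size of such effects can be illustrated on a deliberately simple toy model of our own (not the actual DIRC covariance)~: $n$ measurements of equal variance $\sigma^2$ with a common correlation coefficient $\rho$. Neglecting correlations gives $\sigma_R^2=\sigma^2/n$, while the full expression $1/\sigma_R^2=\sum_{i,j} V^{-1}_{ij}$ gives $\sigma_R^2=\sigma^2\,[1+(n-1)\rho]/n$, larger by the factor $1+(n-1)\rho$ when $\rho>0$:

```python
import math

def sigma_r_equicorrelated(sigma, n, rho):
    """Error on the mean of n equally accurate, equally correlated
    measurements: full-covariance result vs the diagonal
    (correlation-free) approximation."""
    full = sigma * math.sqrt((1 + (n - 1) * rho) / n)
    diagonal = sigma / math.sqrt(n)
    return full, diagonal

def sigma_r_from_cov(V):
    """Cross-check via 1/sigma_R^2 = sum_ij Vinv_ij, with an explicit
    Gauss-Jordan inverse (fine for a tiny matrix)."""
    n = len(V)
    A = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(V)]
    for col in range(n):
        piv = A[col][col]
        A[col] = [x / piv for x in A[col]]
        for r in range(n):
            if r != col:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    s = sum(A[i][n + j] for i in range(n) for j in range(n))
    return 1 / math.sqrt(s)

sigma, n, rho = 1.0, 3, 0.5
V = [[sigma**2 * (1.0 if i == j else rho) for j in range(n)] for i in range(n)]
full, diag = sigma_r_equicorrelated(sigma, n, rho)
print(full, sigma_r_from_cov(V), diag)
```

Neglecting the correlations thus underestimates the error by $\sqrt{1+(n-1)\rho}$~; this is, qualitatively, the mechanism behind the pull rms inflation discussed below.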
It was remarked in Appendix B5 that photons close in azimuth are strongly correlated. On the other
hand, we know that the circle arc populated by the Cerenkov photons can be small. Then, one can guess
that the net effect of correlations should indeed be strong.

A first way to compute the Cerenkov angle and its error, neglecting all correlations, is to set~:

\begin{equation}
\displaystyle R=\sigma_R^2 \sum_i \frac{d_i}{\sigma_i^2}~~~,~~~\frac{1}{\sigma_R^2}=\sum_i \frac{1}{\sigma_i^2}
\end{equation}

\noindent and, knowing $R_{true}$, one can compute the pull rms. In these expressions, $\sigma_i^2$ is the
squared error on $d_i$. This corresponds to fixing the charged track direction to its
central value given by the tracking device. In such a computation one actually neglects correlations
in estimating $\sigma_R$, since otherwise we would have used $ 1/\sigma_R^2=\sum_{i,j} V^{-1}_{ij}$
(see Rels. (\ref{fit4}) and (\ref{appb6}))
and not simply $\sum_i 1/\sigma_i^2$, which is what this expression reduces to when $V$ is diagonal,
{\it i.e.} when correlations are neglected.
The corresponding pull is always well centered (reflecting the fact that the charged track direction measurement
is unbiased), while the pull rms varies dramatically with the spreads $\sigma_{\theta}$ and $\sigma_{\phi}$, as
illustrated by the upper curve in Figs. \ref{correl_dth}.

\vspace{0.7cm}
Another way to proceed is to neglect correlations in the expression for the $\chi^2$ (Rel. (\ref{fit7}))
and solve at the minimum. In this case, the result provides a practically unbiased fit value $R_{fit}$.
Its error $\sigma_R$ is computed from the solution at minimum $\chi^2$ and from the matrix $T^{-1}$
(see Rel.
(\ref{fit9}))
which gives the errors and correlations for $a$, $b$, $c$, from which one can deduce $\sigma_R$ using~:

\begin{equation}
\displaystyle \sigma_R^2=\frac{1}{4R^2_{fit}} <[\delta c +2 a \delta a + 2 b \delta b]^2>
\end{equation}

\noindent and $R^2_{fit}=c+a^2+b^2$, {\it i.e.} one takes into account the fact that the actual center
is not the origin, but closer to its fitted value.
The corresponding pull rms is the middle curve in Figs. \ref{correl_dth}.
It already shows a much better behaviour than the result of the previous method, which
neglected {\it all} correlations {\it stricto sensu}. This full account of the
fit center location mainly explains
the interesting behaviour of the pull rms at large values of $\sigma_{\theta}=\sigma_{\phi}$.

Finally, the lowest curve in Figs. \ref{correl_dth} shows the solution taking into account all
correlations, as explained in the sections above and in the appendices. In this case, the pull rms
always remains close to 1 and departures are never worse than about 10\%{\footnote{Actually,
going to much larger values of $\sigma_{\theta}=\sigma_{\phi}$, the lowest curve remains
at about 0.9 while the middle curve crosses 1 and goes on decreasing.}}.

Comparing the two Figs. \ref{correl_dth}, it is clear that the mean effect is much more
dramatic if the correlations due to multiple scattering are stronger (at low mean momentum).
It is also clear from these figures that there is a small systematic effect (10\% of the pull rms)
that our model does not account for. There are several reasons for this systematic effect~: first,
errors and correlations due to
multiple scattering are treated statistically{\footnote{We take into account only mean effects
due to multiple scattering.
Actually, the emission time sequence of the detected photons is not
known and, moreover, the real multiple scattering effect undergone by the track
before each photon is emitted is also unknown. Therefore, it seems hard to imagine
how to go beyond averaging.}}~; second, at low momentum and/or at large angular errors
on the incoming charged track direction, non--linear effects become visible. Figs.
\ref{correl_dth} show however that these effects remain of limited influence.

One may wonder why the second method, which neglects correlations, gives
a pull rms which improves with large angular errors on the incoming charged track
direction, a behaviour quite different from that of the simple uncorrelated mean (the first
of the methods above). This is actually due to the peculiar origin of this kind
of correlations compared to multiple scattering. In Fig. \ref{correl_ms},
the pull rms is plotted as a function of $1/p$ ({\it i.e.} for
increasing angular errors due to multiple scattering)~; crosses correspond to the second
method, dots to the third (standard) method. From this it is clear that neglecting
correlations gives a worse and worse result as multiple scattering
errors increase.

The different behaviour of the correlations due to multiple scattering only
(case A) and to errors on the incoming track direction only (case B)
can be understood to some extent. Let us assume that we are in
case B~; if we had exact knowledge of the actual charged track direction,
correlation effects would vanish when using it to subtract the center
position from the photon coordinates.
Using the fit center, in place of the measured center (here the origin),
somewhat improves the approximation made of the actual charged track direction~;
this is well reflected in Figs. \ref{correl_dth} by the difference
of slope between the curves with square and cross markers.
Instead, in case A, there is clearly no longer one and only one actual charged track direction
associated with the photon set and, consequently, neglecting correlations should degrade the
result more and more as multiple scattering effects increase~; this is reflected in the behaviour
shown in Fig. \ref{correl_ms} by the (cross) curve. Real life lies in between cases A and B,
since we are actually in a mixed case.

In any case, as the ``physical region'' for $\sigma_{\theta}$ and
$\sigma_{\phi}$ is expected to be
around 1 to 2 mr, it is clearly preferable to account for all correlations.
Depending on the mean track momentum, the pull rms departure from 1 may be as large as $\simeq$ 50\%
instead of the $\simeq$ 10\% mainly due to non--linear effects.
This problem is reflected in the $\chi^2$ probability distributions. Focusing on the mixed
track sample with $0.5$ GeV/c $\leq p \leq 5$ GeV/c and neglecting only the correlations in the $\chi^2$
(the second of the methods just discussed), we get the distributions shown in Fig. \ref{chi2prob_nocor}.
Here are displayed the probability distributions corresponding
to various values of $\sigma_{\theta}=\sigma_{\phi}$, from 0 -- Fig. \ref{chi2prob_nocor}a --
to 5 mr -- Fig. \ref{chi2prob_nocor}e -- compared with the case where all correlations
are normally accounted for (Fig. \ref{chi2prob_nocor}f), corresponding to 5 mr. We see that, already for
small angular errors on the charged track direction, the probability distribution is badly distorted
compared to flatness, while accounting normally for correlations gives an acceptably flat
probability distribution{\footnote { The difference in flatness between Fig.
(\ref{poinref}a)
and Fig. (\ref{chi2prob_nocor}f) is simply due to different median cuts~: in the former case
it was set at the very loose level of 4$\sigma$, while in the latter it is 3$\sigma$.
One can furthermore compare these two figures and remark that a tighter median cut
mainly affects the low probability region by depopulating it somewhat.}}, even at 5 mr.
One should also notice from Fig. \ref{chi2prob_nocor}f that the systematic effect already noticed,
which survives our treatment of errors and correlations at $\sigma_{\theta}=\sigma_{\phi}=5$ mr,
is not strong enough to spoil the shape of the probability distribution.

One thus has to notice the dramatic effect of a bad estimate of the center coordinates
on the pull rms, and hence on the probability distributions. This effect on the pull
rms is actually due to a wrong estimate of the errors and correlations.
Indeed, as noticed in Section \ref{stereo} when discussing Rel. (\ref{basicf}),
it affects the central estimate of the radius -- and hence of the Cerenkov angle --
much less than the error estimate itself.


\subsection{Track--Events with various Background Conditions}

\indent \indent
Here we examine samples contaminated by various kinds of background~: flatly distributed noise on the
PMT detection plane, or merging of the track with one or sometimes two (unidentified or identified)
tracks which produce additional photons.
The chosen momentum range is still 0.5 to 5 GeV/c and the population contains the five possible particle
kinds in equal numbers.
These background conditions can be considered the hardest, as the additional tracks enter the same DIRC bar,
sometimes with directions very close to that of the track under identification.


The pull of the Cerenkov angle for the sample of mixed particles with one identified additional
track is plotted in Fig. \ref{thetacres}a.
This plot gives two important pieces of information~: on the one hand,
the pull bias remains as small as when there is no background (see Fig. \ref{poinref}b)~;
on the other hand, the pull rms is close to one (typically 1.2), as one expects if the
error model is correct. A simple gaussian fit to the distribution in Fig. \ref{poinref}b gives
$\sigma= 0.95$ with a very good fit probability. All this shows that
the tails of the distributions remain limited and do not affect the fit quality.

Figure \ref{thetacres}b, a plot of the difference $\theta_C^{true}-\theta_C^{fit}$, allows the
absolute dispersion of this quantity to be estimated. Compared with the case with no background (see Fig. \ref{poinref}c),
one sees clearly that the bias is unchanged and limited (0.3 mr), whereas the dispersion is slightly
increased (3.6 mr instead of 3 mr).

This dispersion depends, among other things, on the track momentum and dip angle. In this respect,
Fig. \ref{thetacerrtheory} shows the mean theoretical error on the reconstructed Cerenkov angle as a
function of this
momentum~; one can check that this error increases noticeably at low momentum (mainly because of
multiple scattering effects). Errors become smaller and reach a minimum plateau at high track momentum
because of the Cerenkov angle saturation, the greater number of Cerenkov photons (proportional
to $\sin^2{\theta_C}$), and smaller multiple scattering effects.

\vspace{1.cm}
The fit results obtained when no background was added to the track were good
(see Figs. \ref{poinref}, \ref{thcpulbias_vsp} and \ref{thetacpul_vsdip}).
They deteriorate only slightly if noise is added~: Fig. \ref{thetacpul_bkg} indicates that the
algorithm still behaves well under background conditions connected with the presence of other (measured) tracks
in the event. In Figs.
\ref{thetacpul_bkg}b and \ref{thetacpul_bkg}c, the reconstructed Cerenkov angle pulls are displayed
when one or two other tracks are superimposed on the signal track (their existence is known to the algorithm,
which uses this knowledge to reject part of this background as ambiguities, as explained in Section
\ref{ambigid1})~; the dispersion and bias of the reconstructed angle are found close to the normal ones.

In the case of Fig. \ref{thetacpul_bkg}a, where a flat background has been superimposed on the signal
track, the algorithm seems to be more affected, since a sizeable bias of 15\% appears~; the rms dispersion
increases but stays close to 1. This bias is still limited, even if the noise conditions are quite severe compared to
what is expected with a real detector.

Fig. \ref{chi2prob1} shows the $\chi^2$ probability distribution for $n_{\gamma}-1$
degrees of freedom for an equal mixture of single electron, muon, pion, kaon and proton tracks.
It illustrates the effects of the same cuts under different background conditions (the cuts used have been
calibrated on events with only a flat background).
The plots are shown for several cases of actual background conditions: a) no
background, b) random flat background on the detection plane (at the level of 100\% of the signal photons),
c) 1 other track considered as background ({\it i.e.} no secondary $|\delta\theta|$ cut, no knowledge
of the existence of the second track, see Section \ref{ambigid1}).

Fig. \ref{chi2prob1}a shows that the actual effect of the cuts on background free events is
mainly concentrated in the low probability bins, which appear slightly depopulated, while the
rest of the distribution looks acceptably flat. Figs. \ref{chi2prob1}b and \ref{chi2prob1}c
show that the main effect of background photons is to produce a large peak at small probability,
and thus the need for a threshold probability.
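Such a threshold probability is simply a cut on the upper tail probability of a $\chi^2$ distribution. For an even number of degrees of freedom this tail probability has a closed form, $P(\chi^2>x)=e^{-x/2}\sum_{j<n/2}(x/2)^j/j!$, which the following standalone sketch (ours) implements~:

```python
import math

def chi2_prob(x2, ndof):
    """Upper tail probability P(chi2 > x2) for an even number of
    degrees of freedom ndof = 2m:
        P = exp(-x2/2) * sum_{j<m} (x2/2)^j / j!
    (For odd ndof one would need the incomplete gamma function.)"""
    assert ndof % 2 == 0 and ndof > 0
    half = x2 / 2.0
    term, total = 1.0, 1.0
    for j in range(1, ndof // 2):
        term *= half / j
        total += term
    return math.exp(-half) * total

# A good fit (chi2 ~ ndof) passes a few-percent threshold,
# while a badly background-polluted one fails it.
print(chi2_prob(20.0, 20), chi2_prob(50.0, 20))
```

A probability threshold such as the 2\% cut quoted earlier then just discards tracks whose $\chi^2$ falls in the far upper tail.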
These last two figures also tend to show that
a large flat background is harder to account for than the background associated with photons
generated by an unmeasured track~: in the former case, the low probability peak extends up to
$\simeq$ 15\%, while in the latter case the effect of this peak is negligible above $\simeq$ 5\%.
Such plots, which can be produced with any data set, are tools allowing one to
tune the minimum probability above which the fit of the Cerenkov ring is considered.

\vspace{1.cm}

As a global check,
Fig. \ref{chi2pndf} shows a plot of the $\chi^2$ per number of degrees of freedom obtained after running the
procedure on the same sample of single tracks in several noise environments~: a) without any background, b) with a
random flat background, c) with several tracks per event considered as background for the original track. One
can see that after the cut adjustment, the mean $\chi^2$ per ndof is close to 1. The rms of the $\chi^2$ per ndof
distribution also behaves as expected~; for example, in case (a) the mean is $0.96$ and the rms of the
distribution ($0.32$ for a mean number of degrees of freedom of $21$) is effectively close to
$\sqrt{\frac{2}{ndof}}$ ($0.31$). The number of reconstructed events varies between the three plots because
of different cut levels adapted to different noise conditions.


\subsection{Particle Identification}

\indent \indent The final goal of pattern recognition and circle reconstruction is particle identification.
In order to illustrate how the procedure behaves, we present results obtained for one track, assuming
another track has crossed the same (and single) bar. The angle between the two tracks is random
between 0$^{\circ}$ and 50$^{\circ}$.

In Fig. \ref{midvsp} we show the reconstruction of generated pions and present (in (a)) the
case when the accompanying track has been reconstructed and the secondary cut on
$|\delta\theta|$ has been applied.
In (b), we assume this track has not been measured
and thus this further cut cannot be used. In both cases, no threshold probability has been
required and the identification is attributed to the largest probability (which can thus be quite small).
The correct identification (except for the $e$--$\mu$--$\pi$ degeneracy) is good, above 90\% anyway.
One can further see that the secondary cut on $|\delta\theta|$ produces a large
improvement by severely reducing the misidentification of pions as kaons and protons.
Despite the $e$--$\mu$--$\pi$ degeneracy, below $\simeq$ 950 MeV/c pion
misidentification as electron looks negligible and, then, electron identification below this threshold
is possible with a good accuracy. Misidentification is low below $\simeq$ 3.5 GeV/c.

In Fig. \ref{midvsp}c, we have applied both the secondary cut on $|\delta\theta|$
and a threshold probability of 1\%, which cleans up almost completely the momentum range below
$\simeq$ 3.5 GeV/c. It is easy to see that strengthening this threshold to 3\% sharply reduces the
misidentification, which becomes significant only above $\simeq$ 4 GeV/c.

All this illustrates that the largest difficulties are due to unmeasured tracks or unstructured noise.
As soon as limited additional information is available (secondary track directions), the quality
of the identification sharply improves.
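The identification rule used here -- retain, among the fit probabilities computed under the five mass hypotheses, the hypothesis with the largest one, possibly requiring it to pass a threshold -- can be sketched as follows (a minimal illustration of ours~; the names and numbers are invented)~:

```python
def identify(fit_probs, threshold=0.0):
    """fit_probs: mapping mass hypothesis -> chi2 fit probability.
    Returns the hypothesis with the largest probability, or None if
    even the best one fails the threshold."""
    best, p_best = max(fit_probs.items(), key=lambda kv: kv[1])
    return best if p_best >= threshold else None

# Toy example: a clean kaon-like track ...
print(identify({"e": 0.001, "mu": 0.002, "pi": 0.004, "K": 0.61, "p": 0.03}))
# ... and a track where no hypothesis fits well at a 1% threshold.
print(identify({"e": 0.001, "mu": 0.002, "pi": 0.004, "K": 0.006, "p": 0.003},
               threshold=0.01))
```

With `threshold=0` the rule reproduces cases (a) and (b) above~; a non--zero threshold reproduces case (c).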
Setting a threshold probability at a low level
naturally appears to be a suitable cleaning up tool, as a non--negligible part of the
misidentification, and hence of the contamination, is produced by low probability reconstructions.

\subsection{Summary}

\indent \indent
By way of conclusion, even under severe background conditions, it is possible to remove
background and select the photons associated with a given charged track with a good efficiency
(still about 90\% of the tracks are fitted) and with a limited contamination (about 95\%
of the particles identified as pions are indeed pions, and the misidentification as kaons is
always low, sometimes very low). This is achieved by first using
the subset of unambiguous photons~; indeed, fitting the Cerenkov ring with them gives access
to refined values of the circle parameters, which are used when reexamining the ambiguous photons.
These improved parameter values allow a refined treatment of the photons left aside
as ambiguous.

The procedure described in Section \ref{photsel}
could possibly be improved, but basically it contains most of the needed ingredients.
Moreover, the fit algorithm presented in Section \ref{circlefit} looks well adapted
to fitting circle arcs and to providing $\chi^2$ distances for fake photon removal.
This method allows the use of all basic statistical tools (probabilities,
likelihoods \ldots).
Of course, we know that fake photons cannot be removed at the 100\% level~; however, we have shown
that their residual contamination, even in severe conditions, can be made low enough that the fit quality
and its statistical meaning are not spoiled.
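The two--pass logic recalled above can be summarised in code form. The sketch below is a deliberately simplified illustration of ours -- the real procedure refits all circle parameters with the full covariance matrix between the two passes -- keeping only the control flow~:

```python
def reconstruct_ring(photons, cut_primary, cut_best, cut_second):
    """photons: one list of candidate solutions per detected photon
    (several entries when reflection ambiguities exist); each solution
    is represented here by its chi2 distance to the expected circle.
    Returns the solutions retained for the final fit, or None."""
    unambiguous, ambiguous = [], []
    for sols in photons:
        kept = sorted(d for d in sols if d <= cut_primary)
        if len(kept) == 1:
            unambiguous.append(kept[0])   # pass 1: a single acceptable solution
        elif len(kept) > 1:
            ambiguous.append(kept)        # set aside for the recovery pass
    if len(unambiguous) < 2:
        return None                       # track not reconstructed
    # (at this point the circle would be refitted on the unambiguous
    #  photons, refining the chi2 distances used below)
    recovered = []
    for kept in ambiguous:
        best, second = kept[0], kept[1]
        # double chi2 criterion: best solution compatible with the ring
        # AND runner-up clearly incompatible with it
        if best <= cut_best and second > cut_second:
            recovered.append(best)
    return unambiguous + recovered

# Invented toy input: two unambiguous photons, one resolvable ambiguity,
# one genuinely ambiguous photon, one outlier.
sample = [[0.5], [2.0], [1.2, 8.0], [0.8, 1.1], [25.0]]
print(reconstruct_ring(sample, cut_primary=10.0, cut_best=2.0, cut_second=3.0))
```

On this input, the genuinely ambiguous photon (two close solutions) is left out, the outlier is rejected, and the resolvable ambiguity is recovered into the final sample.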
\n\n\\section{Cut Handling}\n\\label{cutadjust}\n\n\\indent \\indent\nThe various cuts described in Section \\ref{photsel} represent finally a set of 8 parameters~:\ntwo cuts on $|\\delta\\theta|$ allowing to define the starting sample of unambiguous photons,\nthe median cut level for background removal, \nthe \"perpendicular tracks\" cut used to remove trivial ambiguities,\nthe two $\\chi^2$ distance cuts in the ambiguous photons recovering step,\nthe minimum number of unambiguous photons, and the tails removal cut after the first fitting\nstep.\n\nSome of these cuts can be adjusted in order to adapt the algorithm to different noise and background conditions,\nsome are nearly fixed by the detector characteristics ($|\\delta\\theta| < 30$ mr, perpendicular\ntrack cut), or by algebra (the minimum number of unambiguous photons\nis set at the smallest admissible value).\nAlso, one can verify that the last two cuts only influence marginally the reconstruction procedure performance. \n\nThe 3 $\\chi^2$ cuts have a well defined statistical meaning and the procedure outputs usual $\\chi^2$ probabilities.\nSimple criteria can be used in order to adjust these cuts, in such a way that they filter the background without \nspoiling too much the signal. Indeed, too stringent cut values may bias dramatically the fit quantities or\nspoil the probability density function of the fit results. Too loose cuts may instead\nresult in very low reconstruction efficiency and poor fit quality. As, in the DIRC problem, errors are\nclose to gaussians, $3\\sigma$ in tuning these cuts is a magic value of well defined meaning.\nWhen having to cut below this level one has to check for potentially harmful effects (biases or unacceptable\nchanges in the probability density function of the fit results). 
\n\n\\subsection{Tuning of the Adjustable Cuts}\n\n\\indent \\indent\nIn fact, only the first three parameters in the preceding list can be considered as adjustable by \nthe user to adapt the algorithm to different noise and background conditions.\n\nAs illustrated above, the main tool in tuning the various cut levels is the probability\ndistribution of reconstructed rings. From the most common experience, one knows that if errors and\ncorrelations are well understood, these distributions are flat between 0 and 1; one can always\nassume this has been checked on clean samples.\nClean samples can indeed be constructed using a Monte Carlo. Tagged samples of identified\n(by other means) particles, or low multiplicity samples, can always be extracted from data. \n\nThen, once the reliability of the error\/correlation handling is ensured,\nany departure from flatness has to be attributed to background. The set of cuts\ndefined above is devoted to the removal of the various kinds of background photons.\nThey are tuned by asking the particular value of the cut to optimize the flatness\nof the probability distribution. Ideally, when all cuts are at optimum values,\nthe probability distribution has mean value 0.5 and rms 0.289.\n\nIn real life however, one knows that an actually flat distribution always exhibits a peak at low\nprobability, reflecting the fact that there always exist configurations where events are\nimproperly reconstructed. This situation has been already met here (see Figs. \\ref{chi2prob1}).\nTherefore the above rules have to be slightly modified.\nFlatness has to be requested above some threshold value $\\alpha$, which can be relatively\nlarge for the purpose of tuning ($\\alpha=10\\%$, 20\\% or more are as good). One can always compute\nthe mean value and rms of the probability distribution above this threshold as a function of\nthe cut value and fit it (or compare) to respectively $(1-\\alpha)\/2$ and $(1-\\alpha)\/\\sqrt{12}$.\n\nFig. 
\ref{chi2prob2}, which deals with a sample of single tracks with no additional background,
illustrates how this tuning can be performed for the median cut. When fixing the cut level at 1$\sigma$,
we are clearly cutting too tightly, and the probability distribution then exhibits a huge peak at 1. Loosening
this cut allows a non--biasing behaviour to be recovered at a cut level of about 3 to 4 $\sigma$, as one
can expect~: the mean probability is found close to 0.5 and its rms close to $1/\sqrt{12}$.
One can refine the tuning in an obvious manner.

One may notice that the number of entries in the histograms is only marginally dependent on this cut
level~: the algorithm efficiency is not directly affected by it (but the final errors on the fitted
quantities change with this cut -- roughly like $\frac{1}{\sqrt{n_\gamma}}$ -- since the total
number of unambiguous photons entering the fit is cut dependent, even if the recovering procedure
finally limits its variation).

On the other hand, the low probability peak, as said above, reflects the existence of
improperly reconstructed rings. This is partly due to configurations where the errors
are not well estimated{\footnote{In the DIRC problem, the final errors are computed
by differentiating functions. They are geometry dependent and sometimes
close to singular points of these functions.}} and partly due to the level of background
which survives the cuts. Usually, such a peak is made harmless by setting a
threshold probability.

The two $|\delta\theta|$ cuts play an important role in the magnitude of this peak.
The primary $|\delta\theta|$ cut, if it happens to be too loose, accepts
more easily background photons in the unambiguous photon subsamples.
These background photons
will in turn degrade the fit quality and contribute to increasing the peak at zero probability.
However, it cannot be set too tight (for instance $\leq 5$ mr) since, in this case, the
number of unambiguous photons primarily associated with tracks may decrease too much
(below 2), and consequently the number of lost tracks will increase without
a significant improvement of the zero probability peak.

The secondary $|\delta\theta|$ cut acts likewise, as can be seen
in Fig. \ref{dthetacut2}. Its strong power of rejecting the inter--track noise is
illustrated by the vanishing low probability peak under quite hard background conditions.

These examples illustrate how a given cut level can be tuned on any sample of tracks~; here again the procedure
applies to simulation as well as to real data. The cut levels are tuned in such a way that the probability
distribution has its expected flat shape. This tuning allows one to recover at the same time a
mean probability of $\simeq 0.5$ and an rms spread of $\simeq 1/\sqrt{12}$. After (possibly) discarding the low
probability bins, it is easy to compute the efficiency
of any further cut in probability, because of the flatness of the remaining region.

\subsection{Effect of non tunable cuts}

\indent \indent
The non tunable cuts, briefly considered before, are the perpendicular tracks recovering cut and the
minimum number of unambiguous photons cut.
The former is tunable only in the sense that it should be adjusted, on data or on a Monte Carlo simulation, so as not to spoil
the algorithm acceptance (possibly creating biases if the cut is too wide), nor to be rendered inefficient
by too stringent a width (in which case only a small fraction of the almost perpendicular tracks is concerned
by the cut).
Anyway, as stated before, this cut has only limited effects~: its activation or deactivation is hardly\nnoticeable on the studied data.\n\nThe second non adjustable cut has a rather strong influence on the global algorithm acceptance. Figure \\ref{nambgam}\nshows the distribution of the number of unambiguous photons at the end of the procedure~: it is clear that if the\ncut level is increased starting from the normal value of 2, the number of rejected tracks will be strongly\naffected, since in this region the number of tracks depends strongly upon\nthe number of unambiguous photons. Normally, there is no point in increasing the cut level above\n2 or 3 unambiguous photons.\n\n\\vspace{1.cm}\nTo conclude this section, we can say that the choice of the cut level for both cuts is not really free, but\nconstrained by considerations depending strongly on the use of the algorithm and the type of data.\n\n\n\\section{Conclusions}\n\n\\indent \\indent \nWe have studied a procedure able to perform pattern recognition among photons in order to reconstruct\nthe Cerenkov angle associated with the charged track emitting them. \nThe procedure, which has been developed for the case of the\nDIRC, may apply {\\it mutatis mutandis} to data from other ring imaging devices.\nThe basic requirement was that the procedure should provide a good approximation\nof a $\\chi^2$ value in order that a probability can be acceptably defined,\neven in the presence of a huge background mixed with the signal photons.\nWe have shown that such a procedure can indeed be constructed and proved\nto work satisfactorily from the kaon Cerenkov threshold up to 4$\\div$5 GeV\/c.\n\nWe advocated the use of the stereographic projection which guarantees\nthat the figure to be fit is always a circle, whatever the systematic\nerrors, misalignment problems, etc \\ldots may be.
Moreover, we have shown that\nsuch errors affect only negligibly the estimate of the central value of the radius\n({\\it i.e.} the Cerenkov angle).\n\nThe basic tool of the procedure is a fit corresponding to minimising\na $\\chi^2$ expression, linearised as commonly\ndone. In the case of the DIRC, this $\\chi^2$ has\nto be modified in order to take into account that the photons do not\npopulate the full circle, but rather a relatively small fraction of it.\nThe modification implemented amounts to taking into account in the fit procedure\nthe existence of a measurement of the charged track direction, besides the photon\ndirections. In this way, systematic errors in the circle reconstruction \ncan be avoided almost completely. We have also widely illustrated the role of\ncorrelations and methods to estimate them. The fit procedure returns a value\nof the circle radius (connected with the Cerenkov angle) and improved\ninformation on the charged track direction. This last information has been shown\nto be crucial in cases where the charged track direction is poorly known, either intrinsically\n(from the tracking device in front of the DIRC), or because of large multiple scattering effects.\n\n\nWe have shown that, using this tool, it is possible to define an algorithm\nable to solve ambiguities and remove background photons efficiently. It relies on\nan iterative procedure, based on an intensive use of $\\chi^2$ distances. \nIt starts with unambiguous photons and recovers additional photons in a second pass.\nSuch a procedure depends on cuts, and it has been shown that one can check the effects of these cuts\nand tune their levels on pull and probability distributions. This also allows one to define\na calibration procedure which can be worked out with real data in a simple way\nand optimized in real background conditions.\n\n\nIn this respect, we have shown that the \"combinatorial\" background\ngenerated by ambiguities can be easily overcome.
We have also shown that\nthe background produced for a given track by photons associated with\nother reconstructed tracks is easy to deal with.\nThe hardest background is provided by non--reconstructed tracks\nor flat background of various origins~; we have shown that it is possible to\ndeal reliably with them too, by using tighter cut levels.\n\n\nAs a final result, we have shown that the reconstruction and particle\nidentification are possible through a fit of the Cerenkov angle and particle\ndirection, with a remarkable efficiency (above 90 \\%). We have also shown\nthat the fit probability is correctly estimated and has the expected flat distribution. \nThis means that the cleaning up part of the procedure is able to discriminate efficiently between hits,\neven when there is a large background together with the signal. The residual background contamination\nlevel has been shown to be harmless under realistic conditions. Therefore one can use \nstandard statistical tools in order to calibrate cuts and check the reconstruction quality.\n \n\n \n\\newpage\n\n\\renewcommand{\\theequation}{\\Alph {section} . \\arabic {equation}}\n\\setcounter{section}{1}\n\\setcounter{equation}{0}\n\\section*{Appendix A : Errors and Correlations in a DIRC Device}\n \n\\subsection*{A1 Error Functions of the Charged Track Direction}\n\n\\indent \\indent The \ncharged track direction is affected by two different kinds of errors.\nThe first kind are the measurement errors on the track at its entrance into\nthe radiator (the quartz bar)~; the second is the error produced by its\nmultiple scattering inside the radiator, which implies that, for each emitted\nphoton, the Cerenkov angle $\\theta_C$ refers to a slightly\ndifferent track direction.
Let us denote the incoming track direction by \n$\\vec{q}=(\\sin{\\theta} \\cos{\\phi}, \\sin{\\theta} \\sin{\\phi}, \\cos{\\theta})$~;\nthe condition $\\vec{q}^2=1$ implies that $\\vec{q} \\cdot \\delta\\vec{q}=0$, and then\nthat the error vector $\\delta\\vec{q}$ is perpendicular to the track direction\n$\\vec{q}$. Let us write it~:\n\n\\begin{equation}\n\\delta\\vec{q}=\\delta_1\\vec{q}+ \\delta_2\\vec{q}\n\\label{app1}\n\\end{equation}\n\n\\noindent where $\\delta_1\\vec{q}$ refers to the measurement errors and\n$\\delta_2\\vec{q}$ to the multiple scattering. The error functions on the track\nparameters $(\\theta,\\phi)$ being denoted $(\\delta\\theta,\\delta\\phi)$, we\nhave~:\n\n\\begin{equation}\n\\delta_1\\vec{q}=\\delta \\theta ~\\vec{v}+ \\sin{\\theta} \\delta\\phi ~\\vec{w}\n\\label{app2}\n\\end{equation}\n\n\\noindent where $\\vec{v}= \\partial \\vec{q}\/ \\partial \\theta$ and \n$\\vec{w}=[1\/\\sin{\\theta} ] \\partial\\vec{q}\/\\partial \\phi$ are unit vectors orthogonal to\neach other and to $\\vec{q}$. We may have $<\\delta\\theta\\delta\\phi> \\ne 0$.\nAfter a path of length $u$ inside the quartz, we also have~:\n\n\\begin{equation}\n\\delta_2\\vec{q}= \\displaystyle c \\sqrt{\\frac{u}{X_0}}\n\\left[\\varepsilon_1(u)\\vec{v} + \\varepsilon_2(u)\\vec{w}\\right]\n\\label{app3}\n\\end{equation}\n\n\\noindent where \\cite{PDG} $c=13.6~ 10^{-3}\/\\beta p[{\\rm GeV}]$, $\\beta$\nis the particle speed and\n$X_0$ is the quartz radiation length. The quantities $\\varepsilon_i(u)$\nare gaussian random variables such that $<[\\varepsilon_i(u)]^2>=1$ and\n$<\\varepsilon_1(u)\\varepsilon_2(u)>=0$. Here and throughout the paper we neglect \nall departures from gaussian distributions \\cite{PDG}.\nFrom Rel.
(\\ref{app3}), we get~:\n\n\\begin{equation}\n<[\\delta_2\\vec{q}]^2>= \\displaystyle 2c^2 \\frac{u}{X_0}\n\\label{app4}\n\\end{equation}\n\nHowever, as we do not know where the photon has been emitted, the best estimate\nof the variance $<[\\delta_2\\vec{q}]^2>$ is its mean value\nover the path followed, assigning an equal probability to each possible emission point{\\footnote\n{The notation here is obvious~: the inner $<\\cdots>$ denotes the statistical mean\n({\\it i.e.} the expectation value)\nalready defined in the body of the text, while the outer $< \\cdots >_{xy\\cdots}$ \ndenotes the {\\it additional} average performed over continuous variables $x,y,\\cdots$.}}~:\n\n\\begin{equation}\n<<[\\delta_2\\vec{q}]^2>>_u= \\displaystyle \\frac{1}{L}\\int_0^L 2c^2 \\frac{u}{X_0} \ndu= c^2\\frac{L}{X_0}\n\\label{app5}\n\\end{equation}\n \n\\noindent where $L$ is the total path length of the charged particle inside the\nquartz.\n\n Let us assume that two photons are emitted after paths\n$u_1$ and $u_2$ respectively inside the quartz. As the photon detection does not reveal where the photon has\nbeen emitted, we can have with equal probabilities $u_1>u_2$ or $u_2>u_1$.
Therefore, \nup to higher order corrections, we have~:\n\n\\begin{equation}\n\\left \\{\n\\begin{array}{llll}\n{\\rm if ~~~ } u_2> u_1~~:\n \\left \\{\n \\begin{array}{lll} \n\t\\delta_2\\vec{q}(u_1)=& \\displaystyle c \\sqrt{\\frac{u_1}{X_0}} \n\t\\left[ \\varepsilon_1(u_1) \\vec{v}+ \\varepsilon_2(u_1)\\vec{w} \\right]\\\\[0.5cm]\n\t\\delta_2\\vec{q}(u_2)=&\\delta_2\\vec{q}(u_1) + \\displaystyle c \\sqrt{\\frac{u_2-u_1}{X_0}} \n\t\\left[ \\varepsilon_3(u_2) \\vec{v}+ \\varepsilon_4(u_2)\\vec{w} \\right]\\\\[0.2cm]\n \\end{array}\n \\right.\n\\\\[1.0cm]\n{\\rm if ~~~ } u_1> u_2~~:\n \\left \\{\n \\begin{array}{lll} \n\t\\delta_2\\vec{q}(u_2)=& \\displaystyle c \\sqrt{\\frac{u_2}{X_0}} \n\t\\left[ \\varepsilon_1(u_2) \\vec{v}+ \\varepsilon_2(u_2)\\vec{w} \\right]\\\\[0.5cm]\n\t\\delta_2\\vec{q}(u_1)=&\\delta_2\\vec{q}(u_2) + \\displaystyle c \\sqrt{\\frac{u_1-u_2}{X_0}} \n\t\\left[ \\varepsilon_3(u_1) \\vec{v}+ \\varepsilon_4(u_1)\\vec{w} \\right]\\\\[0.2cm]\n \\end{array}\n \\right.\n\\end{array}\n\\right.\n\\label{app6}\n\\end{equation}\n\n\\noindent where the various $\\varepsilon$'s carry unit variance and are statistically \nindependent when they carry different indices and\/or different arguments \n({\\it i.e.} $< \\varepsilon_i(u_j) \\varepsilon_k(u_l)>=0$ whenever $i \\ne k$ and\/or\n$j \\ne l$). \n\nOne can check that $\\delta_2\\vec{q}(u_2)$ and $\\delta_2\\vec{q}(u_1)$ have the same variance\ngiven by Rel. (\\ref{app4}$\\!$) (or by (\\ref{app5}$\\!$)) and it is shared equally\nbetween their $\\vec{v}$ and $\\vec{w}$ components (recall that $\\vec{v} \\cdot \\vec{w}=0$).\nMoreover, we can now compute the expectation value~: \n\\begin{equation} \n<\\delta_2\\vec{q}(u_1) \\cdot \\delta_2\\vec{q}(u_2)>=<[\\delta_2\\vec{q}(u_1)]^2> \\Theta(u_2-u_1)+\n<[\\delta_2\\vec{q}(u_2)]^2> \\Theta(u_1-u_2)\n\\label{app7}\n\\end{equation}\n\n\\noindent where $\\Theta$ is the standard step function and the expectation values\non the RHS are given by Rel.
(\\ref{app4}) with the appropriate argument. It is\nuseful to have an estimate of this correlation coefficient~; it is \nobtained by averaging Rel. (\\ref{app7}) upon $u_1$ and $u_2$.\nThis is easily achieved~:\n\\begin{equation} \n<<\\delta_2\\vec{q}(u_1) \\cdot \\delta_2\\vec{q}(u_2)>>_{u_1u_2}=\\displaystyle \\frac{1}{L^2} \n\\int_0^L \\int_0^L du_1 du_2 <\\delta_2\\vec{q}(u_1) \\cdot \\delta_2\\vec{q}(u_2)>=\\frac{2}{3} c^2 \\frac{L}{X_0}\n\\label{app8}\n\\end{equation}\n\n Therefore, the average correlation amounts to 2\/3 of the average variance. The covariance fractions \n carried by each component are~:\n \n\\begin{equation} \n\\left \\{\n\\begin{array}{ll}\n<<[\\delta_2\\vec{q}(u_1)\\cdot \\vec{v}][\\delta_2\\vec{q}(u_2)\\cdot \\vec{v}]>>_{u_1u_2}&=\n\\displaystyle \\frac{1}{3} c^2 \\frac{L}{X_0} \\\\[0.5cm]\n<<[\\delta_2\\vec{q}(u_1)\\cdot \\vec{w}][\\delta_2\\vec{q}(u_2)\\cdot \\vec{w}]>>_{u_1u_2}&=\n\\displaystyle \\frac{1}{3} c^2 \\frac{L}{X_0} \n\\end{array}\n\\right . \n\\label{app9}\n\\end{equation}\n\n\\noindent while all other covariance mean values are zero.\n\n\\subsection*{A2 Finite Size Sample Corrections to Multiple Scattering Errors}\n\n\\indent \\indent\nClearly, neither the length of the path followed by the track before it emits any given photon,\nnor the ordered time (or path) sequence of the detected photons, is accessible. This implies that one has\nto work with averaged quantities. In the previous subsection, averaging is defined by Rels. (\\ref{app5}) \nand (\\ref{app8}), for respectively the variance and covariance terms. Averaging by integrals\nassumes the emission of an infinite number of photons along the charged track path\ninside the radiator.\n\nHere we present another method, based on a finite number of radiated photons~; it allows one to\nfind the $1\/n$ corrections to the above method, while checking it conceptually.\n\nLet us assume $n$ detected photons are emitted along the path of length $L$.
\nPhoton acceptance is mainly connected with their azimuth on the Cerenkov cone~; therefore\nwe can assume, for simplicity,\nthat these photons are emitted after equal paths of length $L\/n$. Let us also assume \nwe work with each coordinate of the circle center in the equatorial plane (final results for variances have\nto be multiplied by 2 for comparison with the preceding subsection).\nWe denote by $x_i$ the coordinate of the charged track direction (actually its fluctuation around $x_0$, \nthe $true$ track coordinate{\\footnote{Obviously, the true coordinate at the DIRC entrance \nis approximated by the mean value provided by the tracking device in front of the DIRC, but does\nnot coincide with it.}} at the radiator entrance) \nwhen it emits the $i^{th}$ photon in its ordered time sequence. \nThen we have~:\n\n\\begin{equation}\n\\left \\{\n\\begin{array}{ll}\nx_1=&x_0+ \\epsilon_1 \\\\[0.5cm]\nx_2=&x_0+ \\epsilon_1 + \\epsilon_2 \\\\[0.5cm]\nx_3=&x_0+ \\epsilon_1 + \\epsilon_2+ \\epsilon_3 \\\\[0.5cm]\n\\cdots & \\cdots\n\\end{array}\n\\right.\n\\label{app10}\n\\end{equation}\n\n\\noindent where the functions $\\epsilon_k$ are centered independent random variables~:\n$<\\epsilon_k >=0$ and $<\\epsilon_k \\epsilon_l> = \\sigma^2 \\delta_{kl}$ ($\\sigma^2=c^2 L\/(nX_0)$, \n$c$ being already defined). Let us also define $A=n \\sigma^2$, a quantity independent of $n$,\nwhich coincides with the square of the standard $\\theta_{rms}^{plane}$ of the Review of Particle Properties \\cite{PDG}.\nFor simplicity, we choose from now on $x_0=0$. Trivially, writing \n$<\\epsilon_k >=0$ for all $k$ does not mean that this holds for any given track, but\nthat it is fulfilled by the mean values computed from a large sample of $tracks$.
Indeed, \n$one$ set of $x_i$ corresponds to $one$ track and then to $one$ sampling of the $\\epsilon_k $'s.\n\nThe ordered time sequence is unknown~; nevertheless, we can define the multiple\nscattering variance of the sample by~:\n\n\\begin{equation}\nV_m= \\frac{1}{n}\\sum_k x_k^2\n\\label{app11}\n\\end{equation}\n\nUsing Rels. (\\ref{app10}), it is easy to find the expectation value of $V_m$~:\n\n\\begin{equation}\n<V_m>= \\frac{n+1}{n} \\frac{A}{2}\n\\label{app12}\n\\end{equation}\n\n\\noindent which tends to half the value in Rel. (\\ref{app5}), when $n \\rightarrow \\infty$,\nas expected. Here we have to split up the result into two parts (correlated and uncorrelated).\nThe correlated part is the variance of the center of gravity~:\n\n\\begin{equation}\nG= \\frac{1}{n}\\sum_k x_k\n\\label{app13}\n\\end{equation}\n\nIt is easy to compute it and get~:\n\n\\begin{equation}\n<G^2>= \\frac{(n+1)(2n+1)}{2n^2} \\frac{A}{3}\n\\label{app14}\n\\end{equation}\n\n\\noindent\nwhich tends to 2\/3 of $<V_m>$ when $n \\rightarrow \\infty$, in agreement with Rel. (\\ref{app8}).\nThe mean uncorrelated part (reduced variance of the sample) is the mean value of~:\n\n\\begin{equation}\nW_m= \\frac{1}{n}\\sum_k (x_k-G)^2\n\\label{app15}\n\\end{equation}\n\n\\noindent which is~: \n\n\\begin{equation}\n<W_m> = \\frac{n^2-1}{n^2}\\frac{A}{6}\n\\label{app16}\n\\end{equation}\n\n\\noindent and tends to 1\/3 of the variance when $n \\rightarrow \\infty$.\nTherefore, the integral averaging presented in the previous subsection gives\nindeed the large $n$ behaviour of the mean multiple scattering effects. The subleading\nterms are of the order $1\/n$ and small~; they can easily be read off the results in this subsection.\nThey show that the sharing of the variance between its uncorrelated and correlated parts departs from 1\/3~:~2\/3 by terms\nof the order $1\/(3n)$.
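As a cross--check, the finite--sample averages of Rels. (\\ref{app12}), (\\ref{app14}) and (\\ref{app16}) can be verified with a small Monte Carlo of the random walk of Rels. (\\ref{app10}). The sketch below (Python~; the values $A=1$, $n=5$ and the sample size are illustrative, and the code is not part of the reconstruction itself) samples many tracks and compares the sample averages with the closed forms~:

```python
import numpy as np

# Monte Carlo check of the finite-sample formulas of Section A2.
# Illustrative values: n photons after equal steps L/n, A = c^2 L / X0 set to 1.
rng = np.random.default_rng(0)
n, A, n_tracks = 5, 1.0, 200_000
sigma = np.sqrt(A / n)                        # step rms, sigma^2 = A/n

eps = rng.normal(0.0, sigma, size=(n_tracks, n))
x = np.cumsum(eps, axis=1)                    # x_k = eps_1 + ... + eps_k (x_0 = 0)

V_m = np.mean(x**2, axis=1)                   # sample variance, Rel. (A.11)
G = np.mean(x, axis=1)                        # center of gravity, Rel. (A.13)
W_m = np.mean((x - G[:, None])**2, axis=1)    # reduced variance, Rel. (A.15)

print(V_m.mean(), (n + 1) / n * A / 2)                              # Rel. (A.12)
print((G**2).mean(), (n + 1) * (2 * n + 1) / (2 * n**2) * A / 3)    # Rel. (A.14)
print(W_m.mean(), (n**2 - 1) / n**2 * A / 6)                        # Rel. (A.16)
```

At $n=5$, the $1\/n$ subleading terms are clearly visible in the comparison with the asymptotic values $A\/2$, $A\/3$ and $A\/6$.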
They may become important only for numbers of photons of the order $n \\leq 4 \\div 5$.\n\n\\subsection*{A3 Handling of Other Errors}\n\n\\indent \\indent There are qualitatively two kinds of errors we have to deal with\nin the DIRC problem. The first kind are mostly geometric errors due \nto the reconstruction of the photon direction~: finite size of the photomultipliers,\nfinite size of the bar entrance window. There is no difficulty of principle in their \nreconstruction and propagation from the water tank and photomultipliers back into \nthe quartz bar and we will not comment on this any further.\n\nThe second kind of errors (chromaticity) is due to the dependence of the\nrefraction indices (water and quartz as far as the DIRC is concerned)\nupon the photon wavelength. This is due to the fact that photons\nemitted by the Cerenkov effect do not have a definite wavelength~; the wavelength \nrather runs over a relatively wide spectrum. \n\nWhen refracting back the photon direction from\nwater (index $n_W(\\lambda)$) to quartz (index $n_Q(\\lambda)$), it is appropriate\nto consider the quartz index as the basic random variable~; then, the ratio $g$ of\nthese indices can be expressed as a function of $n_Q$.\nIn the BaBar setup, this ratio can be considered constant over\nthe range of wavelengths to which the photomultipliers are sensitive. \n\nThe quartz index is surely an appropriate variable because, thanks to Rel. (\\ref{basic1}),\nit directly affects the expected Cerenkov angle $\\theta_C$ and its error for each\nphoton. This is an important source of errors (it turns out to be equivalent\nto treating the index as a random variable with a standard deviation corresponding to\n$\\delta n\/ n \\simeq 0.6 \\%$).
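As an order of magnitude, the index spread can be propagated to the per--photon Cerenkov angle. The sketch below (Python) assumes the standard Cerenkov relation $\\cos{\\theta_C}=1\/(n\\beta)$ and illustrative values ($n_Q \\simeq 1.473$ for fused silica, $\\beta \\simeq 1$)~; it is meant as an estimate only, not as the actual DIRC error propagation~:

```python
import math

# Illustrative propagation of the chromatic index spread to the Cerenkov angle,
# using cos(theta_C) = 1/(n*beta). The index n = 1.473 (fused silica) and the
# 0.6% spread are assumptions taken from the text; beta ~ 1 for fast tracks.
def cerenkov_angle(n, beta=1.0):
    return math.acos(1.0 / (n * beta))

def chromatic_error(n, dn_over_n, beta=1.0):
    # differentiate cos(theta) = 1/(n*beta):  d(theta) = (dn/n) / tan(theta_C)
    theta = cerenkov_angle(n, beta)
    return dn_over_n / math.tan(theta)

n_quartz = 1.473
theta = cerenkov_angle(n_quartz)
dtheta = chromatic_error(n_quartz, 0.006)
print(theta * 1e3, dtheta * 1e3)   # both in mrad
```

One gets a per--photon chromatic error of a few mr, to be compared with the Cerenkov angle itself (several hundred mr).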
This error can be attributed either to the charged track\ndirection or to each photon direction separately, but, in the former case, one has to\ntake into account the fact that these photon errors are uncorrelated (this gives rise to corrections\nin the error magnitude once applied globally to the track direction). In\nthe course of the fit procedure, the $\\chi^2$ which allows the reconstruction of the circle radius\nnaturally takes this effect into account.\nMoreover, when fixing the Cerenkov angles to the values \nexpected from the charged track momentum, assuming the possible mass assignments\n(the consistency check referred to in the body of the text),\nthis automatically takes into account the spread due to the \nphoton wavelength spectrum.\n\n \n\\newpage\n\\renewcommand{\\theequation}{\\Alph {section} . \\arabic {equation}}\n\\setcounter{section}{2}\n\\setcounter{equation}{0}\n\\section*{Appendix B : Basics for a Circle Arc Reconstruction Algorithm}\n\n\\indent \\indent As noted in the main text, when fitting a circle having at hand points (subject\nto measurement errors) spread out onto a relatively small arc (typically 60$^{\\circ}$),\nand if the error on each point is non-negligible compared to the radius of the circle{\\footnote{\nIn this respect, a ratio of 2\\%, typical for the BaBar DIRC, is large when points are on a circle \narc of about 60$^{\\circ}$. For a track fit in a drift chamber, 60$^{\\circ}$ is relatively large as\nthe ratio of error to radius is there of about 10$^{-3}$.}}, the fit quality becomes poor. \n\nIndeed, one can attempt a circle fit using standard algorithms \\cite{CIRCLE1,CIRCLE2,CIRCLE3}.\nHowever, fitting the 3 circle parameters under the typical conditions sketched above, using \n{\\it only} the points on the circle leads to unexpectedly bad results.
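This failure mode is easy to reproduce numerically. The sketch below (Python) draws noisy points on a 60$^{\\circ}$ arc of a unit circle and fits them with a simple algebraic circle fit (a Kasa--type least squares solution of $x^2+y^2=2ax+2by+c$)~; the 2\\% point errors and 20 points per arc are illustrative values only, not the algorithm used in the paper~:

```python
import numpy as np

# Sketch of the short-arc bias: algebraic (Kasa-type) circle fit on a 60 deg
# arc of a unit circle with 2% point errors (illustrative values).
rng = np.random.default_rng(1)

def kasa_fit(x, y):
    # Solve x^2 + y^2 = 2ax + 2by + c in the least-squares sense;
    # then R^2 = c + a^2 + b^2.
    M = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(M, x**2 + y**2, rcond=None)[0]
    return a, b, np.sqrt(c + a**2 + b**2)

radii = []
for _ in range(2000):
    phi = rng.uniform(-np.pi / 6, np.pi / 6, size=20)   # 60 deg arc, R = 1
    x = np.cos(phi) + rng.normal(0, 0.02, size=20)
    y = np.sin(phi) + rng.normal(0, 0.02, size=20)
    radii.append(kasa_fit(x, y)[2])

print(np.mean(radii))   # averaged fitted radius over many arcs
```

The average fitted radius comes out systematically below the true value of 1.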
The center coordinate\nin the direction associated with the two outermost ``experimental'' points\nis relatively good~; however, the center coordinate in the direction toward the arc\nis significantly and systematically displaced towards the arc\n(compared with its expected value) and consequently \nthe fitted radius is systematically underestimated, sometimes badly.\n\nOne way to circumvent this problem is to introduce additional information about the circle center \n(the charged track direction as measured by a device other than the DIRC, for instance a drift chamber).\nIn this case, the fit algorithm works much better (as illustrated in this paper) in the sense that the \npull is found with the expected unit standard deviation and a negligible residual methodological \nbias{\\footnote{Practically, the residual methodological bias is of the order 10$^{-4}$ of the\nradius ({\\it i.e.} about 0.1 to 0.2 mr for the Cerenkov angle) and then it is\noverwhelmed by the bias originating from fake photons, which cannot be completely removed \nby any realistic procedure.}}.\n \n\n\\subsection*{B1 A Naive Estimate of the Error Matrix of the Charged Track}\n\n\\indent \\indent The vector $\\delta \\vec{q}$ defining the error on the charged track direction \n$\\vec{q}$ is given by Rels. (\\ref{app1}), (\\ref{app2}) and (\\ref{app3}). It is perpendicular to the \ncharged track direction, and it \nprojects out onto the equatorial plane (through the stereographic projection) as $\\delta \\vec{q}\/2$.\nDenoting $a$ and $b$ the coordinates of the circle center in the equatorial plane\nin the directions parallel resp.
to $\\vec{v}$ and $\\vec{w}$ (both orthogonal to $\\vec{q}$), \nwe can define the error functions on the circle center by~:\n\n\n\\begin{equation}\n\\left \\{\n\\begin{array}{lll}\n\\delta a = \\displaystyle \\frac{1}{2} \\delta \\theta + \n\\frac{1}{2} c \\sqrt{\\frac{u}{X_0}} \\varepsilon_1(u)\\\\[0.5cm]\n\\delta b=\\displaystyle \\frac{1}{2} \\sin{\\theta} \\delta \\phi +\n \\frac{1}{2} c \\sqrt{\\frac{u}{X_0}} \\varepsilon_2(u)\n\\end{array}\n\\right .\n\\label{appb1}\n\\end{equation}\n\\noindent where $u$ is the path length followed by the charged track inside the radiator.\nNo average over the path length has to be performed in this naive approach. Proceeding this way,\nthe error matrix $\\Sigma$ for the charged track can be computed by taking the expectation value of\n appropriate second order terms. One can then choose, as a rule of thumb, to approximate the\nerror functions above by considering the standard deviations at $u=L\/2$, {\\it i.e.} at half the full\npath of the charged track inside the radiator~; in this case, we have~:\n\n\\begin{equation}\n\\rm{rough ~estimates~:~~}\n\\left \\{\n\\begin{array}{lll}\n<[\\delta a]^2>= \\displaystyle \\frac{1}{4} \\left[ <[\\delta \\theta]^2>\n+\\frac{1}{2}c^2\\frac{L}{X_0} \\right]\\\\[0.5cm]\n<[\\delta b]^2>= \\displaystyle \\frac{1}{4} \\left[ \\sin^2{\\theta}<[\\delta \\phi]^2>\n+\\frac{1}{2}c^2\\frac{L}{X_0} \\right]\\\\[0.5cm]\n<\\delta a \\delta b>= \\displaystyle \\frac{1}{4} \\sin{\\theta} <\\delta \\theta \\delta \\phi>\n\\end{array}\n\\right.\n\\label{appb2}\n\\end{equation} \n\n\\noindent where the quantities $<[\\delta \\theta]^2>$, $<[\\delta \\phi]^2>$ and\n$<\\delta \\theta \\delta \\phi>$ are the elements of the error matrix $\\Sigma_0$ \nprovided by the reconstruction procedure \nfrom the tracking device in front of the DIRC bar.
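For orientation, these rough estimates translate directly into code. The sketch below (Python) evaluates Rels. (\\ref{appb2}) for illustrative inputs (a 1 mr tracker resolution, $p=2$ GeV\/c, $\\beta \\simeq 1$ and $L\/X_0 \\simeq 0.14$ are assumptions, not DIRC constants)~:

```python
import numpy as np

# Sketch of the "rough estimate" error matrix of Rels. (B.1)-(B.2), with the
# multiple scattering term taken at half path (u = L/2). All numbers below are
# illustrative: p = 2 GeV/c, beta ~ 1, L/X0 ~ 0.14, 1 mrad tracker resolution.
def naive_center_errors(sig_th2, sig_ph2, cov_thph, theta, p, beta, L_over_X0):
    c = 13.6e-3 / (beta * p)          # multiple scattering constant, Rel. (A.3)
    ms = 0.5 * c**2 * L_over_X0       # per-component MS variance at u = L/2
    var_a = 0.25 * (sig_th2 + ms)
    var_b = 0.25 * (np.sin(theta)**2 * sig_ph2 + ms)
    cov_ab = 0.25 * np.sin(theta) * cov_thph
    return np.array([[var_a, cov_ab], [cov_ab, var_b]])

Sigma = naive_center_errors(sig_th2=1e-6, sig_ph2=1e-6, cov_thph=0.0,
                            theta=np.pi / 3, p=2.0, beta=1.0, L_over_X0=0.14)
print(Sigma)
```

The multiple scattering term dominates the tracker resolution at this momentum, which is the regime where the refined treatment of the next subsections matters most.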
The question now is to get a motivated \n estimate of the full error matrix $\\Sigma$ of the charged track direction \nby taking into account theoretical {\\it a priori} information on multiple scattering.\n\n\n\\subsection* {B2 First Approach to the Charged Track Error Problem}\n\n\\indent \\indent Let us consider only the $a$ coordinate of the circle center,\nor restrict our problem to a one--dimensional aspect. Solving the 2--dimensional \ncase ($a,b$) then follows straightforwardly.\n\nActually, what is of relevance for our problem is the mean value and error \non the directions of charged tracks associated with the emitted {\\it and} detected photons. \nThere is some difference with the errors at half path inside the radiator, as\nwill be shown below.\n \nFor a charged track entering the radiator medium, the tracking device provides\na measurement of the direction with its error matrix ($\\Sigma_0$ referred to above).\nLet us associate with this measurement the origin in the equatorial plane where\nall directions are projected out. \n\n When the charged track emits photon $i$, it has a given (even if not exactly known)\ndirection~; this direction varies from photon to photon\nsimply because of the multiple scattering the charged track undergoes.\nThe intersection of this direction with the unit sphere has as image in the equatorial\nplane the point of coordinate $a_i$ along the direction $\\vec{v}$ (see Section A1)~;\nwhat is of relevance for the Cerenkov angle estimate is clearly\nthe mean value and standard deviation of the set $\\{a_i~;~i=1, \\cdots ~n\\}$.\n\nOn the other hand, even in perfect cases (no multiple scattering) the actual direction \nof a given track is obviously not the mean value (defined here as the central value given \nby the drift chamber reconstruction). Fitting the Cerenkov cone possibly allows one to improve this \nlast estimate using a (large) amount of additional information (the photons), going thus\ncloser to its {\\it actual} value.
The effect of multiple scattering is that\nthe actual center seen from each photon is different and randomly distributed~;\nthen we cannot access $one$ actual circle center, but only define a mean value\nof the set of actual centers.\n\nMoreover, while the photons may allow one to improve the average estimate of the charged \ntrack direction, the errors can be estimated from the (underlying) \nparent random distribution.\n \n\n\nTherefore, we can write~:\n\n\\begin{equation}\na_i=a_0+\\delta a_i~~~, ~~~i=1, \\cdots ~n\n\\label{appb3}\n\\end{equation}\n\n\\noindent where $a_0$ denotes here the expectation value of the \ncenter of gravity of the $\\{a_i\\}$ set{\\footnote {\nActually, this amounts to defining the $a_i$'s by relations\nlike Rels. (\\ref{app10}), with an additional offset $a_0$ in place of $x_0$. \n}} and\nwhere the random error functions $\\delta a_i$ given by Rel. (\\ref{appb1}) \nwith different paths $u_i$, are unbiased ($<\\delta a_i>=0$).\nIn connection with what was said just above, it should be stressed \nthat the expectation value $a_0$ for all measured photons associated\nwith $one$ track is not necessarily zero, but the set of all $\\{a_0\\}$ (each associated\nwith a given track) is surely distributed around zero, with known deviations. \nThen, what is of relevance in our problem is\nthe value of $a_0$ and its standard deviation on a track by track basis. \n\nIn order to get the mean value and the error of the direction set $\\{a_i\\}$, one can\nminimize the function (we define a vector $g$ of components $g_i=1, ~\\forall i=1, \\cdots n $\nin order to match indices)~:\n\n\\begin{equation}\nF(a)=(ag_i-a_i) V^{-1}_{ij}(ag_j- a_j)~~~~,~~{\\rm{with~}} ~ ~V_{ij}=<\\delta a_i \\delta a_j>\n\\label{appb4}\n\\end{equation} \n\n\\noindent where summation over repeated photon indices is understood.
The zero of $dF(a)\/da$\ngives this minimum (using also Expression (\\ref{appb3}))~:\n\n\\begin{equation}\n\\displaystyle a= \\frac{g_i V^{-1}_{ij} a_j}{g_i g_jV^{-1}_{ij}}\n=a_0+\\frac{g_i V^{-1}_{ij} \\delta a_j}{g_i g_jV^{-1}_{ij}}\n\\label{appb5}\n\\end{equation}\n\n\nThis expression gives the usual result for the estimate $a$ from a set\nof measurements $\\{a_i\\}$ in the least squares approach. Its expectation\nvalue{\\footnote{As all $a_i$ have been fixed at zero (the measured central value) for the track considered,\nthis corresponds to taking as sampling $\\delta a_i=-a_0 g_i$.}} \n$<a>=a_0$ is unbiased and its error function $ \\delta a$, which\ncan be read off Rel. (\\ref{appb5}), allows one to compute its standard deviation \n$\\sigma_a$ ($\\sigma_a^2=<[\\delta a]^2>$)~:\n\n\\begin{equation}\n\\displaystyle \\frac{1}{\\sigma_a^2}=g_i g_jV^{-1}_{ij}\n\\label{appb6}\n\\end{equation} \n\nWhen there is no correlation ($ V_{ij}\\simeq \\delta_{ij}$), this expression gives \nthe usual result ($\\sigma_a^{-2}=\\sum_i \\sigma_{a_i}^{-2}$). \n\n\nIn general $\\delta a_i$ and $\\delta a_j$ are given by expressions like in Rels. (\\ref{appb1})\nwith two different path lengths $u_i$ and $u_j$, both unknown. Correspondingly\nthe quantity $V_{ij}=<\\delta a_i(u_i) \\delta a_j(u_j)>$ can only be estimated\nby performing the average as presented in Section A1 of Appendix A. This gives~:\n\n\\begin{equation}\nV_{ij}=<<\\delta a_i(u_i) \\delta a_j(u_j)>>_{u_iu_j}=\\left [ B E + \\frac{1}{24}A I \\right]_{ij}\n\\label{appb7}\n\\end{equation} \n\n\\noindent where $E$ is a rank 1 matrix \nsuch that each $E_{ij}=1$~; here $B=[<[\\delta \\theta]^2>+1\/3A]\/4$ and $A=c^2L\/X_0$.\nIn order to compute $\\sigma_a^2$ using Rel. (\\ref{appb6}), we need to invert $V$\njust defined.
Indeed, using $E^2=nE$, it is easy to prove that~:\n\n\\begin{equation}\nV= \\lambda I + \\mu E \\Longleftrightarrow V^{-1}= \\frac{1}{\\lambda}[I- \\frac{\\mu}{n\\mu+\\lambda} E]\n\\label{appb8}\n\\end{equation}\n\n\\noindent \nwhere $n=$ dim $I=$ dim $E$ is also the number of points (photons).\nUsing this formula with the matrix in Rel. (\\ref{appb7}), we easily find~:\n\n\\begin{equation}\n\\sigma_a^2= \\frac{1}{4}[<[\\delta \\theta]^2> +\\frac{1}{3}A+\\frac{1}{6n}A] \n\\label{appb9} \n\\end{equation}\n\nThis relation could be expected beforehand. It shows that the correlated part of the error\naffecting each $a_i$ (2\/3 of the variance associated with multiple scattering \nas shown by Rel. (\\ref{app8}), plus the variance provided by the device in front of the \nDIRC bar) is transferred to $a$ without changes, while the uncorrelated part \n(1\/3 of the multiple scattering contribution to the full variance) scales with $n$.\nIf we had neglected the terms generated by the multiple scattering effects\nin $V_{ij}$ (for $i \\ne j$), we would have got instead $\\sigma_a^2=[<[\\delta \\theta]^2>\n+A\/(2n)]\/4$ which can be considerably smaller at low track momentum.\n\n\nActually, the finite size sample corrections (see Section A2) may be accounted for. \nConcerning the uncorrelated part, this would amount to $1\/n^2$ corrections and can be \nneglected~; the correlated part gives however a $1\/n$ contribution which corrects\nthe term $A\/6n$ in Rel. (\\ref{appb9}) by a factor of 4. Therefore, \nan improved expression for $\\sigma_a^2$ taking into account all $1\/n$ corrections is~: \n\n\n\\begin{equation}\n\\sigma_a^2 \\displaystyle\n\\left|_{n} \\right.= \\frac{1}{4}[<[\\delta \\theta]^2> +\\frac{1}{3}A+\\frac{2}{3n}A] \n\\label{appb9a} \n\\end{equation}\n\nIt is also interesting to compare Rels. (\\ref{appb9}) and (\\ref{appb9a}) with \nthe first Rel.
(\\ref{appb2}).\nIndeed, one clearly sees that the variance for $a$ is smaller than the variance\nat mid path inside the quartz as soon as the number of photons is greater than 1\n(large $n$ limit) or 4 (finite $n$ corrected). Then, in all practical applications,\nRels. (\\ref{appb2}) do not reflect the sharing of the variance\nbetween correlated and uncorrelated parts and lead to an overestimate of the\nmultiple scattering contribution to the center errors.\n\n\n\\subsection*{B3 The Full Charged Track Error Problem}\n\n\\indent \\indent We have just treated (as a one--dimensional problem) the \ndetermination of the error on the $a$ coordinate of the circle center,\ntaking into account the number $n$ of emitted photons. We have seen that\nthe uncorrelated part of its variance decreases as $1\/n$ while the\ncorrelated part is unaffected. However, our actual problem is two--dimensional\nand because of correlation terms like $<\\delta a_i \\delta b_j>$, it is not equivalent\nto the conjunction of two one--dimensional problems. \n\nIn order to complete the treatment, let us display the following identity\nfor the inverse $V^{-1}$ of a symmetric matrix $V$\nof rank and dimension $n$~:\n\n\\begin{equation}\nV^{-1}=\n\\left (\n\\begin{array}{lll}\nS_1 & \\widetilde{C}\\\\[0.5cm]\nC & S_2\n\\end{array}\n\\right )^{-1}\n=\n\\left (\n\\begin{array}{lll}\n(S_1-\\widetilde{C}S_2^{-1}C)^{-1} &- (S_1-\\widetilde{C}S_2^{-1}C)^{-1}\\widetilde{C}S_2^{-1}\\\\[0.5cm]\n-S_2^{-1}C (S_1-\\widetilde{C}S_2^{-1}C)^{-1} & S_2^{-1}+S_2^{-1}C (S_1-\\widetilde{C}S_2^{-1}C)^{-1}\n\\widetilde{C}S_2^{-1}\n\\end{array}\n\\right )\n\\label{appb10}\n\\end{equation}\n\n\\noindent where the submatrices $S_1$ and $S_2$ are square matrices of dimensions\nresp. $k$ and $n-k$. In our case, $S_{1~ij}=<\\delta a_i \\delta a_j>$, $S_{2~ij}=<\\delta b_i \\delta b_j>$ and\n$C_{ij}=<\\delta a_i \\delta b_j>$, where the error functions are given by expressions\nlike Rel. (\\ref{appb1}) with appropriate path lengths.
It is easy to compute \nthese (sub--)matrices~:\n\n\\begin{equation}\n\\left \\{\n\\begin{array}{ll}\n\\displaystyle S_1=B_{\\theta} ~E + \\frac{1}{24}A~I &\n~~~,~~~\\displaystyle B_{\\theta}=\\frac{1}{4} [<[\\delta \\theta]^2> + \\frac{1}{3} A] \\\\[0.5cm]\n\\displaystyle S_2=B_{\\phi} ~E + \\frac{1}{24}A~I &\n~~~,~~~\\displaystyle B_{\\phi}=\\frac{1}{4} [\\sin^2{\\theta}<[\\delta \\phi]^2> + \\frac{1}{3} A ]\\\\[0.5cm]\n\\displaystyle C=B_{\\theta \\phi} ~E &\n~~~,~~~\\displaystyle B_{\\theta \\phi}=\\frac{1}{4} \\sin{\\theta}<[\\delta \\theta \\delta \\phi]> \n\\end{array}\n\\right.\n\\label{appb11}\n\\end{equation}\n\n\\noindent where the rank 1 matrix $E$ has already been defined and, as before, \n$A=c^2 L\/X_0$. We clearly have $C=\\widetilde{C}$ and all submatrices here are $n \\times n$.\nRel. (\\ref{appb10}) is useful in our case because these three submatrices\neach have a special form. \n\nWe can now define two sets of relations analogous to (\\ref{appb3}) for the $a_i$ and $b_i$,\nthereby introducing $a_0$ and $b_0$,\nand the vector of dimension $2n$~: $(\\cdots, ag_i-a_i,\\cdots,bg_j-b_j,\\cdots)$. Then we can\ndefine a function $\\chi_c^2=F(a,b)$ in a way analogous to Rel. (\\ref{appb3}),\nusing this vector and the matrix of Rel. (\\ref{appb10}). Proceeding as previously,\nit is easy to find that the solution which minimizes $F(a,b)$ is a pair\nof random variables $(a,b)$ with expectation values $(a_0,b_0)$ (to be fit), whose inverse covariance matrix\nhas entries defined by the sums of the elements of each of the four submatrices in Rel. (\\ref{appb10}).\nThese sums can be easily computed knowing the submatrices $S_1$, $S_2$ and $C$ (Rels. (\\ref{appb11})),\nby means of Rels. 
(\\ref{appb10}) and (\\ref{appb8}).\n\nAfter tedious algebra, it follows from there that the center fixing term can be written~:\n\n\\begin{equation}\n\\chi^2_C=\\left ( a_0,b_0 \\right )\n\\left (\n\\begin{array}{ll}\n\\displaystyle \\frac{1}{4}[<[\\delta \\theta]^2> + \\frac{1}{3} A + \\frac{1}{6n} A] &\n\\displaystyle ~~~~~~~\\frac{1}{4} \\sin{\\theta} <\\delta \\theta \\delta \\phi> \\\\[0.5cm]\n\\displaystyle ~~~~~~~\\frac{1}{4} \\sin{\\theta} <\\delta \\theta \\delta \\phi> &\n\\displaystyle \\frac{1}{4}[\\sin^2{\\theta} <[\\delta \\phi]^2> + \\frac{1}{3} A + \\frac{1}{6n} A] \\\\[0.5cm] \n\\end{array}\n\\right )^{-1}\n\\left (\n\\begin{array}{l}\na_0 \\\\[0.5cm]\nb_0\n\\end{array}\n\\right )\n\\label{appb12}\n\\end{equation}\n\nIn writing this expression, we have used $a_0$ and $b_0$ instead of\n$a_0-a_{measured}$ and $b_0-b_{measured}$, taking into account that the corresponding \nmeasured values are zero by definition.\nThis relation defines the error covariance matrix of the charged track direction \nassociated with $n$ detected photons~; it differs from the matrix $\\Sigma_0$\nin that it takes into account the multiple scattering undergone by the charged track inside the\nradiator. It can simply be written $\\Sigma=\\Sigma_0+[A\/3+A\/(6n)]I$.\n\nOne can apply the finite size sample corrections found in Section A2,\nas we did in the previous subsection. This \namounts to changing the expression for $\\Sigma$ to $\\Sigma=\\Sigma_0+[A\/3+2A\/(3n)]I$.\n\n\nFinally, it should be noted that the $a_0b_0$ covariance\nterm is not affected\nby multiple scattering effects~; this could have been inferred from Rels. (\\ref{appb1})\n(and explains the covariance term in Rels. (\\ref{appb2}), as the expectation value\n$<\\varepsilon_1(u) \\varepsilon_2(v)>$ is zero for any values of $u$ and $v$). \nFrom now on, for clarity, we name the parameters to be fit $a$ and $b$\ninstead of $a_0$ and $b_0$. 
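As a numerical cross-check of this algebra (ours, not part of the paper), one can verify that summing the elements of the inverse of $S_1=B_\theta\,E+\frac{1}{24}A\,I$ (Rel. (\ref{appb11})) reproduces the diagonal term $\frac{1}{4}[<[\delta\theta]^2>+A/3+A/(6n)]$ appearing above; the numerical values are illustrative.

```python
import numpy as np

# With V = B_theta*E + (A/24)*I (the matrix S1 of Rel. (appb11)), the variance
# of the fitted center coordinate is 1 / sum_ij (V^{-1})_ij; it should equal
# the closed form (1/4)[<dtheta^2> + A/3 + A/(6n)].
n = 9
dtheta2 = 0.5   # illustrative value of <[delta theta]^2>
A = 0.12        # illustrative value of c^2 L / X_0
B_theta = 0.25 * (dtheta2 + A / 3.0)
V = B_theta * np.ones((n, n)) + (A / 24.0) * np.eye(n)
sigma_a2_matrix = 1.0 / np.linalg.inv(V).sum()
sigma_a2_closed = 0.25 * (dtheta2 + A / 3.0 + A / (6.0 * n))
assert np.allclose(sigma_a2_matrix, sigma_a2_closed)
```

The agreement illustrates how the correlated part ($B_\theta$) transfers unchanged while the uncorrelated diagonal part is divided by $n$.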
\n\nIt should be noted, however, that multiple scattering effects imply that\nthere are as many centers (charged track directions) as photons. Therefore, \nany fit procedure can only provide a determination of the\n$mean$ center coordinates for each track considered, as an approximation\nof the actual center value at the DIRC bar entrance. \n\n\n\\subsection*{B4 The Minimization Function and the Circle Parameters}\n\n\\indent \\indent Given a set of points, each known with some error, and assuming they\nshould be on a circle arc, the problem we state here is to define a function\n$F(a,b,R)$, the minimum of which provides the circle parameters. A usual approach\nis to choose a $\\chi^2$ as the function $F$. Denoting by $R$ the circle radius\nand by $(a,b)$ the center coordinates, the function is~:\n\n\\begin{equation}\n\\chi^2_n=\\sum_{i,j=1,n}(d_i-Rg_i) V^{-1}_{ij}(d_j-Rg_j) \n\\label{appb13}\n\\end{equation}\n\n\\noindent where $d_i=\\sqrt{(x_i-a)^2+(y_i-b)^2}$ is the distance of each point $(x_i,y_i)$\nto the fit center, and $g_i=1$ defines a constant vector $g$ introduced only in order to\nhave a correct matching of repeated indices. A priori, the covariance matrix \nis defined by $ V_{ij}=<\\delta d_i \\delta d_j>$. Usually, the error functions\n$\\delta d_i$ are obtained by differentiating the expression for $d_i$~:\n\n\n\\begin{equation}\n\\delta d_i=\\frac{(x_i-a)\\delta x_i+ (y_i-b)\\delta y_i}{d_i}\n\\label{appb14}\n\\end{equation}\n\n\\noindent where $\\delta x_i$ and $\\delta y_i$ are the error functions affecting\nthe photon measurements $x_i$ and $y_i$.\n\nIn our case, the points are spread over a small arc\nand\/or the relative size of the errors compared with the circle radius is large~;\nthen, one has to introduce additional information, for reasons already mentioned at several\nplaces in the body of this paper. The most\nobvious additional information available in our case refers to the circle center. 
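To fix ideas, here is a minimal sketch (ours, not the actual reconstruction code) of the $\chi^2$ of Rel. (\ref{appb13}), evaluated for a toy set of points on a small arc with a toy diagonal covariance matrix:

```python
import numpy as np

# chi2 = (d - R g)^T V^{-1} (d - R g), with d_i the distance of point
# (x_i, y_i) to the trial center (a, b) and g_i = 1 (Rel. (appb13)).
def chi2_circle(params, x, y, V_inv):
    a, b, R = params
    d = np.hypot(x - a, y - b)
    r = d - R                      # residuals d_i - R g_i
    return r @ V_inv @ r

# Toy data: points exactly on a circle of radius 2 centered at (0.5, -0.3),
# populating only a small arc, as in the situation discussed in the text.
phi = np.linspace(0.0, np.pi / 3.0, 10)
x = 0.5 + 2.0 * np.cos(phi)
y = -0.3 + 2.0 * np.sin(phi)
V_inv = np.linalg.inv(0.01 * np.eye(len(x)))   # toy uncorrelated covariance
assert np.isclose(chi2_circle((0.5, -0.3, 2.0), x, y, V_inv), 0.0)
```

In the real problem $V$ is the full correlated matrix discussed below, and the center fixing term $\chi^2_C$ is added before minimisation.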
\n\nWe have as {\\it a priori} information the measurement provided by the tracking device located in front\nof the DIRC bar entrance~; this is summarized by a central value and a covariance error matrix (referred to\nanywhere above as $\\Sigma_0$). Obviously, this defines a distribution\n(normal, assuming we are lucky) but not the actual location of the center \nindeed associated with the track under consideration.\n\nThe question is now~: how to introduce the approximate knowledge\n$(0,0)$ of the given charged track direction and keep anyway its actual center coordinates\n$(a,b)$ to be fit? In the previous case (no charged track information), the measured \nquantities could be written~: \n\n\\begin{equation}\nx_i=x^0_i+a+\\delta x_i ~~,~~ y_i=y^0_i+b+\\delta y_i\n\\label{appc15a}\n\\end{equation}\n\n\\noindent\nwhere $x^0_i=R \\cos{\\varphi_i}$ and $y^0_i=R \\sin{\\varphi_i}$. Here \n$R$ is the true radius, $\\varphi_i$ is the true azimuth on the circle and\n$(a,b)$ the true center. Then the true value for the $x_i$ and $y_i$\nare respectively $x^0_i+a$ and $y^0_i+b$. Fixing the center at the ``measured''\nvalue $(a_i,b_i)$ (actually $(0,0)$), turns out to rewrite these equations~:\n\n\\begin{equation}\nx_i=x^0_i+a_i+\\delta x_i ~~,~~ y_i=y^0_i+b_i+\\delta y_i\n\\label{appc15b}\n\\end{equation}\n\nHowever, for a given well defined track, we can write~:\n\n\\begin{equation}\na_i=a+\\delta a_i~~~,~~~~ b_i=b+\\delta b_i ~~~~,~~~ \\forall i=1, \\cdots ~n\n\\label{appb15}\n\\end{equation}\n \n\\noindent where $\\delta a_i$ and $\\delta b_i$ are the error functions\nwhich take into account the errors at the entrance of the DIRC bar $and$ the multiple \nscattering undergone by the charged track up to the point where it\nemits photon $i$. In this way $(a,b)$ is the $actual$ center \nwhen it exists (no multiple scattering), otherwise it can be formally\ndefined as the mean value of the quantity corresponding to\n$G$ in Rel. 
(\\ref{app13}), which is then non--zero on a track by track basis. \nThen Eqs. (\\ref{appc15b}) can be rewritten~:\n\n\\begin{equation}\nx_i=x^0_i+a+ \\delta a_i+\\delta x_i ~~,~~ y_i=y^0_i+b+ \\delta b_i+\\delta y_i\n\\label{appc15c}\n\\end{equation}\n \nThe difference between the case when the center is left free and when it \nis constrained is transferred to the error functions, which become $\\delta a_i+\\delta x_i$ and \n$\\delta b_i+\\delta y_i$ instead of, respectively, $\\delta x_i$ and $\\delta y_i$.\nConceptually, the difference comes from what is submitted to the fit in the two cases.\nIn the former case (free center), the measured quantities submitted to the fit are\nthe measured points $(x_i,y_i)$, while in the latter case (constrained center),\nthe quantities\nsubmitted to the fit are actually $(x_i-a_{measurement},y_i-b_{measurement})$.\nIn the former case, the center $(a,b)$ is fully fit~; in the latter case,\none fits the departure of the actual center from the measured point $(0,0)$.\n\nThen, Rel. (\\ref{appb14}) is still valid but should be rewritten~:\n\n\n\\begin{equation}\n\\delta d_i = \\frac{(x_i-a)(\\delta x_i+\\delta a_i)+(y_i-b)(\\delta y_i+\\delta b_i)}{d_i}\n\\label{appb16}\n\\end{equation}\n\n\\noindent so that $\\delta x_i$ and $\\delta y_i$ keep their original meaning\n(errors due to the measurement of the photon direction, without reference to the charged\ntrack). \nIf, moreover, we choose the origin in the plane so that it coincides\nwith the image of the track direction provided by the tracking device, Rel. (\\ref{appb16})\ncan be approximated by~: \n\n\\begin{equation}\n\\delta d_i = \\frac{x_i(\\delta x_i+\\delta a_i)+y_i(\\delta y_i+\\delta b_i)}{d'_i}\n\\label{appb17}\n\\end{equation}\n\n\\noindent where $d'_i$ in the denominator is $d'_i=\\sqrt{x^2_i+y^2_i}$.\n$\\delta d_i$ in Rels. 
(\\ref{appb16}) and (\\ref{appb14}) differ only\nat first order and only by the differentials $\\delta a_i$ and $\\delta b_i$.\n\nTherefore, the quantity $\\chi^2_n$ (Rel. (\\ref{appb13})) can be used with\n$V_{ij}=<\\delta d_i \\delta d_j>$, where the error functions are given by\nRel. (\\ref{appb17}). As they depend on the path length followed up to the\nemission of each photon, this expression has to be approximated by its mean value\n$V_{ij}=<<\\delta d_i \\delta d_j>>_{u_iu_j}$, easily computable\nusing all the information given above.\n\nIn this way, we are in a position to define $\\chi^2_n$ as a function of \n$(a,b)$ and $R$. It remains to account for forcing the coordinates \n$(a,b)$ to remain in the neighborhood of the measured point $(0,0)$~;\nthis is achieved{\\footnote{One may ask whether Rel. (\\ref{appb18})\nactually exhausts the problem. Indeed, one may be tempted to introduce\ninto the $\\chi^2$ to be minimized terms coupling $a$ and $b$ with the\n$d_i-Rg_i$~; this is not studied here. In any case, such additional terms\nwould surely degrade the algorithm speed. From the results already at hand, \none may conclude that their effect is small for track momenta above 500 MeV\/c.\n}} \nby defining the following $\\chi^2$~:\n\n\\begin{equation}\n\\chi^2 = \\chi^2_n + \\chi^2_C\n\\label{appb18}\n\\end{equation}\n\n\\noindent using Rels. (\\ref{appb13}) and (\\ref{appb12}), where \nthe influence of multiple scattering is already taken\ninto account, including correlations. While $\\chi^2_n$\nhas $n-3$ degrees of freedom, the $\\chi^2$ just defined\nhas $n-1$ degrees of freedom.\n\nOne may wonder about the influence of the additional\nterm $\\chi^2_C$ when minimising, if the circle arc happens to be large enough\nthat such a term is actually useless. We have checked this case numerically\nusing simple simulations, where the populated arc length \nand the number of \"measured\" points could be varied at will. 
We have \nfound that for large circle arcs (about 180$^{\\circ}$ or more)\nthe additional term $\\chi^2_C$ did not prevent obtaining the same solution as\nwhen it is removed, with completely negligible fluctuations.\n\n\nAnother possibility could be considered. It amounts to giving up \nthe center fit, accepting the measured value $(0,0)$ as optimal.\nUsing the notation $d'_i \\equiv d_i(a=0,b=0)$, this amounts\nto estimating the radius by~:\n\n\\begin{equation}\nR=\\displaystyle \\sigma_R^2 \\sum_{ij} g_i V^{-1}_{ij} d'_j ~~~, \n~~~\\frac{1}{\\sigma_R^2}=\\sum_{ij} g_i V^{-1}_{ij} g_j\n\\label{challenge}\n\\end{equation}\n\nThis gives results close in quality to minimising Eq. (\\ref{appb18})\nif the fixed center $(a,b)=(0,0)$ is correctly measured. If, however, there\nis any bias in this estimate, it may become much worse. Indeed, in the case\nwhere the measured center is slightly biased, minimising Eq. (\\ref{appb18})\nis safe, as one recovers the correct center location even when starting from a \nwrong center~; instead, using Eq. (\\ref{challenge}), the estimate of \nthe radius suffers pile--up effects which add up to sizeable distortions. \nFor instance, assuming a 2.8 mr systematic error on the charged track direction\n(known with a statistical accuracy of 2 mr rms),\nthe radius pull gets an rms of 1.12, instead of 0.98\nunder the same conditions for the procedure we recommend.\nThis difference in behaviour worsens in the presence of background \n(1.4 compared to 1.2) or if the statistical accuracy on the charged \ntrack direction worsens. \n\n\\subsection*{B5 Correlations among Photons in the Equatorial Plane}\n\n\\indent \\indent \n\nThe radius $\\tan{\\theta_C\/2}$ of the circle is approximated by each photon's\ndistance to the common center ($d'_i =\\sqrt{x_i^2+y_i^2}$~). Each such estimate\nis affected by an error function which is given by Rel. 
(\\ref{appb17}),\n with the measured center set at the origin in the equatorial plane.\nFrom the expressions given above, we can then deduce~:\n\n\\begin{equation}\n\\left \\{\n\\begin{array}{llll}\n<[\\delta d_i]^2>&=&\\displaystyle \\frac{1}{d^{'2}_i}\n\\left[\nx_i^2[<[\\delta x_i]^2> +\\frac{1}{4}<[\\delta \\theta]^2>]+\ny_i^2[<[\\delta y_i]^2> +\\frac{1}{4}\\sin^2{\\theta}<[\\delta \\phi]^2>]+ \\right.\\\\[0.5cm]\n~&~&\\left. \\displaystyle 2 x_iy_i[<\\delta x_i\\delta y_i>+\\frac{1}{4}\\sin{\\theta}<\\delta \\theta \\delta \\phi>]\\right]\n+ \\displaystyle \\frac{1}{8}c^2 \\frac{L}{X_0}\n\\\\[0.5cm]\n<\\delta d_i \\delta d_j>&=&\\displaystyle \\frac{1}{4d'_id'_j}\n\\left[\nx_ix_j<[\\delta \\theta]^2>+y_iy_j\\sin^2{\\theta}<[\\delta \\phi]^2>+ \\right.\\\\[0.5cm]\n~&~& \\left. \\displaystyle (x_iy_j+x_jy_i) \\sin{\\theta} <\\delta \\theta\\delta \\phi>\\right] \n+ \\displaystyle \\frac{(x_ix_j+y_iy_j)}{d'_id'_j}\\frac{1}{12}c^2 \\frac{L}{X_0}\n\\end{array}\n\\right.\n\\label{appb19}\n\\end{equation}\n\n\\noindent where, for ease of reading, we have assumed that the measurements ($x$ and $y$) for photons $i$ and $j$\nare statistically independent.\n\nThese relations give the expression for the elements of the error covariance\nmatrix which enter the fit procedure described in the body of the text and just above.\nThe variance for each estimate of the circle radius ($d'_i$) depends on the photon errors, \nthe track direction errors and the multiple scattering it undergoes~;\nthe covariance term exhibits an interesting feature~: up to the fact that the metrics along $\\vec{v}$\nand $\\vec{w}$ are different, one sees a surprising correlation pattern. 
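This pattern can be made concrete with a small numerical sketch (ours, with illustrative values): the multiple scattering term of the covariance in Rels. (\ref{appb19}) is proportional to the cosine of the azimuthal separation of the two photons.

```python
import numpy as np

# For photons i, j at azimuths phi_i, phi_j on a circle of radius R,
# (x_i x_j + y_i y_j)/(d'_i d'_j) = cos(phi_i - phi_j), so the multiple
# scattering part of <delta d_i delta d_j> is (A/12) cos(phi_i - phi_j).
A = 0.12   # illustrative value of c^2 L / X_0
R = 1.0

def ms_cov(phi_i, phi_j):
    xi, yi = R * np.cos(phi_i), R * np.sin(phi_i)
    xj, yj = R * np.cos(phi_j), R * np.sin(phi_j)
    return (xi * xj + yi * yj) / (R * R) * A / 12.0

assert ms_cov(0.1, 0.1) > 0                       # nearby photons: positive
assert ms_cov(0.1, 0.1 + np.pi) < 0               # opposite photons: negative
assert abs(ms_cov(0.1, 0.1 + np.pi / 2)) < 1e-12  # pi/2 apart: uncorrelated
```
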
Qualitatively,\ncorrelations are strongest (and positive) for photons close to each other in azimuth,\nstrongest (and negative) for pairs of photons opposite in azimuth, while\nthere is no correlation for photon pairs separated in azimuth by $\\pi\/2$.\n\n\\subsection*{B6 Multiple Scattering Effects in the General Case}\n\n\\indent \\indent When the arc to be fit is large enough (possibly $2 \\pi$ radians),\nfixing the circle center becomes irrelevant. In this case, a question remains\nabout the influence of multiple scattering on the error definition and the fit \nprocedure.\n\nFor each photon, $d_i=\\sqrt{(x_i-a_i)^2+(y_i-b_i)^2}$ remains the basic quantity which \nenters the fit procedure. However the problem is expressed, we have the charged track direction\nas free parameter and, as ``data'', the angular distances between this direction and the photon directions.\nAs noted above, when multiple scattering is active, the direction of the charged\ntrack varies from photon to photon with theoretically known statistical fluctuations.\nTherefore the estimate of the radius provided by each photon inherits the\nfluctuations of the charged track. Stated otherwise, the error due\nto multiple scattering can either be treated separately (as we did)\nor included in the error function of the measurement ($x_i,y_i$),\ntogether with the other contributions (geometrical errors, chromaticity error).\nThis means that Rel. (\\ref{appb17}) is still relevant. Therefore,\nwhen there is no center fixing term, \nit is equivalent to consider that the error\nfunction on ($x_i,y_i$) is ($\\delta x_i+\\delta a_i,\\delta y_i+\\delta b_i$),\nwhere the second term of each component is reduced to only the multiple \nscattering contribution. \n\nIn this case, Rels. 
(\\ref{appb19}) become~:\n\n\\begin{equation}\n\\left \\{\n\\begin{array}{llll}\n<[\\delta d_i]^2>&=&\\displaystyle \\frac{1}{d^{'2}_i}\n\\left[ \\displaystyle\nx_i^2<[\\delta x_i]^2> + y_i^2<[\\delta y_i]^2> + 2 x_iy_i<\\delta x_i\\delta y_i>\\right]\n+ \\displaystyle \\frac{1}{8}c^2 \\frac{L}{X_0}\n\\\\[0.5cm]\n<\\delta d_i \\delta d_j>&=&\\displaystyle \\frac{(x_ix_j+y_iy_j)}{d'_id'_j}\n\\frac{1}{12}c^2 \\frac{L}{X_0}\n\\end{array}\n\\right.\n\\label{appb20}\n\\end{equation}\n\nThis shows that correlations among photons always exist, due however only\nto the properties of multiple scattering. Therefore, for low momentum tracks,\nwhen multiple scattering is dominant, correlations among photons can never\nbe ignored in any reconstruction procedure for devices like the DIRC.\nThis is clearly independent of the representation chosen for the data (here\nthe stereographic projection)~; it relies only on the fact that the measured\nquantities are angles between photons and\na single charged track direction which changes in a correlated way from one photon to the other. \n\n\\newpage\n\\section*{Acknowledgements}\n\\indent \\indent We thank our colleagues of the BaBar DIRC group for the\ninterest they have shown in the successive steps of this work. We also\nwarmly acknowledge J. Chauveau for important remarks and comments.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{The monster diagram} \\lbl{diagram}\n\n\\subsection{The vertices} Let ${\\mathfrak g}_{\\mathbf R}$ be the (semi-simple) \nLie-algebra of some compact Lie group $G$, let ${\\mathfrak g}={\\mathfrak g}_{\\mathbf R}\n\\otimes{\\mathbf C}$, let ${\\mathfrak h}\\subset i{\\mathfrak g}_{\\mathbf R}$ be a Cartan\nsubalgebra of ${\\mathfrak g}$, and let $W$ be the Weyl group of ${\\mathfrak h}$ in\n${\\mathfrak g}$. 
Let $\\Delta_+\\subset{\\mathfrak h}^\\star$ be a set of positive\nroots of ${\\mathfrak g}$, and let $\\rho\\in i{\\mathfrak g}_{\\mathbf R}^\\star$ be half\nthe sum of the positive roots. Let $\\hbar$ be an indeterminate, and\nlet ${\\mathbf C}[[\\hbar]]$ be the ring of formal power series in $\\hbar$\nwith coefficients in ${\\mathbf C}$.\n\n\\begin{myitemize}\n\n\\item ${{\\mathcal K}^F}$ is the set of all framed knots in ${\\mathbf R}^3$.\n\n\\item ${\\mathcal A}$ is the algebra of not-necessarily-connected chord diagrams,\n as in page~\\pageref{Adefinition}.\n\n\\item ${\\calB'_\\times}$ and ${\\calB'_{\\mathaccent\\cdot\\cup}}$ denote the space of Chinese characters (allowing\nconnected components that have no univalent vertices), as in\npage~\\pageref{Bdefinition}, taken with its two algebra structures.\n\n\\item ${U(\\frakg)^\\frakg[[\\hbar]]}$ is the ${\\mathfrak g}$-invariant part of the universal enveloping\nalgebra $U({\\mathfrak g})$ of ${\\mathfrak g}$, with the coefficient ring extended to be\n${\\mathbf C}[[\\hbar]]$.\n\n\\item ${S(\\frakg)^\\frakg_\\times[[\\hbar]]}$ \\lbl{Stdefinition} and ${S(\\frakg)^\\frakg_{\\mathaccent\\cdot\\cup}[[\\hbar]]}$ denote the ${\\mathfrak g}$-invariant\npart of the symmetric algebra $S({\\mathfrak g})$ of ${\\mathfrak g}$, with the\ncoefficient ring extended to be ${\\mathbf C}[[\\hbar]]$. In ${S(\\frakg)^\\frakg_{\\mathaccent\\cdot\\cup}[[\\hbar]]}$ we take\nthe algebra structure induced from the natural algebra structure of the\nsymmetric algebra. 
In ${S(\\frakg)^\\frakg_\\times[[\\hbar]]}$ we take the algebra structure induced from\nthe algebra structure of ${U(\\frakg)^\\frakg[[\\hbar]]}$ by the symmetrization map ${\\beta_\\frakg}:{S(\\frakg)^\\frakg_\\times[[\\hbar]]}\\to{U(\\frakg)^\\frakg[[\\hbar]]}$,\nwhich is a linear isomorphism by the Poincar\\'e-Birkhoff-Witt theorem.\n\n\\item ${P(\\frakh^\\star)^W[[\\hbar]]}$ is the space of Weyl-invariant polynomial functions on\n${\\mathfrak h}^\\star$, with coefficients in ${\\mathbf C}[[\\hbar]]$.\n\n\\item ${P(\\frakg^\\star)^\\frakg}[[\\hbar]]$ is the space of ad-invariant polynomial functions on\n${\\mathfrak g}^\\star$, with coefficients in ${\\mathbf C}[[\\hbar]]$.\n\n\\end{myitemize}\n\n\\subsection{The edges} \\lbl{TheEdges}\n\n\\begin{myitemize}\n\n\\item $Z$ is the framed version of the Kontsevich integral for knots as\ndefined in~\\cite{LeMurakami:Universal}. A simpler (and equal) definition for\na framed knot $K$ is\n\\[\n Z(K)=e^{\\Theta\\cdot\\text{writhe}(K)\/2}\n \\cdot S\\left(\\tilde{Z}(K)\\right)\n \\in{\\mathcal A},\n\\]\nwhere $\\Theta$ is the chord diagram $\\silenteepic{theta}{0.5}$, $S$ is the\nstandard algebra map ${\\mathcal A}^r={\\mathcal A}\/<\\Theta>\\to{\\mathcal A}$ defined by mapping\n$\\Theta$ to $0$ and leaving all other primitives of ${\\mathcal A}$ in place, and\n$\\tilde{Z}$ is the Kontsevich integral as in~\\cite{Kontsevich:Vassiliev}.\n\n\\item $\\chi$ is the symmetrization map ${\\calB'_\\times}\\to{\\mathcal A}$, as on\npage~\\pageref{chidefinition}. It is an algebra isomorphism\nby~\\cite{Bar-Natan:Vassiliev} and the definition of $\\times$.\n\n\\item ${\\hat{\\Omega}}$ is the wheeling map as on page~\\pageref{Ohdefinition}. 
We\nargue that it should be an algebra (iso-)morphism\n(Conjecture~\\ref{WheelingConjecture}).\n\n\\item ${RT_\\frakg}$ denotes the Reshetikhin-Turaev knot invariant\nassociated with the Lie algebra ${\\mathfrak g}$~\\cite{Reshetikhin:QUEA,\nReshetikhin:Quasitriangle, ReshetikhinTuraev:Ribbon, Turaev:Invariants}.\n\n\\item ${{\\mathcal T}_\\frakg}$ (in all three instances) is the usual\n``diagrams to Lie algebras'' map, as in~\\cite[Section~2.4 and\nexercise~5.1]{Bar-Natan:Vassiliev}. The only variation we make\nis that we multiply the image of a degree $m$ element of ${\\mathcal A}$\n(or ${\\calB'_\\times}$ or ${\\calB'_{\\mathaccent\\cdot\\cup}}$) by $\\hbar^m$. In the construction of ${{\\mathcal T}_\\frakg}$\nan invariant bilinear form on ${\\mathfrak g}$ is needed. We use\nthe standard form $(\\cdot,\\cdot)$ used in~\\cite{ReshetikhinTuraev:Ribbon}\nand in~\\cite[Appendix]{ChariPressley:QuantumGroups}. See\nalso~\\cite[Chapter~2]{Kac:InfiniteDimensionalLieAlgebras}.\n\n\\item The isomorphism ${\\beta_\\frakg}$ was already discussed when ${S(\\frakg)^\\frakg_\\times[[\\hbar]]}$ was defined on\npage~\\pageref{Stdefinition}.\n\n\\item The definition of the ``Duflo map'' ${D(j_\\frakg^{1\/2})}$ requires some\npreliminaries. If $V$ is a vector space, there is an algebra map\n$D:P(V)\\to\\operatorname{Diff}(V^\\star)$ between the algebra $P(V)$ of polynomial\nfunctions on $V$ and the algebra $\\operatorname{Diff}(V^\\star)$ of constant coefficients\ndifferential operators on the symmetric algebra $S(V)$. $D$ is defined on\ngenerators as follows: If $\\alpha\\in V^\\star$ is a degree 1 polynomial\non $V$, set $D(\\alpha)(v)=\\alpha(v)$ for $v\\in V\\subset S(V)$, and\nextend $D(\\alpha)$ to all of $S(V)$ using the Leibnitz law. A different\n(but less precise) way of defining $D$ is via the Fourier transform:\nIdentify $S(V)$ with the space of functions on $V^\\star$. 
A polynomial\nfunction on $V$ becomes a differential operator on $V^\\star$ after\ntaking the Fourier transform, and this defines our map $D$. Either way,\nif $j\\in P(V)$ is homogeneous of degree $k$, the differential operator\n$D(j)$ lowers degrees by $k$ and thus vanishes on the low degrees of\n$S(V)$. Hence $D(j)$ makes sense even when $j$ is a power series instead\nof a polynomial. This definition has a natural extension to the case\nwhen the spaces involved are extended by ${\\mathbf C}[[\\hbar]]$, or even\n${\\mathbf C}(\\!(\\hbar)\\!)$, the algebra of Laurent polynomials in $\\hbar$.\n\nNow use this definition of $D$ with $V={\\mathfrak g}$ to\ndefine the Duflo map ${D(j_\\frakg^{1\/2})}$, where $j_{\\mathfrak g}(X)$ is defined for $X\\in{\\mathfrak g}$ by\n\\[ j_{\\mathfrak g}(X) =\n \\det\\left(\\frac{\\sinh\\operatorname{ad} X\/2}{\\operatorname{ad} X\/2}\\right).\n\\]\nThe square root $j_{\\mathfrak g}^{1\/2}$ of\n$j_{\\mathfrak g}$ is defined as in~\\cite{Duflo:Resolubles}\nor~\\cite[Section~8.2]{BerlineGetzlerVergne:DiracOperators}, and is a power\nseries in $X$ that begins with $1$. We note that by Kirillov's formula\nfor the character of the trivial representation (see e.g.~\\cite[Theorem\n8.4 with $\\lambda=i\\rho$]{BerlineGetzlerVergne:DiracOperators}),\n$j_{\\mathfrak g}^{1\/2}$ is the Fourier transform of the symplectic\nmeasure on $M_{i\\rho}$, where $M_{i\\rho}$ is the co-adjoint\norbit of $i\\rho$ in ${\\mathfrak g}_{\\mathbf R}^\\star$ (see\ne.g.~\\cite[Section~7.5]{BerlineGetzlerVergne:DiracOperators}):\n\\begin{equation} \\lbl{FourierTransform}\n j_{\\mathfrak g}^{1\/2}(X) = \\int_{r\\in M_{i\\rho}} e^{ir(X)}dr.\n\\end{equation}\n(We consider the symplectic measure as a measure on\n${\\mathfrak g}_{\\mathbf R}^\\star$, whose support is the subset $M_{i\\rho}$\nof ${\\mathfrak g}_{\\mathbf R}^\\star$. 
Its Fourier transform is a function on\n${\\mathfrak g}_{\\mathbf R}$ that can be computed via integration on the support\n$M_{i\\rho}\\subset{\\mathfrak g}_{\\mathbf R}^\\star$ of the symplectic measure.)\nDuflo~\\cite[th\\'eor\\`eme~V.2]{Duflo:Resolubles} proved that ${D(j_\\frakg^{1\/2})}$ is an\nalgebra isomorphism.\n\n\\item ${\\psi_\\frakg}$ is the Harish-Chandra isomorphism $U({\\mathfrak g})^{\\mathfrak g}\\to\nP({\\mathfrak h}^\\star)^W$ extended by $\\hbar$. Using the representation theory\nof ${\\mathfrak g}$, it is defined as follows. If $z$ is in $U({\\mathfrak g})^{\\mathfrak g}$\nand $\\lambda\\in{\\mathfrak h}^\\star$ is a positive integral weight, we set\n${\\psi_\\frakg}(z)(\\lambda)$ to be the scalar by which $z$ acts on the irreducible\nrepresentation of ${\\mathfrak g}$ whose highest weight is $\\lambda-\\rho$. It\nis well known (see e.g.~\\cite[Section~23.3]{Humphreys:LieAlgebras})\nthat this partial definition of ${\\psi_\\frakg}(z)$ extends uniquely to a\nWeyl-invariant polynomial (also denoted ${\\psi_\\frakg}(z)$) on ${\\mathfrak h}^\\star$,\nand that the resulting map ${\\psi_\\frakg}:U({\\mathfrak g})^{\\mathfrak g}\\to P({\\mathfrak h}^\\star)^W$\nis an isomorphism.\n\n\\item The two equalities at the lower right quarter of the monster diagram\nneed no explanation. We note though that if the space of polynomials ${P(\\frakg^\\star)^\\frakg}[[\\hbar]]$\nis endowed with its obvious algebra structure, only the lower equality is\nin fact an equality of algebras.\n\n\\item ${\\iota_\\frakg}$ is the restriction map induced by the identification of\n${\\mathfrak h}^\\star$ with a subspace of ${\\mathfrak g}^\\star$ defined using the\nform $(\\cdot,\\cdot)$ of ${\\mathfrak g}$. The map ${\\iota_\\frakg}$ is an isomorphism by\nChevalley's theorem (see e.g.~\\cite[Section~23.1]{Humphreys:LieAlgebras}\nand~\\cite[Section~VI-2]{BrockerTomDieck:Representations}).\n\n\\item ${S_\\frakg}$ is the extension by $\\hbar$ of an integral operator. 
If\n$p(\\lambda)$ is an invariant polynomial of $\\lambda\\in{\\mathfrak g}^\\star$, then\n\\[ {S_\\frakg}(p)(\\lambda)=\\int_{r\\in M_{i\\rho}}p(\\lambda-ir)dr. \\]\n${S_\\frakg}$ can also be viewed as a convolution operator (with a measure\nconcentrated on $M_\\rho$), and like all convolution operators, it maps\npolynomials to polynomials.\n\n\\end{myitemize}\n\n\\subsection{The faces}\n\n\\begin{myitemize}\n\n\\item The commutativity of the face labeled ${\\silenteepic{Ca}{0.6}}$\nwas proven by Kassel~\\cite{Kassel:QuantumGroups} and\nLe and Murakami~\\cite{LeMurakami:Universal} following\nDrinfel'd~\\cite{Drinfeld:QuasiHopf, Drinfeld:GalQQ}. We comment that\nit is this commutativity that makes the notion of ``canonical Vassiliev\ninvariants''~\\cite{Bar-NatanGaroufalidis:MMR} interesting.\n\n\\item The commutativity of the face labeled ${\\silenteepic{Cb}{0.5}}$ is immediate from the\ndefinitions, and was already noted in~\\cite{Bar-Natan:Vassiliev}.\n\n\\item The commutativity of the face labeled ${\\silenteepic{Cc}{0.4}}$ (notice that this face\nfully encloses the one labeled ${\\silenteepic{Ce}{0.5}}$) is due to\nDuflo~\\cite[th\\'eor\\`eme~V.1]{Duflo:Resolubles}.\n\n\\end{myitemize}\n\n\\begin{proposition} \\lbl{PreciseWheelingTheorem}\nThe face labeled ${\\silenteepic{Cd}{0.5}}$ is commutative.\n\\end{proposition}\n\n\\begin{remark} Recalling that ${D(j_\\frakg^{1\/2})}$ is an algebra isomorphism,\nProposition~\\ref{PreciseWheelingTheorem} becomes the precise formulation\nof Theorem~\\ref{WheelingTheorem}.\n\\end{remark}\n\n\\begin{proof}[Proof of Proposition~\\ref{PreciseWheelingTheorem}] Follows\nimmediately from the following two lemmas, taking $C=\\Omega$ in\n\\eqref{ChatC}.\n\\def$\\hfill\\smily${$\\hfill\\smily$}\n\\end{proof}\n\n\\begin{lemma} \\lbl{FirstLemma} Let $\\kappa:{\\mathfrak g}\\to{\\mathfrak g}^\\star$ be the\nidentification induced by the standard bilinear form $(\\cdot,\\cdot)$ of\n${\\mathfrak g}$. 
Extend $\\kappa$ to all symmetric powers of ${\\mathfrak g}$, and let\n$\\kappa^\\hbar:S({\\mathfrak g})^{\\mathfrak g}[[\\hbar]]\\to S({\\mathfrak g}^\\star)(\\!(\\hbar)\\!)$\nbe defined for a homogeneous $s\\in S({\\mathfrak g})^{\\mathfrak g}[[\\hbar]]$ (relative to\nthe grading of $S({\\mathfrak g})$) by $\\kappa^\\hbar(s)=\\hbar^{-\\deg s}\\kappa(s)$.\nIf $C\\in{\\mathcal B}'$ is a Chinese character, $\\hat{C}:{\\mathcal B}'\\to{\\mathcal B}'$ is\nthe operator corresponding to $C$ as in Definition~\\ref{ChatDef}, and\n$C'\\in{\\mathcal B}'$ is another Chinese character, then\n\\begin{equation} \\lbl{ChatC}\n {{\\mathcal T}_\\frakg}\\hat{C}(C') = D(\\kappa^\\hbar{{\\mathcal T}_\\frakg} C){{\\mathcal T}_\\frakg} C'.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof} If $\\kappa j$ is a tensor in\n$S^{k}({\\mathfrak g}^\\star)\\subset{\\mathfrak g}^{\\star\\otimes k}$, the $k$'th\nsymmetric tensor power of ${\\mathfrak g}^\\star$, and $j'$ is a tensor in\n$S^{k'}({\\mathfrak g})\\subset{\\mathfrak g}^{\\otimes k'}$, then\n\\begin{equation} \\lbl{DInFull}\n D(\\kappa j)(j')=\\begin{cases}\n 0 & \\text{if $k>k'$,} \\\\\n \\parbox{2.6in}{\n the sum of all ways of contracting all the tensor components of $j$\n with some (or all) tensor components of $j'$\n }\\quad & \\text{otherwise.}\n \\end{cases}\n\\end{equation}\nBy definition, the ``diagrams to Lie algebras'' map carries gluing to\ncontraction, and hence carries the operation in Definition~\\ref{ChatDef}\nto the operation in~\\eqref{DInFull}, namely, to $D$. 
Counting powers of\n$\\hbar$, this proves~\\eqref{ChatC}.\n\\def$\\hfill\\smily${$\\hfill\\smily$}\n\\end{proof}\n\n\\begin{lemma} \\lbl{TgOmega}\n$\\kappa^\\hbar{{\\mathcal T}_\\frakg}\\Omega=j_{\\mathfrak g}^{1\/2}$.\n\\end{lemma}\n\n\\begin{proof} It follows easily from the definition of ${{\\mathcal T}_\\frakg}$ and $\\kappa^h$\nthat $(\\kappa^\\hbar{{\\mathcal T}_\\frakg}\\omega_n)(X)=\\operatorname{tr}(\\operatorname{ad} X)^n$ for any $X\\in{\\mathfrak g}$.\nHence, using the fact that $\\kappa^\\hbar\\circ{{\\mathcal T}_\\frakg}$ is an algebra morphism if\n${\\mathcal B}'$ is taken with the disjoint union product,\n\\[ (\\kappa^\\hbar{{\\mathcal T}_\\frakg}\\Omega)(X)\n = \\exp\\sum_{n=1}^\\infty b_{2n}(\\kappa^\\hbar{{\\mathcal T}_\\frakg}\\omega_{2n})(X)\n = \\exp\\sum_{n=1}^\\infty b_{2n}\\operatorname{tr}(\\operatorname{ad} X)^{2n}\n = \\det\\exp\\sum_{n=1}^\\infty b_{2n}(\\operatorname{ad} X)^{2n}.\n\\]\nBy the definition of the modified Bernoulli numbers~\\eqref{MBNDefinition},\nthis is\n\\begin{equation}\n \\det\\exp\\frac{1}{2}\\log\\frac{\\sinh \\operatorname{ad} X\/2}{\\operatorname{ad} X\/2}\n = \\det\\left(\\frac{\\sinh \\operatorname{ad} X\/2}{\\operatorname{ad} X\/2}\\right)^{1\/2}\n = j_{\\mathfrak g}^{1\/2}(X).\n \\prtag{\\smily}\n\\end{equation}\n\\renewcommand{$\\hfill\\smily$}{}\n\\end{proof}\n\n\\begin{proposition} \nThe face labeled ${\\silenteepic{Ce}{0.5}}$ is commutative.\n\\end{proposition}\n\n\\begin{proof} According to M.~Vergne (private communication), this is a\nwell known fact. We could not find a reference, so here's the gist of\nthe proof. Forgetting about powers of $\\hbar$ and ${\\mathfrak g}$-invariance\nand taking the Fourier transform (over ${\\mathfrak g}_{\\mathbf R}$), the\ndifferential operator ${D(j_\\frakg^{1\/2})}$ becomes the operator of multiplication by\n$j_{\\mathfrak g}^{1\/2}(iX)$ on $S({\\mathfrak g})$. 
Taking the inverse Fourier transform,\nwe see that ${D(j_\\frakg^{1\/2})}$ is the operator of convolution with the inverse Fourier\ntransform of $j_{\\mathfrak g}^{1\/2}(iX)$, which is the symplectic measure on\n$M_\\rho$ (see~\\eqref{FourierTransform}). So ${D(j_\\frakg^{1\/2})}$ is convolution with\nthat measure, as required.\n\\end{proof}\n\n\\section{Introduction}\n\n\\subsection{The conjectures}\nLet us start with the statements of our conjectures; the rest of\nthe paper is concerned with motivating and justifying them. We\nassume some familiarity with the theory of Vassiliev invariants. See\ne.g.~\\cite{Bar-Natan:Vassiliev, Birman:Bulletin, BirmanLin:Vassiliev,\nGoussarov:New, Goussarov:nEquivalence, Kontsevich:Vassiliev,\nVassiliev:CohKnot, Vassiliev:Book} and~\\cite{Bar-Natan:VasBib}.\n\nVery briefly, recall that any complex-valued knot invariant $V$ can be\nextended to an invariant of knots with double points ({\\em singular\nknots}) via the formula $V(\\!\\silenteepic{DoublePoint}{0.5}\\!\\!) =\nV(\\!\\silenteepic{OverCrossing}{0.5}\\!\\!) -\nV(\\!\\silenteepic{UnderCrossing}{0.5}\\!\\!)$. An invariant of knots\n(or framed knots) is called a {\\em Vassiliev invariant}, or a {\\em\nfinite type invariant of type $m$}, if its extension to singular knots\nvanishes whenever evaluated on a singular knot that has more than $m$\ndouble points. Vassiliev invariants are in some senses analogous to\npolynomials (on the space of all knots), and one may hope that they\nseparate knots. While this is an open problem and the precise power of the\nVassiliev theory is yet unknown, it is known (see~\\cite{Vogel:Structures})\nthat Vassiliev invariants are strictly stronger than the\nReshetikhin-Turaev invariants (\\cite{ReshetikhinTuraev:Ribbon}), and in\nparticular they are strictly stronger than the Alexander-Conway, Jones,\nHOMFLY, and Kauffman invariants. 
Hence one is interested in a detailed\nunderstanding of the theory of Vassiliev invariants.\n\nThe set ${\\mathcal V}$ of all Vassiliev invariants of framed knots is a linear\nspace, filtered by the ``type'' of an invariant. The fundamental theorem\nof Vassiliev invariants, due to Kontsevich~\\cite{Kontsevich:Vassiliev},\nsays that the associated graded space $\\operatorname{gr}{\\mathcal V}$ of ${\\mathcal V}$ can be\nidentified with the graded dual ${\\mathcal A}^\\star$ of a certain completed\ngraded space ${\\mathcal A}$ \\lbl{Adefinition} of formal linear combinations of\ncertain diagrams, modulo certain linear relations. The ``diagrams'' in\n${\\mathcal A}$ are connected graphs made of a single distinguished directed line\n(the ``skeleton''), some number of undirected ``internal edges'', some\nnumber of trivalent ``external vertices'' in which an internal edge ends\non the skeleton, and some number of trivalent ``internal vertices''\nin which three internal edges meet. It is further assumed that the\ninternal vertices are ``oriented'': for each internal vertex,\none of the two possible cyclic orderings of the edges emanating\nfrom it is specified. An example of a diagram in ${\\mathcal A}$ is in\nfigure~\\ref{BasicDefinitions}. The linear relations in the definition\nof ${\\mathcal A}$ are the well-known $AS$, $IHX$, and $STU$ relations, also\nshown in figure~\\ref{BasicDefinitions}. The space ${\\mathcal A}$ is graded by\nhalf the total number of trivalent vertices in a given diagram.\n\n\\begin{figure}[htpb]\n\\[ \\eepic{BasicDefinitions}{0.85} \\]\n\\caption{\n A diagram in ${\\mathcal A}$, a diagram in ${\\mathcal B}$ (a Chinese character),\n and the $AS$, $IHX$, and $STU$ relations. 
All internal vertices shown\n are oriented counterclockwise.\n}\n\\lbl{BasicDefinitions}\n\\end{figure}\n\nThe most difficult part of the currently known proofs of the\nisomorphism $\\operatorname{gr}{\\mathcal V}\\cong{\\mathcal A}^\\star$ is the construction of a\n``universal Vassiliev invariant''; an ${\\mathcal A}$-valued framed-knot\ninvariant that satisfies a certain universality property. Such a\n``universal Vassiliev invariant'' is not unique; the set of universal\nVassiliev invariants is in a bijective correspondence with the set of all\nfiltration-respecting maps ${\\mathcal V}\\to\\operatorname{gr}{\\mathcal V}$ that induce the identity\nmap $\\operatorname{gr}{\\mathcal V}\\to\\operatorname{gr}{\\mathcal V}$. But it is a noteworthy and not terribly well\nunderstood fact that all known constructions of a universal Vassiliev\ninvariant are either known to give the same answer or are conjectured to\ngive the same answer as the original ``framed Kontsevich integral'' $Z$\n(see Section~\\ref{TheEdges}). Furthermore, the Kontsevich integral is\nwell behaved in several senses, as shown in~\\cite{Bar-Natan:Vassiliev,\nBar-NatanGaroufalidis:MMR, Kassel:QuantumGroups, Kontsevich:Vassiliev,\nLeMurakami2Ohtsuki:3Manifold, LeMurakami:Universal, LeMurakami:Parallel}.\n\nThus it seems that $Z$ is a canonical and not an accidental object. It\nis therefore surprising how little we know about it. While there are\nseveral formulas for computing $Z$, they are all of limited use beyond\nthe first few degrees. Presently, we do not know how to compute $Z$\nfor {\\em any} knot; not even the unknot!\n\nOur first conjecture is about the value of the Kontsevich integral of the\nunknot. We conjecture a completely explicit formula, written in terms\nof an alternative realization of the space ${\\mathcal A}$, the space ${\\mathcal B}$\nof ``Chinese characters'' (see~\\cite{Bar-Natan:Vassiliev}). 
The space\n${\\mathcal B}$ is also a completed graded space of formal linear combinations\nof diagrams modulo linear relations: the diagrams are the so-called\nChinese characters, which are the same as the diagrams in ${\\mathcal A}$\nexcept that a skeleton is not present, and instead a certain number\nof univalent vertices are allowed (the connectivity requirement is\ndropped, but one insists that every connected component of a Chinese\ncharacter have at least one univalent vertex). An example of a\nChinese character is in figure~\\ref{BasicDefinitions}. The relations\nare the $AS$ and $IHX$ relations that appear in the same figure (but\nnot the $STU$ relation, which involves the skeleton). The degree of a\nChinese character is half the total number of its vertices. There is a\nnatural isomorphism $\\chi:{\\mathcal B}\\to{\\mathcal A}$ \\lbl{chidefinition} which maps\nevery Chinese character to the average of all possible ways of placing\nits univalent vertices along a skeleton line. In a sense that we will\nrecall below, the fact that $\\chi$ is an isomorphism is an analog of\nthe Poincar\\'e-Birkhoff-Witt (PBW) theorem. 
We note that the inverse map\n$\\sigma$ of $\\chi$ is more difficult to construct and manipulate.\n\n\\begin{conjecture} \\lbl{UnknotConjecture} (Wheels)\nThe framed Kontsevich integral of the unknot,\n$Z(\\bigcirc)$, expressed in terms of Chinese characters, is equal to\n\\begin{equation} \\lbl{OmegaDef}\n \\Omega=\\exp_{\\mathaccent\\cdot\\cup} \\sum_{n=1}^\\infty b_{2n}\\omega_{2n}.\n\\end{equation}\n\\end{conjecture}\n\nThe notation in~\\eqref{OmegaDef} means:\n\\begin{myitemize}\n\\item The `modified Bernoulli numbers' $b_{2n}$ are defined by the power\nseries expansion\n\\begin{equation} \\lbl{MBNDefinition}\n \\sum_{n=0}^\\infty b_{2n}x^{2n} = \\frac{1}{2}\\log\\frac{\\sinh x\/2}{x\/2}.\n\\end{equation}\nThese numbers are related to the usual Bernoulli numbers $B_{2n}$ and\nto the values of the Riemann $\\zeta$-function on the even integers via\n(see e.g.~\\cite[Section~12.12]{Apostol:AnalyticNumberTheory})\n\\[\n b_{2n}\n =\\frac{B_{2n}}{4n(2n)!}\n =\\frac{(-1)^{n+1}}{2n(2\\pi)^{2n}}\\zeta(2n).\n\\]\nThe first three modified Bernoulli numbers are $b_2=1\/48$, $b_4=-1\/5760$,\nand $b_6=1\/362880$.\n\\item The `$2n$-wheel' $\\omega_{2n}$ is the degree $2n$ Chinese character\nmade of a $2n$-gon with $2n$ legs:\n\\[\n \\omega_2=\\eepic{2wheel}{0.6},\\quad\n \\omega_4=\\eepic{4wheel}{0.6},\\quad\n \\omega_6=\\eepic{6wheel}{0.6},\\quad\\ldots,\n\\]\n(with all vertices oriented counterclockwise).\\footnote{\n Wheels have appeared in several noteworthy places before:\n \\cite{Chmutov:CombinatorialMelvinMorton, ChmutovVarchenko:sl2,\n KrickerSpenceAitchison:Cabling, Vaintrob:Primitive}. 
Similar but slightly\n different objects appear in Ng's beautiful work on ribbon\n knots~\\cite{Ng:Ribbon}.\n}\n\n\\item $\\exp_{\\mathaccent\\cdot\\cup}$ means `exponential in the disjoint union sense'; that is,\n it is the formal-sum exponential of a linear combination of Chinese\n characters, with the product being the disjoint union product.\n\\end{myitemize}\n\nLet us explain why we believe the Wheels Conjecture\n(Conjecture~\\ref{UnknotConjecture}). Recall (\\cite{Bar-Natan:Vassiliev}) that\nthere is a parallelism between the space ${\\mathcal A}$ (and various variations\nthereof) and a certain part of the theory of Lie algebras. Specifically,\ngiven a metrized Lie algebra ${\\mathfrak g}$, there exists a commutative square\n(a refined version is in Theorem~\\ref{CommutativityTheorem} below)\n\\[ {\n \\def{\\mathcal A}{{\\mathcal A}}\n \\def{\\mathcal B}{{\\mathcal B}}\n \\def{{\\mathcal T}_\\frakg}{{{\\mathcal T}_{\\mathfrak g}}}\n \\def{{U}^\\frakg(\\frakg}){{{U}^{\\mathfrak g}({\\mathfrak g}})}\n \\def{{S}^\\frakg(\\frakg}){{{S}^{\\mathfrak g}({\\mathfrak g}})}\n \\def\\Udef{{\\left(\\parbox{2.4in}{\\small the ${\\mathfrak g}$-invariant part\n of the completed universal enveloping algebra of ${\\mathfrak g}$}\n \\right)}}\n \\def\\Sdef{{\\left(\\parbox{2.1in}{\\small the ${\\mathfrak g}$-invariant part\n of the completed symmetric algebra of ${\\mathfrak g}$}\n \\right)}}\n \\eepic{CommutativeSquare}{0.8}\n} \\]\nin which the left column is the above mentioned formal PBW isomorphism\n$\\chi$, and the right column is the symmetrization map ${\\beta_\\frakg}:S({\\mathfrak g})\\to\nU({\\mathfrak g})$, sending an unordered word of length $n$ to the average of the\n$n!$ ways of ordering its letters and reading them as a product in\n$U({\\mathfrak g})$. The map ${\\beta_\\frakg}$ is an isomorphism by the honest PBW theorem. 
The\nleft-to-right maps ${\\mathcal T}_{\\mathfrak g}$ are defined as in\n\\cite{Bar-Natan:Vassiliev} by contracting copies of the structure\nconstants tensor, one for each vertex of any given diagram, using the\nstandard invariant form $(\\cdot,\\cdot)$ on ${\\mathfrak g}$ (see citations in\nsection~\\ref{TheEdges} below). The maps ${\\mathcal T}_{\\mathfrak g}$ seem\nto `forget' some information (some high-degree elements on the left\nget mapped to 0 on the right no matter what the algebra ${\\mathfrak g}$ is,\nsee~\\cite{Vogel:Structures}), but at least up to degree 12 they are faithful\n(for some Lie algebras); see~\\cite{Kneissler:Twelve}.\n\n\\begin{theorem} \\lbl{UnknotTheorem} Conjecture~\\ref{UnknotConjecture} is\n``true on the level of semi-simple Lie algebras''. Namely,\n\\[ {\\mathcal T}_{\\mathfrak g}\\Omega = {\\mathcal T}_{\\mathfrak g}\\chi^{-1}Z(\\bigcirc). \\]\n\\end{theorem}\n\nWe now formulate our second conjecture. Let\n${\\mathcal B}'=\\operatorname{span}\\left\\{\\silenteepic{CCExample}{0.5}\\right\\}\/(AS,IHX)$\n\\lbl{Bdefinition} be the same as ${\\mathcal B}$, only dropping the connectivity\nrequirement (so that\nwe also allow\nconnected components that have no univalent vertices). The space ${\\mathcal B}'$\nhas two different products, and thus is an algebra in two different ways:\n\\begin{myitemize}\n\\item The disjoint union $C_1{\\mathaccent\\cdot\\cup} C_2$ of two Chinese characters $C_{1,2}$\n is again a Chinese character. The obvious bilinear extension of ${\\mathaccent\\cdot\\cup}$\n is a well defined product ${\\mathcal B}'\\times{\\mathcal B}'\\to{\\mathcal B}'$, which turns\n ${\\mathcal B}'$ into an algebra. 
For emphasis we will call this algebra ${\\calB'_{\\mathaccent\\cdot\\cup}}$.\n\\item ${\\mathcal B}'$ is isomorphic (as a vector space) to the space\n ${\\mathcal A}'=\\operatorname{span}\\left\\{\\silenteepic{CDExample}{0.5}\\right\\}\/(AS,IHX,STU)$\n of diagrams whose skeleton is a single oriented interval\n (like ${\\mathcal A}$, only that here we also\n allow non-connected diagrams). The isomorphism is the map\n $\\chi:{\\mathcal B}'\\to{\\mathcal A}'$ that maps a Chinese\n character with $k$ ``legs'' (univalent vertices) to the average\n of the $k!$ ways of arranging them along an oriented interval\n (in~\\cite{Bar-Natan:Vassiliev} the sum was used instead of the\n average). ${\\mathcal A}'$ has a well known ``juxtaposition'' product $\\times$,\n related to the ``connect sum'' operation on knots:\n \\[ \\silenteepic{AnotherCD}{0.5}\\times\\ \\silenteepic{CDExample}{0.5}=\n \\silenteepic{ProductCD}{0.5}.\n \\]\n The algebra structure on ${\\mathcal A}'$ defines another algebra structure on\n ${\\mathcal B}'$. 
For emphasis we will call this algebra ${\\calB'_\\times}$.\n\\end{myitemize}\n\nAs before, ${\\mathcal A}'$ is graded by half the number of trivalent\nvertices in a diagram, ${\\mathcal B}'$ is graded by half the total number of\nvertices in a diagram, and the isomorphism $\\chi$ as well as the two\nproducts respect these gradings.\n\n\\begin{definition} \\lbl{ChatDef} If $C$ is a Chinese character, let\n$\\hat{C}:{\\mathcal B}'\\to{\\mathcal B}'$ be the operator defined by\n\\[ \\hat{C}(C')=\\begin{cases}\n 0 & \\text{if $C$ has more legs than $C'$,} \\\\\n \\parbox{2.6in}{\n the sum of all ways of gluing all the legs of $C$ to some (or all)\n legs of $C'$\n }\\quad & \\text{otherwise.}\n \\end{cases}\n\\]\nFor example,\n\\[ \\widehat{\\omega_4}(\\omega_2)=0; \\qquad\n \\widehat{\\omega_2}(\\omega_4)=\n 8\\eepic{SideGluing}{0.5}+4\\eepic{DiagonalGluing}{0.5}.\n\\]\nIf $C$ has $k$ legs and total degree $m$, then $\\hat{C}$ is an operator\nof degree $m-k$. By linear extension, we find that every $C\\in{\\mathcal B}'$\ndefines an operator $\\hat{C}:{\\mathcal B}'\\to{\\mathcal B}'$, and in fact, even infinite\nlinear combinations of Chinese characters with an {\\em increasing} number\nof legs define operators ${\\mathcal B}'\\to{\\mathcal B}'$.\n\\end{definition}\n\nAs $\\Omega$ is made of wheels, we call the action of the (degree\n$0$) operator $\\hat{\\Omega}$ \\lbl{Ohdefinition} ``wheeling''. As\n$\\Omega$ begins with $1$, the wheeling map is invertible. We argue\nbelow that $\\hat{\\Omega}$ is a diagrammatic analog of the Duflo\nisomorphism ${S}^{\\mathfrak g}({\\mathfrak g})\\to{S}^{\\mathfrak g}({\\mathfrak g})$\n(see~\\cite{Duflo:Resolubles} and see below). 
The Duflo isomorphism\nintertwines the two algebra structures that ${S}^{\\mathfrak g}({\\mathfrak g})$ has:\nthe structure it inherits from the symmetric algebra and the structure\nit inherits from ${U}^{\\mathfrak g}({\\mathfrak g})$ via the PBW isomorphism.\nOne may hope that $\\hat{\\Omega}$ has the parallel property:\n\n\\begin{conjecture} \\lbl{WheelingConjecture} (Wheeling\\footnote{Conjectured\nindependently by Deligne~\\cite{Deligne:Letter}.}) Wheeling intertwines\nthe two products on Chinese characters. More precisely, the map\n$\\hat{\\Omega}:{\\calB'_{\\mathaccent\\cdot\\cup}}\\to{\\calB'_\\times}$ is an algebra isomorphism.\n\\end{conjecture}\n\nThere are several good reasons to hope that\nConjecture~\\ref{WheelingConjecture} is true. If it is true, one would\nbe able to use it along with Conjecture~\\ref{UnknotConjecture} and\nknown properties of the Kontsevich integral (such as its behavior under the\noperations of change of framing, connected sum, and taking the parallel of\na component as in~\\cite{LeMurakami:Parallel}) to get explicit formulas\nfor the Kontsevich integral of several other knots and links. Note\nthat change of framing and connect sum act on the Kontsevich integral\nmultiplicatively using the product in ${\\mathcal A}$, but the conjectured formula\nwe have for the Kontsevich integral of the unknot is in ${\\mathcal B}$. Using\nConjecture~\\ref{WheelingConjecture} it should be possible to perform\nall operations in ${\\mathcal B}$.\n\nPerhaps a more important reason is that in essence, ${\\mathcal A}$ and ${\\mathcal B}$\ncapture that part of the information about $U({\\mathfrak g})$ and $S({\\mathfrak g})$\nthat can be described entirely in terms of the bracket and the structure\nconstants. Thus a proof of Conjecture~\\ref{WheelingConjecture} would yield an\nelementary proof of the intertwining property of the Duflo isomorphism,\nwhose current proofs use representation theory and are quite involved. 
We\nfeel that the knowledge missing to give an elementary proof of the\nintertwining property of the Duflo isomorphism is the same knowledge\nthat is missing for giving a proof of the Kashiwara-Vergne conjecture\n(\\cite{KashiwaraVergne:CampbellHausdorff}).\n\n\\begin{theorem} \\lbl{WheelingTheorem} Conjecture~\\ref{WheelingConjecture}\nis ``true on the level of semi-simple Lie algebras''. A precise statement is\nin Proposition~\\ref{PreciseWheelingTheorem} and the remark following it.\n\\end{theorem}\n\n\\begin{remark} As semi-simple Lie algebras ``see'' all of the\nVassiliev theory at least up to degree~12 \\cite{Bar-Natan:Vassiliev,\nKneissler:Twelve}, Theorems~\\ref{UnknotTheorem} and~\\ref{WheelingTheorem}\nimply Conjectures~\\ref{UnknotConjecture} and~\\ref{WheelingConjecture} up to that\ndegree. It should be noted that semi-simple Lie algebras do not ``see''\nthe whole Vassiliev theory at high degrees, see~\\cite{Vogel:Structures}.\n\\end{remark}\n\n\\begin{remark} As the Duflo isomorphism has no known elementary proof, the\nLie algebra techniques used in this paper are unlikely to give full proofs\nof Conjectures~\\ref{UnknotConjecture} and~\\ref{WheelingConjecture}.\n\\end{remark}\n\n\\begin{remark} We've chosen to work over the complex numbers to allow for\nsome analytical arguments below. 
The rationality of the Kontsevich\nintegral~\\cite{LeMurakami:Universal} and the uniform classification of\nsemi-simple Lie algebras over fields of characteristic 0 imply that\nConjectures~\\ref{UnknotConjecture} and~\\ref{WheelingConjecture} and\nTheorems~\\ref{UnknotTheorem} and~\\ref{WheelingTheorem} are independent\nof the (characteristic 0) ground field.\n\\end{remark}\n\n\\subsection{The plan} Theorem~\\ref{UnknotTheorem} and\nTheorem~\\ref{WheelingTheorem} both follow from a delicate assembly of\nwidely known facts about Lie algebras and related objects; the main\nnovelty in this paper is the realization that these known facts can\nbe brought together and used to prove Theorems~\\ref{UnknotTheorem}\nand~\\ref{WheelingTheorem} and make Conjectures~\\ref{UnknotConjecture}\nand~\\ref{WheelingConjecture}. The facts we use about Lie algebras amount to the\ncommutativity of a certain monstrous diagram. In Section~\\ref{diagram}\nbelow we will explain everything that appears in that diagram,\nprove its commutativity, and prove Theorem~\\ref{WheelingTheorem}. In\nSection~\\ref{proof} we will show how that commutativity implies\nTheorem~\\ref{UnknotTheorem} as well. We conclude this introductory\nsection with a picture of the monster itself:\n\n\\begin{theorem} \\lbl{CommutativityTheorem} (definitions and proof in\nSection~\\ref{diagram})\nThe following monster diagram is commutative:\n\\[ \\eepic{CommutativeDiagram}{0.8} \\]\n\\end{theorem}\n\n\\begin{remark} Our two conjectures ought to be related---one talks\nabout $\\Omega$, and another is about an operator $\\hat{\\Omega}$\nmade out of $\\Omega$, and the proofs of Theorems~\\ref{UnknotTheorem}\nand~\\ref{WheelingTheorem} both use the Duflo map (${D(j_\\frakg^{1\/2})}$ in the above\ndiagram). But looking more closely at the proofs below, the relationship\nseems to disappear. 
The proof of Theorem~\\ref{WheelingTheorem} uses\nonly the commutativity of the face labeled ${\\silenteepic{Cd}{0.5}}$, while the proof of\nTheorem~\\ref{UnknotTheorem} uses the commutativity of all faces but\n${\\silenteepic{Cd}{0.5}}$. No further relations between the conjectures are seen in the\nproofs of our theorems. We are still missing the deep relation that\nought to exist between `Wheels' and `Wheeling'. Why is it that the same\nstrange combination of Chinese characters $\\Omega$ plays a role in these\ntwo seemingly unrelated affairs?\n\\end{remark}\n\n\\subsection{Postscript} According to\nKontsevich~\\cite{Kontsevich:DeformationQuantization},\nConjecture~\\ref{WheelingConjecture} seems to follow from the results he proves\nin Section~8.3 of that paper, but a full proof of the conjecture is not\ngiven there. \\cite{LeThurston:Unknot} have shown that\nConjecture~\\ref{WheelingConjecture} implies\nConjecture~\\ref{UnknotConjecture}, but\nunfortunately their proof does not shed light on the fundamental\nrelationship that ought to exist between the two conjectures.\n\n\\subsection{Acknowledgement} Much of this work was done when the\nfour of us were visiting \\AA rhus, Denmark, for a special semester on\ngeometry and physics, in August 1995. We wish to thank the organizers,\nJ.~Dupont, H.~Pedersen, A.~Swann and especially J.~Andersen for their\nhospitality and for the stimulating atmosphere they created. We wish\nto thank the Institute for Advanced Studies for their hospitality,\nand P.~Deligne for listening to our thoughts and sharing his. His\nletter~\\cite{Deligne:Letter} introduced us to the Duflo isomorphism;\ninitially our proofs relied more heavily on the Kirillov character\nformula. 
A.~Others made some very valuable suggestions; we thank them\nand also thank J.~Birman, A.~Haviv, A.~Joseph, G.~Perets, J.~D.~Rogawski,\nM.~Vergne and S.~Willerton for additional remarks and suggestions.\n\n\n\\section{Proof of Theorem~\\ref{UnknotTheorem}} \\lbl{proof}\n\nWe prove the slightly stronger equality\n\\begin{equation} \\lbl{PreciseUnknotEquation}\n {\\mathcal T}^\\hbar_{\\mathfrak g}\\Omega\n = {\\mathcal T}^\\hbar_{\\mathfrak g}\\chi^{-1}Z(\\bigcirc).\n\\end{equation}\n\n\\begin{proof} We compute the right hand side\nof~\\eqref{PreciseUnknotEquation} by computing\n${S_\\frakg}{\\iota_\\frakg}^{-1}{\\psi_\\frakg}{RT_\\frakg}(\\bigcirc)$ and using the commutativity of the monster\ndiagram. It is known (see\ne.g.~\\cite[example~11.3.10]{ChariPressley:QuantumGroups}) that if\n$\\lambda-\\rho\\in{\\mathfrak h}^\\star$ is the highest weight of some irreducible\nrepresentation $R_{\\lambda-\\rho}$ of ${\\mathfrak g}$, then\n\\[ ({\\psi_\\frakg}{RT_\\frakg}(\\bigcirc))(\\lambda)\n = \\frac{1}{\\dim R_{\\lambda-\\rho}} \\prod_{\\alpha\\in\\Delta_+} \\frac\n {\\sinh\\hbar(\\lambda,\\alpha)\/2}\n {\\sinh\\hbar(\\rho,\\alpha)\/2},\n\\]\nwhere $\\Delta_+$ is the set of positive roots of ${\\mathfrak g}$ and\n$(\\cdot,\\cdot)$ is the standard invariant bilinear form on ${\\mathfrak g}$.\nBy the Weyl dimension formula and some minor arithmetic, we get (see\nalso~\\cite[section~7]{LeMurakami:Parallel})\n\\begin{equation} \\lbl{ProductOverRoots}\n ({\\psi_\\frakg}{RT_\\frakg}(\\bigcirc))(\\lambda) = \\prod_{\\alpha\\in\\Delta_+}\n \\frac\n {\\hbar(\\rho,\\alpha)\/2}\n {\\sinh\\hbar(\\rho,\\alpha)\/2}\n \\cdot\\frac\n {\\sinh\\hbar(\\lambda,\\alpha)\/2}\n {\\hbar(\\lambda,\\alpha)\/2}\n .\n\\end{equation}\nWe can identify ${\\mathfrak g}$ and ${\\mathfrak g}^\\star$ using the form $(\\cdot,\\cdot)$,\nand then expressions like `$\\operatorname{ad}\\lambda$' make sense. 
By definition,\nif ${\\mathfrak g}_\\alpha$ is the weight space of the root $\\alpha$, then\n$\\operatorname{ad}\\lambda$ acts as multiplication by $(\\lambda,\\alpha)$\non ${\\mathfrak g}_\\alpha$, while acting trivially on ${\\mathfrak h}$. From this\nand~\\eqref{ProductOverRoots} we get\n\\[ ({\\psi_\\frakg}{RT_\\frakg}(\\bigcirc))(\\lambda) =\n \\det\\left(\\frac{\\operatorname{ad}\\hbar\\rho\/2}{\\sinh\\operatorname{ad}\\hbar\\rho\/2}\\right)^{1\/2}\n \\cdot\n \\det\\left(\\frac{\\sinh\\operatorname{ad}\\hbar\\lambda\/2}{\\operatorname{ad}\\hbar\\lambda\/2}\\right)^{1\/2} =\n j_{\\mathfrak g}^{-1\/2}(\\hbar\\rho)\\cdot j_{\\mathfrak g}^{1\/2}(\\hbar\\lambda).\n\\]\n\nThe above expression (call it $Z(\\lambda)$) makes sense\nfor all $\\lambda\\in{\\mathfrak g}^\\star$, and hence it is also\n${\\iota_\\frakg}^{-1}{\\psi_\\frakg}{RT_\\frakg}(\\bigcirc)$. So we're only left with computing ${S_\\frakg}\nZ(\\lambda)$:\n\\[ {S_\\frakg} Z(\\lambda)=\\int_{r\\in M_{i\\rho}}dr\\,Z(\\lambda-ir)=\n j_{\\mathfrak g}^{-1\/2}(\\hbar\\rho)\n \\int_{r\\in M_{i\\rho}}dr\\,j_{\\mathfrak g}^{1\/2}(\\hbar(\\lambda-ir)).\n\\]\nBy~\\eqref{FourierTransform}, this is\n\\[ j_{\\mathfrak g}^{-1\/2}(\\hbar\\rho)\\int_{r\\in M_{i\\rho}}dr\\int_{r'\\in M_{i\\rho}}dr'\\,\n e^{i\\hbar( r',\\lambda-ir)}\n = j_{\\mathfrak g}^{-1\/2}(\\hbar\\rho)\\int_{r'\\in M_{i\\rho}}dr'\\,\n e^{i\\hbar( r',\\lambda)}\\int_{r\\in M_{i\\rho}}dr\\,\n e^{i\\hbar( -ir',r)}.\n\\]\nUsing~\\eqref{FourierTransform} again, we find that the inner-most integral\nis equal to $j_{\\mathfrak g}^{1\/2}(\\hbar\\rho)$ independently of $r'$, and hence\n\\[ {S_\\frakg} Z(\\lambda)\n = \\int_{r'\\in M_{i\\rho}}dr'\\,e^{i\\hbar( r',\\lambda)},\n\\]\nand using~\\eqref{FourierTransform} one last time we find that\n\\begin{equation} \\lbl{SgZ}\n {S_\\frakg} Z(\\lambda) = j_{\\mathfrak g}^{1\/2}(\\hbar\\lambda).\n\\end{equation}\n\nThe left hand side of~\\eqref{PreciseUnknotEquation} was already computed\n(up to duality and powers 
of $\\hbar$) in Lemma~\\ref{TgOmega}. Undoing the\neffect of $\\kappa^\\hbar$ there, we get the same answer as in~\\eqref{SgZ}.\n\\end{proof}\n\n\\[ \\eepic{WheelsWheeling}{0.25} \\]\n\n\\section{Motivation}\n\nExperimental efforts to develop useful solid state quantum information processors have encountered a host of practical problems that have substantially limited progress. While the desire to reduce noise in solid state qubits appears to be the key factor that drives much of the recent work in this field, it must be acknowledged that there are formidable challenges related to architecture, circuit density, fabrication variation, calibration and control that also deserve attention. For example, a qubit that is inherently exponentially sensitive to fabrication variations with no recourse for in-situ correction holds little promise in any large scale architecture, even with the best of modern fabrication facilities. Thus, a qubit designed in the absence of information concerning its ultimate use in a larger scale system may prove to be of little utility in the future. In what follows, we present an experimental demonstration of a novel superconducting flux qubit \\cite{fluxqubit} that has been specifically designed to address several issues that pertain to the implementation of a large scale quantum information processor. While noise is not the central focus of this article, we nonetheless present experimental evidence that, despite its physical size and relative complexity, the observed flux noise in this flux qubit is comparable to the quietest such devices reported upon in the literature to date. \n\nIt has been well established that rf-SQUIDs can be used as qubits given an appropriate choice of device parameters. 
Such devices can be operated as a flux biased phase qubit using two intrawell energy levels \\cite{FluxBiasedPhaseQubit} or as a flux qubit using any pair of interwell levels \\cite{fluxqubit}. This article will focus upon an experimental demonstration of a novel rf-SQUID flux qubit that can be tuned in-situ using solely {\\it static} flux biases to compensate for fabrication variations in device parameters, both within single qubits and between multiple qubits. It is stressed that this latter issue is of critical importance in the development of useful large scale quantum information processors that could foreseeably involve thousands of qubits \\cite{DiVincenzo}. Note that in this regard, the ion trap approach to building a quantum information processor has a considerable advantage in that the qubits are intrinsically identical, albeit the challenge is then to characterize and control the trapping potential with high fidelity \\cite{Wineland}. While our research group's express interest is in the development of a large scale superconducting adiabatic quantum optimization [AQO] processor \\cite{AQC,Santoro}, it should be noted that many of the practical problems confronted herein are also of concern to those interested in implementing gate model quantum computation [GMQC] processors \\cite{GMQC} using superconducting technologies.\n\nThis article is organized as follows: In Section II, a theoretical argument is presented to justify the rf-SQUID design that has been implemented. It is shown that this design is robust against fabrication variations in Josephson junction critical current. Second, it is argued why it is necessary to include a tunable inductance in the flux qubit to account for differences in inductance between qubits in a multi-qubit architecture and to compensate for changes in qubit inductance during operation. Thereafter, the focus of the article shifts towards an experimental demonstration of the rf-SQUID flux qubit. 
The architecture of the experimental device and its operation are discussed in Section III and then a series of experiments to characterize the rf-SQUID and to highlight its control are presented in Section IV. Section V contains measurements of properties that indicate that this more complex rf-SQUID is indeed a flux qubit. Flux and critical current noise measurements and a formula for converting the measured flux noise spectral density into a free induction (Ramsey) decay time are presented in Section VI. A summary of key conclusions is provided in Section VII. Detailed calculations of rf-SQUID Hamiltonians have been placed in the appendices.\n\n\\section{rf-SQUID Flux Qubit Design}\n\nThe behavior of most superconducting devices is governed by three types of macroscopic parameters: the critical currents of any Josephson junctions, the net capacitance across the junctions and the inductance of the superconducting wiring. The Hamiltonian for many of these devices can generically be written as\n\\begin{equation}\n\\label{eqn:Hphase}\n{\\cal H}=\\sum_i\\left[\\frac{Q_i^2}{2C_i}-E_{Ji}\\cos(\\varphi_i)\\right]+\\sum_{n}U_n\\frac{\\left(\\varphi_n-\\varphi_n^x\\right)^2}{2} \\; ,\n\\end{equation}\n\n\\noindent where $C_i$, $E_{Ji}=I_i\\Phi_0\/2\\pi$ and $I_i$ denote the capacitance, Josephson energy and critical current of Josephson junction $i$, respectively. The terms in the first sum are readily recognized as being the Hamiltonians of the individual junctions for which the quantum mechanical phase across the junction $\\varphi_i$ and the charge collected on the junction $Q_i$ obey the commutation relation $[\\Phi_0\\varphi_i\/2\\pi,Q_j]=i\\hbar\\delta_{ij}$. The index $n$ in the second summation is over closed inductive loops. External fluxes threading each closed loop, $\\Phi_n^x$, have been represented as phases $\\varphi_n^x\\equiv 2\\pi\\Phi_n^x\/\\Phi_0$. 
The quantum mechanical phase drop experienced by the superconducting condensate circulating around any closed loop is denoted as $\\varphi_n$. The overall potential energy scale factor for each closed loop is given by $U_n\\equiv(\\Phi_0\/2\\pi)^2\/L_n$. Here, $L_n$ can be either a geometric inductance from wiring or Josephson inductance from large junctions \\cite{vanDuzer}. Hamiltonian (\\ref{eqn:Hphase}) will be used as the progenitor for all device Hamiltonians that follow.\n\n\\subsection{Compound-Compound Josephson Junction Structure}\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{RFSSummary.pdf}\n\\caption{\\label{fig:rfssummary} (color online) a) A single junction rf-SQUID qubit. b) Compound Josephson Junction (CJJ) rf-SQUID qubit. c) Compound-Compound Josephson Junction (CCJJ) rf-SQUID qubit. Junction critical currents $I_i$ and junction phases $\\varphi_i$ ($1\\leq i \\leq 4$) as noted. Net device phases are denoted as $\\varphi_{\\alpha}$, where $\\alpha\\in\\left(\\ell ,r,q\\right)$. External fluxes, $\\Phi_n^x$, are represented as phases $\\varphi_{n}^x\\equiv2\\pi\\Phi_n^x\/\\Phi_0$, where $n\\in\\left(L,R,\\text{cjj},\\text{ccjj},q\\right)$. Inductance of the rf-SQUID body, CJJ loop and CCJJ loop are denoted as $L_{\\text{body}}$, $L_{\\text{cjj}}$ and $L_{\\text{ccjj}}$, respectively.}\n\\end{figure}\n\nA sequence of rf-SQUID architectures are depicted in Fig.~\\ref{fig:rfssummary}. The most primitive version of such a device is depicted in Fig.~\\ref{fig:rfssummary}a, and more complex variants in Figs.\\ref{fig:rfssummary} b and \\ref{fig:rfssummary}c. For the single junction rf-SQUID (Fig.~\\ref{fig:rfssummary}a), the phase across the junction can be equated to the phase drop across the body of the rf-SQUID: $\\varphi_1=\\varphi_q$. 
The Hamiltonian for this device can then be written as\n\\begin{subequations}\n\\begin{equation}\n\\label{eqn:1JHeff}\n{\\cal H}=\\frac{Q_q^2}{2C_q}+V(\\varphi_q) \\; ;\n\\end{equation}\n\\vspace{-0.12in}\n\\begin{equation}\n\\label{eqn:1JV}\nV(\\varphi_q)=U_q\\Big\\{\\frac{\\left(\\varphi_q-\\varphi_q^x\\right)^2}{2}-\\beta\\cos\\left(\\varphi_q\\right)\\Big\\} \\; ;\n\\end{equation}\n\\vspace{-0.12in}\n\\begin{equation}\n\\label{eqn:1Jbeta}\n\\beta=\\frac{2\\pi L_q I_q^c}{\\Phi_0} \\; ,\n\\end{equation}\n\\end{subequations}\n\n\\noindent with the qubit inductance $L_q\\equiv L_{\\text{body}}$, qubit capacitance $C_q\\equiv C_1$ and qubit critical current $I_q^c\\equiv I_1$ in this particular case. If this device has been designed such that $\\beta>1$ and is flux biased such that $\\varphi_q^x\\approx\\pi$, then the potential energy $V(\\varphi_q)$ will be bistable. With increasing $\\beta$ an appreciable potential energy barrier forms between the two local minima of $V(\\varphi_q)$, through which the two lowest lying states of the rf-SQUID may couple via quantum tunneling. It is these two lowest lying states, which are separated from all other rf-SQUID states by an energy of order of the rf-SQUID plasma energy $\\hbar\\omega_p\\equiv\\hbar\/\\sqrt{L_qC_1}$, that form the basis of a qubit. One can write an effective low energy version of Hamiltonian (\\ref{eqn:1JHeff}) as \\cite{Leggett}\n\\begin{equation}\n\\label{eqn:Hqubit}\n{\\cal H}_{q}=-{\\frac{1}{2}}\\left[\\epsilon\\sigma_z+\\Delta_q\\sigma_x\\right] \\;\\; ,\n\\end{equation}\n\n\\noindent where $\\epsilon=2\\left|I_q^p\\right|\\left(\\Phi_q^x-\\Phi_0\/2\\right)$, $\\left|I_q^p\\right|$ is the magnitude of the persistent current that flows about the inductive $q$ loop when the device is biased hard [$\\epsilon\\gg\\Delta_q$] to one side and $\\Delta_q$ represents the tunneling energy between the otherwise degenerate counter-circulating persistent current states at $\\Phi^x_q=\\Phi_0\/2$. 
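The bistability condition above can be made concrete with a minimal numerical sketch (dimensionless units with $U_q=1$ and hypothetical $\beta$ values): sampling $V(\varphi_q)$ of Eq.~(\ref{eqn:1JV}) at $\varphi_q^x=\pi$ yields two local minima for $\beta>1$ and a single well for $\beta<1$.

```python
import numpy as np

def potential(phi, phi_x, beta, U_q=1.0):
    """Single junction rf-SQUID potential V(phi_q), Eq. (1JV), in units of U_q."""
    return U_q * ((phi - phi_x) ** 2 / 2.0 - beta * np.cos(phi))

def count_local_minima(V):
    """Count interior local minima of a finely sampled 1D potential."""
    return int(np.sum((V[1:-1] < V[:-2]) & (V[1:-1] < V[2:])))

phi = np.linspace(-2 * np.pi, 4 * np.pi, 20001)
# Flux biased at half a flux quantum: phi_q^x = pi
V_bistable = potential(phi, np.pi, beta=2.0)    # beta > 1: double well
V_monostable = potential(phi, np.pi, beta=0.5)  # beta < 1: single well
```

The two wells of the $\beta=2$ case host the counter-circulating persistent current states referred to in the text.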
\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{QubitComparison_ver2.pdf}\n\\caption{\\label{fig:qubitcomparison} (color online) Depiction of the two lowest lying states of an rf-SQUID at degeneracy ($\\epsilon=0$) with nomenclature for the energy basis ($\\ket{g}$,$\\ket{e}$) and flux basis ($\\ket{\\downarrow}$,$\\ket{\\uparrow}$) as indicated.}\n\\end{figure}\n\nA depiction of the one-dimensional potential energy and the two lowest energy states of an rf-SQUID at degeneracy ($\\Phi_q^x=\\Phi_0\/2$) for nominal device parameters is shown in Fig.~\\ref{fig:qubitcomparison}. In this diagram, the ground and first excited state are denoted by $\\ket{g}$ and $\\ket{e}$, respectively. These two energy levels constitute the energy eigenbasis of a flux qubit. An alternate representation of these states, which is frequently referred to as either the flux or persistent current basis, can be formed by taking the symmetric and antisymmetric combinations of the energy eigenstates: $\\ket{\\downarrow}=\\left(\\ket{g}+\\ket{e}\\right)\/\\sqrt{2}$ and $\\ket{\\uparrow}=\\left(\\ket{g}-\\ket{e}\\right)\/\\sqrt{2}$, which yield two roughly gaussian shaped wavefunctions that are centered about each of the wells shown in Fig.~\\ref{fig:qubitcomparison}. The magnitude of the persistent current used in Eq.~(\\ref{eqn:Hqubit}) is then defined by $\\left|I_q^p\\right|\\equiv\\left|\\bra{\\uparrow}\\left(\\Phi_q-\\Phi_0\/2\\right)\/L_q\\ket{\\uparrow}\\right|$. The tunneling energy is given by $\\Delta_q=\\bra{e}{\\cal H}_q\\ket{e}-\\bra{g}{\\cal H}_q\\ket{g}$. \n\nThe aforementioned dual representation of the states of a flux qubit allows two distinct modes of operation of the flux qubit as a binary logical element with a logical basis defined by the states $\\ket{0}$ and $\\ket{1}$. In the first mode, the logical basis is mapped onto the energy eigenbasis: $\\ket{0}\\rightarrow\\ket{g}$ and $\\ket{1}\\rightarrow\\ket{e}$. 
This mode is useful for optimizing the coherence times of flux qubits as the dispersion of Hamiltonian (\\ref{eqn:Hqubit}) is flat as a function of $\\Phi_q^x$ to first order for $\\epsilon\\approx 0$, thus providing some protection from the effects of low frequency flux noise \\cite{optimalpoint}. However, this is not a convenient mode of operation for implementing interactions between flux qubits \\cite{parametriccoupling1,parametriccoupling2}. In the second mode, the logical basis is mapped onto the persistent current basis: $\\ket{0}\\rightarrow\\ket{\\downarrow}$ and $\\ket{1}\\rightarrow\\ket{\\uparrow}$. This mode of operation facilitates the implementation of inter-qubit interactions via inductive couplings, but does so at the expense of coherence times. GMQC schemes exist that attempt to leverage the benefits of both of the above modes of operation \\cite{IBM,Oliver,NiftyItalianPaper}. On the other hand, those interested in implementing AQO strictly use the second mode of operation cited above. This, very naturally, leads to some interesting properties: First and foremost, in the coherent regime at $\\epsilon=0$, the groundstate maps onto $\\ket{g}=\\left(\\ket{0}+\\ket{1}\\right)\/\\sqrt{2}$, which implies that it is a superposition state with a fixed phase between components in the logical basis. Second, the logical basis is not coincident with the energy eigenbasis, except in the extreme limit $\\epsilon\/\\Delta_q\\gg 1$. As such, the qubit should not be viewed as an otherwise free spin-1\/2 in a magnetic field, rather it maps onto an Ising spin subjected to a magnetic field with both a longitudinal ($B_z\\rightarrow\\epsilon$) and a transverse ($B_x\\rightarrow\\Delta_q$) component \\cite{Ising}. 
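The Ising-spin mapping can be verified by direct diagonalization of Hamiltonian (\ref{eqn:Hqubit}). In this sketch (illustrative energies in arbitrary units), $\sigma_z$ is taken diagonal in the persistent current basis; the gap then follows the closed form $\sqrt{\epsilon^2+\Delta_q^2}$ and, at $\epsilon=0$, the ground state is an equal-weight superposition of $\ket{\downarrow}$ and $\ket{\uparrow}$.

```python
import numpy as np

def h_qubit(eps, delta):
    """Effective two-level Hamiltonian, Eq. (Hqubit): H = -(eps*sz + delta*sx)/2,
    with sigma_z diagonal in the persistent current basis (|down>, |up>)."""
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    return -0.5 * (eps * sz + delta * sx)

def gap_and_ground(eps, delta):
    """Return the energy gap and the ground state amplitudes."""
    vals, vecs = np.linalg.eigh(h_qubit(eps, delta))  # eigenvalues ascending
    return vals[1] - vals[0], vecs[:, 0]

gap_degenerate, g_degenerate = gap_and_ground(0.0, 1.0)  # eps = 0: gap = Delta_q
gap_biased, g_biased = gap_and_ground(3.0, 4.0)          # gap = sqrt(9 + 16) = 5
```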
In this case, it is the competition between $\\epsilon$ and $\\Delta_q$ which dictates the relative amplitudes of $\\ket{\\downarrow}$ and $\\ket{\\uparrow}$ in the groundstate wavefunction $\\ket{g}$, thereby enabling logical operations that make {\\it no} explicit use of the excited state $\\ket{e}$. This latter mode of operation of the flux qubit has connections to the fields of quantum magnetism \\cite{Anderson} and optimization theory \\cite{Kirkpatrick}. Interestingly, systems of coupled flux qubits that are operated in this mode bear considerable resemblance to Feynman's original vision of how to build a quantum computer \\cite{Feynman}.\n\nWhile much seminal work has been done on single junction and the related 3-Josephson junction rf-SQUID flux qubit\\cite{3JJfluxqubits,MooijMore3JJFluxQubits,MooijSuperposition,MooijCoherentDynamics,MooijCoupledSpectroscopy,OliverMachZehnder,OliverLandauZener,OliverAmplitudeSpectroscopy,ClarkeQubits,IPHT4Q,1OverFFluxQubit1,1OverFFluxQubit2}, it has been recognized that such devices would be impractical in a large scale quantum information processor as their properties are exceptionally sensitive to fabrication variations. In particular, in the regime $E_{J1}\\gg\\hbar\\omega_p$, $\\Delta_q\\propto\\exp(-\\hbar\\omega_p\/E_{J1})$. Thus, it would be unrealistic to expect a large scale processor involving a multitude of such devices to yield from even the best superconducting fabrication facility. Moreover, implementation of AQO requires the ability to actively tune $\\Delta_q$ from being the dominant energy scale in the qubit to being essentially negligible during the course of a computation. Thus the single junction rf-SQUID flux qubit is of limited practical utility and has passed out of favor as a prototype qubit.\n\nThe next step in the evolution of the single junction flux qubit and related variants was the compound Josephson junction (CJJ) rf-SQUID, as depicted in Fig.~\\ref{fig:rfssummary}b. 
This device was first reported upon by Han, Lapointe and Lukens \\cite{CJJ} and was the first type of flux qubit to display signatures of quantum superposition of macroscopic states \\cite{LukensSuperposition}. The CJJ rf-SQUID has been used by other research groups\\cite{ItaliansCJJ,NiftyItalianPaper,MoreNiftyItalianPaper} and a related 4-Josephson junction device has been proposed \\cite{3JJfluxqubits,MooijMore3JJFluxQubits}. The CJJ rf-SQUID flux qubit and related variants have reappeared in a gradiometric configuration in more recent history \\cite{HOMRT,IBM,Delft}. Here, the single junction of Fig.~\\ref{fig:rfssummary}a has been replaced by a flux biased dc-SQUID of inductance $L_{\\text{cjj}}$ that allows one to tune the critical current of the rf-SQUID in-situ. Let the applied flux threading this structure be denoted by $\\Phi^x_{\\text{cjj}}$. It is shown in Appendix A that the Hamiltonian for this system can be written as\n\\begin{subequations}\n\\begin{equation}\n\\label{eqn:2JHeff}\n{\\cal H}=\\sum_n\\left[\\frac{Q_n^2}{2C_n}+U_n\\frac{\\left(\\varphi_n-\\varphi_n^x\\right)^2}{2}\\right]-U_q\\beta_{\\text{eff}}\\cos\\left(\\varphi_q-\\varphi_q^0\\right) \\; ,\n\\end{equation}\n\n\\noindent where the sum is over $n\\in\\left\\{q,\\text{cjj}\\right\\}$, $C_q\\equiv C_1+C_2$, $1\/C_{\\text{cjj}}\\equiv 1\/C_1+1\/C_2$ and $L_q\\equiv L_{\\text{body}}+L_{\\text{cjj}}\/4$. 
The 2-dimensional potential energy in Hamiltonian (\\ref{eqn:2JHeff}) is characterized by\n\\begin{equation}\n\\label{eqn:2JBeff}\n\\beta_{\\text{eff}}=\\beta_+\\cos\\left(\\frac{\\varphi_{\\text{cjj}}}{2}\\right)\\sqrt{1+\\left[\\frac{\\beta_-}{\\beta_+}\\tan(\\varphi_{\\text{cjj}}\/2)\\right]^2} \\; ;\n\\end{equation}\n\\vspace{-12pt}\n\\begin{equation}\n\\label{eqn:2JOffset}\n\\varphi_q^0\\equiv 2\\pi\\frac{\\Phi_q^0}{\\Phi_0} =-\\arctan\\left(\\frac{\\beta_-}{\\beta_+}\\tan\\left(\\varphi_{\\text{cjj}}\/2\\right)\\right) \\; ;\n\\end{equation}\n\\vspace{-12pt}\n\\begin{equation}\n\\label{eqn:2Jbetapm}\n\\beta_{\\pm}\\equiv 2\\pi L_q\\left(I_{1}\\pm I_{2}\\right)\/\\Phi_0 \\; .\n\\end{equation}\n\\end{subequations}\n\n\\noindent Note that if $\\cos\\left(\\varphi_{\\text{cjj}}\/2\\right)<0$, then $\\beta_{\\text{eff}}<0$ in Hamiltonian (\\ref{eqn:2JHeff}). This feature provides a natural means of shifting the qubit degeneracy point from $\\varphi_q^x=\\pi$, as in the single junction rf-SQUID case, to $\\varphi_q^x\\approx 0$. It has been assumed in all that follows that this $\\pi$-shifted mode of operation of the CJJ rf-SQUID has been invoked.\n\nHamiltonian (\\ref{eqn:2JHeff}) is similar to that of a single junction rf-SQUID modulo the presence of a $\\varphi_{\\text{cjj}}$-dependent tunnel barrier through $\\beta_{\\text{eff}}$ and an effective critical current $I_q^c\\equiv I_1+I_2$.\nFor $L_{\\text{cjj}}\/L_q\\ll 1$ it is reasonable to assume that $\\varphi_{\\text{cjj}}\\approx 2\\pi\\Phi^x_{\\text{cjj}}\/\\Phi_0$. Consequently, the CJJ rf-SQUID facilitates in-situ tuning of the tunneling energy through $\\Phi^x_{\\text{cjj}}$. While this is clearly desirable, one does pay for the additional flexibility by adding more complexity to the rf-SQUID design and thus more potential room for fabrication variations. The minimum achievable barrier height is ultimately limited by any so-called {\\it junction asymmetry} which leads to finite $\\beta_{-}$. 
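The consequence of junction asymmetry can be seen by evaluating Eqs.~(\ref{eqn:2JBeff}) and (\ref{eqn:2JOffset}) directly. In the sketch below (hypothetical $\beta_{\pm}$ values), a symmetric CJJ ($\beta_-=0$) produces zero apparent offset for all $\varphi_{\text{cjj}}$, whereas a few percent asymmetry produces a $\varphi_{\text{cjj}}$-dependent offset $\varphi_q^0$, i.e. the nonlinear crosstalk discussed in the text.

```python
import numpy as np

def cjj_beta_eff(phi_cjj, beta_plus, beta_minus):
    """Effective rf-SQUID beta of the CJJ, Eq. (2JBeff)."""
    t = np.tan(phi_cjj / 2.0)
    return beta_plus * np.cos(phi_cjj / 2.0) * np.sqrt(
        1.0 + (beta_minus / beta_plus * t) ** 2)

def cjj_offset(phi_cjj, beta_plus, beta_minus):
    """Apparent qubit phase offset phi_q^0, Eq. (2JOffset)."""
    return -np.arctan(beta_minus / beta_plus * np.tan(phi_cjj / 2.0))

# Range stops short of phi_cjj = pi, where tan(phi_cjj/2) diverges
phi = np.linspace(0.0, 0.9 * np.pi, 500)
offset_sym = cjj_offset(phi, 2.0, 0.0)    # symmetric junctions: no offset
offset_asym = cjj_offset(phi, 2.0, 0.1)   # 5% asymmetry: phi_cjj-dependent offset
```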
In practice, for $\\beta_-\/\\beta_+=(I_{1}-I_{2})\/(I_{1}+I_{2})\\lesssim0.05$, this effect is of little concern. However, a more insidious effect of junction asymmetry can be seen via the change of variables $\\varphi_q-\\varphi_q^0\\rightarrow\\varphi_q$ in Eq.~(\\ref{eqn:2JHeff}), namely an apparent $\\Phi^x_{\\text{cjj}}$-dependent flux offset: $\\Phi^x_q\\rightarrow\\Phi^x_q-\\Phi_q^0(\\Phi^x_{\\text{cjj}})$. If the purpose of the CJJ is to simply allow the experimentalist to target a particular $\\Delta_q$, then the presence of $\\Phi_q^0(\\Phi^x_{\\text{cjj}})$ can be readily compensated via the application of a static flux offset. On the other hand, any mode of operation that explicitly requires altering $\\Delta_q$ during the course of a quantum computation \\cite{IBM,Oliver,Kaminsky,Aharonov,NiftyItalianPaper,MoreNiftyItalianPaper} would also require active compensation for what amounts to a nonlinear crosstalk from $\\Phi^x_{\\text{cjj}}$ to $\\Phi^x_q$. While it may be possible to approximate this effect as a linear crosstalk over a small range of $\\Phi^x_{\\text{cjj}}$ if the junction asymmetry is small, one would nonetheless need to implement precise {\\it time-dependent} flux bias compensation to utilize the CJJ rf-SQUID as a flux qubit in any quantum computation scheme. While this may be feasible in laboratory scale systems, it is by no means desirable nor practical on a large scale quantum information processor.\n\nA second problem with the CJJ rf-SQUID flux qubit is that one cannot homogenize the qubit parameters $\\left|I_q^p\\right|$ and $\\Delta_q$ between a multitude of such devices that possess different $\\beta_{\\pm}$ over a broad range of $\\Phi^x_{\\text{cjj}}$. While one can accomplish this task to a limited degree in a perturbative manner about carefully chosen CJJ biases for each qubit \\cite{synchronization}, the equivalence of $\\left|I_q^p\\right|$ and $\\Delta_q$ between those qubits will be approximate at best. 
Therefore, the CJJ rf-SQUID does not provide a convenient means of accommodating fabrication variations between multiple flux qubits in a large scale processor.\n\nGiven that the CJJ rf-SQUID provides additional flexibility at a cost, it is by no means obvious that one can design a better rf-SQUID flux qubit by adding even more junctions. Specifically, it is desirable to have a device whose imperfections can be mitigated purely by the application of {\\it time-independent} compensation signals. The novel rf-SQUID topology shown in Fig.~\\ref{fig:rfssummary}c, hereafter referred to as the compound-compound Josephson junction (CCJJ) rf-SQUID, satisfies this latter constraint. Here, each junction of the CJJ in Fig.~\\ref{fig:rfssummary}b has been replaced by a dc-SQUID, which will be referred to as left ($L$) and right ($R$) minor loops, and will be subjected to external flux biases $\\Phi_L^x$ and $\\Phi_R^x$, respectively. The role of the CJJ loop in Fig.~\\ref{fig:rfssummary}b is now played by the CCJJ loop of inductance $L_{\\text{ccjj}}$ which will be subjected to an external flux bias $\\Phi^x_{\\text{ccjj}}$. It is shown in Appendix B that if one chooses {\\it static} values of $\\Phi_L^x$ and $\\Phi_R^x$ such that the net critical currents of the minor loops are equal, then it can be described by an effective two-dimensional Hamiltonian of the form\n\\begin{subequations}\n\\begin{equation}\n\\label{eqn:4JHeff}\n{\\cal H}=\\sum_n\\left[\\frac{Q_n^2}{2C_n}+U_n\\frac{\\left(\\varphi_n-\\varphi_n^x\\right)^2}{2}\\right]-U_q\\beta_{\\text{eff}}\\cos\\left(\\varphi_q-\\varphi_q^0\\right) \\; ,\n\\end{equation}\n\n\\noindent where the sum is over $n\\in\\left\\{q,\\text{ccjj}\\right\\}$, $C_q\\equiv C_1+C_2+C_3+C_4$, $1\/C_{\\text{ccjj}}\\equiv 1\/(C_1+C_2)+1\/(C_3+C_4)$ and $L_q\\equiv L_{\\text{body}}+L_{\\text{ccjj}}\/4$. 
The effective 2-dimensional potential energy in Hamiltonian (\\ref{eqn:4JHeff}) is characterized by\n\\begin{equation}\n\\label{eqn:4JBeffbalanced}\n\\beta_{\\text{eff}}=\\beta_+(\\Phi^x_{L},\\Phi^x_{R})\\cos\\left(\\frac{\\varphi_{\\text{ccjj}}-\\varphi^0_{\\text{ccjj}}}{2}\\right) \\;\\; ,\n\\end{equation}\n\n\\noindent where $\\beta_+(\\Phi^x_{L},\\Phi^x_{R})=2\\pi L_q I_q^c(\\Phi^x_{L},\\Phi^x_{R})\/\\Phi_0$ with \n\\begin{displaymath}\nI_q^c(\\Phi^x_{L},\\Phi^x_{R})\\equiv (I_1+I_2)\\cos\\left(\\frac{\\pi\\Phi^x_{L}}{\\Phi_0}\\right)+(I_3+I_4)\\cos\\left(\\frac{\\pi\\Phi^x_{R}}{\\Phi_0}\\right) \\; . \n\\end{displaymath}\n\n\\noindent Given an appropriate choice of $\\Phi^x_{L}$ and $\\Phi^x_{R}$, the $q$ and ccjj loops will possess apparent flux offsets of the form\n\\begin{equation}\n\\label{eqn:qoffset}\n\\Phi_q^0=\\frac{\\Phi_0\\varphi_q^0}{2\\pi}=\\frac{\\Phi_{L}^0+\\Phi_{R}^0}{2}\\; ;\n\\end{equation}\n\\vspace{-12pt}\n\\begin{equation}\n\\label{eqn:ccjjoffset}\n\\Phi^0_{\\text{ccjj}}=\\frac{\\Phi_0\\varphi^0_{\\text{ccjj}}}{2\\pi}=\\Phi_{L}^0-\\Phi_{R}^0\\; ,\n\\end{equation}\n\\end{subequations}\n\n\\noindent where $\\Phi_{L(R)}^0$ is given by Eq.~(\\ref{eqn:4JMinorOffset}), which is purely a function of $\\Phi^x_{L(R)}$ and junction critical currents. As such, the apparent flux offsets are {\\it independent} of $\\Phi^x_{\\text{ccjj}}$. Under such conditions, we deem the CCJJ to be {\\it balanced}. Given that the intended mode of operation is to hold $\\Phi_L^x$ and $\\Phi_R^x$ constant, then the offset phases $\\varphi_L^0$ and $\\varphi_R^0$ will also be constant. The result is that Hamiltonian (\\ref{eqn:4JHeff}) for the CCJJ rf-SQUID becomes homologous to that of an ideal CJJ rf-SQUID [$\\beta_-=0$ in Eqs.~(\\ref{eqn:2JBeff}) and (\\ref{eqn:2JOffset})] with apparent {\\it static} flux offsets. 
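The balancing procedure can be sketched numerically from the expression for $I_q^c(\Phi^x_{L},\Phi^x_{R})$ given above: since $\beta_{L(R)}$ is proportional to the net critical current of the corresponding minor loop, equalizing the two minor-loop critical currents balances the CCJJ. The junction critical currents below are hypothetical, with a few percent spread; fluxes are in units of $\Phi_0$.

```python
import numpy as np

def minor_loop_ic(i_a, i_b, phi_x):
    """Net critical current of one flux-biased minor loop (a dc-SQUID);
    phi_x in units of Phi_0."""
    return (i_a + i_b) * np.cos(np.pi * phi_x)

def balance_right_bias(i1, i2, i3, i4, phi_L):
    """Solve for the static Phi_R^x that equalizes the two minor-loop
    net critical currents (the CCJJ balancing condition)."""
    target = minor_loop_ic(i1, i2, phi_L)
    return np.arccos(target / (i3 + i4)) / np.pi

def qubit_ic(i1, i2, i3, i4, phi_L, phi_R):
    """Net qubit critical current I_q^c(Phi_L^x, Phi_R^x)."""
    return minor_loop_ic(i1, i2, phi_L) + minor_loop_ic(i3, i4, phi_R)

# Hypothetical junction critical currents (uA) with fabrication spread:
i1, i2, i3, i4 = 1.02, 0.99, 1.01, 0.97
phi_L = 0.1                                    # chosen static bias
phi_R = balance_right_bias(i1, i2, i3, i4, phi_L)
```

Choosing $\Phi^x_{L}$ per device in this way is also how $\beta_+$ can be matched between qubits with different junction critical currents.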
Such static offsets can readily be calibrated and compensated for in-situ using either analog control lines or on-chip programmable flux sources \\cite{PCC}. For typical device parameters and junction variability on the order of a few percent, these offsets will be $\\sim 1\\rightarrow 10\\,$m$\\Phi_0$. Eqs.~(\\ref{eqn:4JHeff})-(\\ref{eqn:ccjjoffset}) with $\\Phi_q^0=\\Phi^0_{\\text{ccjj}}=0$ will be referred to hereafter as the ideal CCJJ rf-SQUID model.\n\nThe second advantage of the CCJJ rf-SQUID is that one can readily accommodate variations in critical current between multiple flux qubits. Note in Eq.~(\\ref{eqn:4JBeffbalanced}) that the maximum height of the tunnel barrier is governed by $\\beta_+(\\Phi^x_{L},\\Phi^x_{R})\\equiv\\beta_L(\\Phi^x_{L})+\\beta_R(\\Phi^x_{R})$, where $\\beta_{L(R)}$ is given by Eq.~(\\ref{eqn:4JMinorOffset}). One is free to choose any pair of $(\\Phi^x_{L},\\Phi^x_{R})$ such that $\\beta_L(\\Phi^x_{L})=\\beta_R(\\Phi^x_{R})$, as dictated by Eq.~(\\ref{eqn:balancedapprox}). Consequently, $\\beta_+=2\\beta_R(\\Phi^x_{R})$ in Eq.~(\\ref{eqn:4JBeffbalanced}). One can then choose $\\Phi^x_{R}$, which then dictates $\\Phi^x_{L}$, so as to homogenize $\\beta_+$ between multiple flux qubits. The result is a set of nominally uniform flux qubits where the particular choice of $(\\Phi^x_{L},\\Phi^x_{R})$ for each qubit merely results in unique static flux offsets $\\Phi^0_q$ and $\\Phi^0_{\\text{ccjj}}$ for each device.\n\nTo summarize up to this point, the CCJJ rf-SQUID is robust against Josephson junction fabrication variations both within an individual rf-SQUID and between a plurality of such devices. 
The variations can be effectively tuned out purely through the application of {\\it static} flux biases, which is of considerable advantage when envisioning the implementation of large scale quantum information processors that use flux qubits.\n\n\\subsection{$L$-tuner}\n\nThe purpose of the CCJJ structure was to provide a means of coming to terms with fabrication variations in Josephson junctions both within individual flux qubits and between sets of such devices. However, junctions are not the only key parameter that may vary between devices, nor are fabrication variations responsible for all of the potential variation. In particular, it has been experimentally demonstrated that the inductance of a qubit $L_q$ that is connected to other qubits via tunable mutual inductances is a function of the coupler configuration \\cite{cjjcoupler}. Let the bare inductance of the qubit in the presence of no couplers be represented by $L_q^0$ and the mutual inductance between the qubit and coupler $i$ be represented by $M_{\\text{co},i}$. If the coupler possesses a first order susceptibility $\\chi_i$, as defined in Ref.~\\onlinecite{cjjcoupler}, then the net inductance of the qubit can be expressed as \n\\begin{equation}\n\\label{eqn:LqNoTuner}\nL_q=L_q^0-\\sum_{i}M^2_{\\text{co},i}\\chi_i\\; . \n\\end{equation}\n\n\\noindent Given that qubit properties such as $\\Delta_q$ can be exponentially sensitive to variations in $L_q$, then it is undesirable to have variations in $L_q$ between multiple flux qubits or to have $L_q$ change during operation. This could have a deleterious impact upon AQO in which it is typically assumed that all qubits are identical and they are intended to be annealed in unison \\cite{AQC}. 
From the perspective of GMQC, one could very well attempt to compensate for such effects in a CJJ or CCJJ rf-SQUID flux qubit by adjusting the tunnel barrier height to hold $\\Delta_q$ constant, but doing so alters $\\left|I_q^p\\right|$, which then alters the coupling of the qubit to radiative sources, thus demanding further compensation. Consequently, it also makes sense from the perspective of GMQC that one find a means of rendering $L_q$ uniform between multiple qubits and insensitive to the settings of inductive coupling elements.\n\n\\begin{figure}\n\\includegraphics[width=2.5in]{LTuner.pdf}\n\\caption{\\label{fig:LTuner} (color online) A CCJJ rf-SQUID with $L$-tuner connected to multiple tunable inductive couplers via transformers with mutual inductances $M_{\\text{co},i}$ and possessing susceptibilities $\\chi_i$. The $L$-tuner is controlled via the external flux bias $\\Phi^x_{LT}$}\n\\end{figure}\n\nIn order to compensate for variations in $L_q$, we have inserted a tunable Josephson inductance \\cite{vanDuzer} into the CCJJ rf-SQUID body, as depicted in Fig.~\\ref{fig:LTuner}. We refer to this element as an inductance ($L$)-tuner. This relatively simple element comprises a dc-SQUID whose critical current vastly exceeds that of the CCJJ structure, thus ensuring negligible phase drop across the $L$-tuner. Assuming that the inductance of the $L$-tuner wiring is negligible, the $L$-tuner modifies Eq.~(\\ref{eqn:LqNoTuner}) in the following manner:\n\\begin{equation}\n\\label{eqn:LqWithTuner}\nL_q=L_q^0-\\sum_{i}M^2_{\\text{co},i}\\chi_i + \\frac{L_{J0}}{\\cos(\\pi\\Phi^x_{LT}\/\\Phi_0)}\\; ,\n\\end{equation}\n\n\\noindent where $L_{J0}\\equiv\\Phi_0\/2\\pi I^c_{LT}$, $I^c_{LT}$ is the net critical current of the two junctions in the $L$-tuner and $\\Phi^x_{LT}$ is an externally applied flux bias threading the $L$-tuner loop. 
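A numerical sketch of Eq.~(\ref{eqn:LqWithTuner}), and of the inversion used to hold $L_q$ fixed as couplers are adjusted, is given below. All inductances are in pH, fluxes are in units of $\Phi_0$, and every parameter value is hypothetical; the inversion is only valid when the targeted inductance satisfies the $L_q>L_q^0-\sum_iM_{\text{co},i}^2\chi_i+L_{J0}$ constraint discussed in the text.

```python
import math

def l_tuner_lq(L_q0, M_co, chi, L_J0, phi_lt):
    """Net qubit inductance with L-tuner, Eq. (LqWithTuner).
    Inductances in pH, M_co in pH, chi in 1/pH, phi_lt = Phi_LT^x / Phi_0."""
    coupler_load = sum(m * m * x for m, x in zip(M_co, chi))
    return L_q0 - coupler_load + L_J0 / math.cos(math.pi * phi_lt)

def phi_lt_for_target(L_target, L_q0, M_co, chi, L_J0):
    """Invert Eq. (LqWithTuner) for the L-tuner bias that holds L_q = L_target."""
    coupler_load = sum(m * m * x for m, x in zip(M_co, chi))
    needed = L_target - L_q0 + coupler_load  # must satisfy needed >= L_J0
    return math.acos(L_J0 / needed) / math.pi

# Hypothetical device parameters:
L_q0, L_J0 = 265.0, 26.0        # pH
M_co = [16.0, 16.0]             # pH
chi = [0.002, -0.001]           # 1/pH: one coupler AFM, one FM
L_target = 300.0                # pH
phi_lt = phi_lt_for_target(L_target, L_q0, M_co, chi, L_J0)
```

Recomputing the coupler configuration and re-solving for `phi_lt` is exactly the compensation loop described in the following paragraph.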
For modest flux biases such that $I^c_{LT}\\cos(\\pi\\Phi^x_{LT}\/\\Phi_0)\\gg I_q^c$, Eq.~(\\ref{eqn:LqWithTuner}) is a reliable model of the physics of the $L$-tuner.\n\nGiven that the $L$-tuner is only capable of augmenting $L_q$, one can only choose to target $L_q>L_q^0-\\sum_iM_{\\text{co},i}^2\\chi^{\\text{AFM}}_i+L_{J0}$, where $\\chi^{\\text{AFM}}_i$ is the maximum antiferromagnetic (AFM) susceptibility of inter-qubit coupler $i$. In practice, we choose to restrict operation of the couplers to the range $-\\chi_i^{\\text{AFM}}<\\chi_i<\\chi_i^{\\text{AFM}}$ such that the maximum qubit inductance that will be encountered is $L_q=L_q^0+\\sum_iM_{\\text{co},i}^2\\chi^{\\text{AFM}}_i+L_{J0}$. We then choose to prebias $\\Phi^x_{\\text{LT}}$ for each qubit to match the maximum realized $L_q\\equiv L^{\\text{max}}_q$ amongst a set of flux qubits. Thereafter, one can hold $L_q=L^{\\text{max}}_q$ as couplers are adjusted by inverting Eq.~(\\ref{eqn:LqWithTuner}) to solve for an appropriate value of $\\Phi^x_{LT}$. Thus, the $L$-tuner provides a ready means of compensating for small variations in $L_q$ between flux qubits and of holding $L_q$ constant as inductive inter-qubit coupling elements are adjusted.\n\n\\section{Device Architecture, Fabrication and Readout Operation}\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{Architecture_Schematic.pdf} \\\\\n\\includegraphics[width=3.25in]{Architecture_CrossSection.pdf} \\\\\n\\includegraphics[width=3.25in]{Architecture_OpticalQubit.pdf}\n\\caption{\\label{fig:architecture} (color online) a) High level schematic of the analog devices on the chip reported upon herein. Qubits are represented as light grey elongated objects and denoted as $q_0\\ldots q_7$. One representative readout (RO), CCJJ and $L$-tuner ($LT$) each have been indicated in dashed boxes. Couplers (CO) are represented as dark objects located at the intersections of the qubit bodies. b) SEM of a cross-section of the fabrication profile. 
Metal layers denoted as BASE, WIRA, WIRB and WIRC. Insulating layers labeled as SiO$_2$. Topmost insulator has not been planarized in this test structure, but is planarized in the full circuit process. An example via (VIA), Josephson junction (JUNC, AlO$_x$\/Al) and resistor (RESI) are also indicated. c) Optical image of a portion of a device completed up to WIRB. Portions of qubits $q_0\\ldots q_3$ and the entirety of $q_4$ are visible.}\n\\end{figure}\n\nTo test the CCJJ rf-SQUID flux qubit, we fabricated a circuit containing 8 such devices with pairwise interactions mediated by a network of 16 in-situ tunable CJJ rf-SQUID inter-qubit couplers \\cite{cjjcoupler}. Each qubit was also coupled to its own dedicated quantum flux parametron (QFP)-enabled readout \\cite{QFP}. A high level schematic of the device architecture is shown in Fig.~\\ref{fig:architecture}a. External flux biases were provided to target devices using a sparse combination of analog current bias lines to facilitate device calibration and an array of single flux quantum (SFQ) based on-chip programmable control circuitry (PCC) \\cite{PCC}. \n\nThe device was fabricated from an oxidized Si wafer with Nb\/Al\/Al$_2$O$_3$\/Nb trilayer junctions and four Nb wiring layers separated by planarized plasma enhanced chemical vapor deposited SiO$_{2}$. A scanning electron micrograph of the process cross-section is shown in Fig.~\\ref{fig:architecture}b. The Nb metal layers have been labeled as BASE, WIRA, WIRB and WIRC. The flux qubit wiring was primarily located in WIRB and consisted of $2\\,\\mu$m wide leads arranged as an approximately $900\\,\\mu$m long differential microstrip located $200\\,$nm above a groundplane in WIRA. CJJ rf-SQUID coupler wiring was primarily located in WIRC, stacked on top of the qubit wiring to provide inductive coupling. PCC flux storage loops were implemented as stacked spirals of 13-20 turns of $0.25\\,\\mu$m wide wiring with $0.25\\,\\mu$m separation in BASE and WIRA (WIRB). 
Stored flux was picked up by one-turn washers in WIRB (WIRA) and fed\ninto transformers for flux-biasing devices. External control lines were mostly located in\nBASE and WIRA. All of these control elements resided below a groundplane in WIRC. The groundplane under the qubits and over the PCC\/external control lines were electrically connected using extended vias in WIRB so as to form a nearly continuous superconducting shield between the analog devices on top and the bias circuitry below. To provide biases to target devices with minimal parasitic crosstalk, transformers for biasing qubits, couplers, QFPs and dc-SQUIDs using bias lines and\/or PCC elements were enclosed in superconducting boxes with BASE and WIRC forming the top and bottom, respectively, and vertical walls formed by extended vias in WIRA and WIRB. Minimal sized openings were placed in the vertical walls through which the bias and target device wiring passed at opposing ends of each box. \n\nAn optical image of a portion of a device completed up to WIRB is shown in Fig.~\\ref{fig:architecture}c. Qubits are visible as elongated objects, WIRB PCC spirals are visible as dark rectangles and WIRB washers are visible as light rectangles with slits. Note that the extrema of the CCJJ rf-SQUID qubits are terminated in unused transformers. These latter elements allow this 8-qubit unit cell to be tiled in a larger device with additional inter-qubit CJJ couplers providing the connections between unit cells.\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{unipolarannealingwaveforms.pdf}\n\\caption{\\label{fig:unipolarannealing} (color online) a) Schematic representation of a portion of the circuit reported upon herein. Canonical representations of all externally controlled flux biases $\\Phi_{\\alpha}^x$, readout current bias $i_{ro}$ and key mutual inductances $M_{\\alpha}$ are indicated. b) Depiction of latching readout waveform sequence. 
c) Example QFP state population measurement as a function of the dc level $\\Phi^x_{\\text{qfp}}$ with no qubit signal. Data have been fit to Eq.~(\\ref{eqn:transition}).}\n\\end{figure}\n\nWe have studied the properties of all 8 CCJJ rf-SQUID flux qubits on this chip in detail and report upon one such device herein. To clearly establish the lingua franca of our work, we have depicted a portion of the multi-qubit circuit in Fig.~\\ref{fig:unipolarannealing}a. Canonical representations of the external flux biases needed to operate a qubit, a coupler and a QFP-enabled readout are labeled on the diagram. The fluxes $\\Phi_L^x$, $\\Phi_R^x$, $\\Phi^x_{LT}$ and $\\Phi_{\\text{co}}^x$ were only ever subjected to dc levels in our experiments that were controlled by the PCC. The remaining fluxes and readout current biases were driven by a custom-built 128 channel room temperature current source. The mutual inductance between qubit and QFP ($M_{q-\\text{qfp}}$), between QFP and dc-SQUID ($M_{\\text{qfp-ro}}$), qubit and coupler ($M_{\\text{co},i}$) and $\\Phi^x_{\\text{co}}$-dependent inter-qubit mutual inductance ($M_{\\text{eff}}$) have also been indicated. Further details concerning cryogenics, magnetic shielding and signal filtering have been discussed in previous publications \\cite{LOMRT,PCC,QFP,cjjcoupler}.\n\nSince much of what follows depends upon a clear understanding of our novel QFP-enabled readout mechanism, we present a brief review of its operation herein. The flux and readout current waveform sequence involved in a single-shot readout is depicted in Fig.~\\ref{fig:unipolarannealing}b. Much like the CJJ qubit \\cite{LOMRT}, the QFP can be adiabatically {\\it annealed} from a state with a monostable potential ($\\Phi^x_{\\text{latch}}=-\\Phi_0\/2$) to a state with a bistable potential ($\\Phi^x_{\\text{latch}}=-\\Phi_0$) that supports two counter-circulating persistent current states. 
The matter of which persistent current state prevails at the end of an annealing cycle depends upon the sum of $\\Phi^x_{\\text{qfp}}$ and any signal from the qubit mediated via $M_{q-\\text{qfp}}$. The state of the QFP is then determined with high fidelity using a synchronized flux pulse and current bias ramp applied to the dc-SQUID. The readout process was typically completed within a repetition time $t_{\\text{rep}}<50\\,\\mu$s.\n\nAn example trace of the population of one of the QFP persistent current states $P_{\\text{qfp}}$ versus $\\Phi^x_{\\text{qfp}}$, obtained using the latching sequence depicted in Fig.~\\ref{fig:unipolarannealing}b, is shown in Fig.~\\ref{fig:unipolarannealing}c. This trace was obtained with the qubit potential held monostable ($\\Phi^x_{\\text{ccjj}}=-\\Phi_0\/2$) such that it presented minimal flux to the QFP and would therefore not influence $P_{\\text{qfp}}$. The data have been fit to the phenomenological form\n\\begin{equation}\n\\label{eqn:transition}\nP_{\\text{qfp}}=\\frac{1}{2}\\left[1-\\tanh\\left(\\frac{\\Phi^x_{\\text{qfp}}-\\Phi^0_{\\text{qfp}}}{w}\\right)\\right]\n\\end{equation}\n\n\\noindent with width $w\\sim 0.18\\,$m$\\Phi_0$ for the trace shown therein. When biased with constant $\\Phi^x_{\\text{qfp}}=\\Phi^0_{\\text{qfp}}$, which we refer to as the QFP degeneracy point, this transition in the population statistics can be used as a highly nonlinear flux amplifier for sensing the state of the qubit. Given that $M_{q-\\text{qfp}}=6.28\\pm0.01\\,$pH for the devices reported upon herein and that typical qubit persistent currents in the presence of negligible tunneling $\\left|I_q^p\\right|\\gtrsim 1\\,\\mu$A, then the net flux presented by a qubit was $2M_{q-\\text{qfp}}\\left|I_q^p\\right|\\gtrsim 6\\,$m$\\Phi_0$, which far exceeded $w$. By this means one can achieve the very high qubit state readout fidelity reported in Ref.~\\onlinecite{QFP}. 
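The readout fidelity argument can be checked directly against Eq.~(\ref{eqn:transition}). Using the measured width $w\sim 0.18\,$m$\Phi_0$ and a qubit signal of $\pm 3\,$m$\Phi_0$ about the QFP degeneracy point (half of the $\gtrsim 6\,$m$\Phi_0$ state-dependent flux quoted above), the transition is driven deep into saturation; units below are m$\Phi_0$.

```python
import math

def p_qfp(phi_qfp, phi0_qfp, w):
    """QFP persistent current state population, Eq. (transition)."""
    return 0.5 * (1.0 - math.tanh((phi_qfp - phi0_qfp) / w))

w = 0.18          # transition width, mPhi_0 (from the fit in the text)
signal = 3.0      # half of the ~6 mPhi_0 qubit-state-dependent flux, mPhi_0

p_up = p_qfp(+signal, 0.0, w)    # qubit in one persistent current state
p_down = p_qfp(-signal, 0.0, w)  # qubit in the other state
```

Because `signal / w` is roughly 17, the two qubit states map to QFP populations indistinguishable from 0 and 1, which is the origin of the high single-shot fidelity.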
On the other hand, the QFP can be used as a linearized flux sensor by engaging $\\Phi^x_{\\text{qfp}}$ in a feedback loop and actively tracking $\\Phi^0_{\\text{qfp}}$. This latter mode of operation has been used extensively in obtaining many of the results presented herein.\n\n\n\\section{CCJJ rf-SQUID Characterization}\n\nThe purpose of this section is to present measurements that characterize the CCJJ, $L$-tuner and capacitance of a CCJJ rf-SQUID. All measurements shown herein have been made with a set of standard bias conditions given by $\\Phi^x_{L}=98.4\\,$m$\\Phi_0$, $\\Phi^x_{R}=-89.3\\,$m$\\Phi_0$, $\\Phi^x_{\\text{LT}}=0.344\\,\\Phi_0$ and all inter-qubit couplers tuned to provide $M_{\\text{eff}}=0$, unless indicated otherwise. The logic behind this particular choice of bias conditions will be explained in what follows.\nThis section will begin with a description of the experimental methods for extracting $L_q$ and $I_q^c$ from persistent current measurements. Thereafter, data that demonstrate the performance of the CCJJ and $L$-tuner will be presented. Finally, this section will conclude with the determination of $C_q$ from macroscopic resonant tunneling data.\n\n\\subsection{High Precision Persistent Current Measurements} \n\nThe most direct means of obtaining information regarding a CCJJ rf-SQUID is to measure the persistent current $\\left|I_q^p\\right|$ as a function of $\\Phi^x_{\\text{ccjj}}$. A reasonable first approach to measuring this quantity would be to sequentially prepare the qubit in one of its persistent current states and then the other, and use the QFP in feedback mode to measure the difference in flux sensed by the QFP, which equals $2M_{q-\\text{qfp}}\\left|I_q^p\\right|$. A fundamental problem with this approach is that it is sensitive to low frequency (LF) flux noise \\cite{1OverF}, which can alter the background flux experienced by the QFP between the sequential measurements. 
For a typical measurement with our apparatus, the act of locating a single QFP degeneracy point to within $20\\,\\mu\\Phi_0$ takes on the order of $1\\,$s, which means that two sequential measurements would only be immune to flux noise below $0.5\\,$Hz. We have devised a LF flux noise rejection scheme that takes advantage of the fact that such noise will generate a correlated shift in the apparent degeneracy points if the sequential preparations of the qubit can be interleaved with single-shot measurements that are performed in rapid succession. If these measurements are performed with repetition time $t_{\\text{rep}}\\sim 1\\,$ms, then the measurements will be immune to flux noise below $\\sim 1\\,$kHz.\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{iqplockinwaveforms.pdf}\n\\caption{\\label{fig:iqplockin} (color online) a) Low frequency flux noise rejecting qubit persistent current measurement sequence. Waveforms shown are appropriate for measuring $\\left|I_q^p\\left(\\Phi^x_{\\text{ccjj}}\\right)\\right|$ for $-\\Phi_0\\leq\\Phi^x_{\\text{ccjj}}\\leq 0$. The $\\Phi^x_{\\text{ccjj}}$ waveform can be offset by integer $\\Phi_0$ to measure the periodic behavior of this quantity. Typical repetition time is $t_{\\text{rep}}\\sim 1\\,$ms. b) Depiction of QFP transition and correlated changes in QFP population statistics for the two different qubit initializations.}\n\\end{figure}\n\nA depiction of the LF flux noise rejecting persistent current measurement sequence is shown in Fig.~\\ref{fig:iqplockin}a. The waveforms comprise two concatenated blocks of sequential annealing of the qubit to a target $\\Phi^x_{\\text{ccjj}}$ in the presence of an alternating polarizing flux bias $\\pm\\Phi_q^i$ followed by latching and single-shot readout of the QFP. The QFP flux bias is engaged in a differential feedback mode in which it is pulsed in alternating directions by an amount $\\delta\\Phi_m$ about a mean level $\\Phi_m$. 
The two single-shot measurements yield binary results for the QFP state and the {\\it difference} between the two binary results is recorded. Gathering a statistically large number of such differential measurements then yields a differential population measurement $\\delta P_{\\text{qfp}}$. Conceptually, the measurement works in the manner depicted in Fig.~\\ref{fig:iqplockin}b: the two different initializations of the qubit move the QFP degeneracy point to some unknown levels $\\Phi_m^0\\pm\\delta\\Phi_m^0$, where $\\Phi_m^0$ represents the true mean of the degeneracy points at any given instant in time and $2\\delta\\Phi_m^0$ is the true difference in degeneracy points that is independent of time. Focusing on flux biases that are close to the degeneracy point, one can linearize Eq.~(\\ref{eqn:transition}):\n\\begin{equation}\n\\label{eqn:transitionlinear}\nP_{\\text{qfp},\\pm}\\approx\\frac{1}{2}+\\frac{1}{2w}\\left[\\Phi^x_{\\text{qfp}}-\\left(\\Phi_m^0\\pm\\delta\\Phi_m^0\\right)\\right] \\; .\n\\end{equation}\n\n\\noindent Assuming that the rms LF flux noise $\\Phi_n\\ll w$ and that one has reasonable initial guesses for $\\Phi_m^0\\pm\\delta\\Phi_m^0$, then the use of the linear approximation should be justified. Applying $\\Phi^x_{\\text{qfp}}=\\Phi_m\\pm\\delta\\Phi_m$ and sufficient repetitions of the waveform pattern shown in Fig.~\\ref{fig:iqplockin}a, the differential population will then be of the form\n\\begin{equation}\n\\label{eqn:diffpop}\n\\delta P_{\\text{qfp}}=P_{\\text{qfp},+}-P_{\\text{qfp},-}=\\frac{1}{w}\\left[\\delta\\Phi_m+\\delta\\Phi_m^0\\right]\\; ,\n\\end{equation}\n\n\\noindent which is {\\it independent} of $\\Phi_m$ and $\\Phi_m^0$. Note that the above expression contains only two independent variables, $w$ and $\\delta\\Phi_m^0$, and that $\\delta P_{\\text{qfp}}$ is purely a linear function of $\\delta\\Phi_m$. 
By sampling at three values of $\\delta\\Phi_m$, as depicted by the pairs of numbered points in Fig.~\\ref{fig:iqplockin}b, the independent variables in Eq.~(\\ref{eqn:diffpop}) will be overconstrained, thus readily yielding $\\delta\\Phi_m^0$. One can then infer the qubit persistent current as follows:\n\\begin{equation}\n\\label{eqn:iqplockin}\n\\left|I_q^p\\right|=\\frac{2\\delta\\Phi_m^0}{2M_{q-\\text{qfp}}}=\\frac{\\delta\\Phi_m^0}{M_{q-\\text{qfp}}} \\; .\n\\end{equation}\n\n\\begin{figure}[ht]\n\\includegraphics[width=3.25in]{QU1Lq0Extraction_ver2.pdf}\n\\caption{\\label{fig:Lq0extraction} (color online) Example measurements of $\\left|I_q^p\\left(\\Phi^x_{\\text{ccjj}}\\right)\\right|$.}\n\\end{figure}\n\n\\noindent Example measurements of $\\left|I_q^p\\right|\\left(\\Phi^x_{\\text{ccjj}}\\right)$ are shown in Fig.~\\ref{fig:Lq0extraction}. These data, for which $1.5\\lesssim\\left|\\beta_{\\text{eff}}\\right|\\lesssim 2.5$, have been fit to the ideal CCJJ rf-SQUID model by finding the value of $\\varphi_q\\equiv\\varphi^{\\text{min}}_q$ for which the potential in Eq.~(\\ref{eqn:4JHeff}) is minimized:\n\\begin{equation}\n\\label{eqn:iqp2d}\n\\left|I_q^p\\right|=\\frac{\\Phi_0}{2\\pi}\\frac{\\left|\\varphi^{\\text{min}}_q-\\varphi_q^x\\right|}{L_q} \\;\\; .\n\\end{equation}\n\n\\noindent The best fit shown in Fig.~\\ref{fig:Lq0extraction} was obtained with $L_q=265.4 \\pm 1.0\\,$pH, $L_{\\text{ccjj}}=26\\pm 1\\,$pH and $I_q^c=3.103\\pm 0.003\\,\\mu$A. For comparison, we had estimated $L_q=273\\,$pH at the standard bias condition for $\\Phi^x_{LT}$ and $L_{\\text{ccjj}}=20\\,$pH from design. \n\nIn practice, we have found that the LF flux noise rejecting method of measuring $\\left|I_q^p\\right|$ effectively eliminates any observable $1\/f$ component in that measurement's noise power spectral density, to within statistical error. 
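The LF flux noise rejecting procedure can be illustrated end-to-end with a small simulation. The sketch below assumes parameter values near the quoted calibrations, generates the two interleaved initializations directly from Eq.~(\\ref{eqn:transition}) with a random common-mode drift of the QFP degeneracy point, and recovers $\\left|I_q^p\\right|$ from a three-point linear fit in the spirit of Eqs.~(\\ref{eqn:diffpop}) and (\\ref{eqn:iqplockin}); the pairing of qubit initializations with QFP flux pulses is one self-consistent sign convention.

```python
import numpy as np

# Simulated LF-noise-rejecting measurement. Assumed values: w and M near the
# text's calibrations; ip_true is the quantity we try to recover. The drift
# models LF flux noise common to both frames of a differential sample.
PHI0 = 2.067833848e-15      # magnetic flux quantum [Wb]
w = 0.18e-3                 # QFP transition width [Phi_0]
M = 6.28e-12                # qubit-QFP mutual inductance [H]
ip_true = 1.0e-6            # assumed |I_q^p| [A]
dphi0 = M * ip_true / PHI0  # true half-splitting delta Phi_m^0 [Phi_0]

rng = np.random.default_rng(7)

def qfp_pop(x, deg):
    """Eq. (transition): population vs. applied flux x and degeneracy deg."""
    return 0.5 * (1.0 - np.tanh((x - deg) / w))

def diff_pop(dphi_m):
    """One differential sample: the two qubit initializations shift the QFP
    degeneracy point by -/+ dphi0; the common drift cancels in the difference."""
    drift = 0.02e-3 * rng.standard_normal()
    return (qfp_pop(+dphi_m, drift - dphi0)
            - qfp_pop(-dphi_m, drift + dphi0))

# Sample at three pulse amplitudes bracketing the zero crossing (cf. the
# numbered pairs of points in Fig. iqplockin b), fit a line, find the root:
xs = np.array([-dphi0 - w / 2, -dphi0, -dphi0 + w / 2])
ys = np.array([diff_pop(x) for x in xs])
slope, intercept = np.polyfit(xs, ys, 1)
dphi0_est = abs(-intercept / slope)       # |delta Phi_m^0| [Phi_0]
ip_est = dphi0_est * PHI0 / M             # Eq. (iqplockin)
print(ip_est)                              # close to the assumed ip_true
```

The middle sample sits exactly on the zero crossing regardless of the drift, which is the essence of the rejection scheme: only the difference of degeneracy points, not their common mean, enters the extracted persistent current.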
Finally, it should be noted that the LF flux noise rejecting method is applicable to any measurement of a difference in flux sensed by a linearized detector. In what follows herein, we have made liberal use of this technique to calibrate a variety of quantities in-situ using both QFPs and other qubits as flux detectors.\n\n\\subsection{CCJJ}\n\nIn this subsection, the CCJJ has been characterized as a function of $\\Phi^x_{L}$ and $\\Phi^x_{R}$ with all other static flux biases set to the standard bias condition cited above. Referring to Eq.~(\\ref{eqn:4JQOffset}), it can be seen that the qubit degeneracy point $\\Phi_q^0$ is a function of $\\Phi^x_{\\text{ccjj}}$ through $\\gamma_0$ if the CCJJ has not been balanced. To accentuate this functional dependence, one can anneal the CCJJ rf-SQUID with $\\Phi^x_{\\text{ccjj}}$ waveforms of opposing polarity about a minimum in $\\left|\\beta_{\\text{eff}}\\right|$, as found at $\\Phi^x_{\\text{ccjj}}=-\\Phi_0\/2$. The expectation is that the {\\it apparent} qubit degeneracy points will be antisymmetric about the mean given by setting $\\gamma_0=0$ in Eq.~(\\ref{eqn:4JQOffset}). The waveform sequence for performing a differential qubit degeneracy point measurement is depicted in Fig.~\\ref{fig:bipolarlockin}. In this case, the QFP is used as a latching readout and the qubit acts as the linearized detector of its own apparent annealing polarization-dependent flux offset. As with the $\\left|I_q^p\\right|$ measurement described above, this LF flux noise rejecting procedure returns a {\\it difference} in apparent flux sensed by the qubit and not the absolute flux offsets. \n\n\\begin{figure}[ht]\n\\includegraphics[width=3.25in]{bipolarannealingwaveforms.pdf}\n\\caption{\\label{fig:bipolarlockin} (color online) Schematic of low frequency noise rejecting differential qubit degeneracy point measurement sequence. 
The qubit is annealed with a $\\Phi^x_{\\text{ccjj}}$ signal of opposing polarity in the two frames and the qubit flux bias is controlled via feedback.}\n\\end{figure}\n\nTo find balanced pairs of $\\left(\\Phi^x_{L},\\Phi^x_{R}\\right)$ in practice, we set $\\Phi^x_{R}$ to a constant and used the LF flux noise rejecting procedure inside a software feedback loop that controlled $\\Phi^x_{L}$ to null the difference in apparent degeneracy point to a precision of $20\\,\\mu\\Phi_0$. Balanced pairs of $\\left(\\Phi^x_{L},\\Phi^x_{R}\\right)$ are plotted in Fig.~\\ref{fig:balanced}a. These data have been fit to Eq.~(\\ref{eqn:balancedapprox}) using $\\beta_-\/\\beta_+$ as a free parameter. The best fit shown in Fig.~\\ref{fig:balanced}a was obtained with $1-\\beta_{R,+}\/\\beta_{L,+}=(4.1\\pm0.3)\\times 10^{-3}$, which then indicates an approximately $0.4\\%$ asymmetry between the pairs of junctions in the $L$ and $R$ loops.\n\n\\begin{figure}[ht]\n\\includegraphics[width=3.25in]{QU1MinorLobeBalancing.pdf} \\\\\n\\includegraphics[width=3.25in]{QU1BalancedIp_ver2.pdf} \\\\\n\\caption{\\label{fig:balanced} (color online) a) Minor lobe balancing data and fit to Eq.~(\\ref{eqn:balancedapprox}). The standard bias conditions for $\\Phi^x_{L}$ and $\\Phi^x_{R}$ are indicated by dashed lines. b) $\\left|I_q^p(\\Phi^x_{\\text{ccjj}}=-\\Phi_0)\\right|$ versus $\\Phi^x_{R}$, where $\\Phi^x_{L}$ has been chosen using Eq.~(\\ref{eqn:balancedapprox}). The data have been fit to the ideal CCJJ rf-SQUID model. The standard bias condition for $\\Phi^x_{R}$ and the resultant $\\left|I_q^p(\\Phi^x_{\\text{ccjj}}=-\\Phi_0)\\right|$ are indicated by dashed lines.}\n\\end{figure}\n\nA demonstration of how the CCJJ facilitates tuning of $I_q^c$ is shown in Fig.~\\ref{fig:balanced}b. Here, the recorded consequence of altering $I_q^c$ was a change in $\\left|I_q^p\\right|$ at $\\Phi^x_{\\text{ccjj}}=-\\Phi_0$. 
These data have been fit to the ideal CCJJ rf-SQUID model with the substitution\n\\begin{equation}\n\\label{eqn:Icbalanced}\nI_q^c(\\Phi^x_{R},\\Phi^x_{L})=I_c^0\\cos\\left(\\frac{\\pi\\Phi^x_{R}}{\\Phi_0}\\right)\n\\end{equation}\n\n\\noindent and using the values of $L_{\\text{ccjj}}$ and $L_q$ obtained from fitting the data in Fig.~\\ref{fig:Lq0extraction}, but treating $I_c^0$ as a free parameter. Here, $\\Phi^x_{L}$ on the left side of Eq.~(\\ref{eqn:Icbalanced}) is a function of $\\Phi^x_{R}$ per the CCJJ balancing condition Eq.~(\\ref{eqn:balancedapprox}). The best fit was obtained with $I_c^0=3.25\\pm0.01\\,\\mu$A. This latter quantity agrees well with the design value of $3.56\\;\\mu$A for the critical current of four $0.6\\,\\mu$m diameter junctions in parallel. Thus, it is possible to target a desired $I_q^c$ by using Eq.~(\\ref{eqn:Icbalanced}) to select $\\Phi^x_{R}$ and then Eq.~(\\ref{eqn:balancedapprox}) to select $\\Phi^x_{L}$. The standard bias conditions for $\\Phi^x_{L}$ and $\\Phi^x_{R}$ quoted previously were chosen so as to homogenize $I_q^c$ amongst the 8 CCJJ rf-SQUIDs on this particular chip.\n\n\\subsection{$L$-Tuner}\n\nTo characterize the $L$-tuner, we once again turned to measurements of $\\left|I_q^p(\\Phi^x_{\\text{ccjj}}=-\\Phi_0)\\right|$, but this time as a function of $\\Phi^x_{LT}$. Persistent current results were then used to infer $\\delta L_q=L_q(\\Phi^x_{LT})-L_q(\\Phi^x_{LT}=0)$ using the ideal CCJJ rf-SQUID model with $L_{\\text{ccjj}}$ and $I_q^c$ held constant and treating $L_q$ as a free parameter. The experimental results are plotted in Fig.~\\ref{fig:ltuner}a and have been fit to\n\\begin{equation}\n\\label{eqn:Ltunerfit}\n\\delta L_q=\\frac{L_{J0}}{\\cos\\left(\\pi\\Phi^x_{LT}\/\\Phi_0\\right)}-L_{J0} \\; ,\n\\end{equation}\n\n\\noindent consistent with the definition $\\delta L_q(\\Phi^x_{LT}=0)=0$, and the best fit was obtained with $L_{J0}=19.60\\pm0.04\\,$pH. 
Modeling this latter parameter as $L_{J0}=\\Phi_0\/2\\pi I^c_{LT}$, we estimate $I^c_{LT}=16.79\\pm0.04\\,\\mu$A, which is close to the design value of $16.94\\,\\mu$A. The standard bias condition for $\\Phi^x_{\\text{LT}}$ was chosen so as to homogenize $L_q$ amongst the 8 CCJJ rf-SQUID flux qubits on this chip and to provide adequate bipolar range to accommodate inter-qubit coupler operation.\n\n\\begin{figure}[ht]\n\\includegraphics[width=3.25in]{QU1LTunerCalibration.pdf} \\\\\n\\includegraphics[width=3.25in]{QU1LTunerCompensationComparison_ver2.pdf} \\\\\n\\caption{\\label{fig:ltuner} (color online) a) $L$-tuner calibration and fit to Eq.~(\\ref{eqn:Ltunerfit}). The standard bias condition for $\\Phi^x_{\\text{LT}}$ and the resultant $\\delta L_q$ are indicated by dashed lines. b) Observed change in maximum qubit persistent current with and without active $L$-tuner compensation and predictions for both cases.}\n\\end{figure}\n\nTo demonstrate the use of the $L$-tuner, we have probed a worst-case scenario in which four CJJ rf-SQUID couplers connected to the CCJJ rf-SQUID in question are tuned in unison. Each of the couplers had been independently calibrated per the procedures described in Ref.~\\onlinecite{cjjcoupler}, from which we obtained $M_{\\text{co},i}\\approx 15.8\\,$pH and $\\chi_i\\left(\\Phi^x_{\\text{co}}\\right)$ ($i\\in\\left\\{1,2,3,4\\right\\}$). Each of these devices provided a maximum AFM inter-qubit mutual inductance $M_{\\text{AFM}}=M^2_{\\text{co},i}\\chi_{\\text{AFM}}\\approx 1.56\\,$pH, from which one can estimate $\\chi_{\\text{AFM}}\\approx 6.3\\,$nH$^{-1}$. Measurements of $\\left|I_q^p\\right|$ with and without active $L$-tuner compensation as a function of coupler bias $\\Phi^x_{\\text{co}}$, as applied to all four couplers simultaneously, are presented in Fig.~\\ref{fig:ltuner}b. 
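The junction model quoted above can be inverted in one line; the sketch below recovers the $I^c_{LT}$ estimate from the fitted $L_{J0}$ (the flux quantum is the CODATA value).

```python
import math

# Invert the Josephson inductance relation L_J0 = Phi_0/(2*pi*Ic_LT) to
# recover the L-tuner junction critical current from the fitted L_J0.
PHI0 = 2.067833848e-15        # magnetic flux quantum [Wb]
L_J0 = 19.60e-12              # best-fit Josephson inductance [H]

ic_lt = PHI0 / (2 * math.pi * L_J0)
print(ic_lt * 1e6)            # ~16.79 uA, cf. the 16.94 uA design value
```

The roughly $1\\%$ agreement with the design value supports interpreting $L_{J0}$ as the Josephson inductance of the $L$-tuner junctions.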
The predictions from the ideal CCJJ rf-SQUID model, obtained by using $L_q=265.4\\,\\text{pH}$ (with compensation) and $L_q$ obtained from Eq.~(\\ref{eqn:LqNoTuner}) (without compensation), are also shown. Note that the two data sets and predictions all agree to within experimental error at $\\Phi^x_{\\text{co}}=0.5\\,\\Phi_0$, which corresponds to the all-zero-coupling state ($M_{\\text{eff}}=0$). The experimental results obtained without $L$-tuner compensation agree reasonably well with the predicted $\\Phi^x_{\\text{co}}$-dependence. As compared to the case without compensation, it can be seen that the measured $\\left|I_q^p\\right|$ show considerably less $\\Phi^x_{\\text{co}}$-dependence when $L$-tuner compensation is provided. However, the data suggest a small systematic deviation from the inductance models Eqs.~(\\ref{eqn:LqNoTuner}) and (\\ref{eqn:LqWithTuner}). At $\\Phi^x_{\\text{ccjj}}=-\\Phi_0$, for which it is estimated that $\\beta_{\\text{eff}}\\approx 2.43$, $\\left|I_q^p\\right|\\propto 1\/L_q$. Given that the data for the case without compensation are below the model, it appears that we have slightly underestimated the change in $L_q$. Consequently, we have provided insufficient ballast inductance when the $L$-tuner compensation was activated.\n\n\\subsection{rf-SQUID Capacitance}\n\nSince $I_q^c$ and $L_q$ directly impact the CCJJ rf-SQUID potential in Hamiltonian (\\ref{eqn:4JHeff}), it was possible to infer CCJJ and $L$-tuner properties from measurements of the groundstate persistent current. In contrast, the rf-SQUID capacitance $C_q$ appears in the kinetic term in Hamiltonian (\\ref{eqn:4JHeff}). Consequently, one must turn to alternate experimental methods that invoke excitations of the CCJJ rf-SQUID in order to characterize $C_q$. 
One such method is to probe macroscopic resonant tunneling (MRT) between the lowest lying state in one well into either the lowest order [LO, $n=0$] state or into a higher order [HO, $n>0$] state in the opposing well of the rf-SQUID double well potential \\cite{HOMRT}. The spacing of successive HOMRT peaks as a function of rf-SQUID flux bias $\\Phi^x_q$ will be particularly sensitive to $C_q$. HOMRT has been observed in many different rf-SQUIDs and is a well established quantum mechanical phenomenon \\cite{HOMRT,Bennett,MRT3JJ}. LOMRT proved to be more difficult to observe in practice and was only reported upon relatively recently in the literature \\cite{LOMRT}. We refer the reader to this latter reference for the experimental method for measuring MRT rates.\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{QU1_HOMRT_Rate.pdf} \\\\\n\\includegraphics[width=3.25in]{QU1_HOMRT_GaussianWidth.pdf} \\\\\n\\includegraphics[width=3.25in]{QU1_HOMRT_PeakPosition.pdf}\n\\caption{\\label{fig:HOMRT} (color online) a) HOMRT peaks fitted to Eq.~(\\ref{eqn:HOMRTFit}). Data shown are for $\\Phi^x_{\\text{ccjj}}\/\\Phi_0=-0.6677$, $-0.6735$, $-0.6793$, $-0.6851$, $-0.6911$ and $-0.6970$, from left to right, respectively. Number of levels in target well $n$ as indicated. b) Best fit Gaussian width parameter $W_n$ as a function of $n$. c) Best fit peak position $\\epsilon_p^n$ as a function of $n$.}\n\\end{figure}\n\nMeasurements of the initial decay rate $\\Gamma\\equiv dP_{\\downarrow}\/dt|_{t=0}$ versus $\\Phi^x_q$ are shown in Fig.~\\ref{fig:HOMRT}a with the order of the target level $n$ as indicated. The maximum observable $\\Gamma$ was imposed by the bandwidth of the apparatus, which was $\\sim 5\\,$MHz. The minimum observable $\\Gamma$ was dictated by experimental run time constraints. 
In order to observe many HO resonant peaks within our experimental bandwidth we have successively raised the tunnel barrier height in roughly equal intervals by tuning the target $\\Phi^x_{\\text{ccjj}}$. The result is a cascade of resonant peaks atop a monotonic background.\n\nThe authors of Ref.~\\onlinecite{Bennett} attempted to fit their HOMRT data to a sum of gaussian broadened lorentzian peaks. It was found that they could obtain satisfactory fits within the vicinity of the tops of the resonant features but that the model was unable to correctly describe the valleys between peaks. We reached the same conclusion when applying the very same model to our data. However, it was empirically observed that we could obtain excellent fits to all of the data by using a model composed of a sum of purely gaussian peaks plus a background that varies exponentially with $\\Phi^x_q$:\n\\begin{equation}\n\\label{eqn:HOMRTFit}\n\\Gamma(\\Phi^x_q)=\\frac{1}{\\hbar}\\sum_{n}\\sqrt{\\frac{\\pi}{8}}\\frac{\\Delta_n^2}{W_n}e^{-\\frac{(\\epsilon-\\epsilon_p^n)^2}{2W_n^2}}+\\Gamma_{\\text{bkgd}}e^{\\Phi^x_q\/\\delta\\Phi_{\\text{bkgd}}}\\; ,\n\\end{equation}\n\n\\noindent where $\\epsilon\\equiv 2\\left|I_q^p\\right|\\Phi^x_q$. These fits are shown in Fig.~\\ref{fig:HOMRT}a. A summary of the gaussian width parameter $W_n$ is shown in Fig.~\\ref{fig:HOMRT}b solely for informational purposes. We will refrain from speculating why there is no trace of lorentzian lineshapes or on the origins of the exponential background herein, but rather defer a detailed examination of HOMRT to a future publication.\n\nFor the purposes of this article, the key results to take from the fits shown in Fig.~\\ref{fig:HOMRT}a are the positions of the resonant peaks, as plotted in Fig.~\\ref{fig:HOMRT}c. These results indicate that the peak spacing is very uniform: $\\delta\\Phi_{\\text{MRT}}=1.55\\pm0.01\\,$m$\\Phi_0$. 
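The character of Eq.~(\\ref{eqn:HOMRTFit}) can be illustrated with synthetic parameters. In the sketch below only the $1.55\\,$m$\\Phi_0$ spacing is taken from the fits; the gaussian width, the (equal) peak amplitudes and the background scale are assumptions chosen for display, not fit values.

```python
import numpy as np

# Synthetic cascade of gaussian MRT peaks atop an exponential background,
# in the spirit of Eq. (HOMRTFit). Width, amplitudes and background assumed.
SPACING = 1.55                      # peak spacing [mPhi_0], from the fits
W = 0.40                            # assumed gaussian width [mPhi_0]
centers = SPACING * np.arange(1, 4) # three target levels

def gamma(phi):
    peaks = sum(np.sqrt(np.pi / 8.0) / W * np.exp(-(phi - c) ** 2 / (2 * W ** 2))
                for c in centers)
    background = 0.01 * np.exp(phi / 2.0)   # assumed exponential background
    return peaks + background

phi = np.linspace(0.5, 5.5, 2001)
g = gamma(phi)
# interior local maxima recover the assumed uniform spacing:
interior_max = np.where((g[1:-1] > g[:-2]) & (g[1:-1] > g[2:]))[0] + 1
print(np.diff(phi[interior_max]))   # both spacings close to 1.55
```

Even with a growing background the maxima shift only weakly, which is why the peak positions in Fig.~\\ref{fig:HOMRT}c can be read off so uniformly.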
One can compare $\\delta\\Phi_{\\text{MRT}}$ with the predictions of the ideal CCJJ rf-SQUID model using the previously calibrated $L_q=265.4\\,$pH, $L_{\\text{ccjj}}=26\\,$pH and $I_q^c=3.103\\,\\mu$A with $C_q$ treated as a free parameter. From such a comparison, we estimate $C_q=190\\pm 2\\,$fF. \n\nThe relatively large value of $C_q$ quoted above can be reconciled with the CCJJ rf-SQUID design by noting that, unlike other rf-SQUID flux qubits reported upon in the literature, our qubit body resides proximal to a superconducting groundplane so as to minimize crosstalk. In this case, the qubit wiring can be viewed as a differential transmission line of length $\\ell\/2\\sim 900\\,\\mu$m, where $\\ell$ is the total length of qubit wiring, with the effective Josephson junction and a short on opposing ends. The transmission line will present an impedance of the form $Z(\\omega)=-j Z_0\\tanh(\\omega\\ell\/2\\nu)$ to the effective Josephson junction, with the phase velocity $\\nu\\equiv1\/\\sqrt{L_0C_0}$ defined by the differential inductance per unit length $L_0\\sim 0.26\\,$pH$\/\\mu$m and capacitance per unit length $C_0\\sim 0.18\\,$fF$\/\\mu$m, as estimated from design. If the separation between differential leads is greater than the distance to the groundplane, then $\\ell\/2\\nu\\approx\\sqrt{L_{\\text{body}}C_{\\text{body}}\/4}$, where $C_{\\text{body}}\\sim 640\\,$fF is the total capacitance of the qubit wiring to ground. Thus, one can model the high frequency behavior of the shorted differential transmission line as an inductance $L_{\\text{body}}$ and a capacitance $C_{\\text{body}}\/4$ connected in parallel with the CCJJ. Taking a reasonable estimated value of $40\\,$fF\/$\\mu$m$^2$ for the capacitance per unit area of a Josephson junction, one can estimate the total capacitance of four $0.6\\,\\mu$m diameter junctions in parallel to be $C_J\\sim 45\\,$fF. 
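The arithmetic behind these capacitance estimates is easily verified; the specific capacitance and $C_{\\text{body}}$ below are the estimated values given above.

```python
import math

# Junction capacitance: four 0.6-um-diameter junctions in parallel at an
# estimated specific capacitance of 40 fF/um^2.
c_specific = 40.0                          # fF / um^2 (text's estimate)
area = 4 * math.pi * (0.6 / 2.0) ** 2      # um^2, four junctions in parallel
c_j = c_specific * area
print(c_j)                                 # ~45 fF

# Total: junctions plus a quarter of the qubit-body capacitance to ground.
c_body = 640.0                             # fF (text's estimate)
c_q = c_j + c_body / 4.0
print(c_q)                                 # ~205 fF, cf. the 190 +/- 2 fF fit
```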
Thus we estimate $C_q=C_J+C_{\\text{body}}\/4\\sim 205\\,$fF, which is in reasonable agreement with the best fit value of $C_q$ quoted above.\n\nWith all of the controls of the CCJJ rf-SQUID having been demonstrated, we reach the first key conclusion of this article: The CCJJ rf-SQUID is a robust device in that parametric variations, both within an individual device and between a multitude of such devices, can be accounted for using purely static flux biases. These biases have been applied to all 8 CCJJ rf-SQUIDs on this particular chip using a truly scalable architecture involving on-chip flux sources that are programmed by only a small number of address lines \\cite{PCC}.\n\n\n\\section{Qubit Properties}\n\nThe purpose of the CCJJ rf-SQUID is to provide a flux qubit that is as close to ideal as possible \\cite{fluxqubit}. By this statement, it is meant that the physics of the two lowest lying states of the device can be described by an effective Hamiltonian of the form Eq.~(\\ref{eqn:Hqubit}) with $\\epsilon=2\\left|I_q^p\\right|\\left(\\Phi_q^x-\\Phi^0_q\\right)$, $\\left|I_q^p\\right|$ being the magnitude of the persistent current that flows about the inductive loop when the device is biased hard to one side, $\\Phi_q^0$ being a static flux offset and $\\Delta_q$ representing the tunneling energy between the lowest lying states when biased at its degeneracy point $\\Phi_q^x=\\Phi^0_q$. Thus, $\\left|I_q^p\\right|$ and $\\Delta_q$ are the defining properties of a flux qubit, regardless of its topology \\cite{Leggett}. Given the complexity of a six junction device with five closed superconducting loops, it is quite justifiable to question whether the CCJJ rf-SQUID constitutes a qubit. 
These concerns will be directly addressed herein by demonstrating that measured $\\left|I_q^p\\right|$ and $\\Delta_q$ agree with the predictions of the quantum mechanical Hamiltonian (\\ref{eqn:4JHeff}) given independently calibrated values of $L_q$, $L_{\\text{ccjj}}$, $I_q^c$ and $C_q$.\n\nBefore proceeding, it is worth providing some context regarding the choice of experimental methods described below. For those researchers attempting to implement GMQC using resonant electromagnetic fields to prepare states and mediate interactions between qubits, experiments that involve high frequency pulse sequences to drive excitations in the qubit (such as Rabi oscillations\\cite{MooijSuperposition}, Ramsey fringes\\cite{MooijSuperposition,1OverFFluxQubit1} and spin-echo\\cite{MooijSuperposition,1OverFFluxQubit1,1OverFFluxQubit2}) are the natural modality for studying quantum effects. Such experiments are convenient in this case as the methods can be viewed as basic gate operations within this intended mode of operation. However, such methods are not the exclusive means of characterizing quantum resources. For those who wish to use precise dc pulses to implement GMQC or whose interests lie in developing hardware for AQO, it is far more convenient to have a set of tools for characterizing quantum mechanical properties that require only low bandwidth bias controls. Such methods, some appropriate in the coherent regime \\cite{Greenberg,gsip} and others in the incoherent regime \\cite{HOMRT,LOMRT,LZ}, have been reported in the literature. 
We have made use of such low frequency methods as our apparatuses typically possess 128 low bandwidth bias lines to facilitate the adiabatic manipulation of a large number of devices.\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{QU1_LOMRT_ExampleTraces.pdf} \\\\\n\\includegraphics[width=3.25in]{QU1_LOMRT_FitParameters.pdf} \\\\\n\\caption{\\label{fig:LOMRT} (color online) a) Example LOMRT peaks fitted to Eq.~(\\ref{eqn:LOMRTFit}). Data shown are for $\\Phi^x_{\\text{ccjj}}\/\\Phi_0=-0.6621$, $-0.6642$ and $-0.6663$, from top to bottom, respectively. Data from the qubit initialized in $\\ket{\\downarrow}$ ($\\ket{\\uparrow}$) are indicated by solid (hollow) points. b) Energy scales obtained from fitting multiple LOMRT traces.}\n\\end{figure}\n\nOne possible means of probing quantum mechanical tunneling between the two lowest lying states of a CCJJ rf-SQUID is via MRT\\cite{LOMRT}. Example LOMRT decay rate data are shown in Fig.~\\ref{fig:LOMRT}a. We show results for both initializations, $\\ket{\\downarrow}$ and $\\ket{\\uparrow}$, and fits to gaussian peaks, as detailed in Ref.~\\onlinecite{LOMRT}:\n\\begin{equation}\n\\label{eqn:LOMRTFit}\n\\Gamma(\\Phi^x_q)=\\frac{1}{\\hbar}\\sqrt{\\frac{\\pi}{8}}\\frac{\\Delta_q^2}{W}e^{-\\frac{(\\epsilon-\\epsilon_p)^2}{2W^2}} \\;\\; .\n\\end{equation}\n\n\\noindent A summary of the fit parameters $\\epsilon_p$ and $W$ versus $\\Phi^x_{\\text{ccjj}}$ is shown in Fig.~\\ref{fig:LOMRT}b. We also provide estimates of the device temperature using the formula\n\\begin{equation}\n\\label{eqn:TMRT}\nk_BT_{\\text{MRT}}=\\frac{W^2}{2\\epsilon_p} \\; .\n\\end{equation}\n\n\\noindent As expected, $T_{\\text{MRT}}$ shows no discernible $\\Phi^x_{\\text{ccjj}}$-dependence and is scattered about a mean value of $53\\pm2\\,$mK. A summary of $\\Delta_q$ versus $\\Phi^x_{\\text{ccjj}}$ will be shown in conjunction with more experimental results at the end of this section. 
For further details concerning LOMRT, the reader is directed to Ref.~\\onlinecite{LOMRT}.\n\nA second possible means of probing $\\Delta_q$ is via a Landau-Zener experiment \\cite{LZ}. In principle, this method should be applicable in both the coherent and incoherent regimes. In practice, we have found it only possible to probe the device to modestly larger $\\Delta_q$ than we can reach via LOMRT purely due to the low bandwidth of our bias lines. Results from such experiments on the CCJJ rf-SQUID flux qubit will be summarized at the end of this section. We see no fundamental limitation that would prevent others with higher bandwidth apparatuses from exploring the physics of the CJJ or CCJJ flux qubit at the crossover between the coherent and incoherent regimes using the Landau-Zener method. \n\nIn order to probe the qubit tunnel splitting in the coherent regime using low bandwidth bias lines, we have developed a new experimental procedure for sensing the expectation value of the qubit persistent current, similar in spirit to other techniques already reported in the literature \\cite{gsip}. An unfortunate consequence of the choice of design parameters for our high fidelity QFP-enabled readout scheme is that the QFP is relatively strongly coupled to the qubit, thus limiting its utility as a detector when the qubit tunnel barrier is suppressed. One can circumvent this problem within our device architecture by tuning an inter-qubit coupler to a finite inductance and using a second qubit as a latching sensor, in much the same manner as a QFP. Consider two flux qubits coupled via a mutual inductance $M_{\\text{eff}}$. The system Hamiltonian can then be modeled as\n\\begin{equation}\n\\label{eqn:H2Q}\n{\\cal H}=-\\sum_{i\\in\\left\\{q,d\\right\\}}\\frac{1}{2}\\left[\\epsilon_i\\sigma_z^{(i)}+\\Delta_i\\sigma_x^{(i)}\\right]+J\\sigma_z^{(q)}\\sigma_z^{(d)} \\; ,\n\\end{equation}\n\n\\noindent where $J\\equiv M_{\\text{eff}}|I_q^p||I_d^p|$. 
Let qubit $q$ be the flux source and qubit $d$ serve the role of the detector whose tunnel barrier is adiabatically raised during the course of a measurement, just as in a QFP single shot measurement depicted in Fig.~\\ref{fig:unipolarannealing}. In the limit $\\Delta_d\\rightarrow 0$ one can write analytic expressions for the dispersion of the four lowest energies of Hamiltonian (\\ref{eqn:H2Q}):\n\\begin{equation}\n\\label{eqn:E2Q}\n\\begin{array}{ccc}\nE_{1\\pm} & = & \\pm\\frac{1}{2}\\sqrt{\\left(\\epsilon_q-2J\\right)^2+\\Delta_q^2}-\\frac{1}{2}\\epsilon_d \\; ;\\\\\nE_{2\\pm} & = & \\pm\\frac{1}{2}\\sqrt{\\left(\\epsilon_q+2J\\right)^2+\\Delta_q^2}+\\frac{1}{2}\\epsilon_d \\; .\n\\end{array}\n\\end{equation}\n\n\\noindent As with the QFP, let the flux bias of the detector qubit be engaged in a feedback loop to track its degeneracy point where $P_{d,\\downarrow}=1\/2$. Assuming Boltzmann statistics for the thermal occupation of the four levels given by Eq.~(\\ref{eqn:E2Q}), this condition is met when\n\\begin{equation}\n\\label{eqn:P2minus}\nP_{d,\\downarrow}=\\frac{1}{2}=\\frac{e^{-E_{2-}\/k_BT}+e^{-E_{2+}\/k_BT}}{\\sum_{\\alpha\\in\\left\\{1\\pm,2\\pm\\right\\}}e^{-E_{\\alpha}\/k_BT}} \\; .\n\\end{equation}\n\n\\noindent Setting $P_{d,\\downarrow}=1\/2$ in Eq.~(\\ref{eqn:P2minus}) and solving for $\\epsilon_d$ then yields an analytic formula for the balancing condition:\n\\begin{equation}\n\\label{eqn:HalfCondGeneral}\n\\epsilon_d= \\frac{F(+)-F(-)}{2}+k_BT\\ln\\left(\\frac{1+e^{-F(+)\/k_BT}}{1+e^{-F(-)\/k_BT}}\\right) \\; ;\n\\end{equation}\n\\vspace{-0.12in}\n\\begin{displaymath}\nF(\\pm)\\equiv\\sqrt{\\left(\\epsilon_q\\pm 2J\\right)^2+\\Delta_q^2} \\; .\n\\end{displaymath}\n\nWhile Eq.~(\\ref{eqn:HalfCondGeneral}) may look unfamiliar, it readily reduces to an intuitive result in the limit of small coupling $J\\ll \\Delta_q$ and $T\\rightarrow 0$:\n\\begin{equation}\n\\label{eqn:HalfCondSmallJ}\n\\epsilon_d \\approx 
\\frac{2J\\epsilon_q}{\\sqrt{\\epsilon_q^2+\\Delta_q^2}} = 2M_{\\text{eff}}\\left|I_d^p\\right|\\bra{g}\\hat{I}_q^p\\ket{g} \\; ,\n\\end{equation}\n\n\\noindent where $\\ket{g}$ denotes the groundstate of the source qubit and $\\hat{I}_q^p\\equiv\\left|I_q^p\\right|\\sigma_z^{(q)}$ is the source qubit persistent current operator. Thus Eq.~(\\ref{eqn:HalfCondGeneral}) is an expression for the expectation value of the source qubit's groundstate persistent current in the presence of backaction from the detector and finite temperature. Setting $\\epsilon_i=2|I_i^p|\\Phi^x_{i}$ and rearranging then gives an expression for the flux bias of the detector qubit as a function of flux bias applied to the source qubit. Given independent calibrations of $M_{\\text{eff}}=1.56\\pm 0.01\\,$pH for a particular coupler set to $\\Phi^x_{\\text{co}}=0$ on this chip, $T=54\\pm 3\\,$mK from LOMRT fits and $|I_d^p|=1.25\\pm0.02\\,\\mu$A at the CCJJ bias where the LOMRT rate approaches the bandwidth of our bias lines, one can then envision tracing out $\\Phi_d^x$ versus $\\Phi_q^x$ and fitting to Eq.~(\\ref{eqn:HalfCondGeneral}) to extract the source qubit parameters $|I_q^p|$ and $\\Delta_q$.\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{QU1LargeDeltaTraceExample.pdf}\n\\caption{\\label{fig:largedeltatrace} (color online) Example coupled flux trace taken at $\\Phi^x_{\\text{ccjj}}=-0.6513\\,\\Phi_0$ used to extract large $\\Delta$ in the coherent regime. }\n\\end{figure}\n\nAn example $\\Phi_d^x$ versus $\\Phi_q^x$ data set for source CCJJ flux bias $\\Phi^x_{\\text{ccjj}}=-0.6513\\,\\Phi_0$ is shown in Fig.~\\ref{fig:largedeltatrace}. The solid curve in this plot corresponds to a fit to Eq.~(\\ref{eqn:HalfCondGeneral}) with a small background slope that we denote as $\\chi$. We have confirmed from the ideal CCJJ rf-SQUID model that $\\chi$ is due to the diamagnetic response of the source rf-SQUID to changing $\\Phi_q^x$. 
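One can check numerically that Eq.~(\\ref{eqn:HalfCondGeneral}) collapses to the small-$J$, low-$T$ form. In the sketch below the persistent current and tunneling energy are illustrative values of the order extracted from such coupled flux traces, and $M_{\\text{eff}}$ is deliberately scaled far below its calibrated value, an assumption made purely to enforce $J\\ll\\Delta_q$.

```python
import math

# Check the small-J, low-T limit of the balancing condition. Constants are
# CODATA values; m_eff is scaled down (an assumption) so that J << Delta_q.
KB = 1.380649e-23       # J/K
H = 6.62607015e-34      # J s
PHI0 = 2.067833848e-15  # Wb

ip_q = 0.72e-6              # illustrative |I_q^p| [A]
delta = H * 2.64e9          # illustrative Delta_q [J]
ip_d = 1.25e-6              # |I_d^p| [A]
m_eff = 1.56e-12 * 1e-3     # deliberately tiny coupling [H]
twoJ = 2 * m_eff * ip_q * ip_d

def eps_d(eps_q, T):
    """Balancing condition, Eq. (HalfCondGeneral)."""
    f_p = math.sqrt((eps_q + twoJ) ** 2 + delta ** 2)
    f_m = math.sqrt((eps_q - twoJ) ** 2 + delta ** 2)
    return 0.5 * (f_p - f_m) + KB * T * math.log(
        (1 + math.exp(-f_p / (KB * T))) / (1 + math.exp(-f_m / (KB * T))))

eps_q = 2 * ip_q * (0.5e-3 * PHI0)        # source bias of 0.5 mPhi_0
limit = twoJ * eps_q / math.sqrt(eps_q ** 2 + delta ** 2)
print(eps_d(eps_q, 1e-3) / limit)         # -> 1 to high accuracy
```

The ratio deviates from unity only at second order in $2J\/\\sqrt{\\epsilon_q^2+\\Delta_q^2}$, confirming the stated limit.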
This feature becomes more pronounced with increasing $C_q$ and is peaked at the value of $\\Phi^x_{\\text{ccjj}}$ for which the source qubit potential becomes monostable, $\\beta_{\\text{eff}}=1$. Nonetheless, the model also indicates that $\\chi$ in no way modifies the dynamics of the rf-SQUID, thus the qubit model still applies. From fitting these particular data, we obtained $|I_q^p|=0.72\\pm 0.04\\,\\mu$A and $\\Delta_q\/h=2.64\\pm 0.24\\,$GHz.\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{deltawaveforms.pdf}\n\\caption{\\label{fig:deltawaveforms} (color online) Depiction of large $\\Delta_q$ measurement waveforms. The waveform sequence is similar to that of Fig.~\\ref{fig:iqplockin}, albeit with the source qubit's tunnel barrier partially suppressed ($-\\Phi_0<\\Phi^x_{\\text{ccjj}}<-\\Phi_0\/2$) and a second qubit (as opposed to a QFP) serving as the flux detector.}\n\\end{figure}\n\nIn practice we have found it inefficient to take detailed traces of $\\Phi_d^x$ versus $\\Phi_q^x$ as this procedure is susceptible to corruption by LF flux noise in the detector qubit. As an alternative approach, we have adapted the LF flux noise rejecting procedures introduced in the last section of this article to measure a series of three differential flux levels in the detector qubit. The waveforms needed to accomplish this task are depicted in Fig.~\\ref{fig:deltawaveforms}. Here, the dc-SQUID and QFP connected to the detector qubit are used in latching readout mode while the detector qubit is annealed in the presence of a differential flux bias $\\Phi_m\\pm\\delta\\Phi_m$ which is controlled via feedback. Meanwhile, the source qubit's CCJJ bias is pulsed to an intermediate level $-\\Phi_0<\\Phi^x_{\\text{ccjj}}<-\\Phi_0\/2$ in the presence of an initialization flux bias $\\pm\\Phi_q^i$. 
By choosing two appropriate pairs of levels $\\pm\\Phi_q^i$, as indicated by the solid points $1\\pm$ and $2\\pm$ in Fig.~\\ref{fig:largedeltatrace}, one can extract $\\left|I_q^p\\right|$ and $\\chi$ from the two differential flux measurements. In order to extract $\\Delta_q$, we then choose a pair of $\\pm\\Phi_q^i$ in the center of the trace, as indicated by the solid points $3\\pm$, from which we obtain the central slope $d\\Phi_d^x\/d\\Phi_q^x$. Taking the first derivative of Eq.~(\\ref{eqn:HalfCondGeneral}) and evaluating at $\\Phi_q^x=0$ yields\n\\begin{equation}\n\\label{eqn:centralslope}\n\\frac{d\\Phi_d^x}{d\\Phi_q^x}-\\chi=\\frac{2M_{\\text{eff}}\\left|I_q^p\\right|^2}{\\sqrt{\\left(2J\\right)^2+\\Delta_q^2}}\\tanh\\left[\\frac{\\sqrt{\\left(2J\\right)^2+\\Delta_q^2}}{2k_BT}\\right] \\; .\n\\end{equation}\n\n\\noindent Given independent estimates of all other parameters, one can then extract $\\Delta_q$ from this final differential flux measurement.\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{QU1IpSummary_ver6.pdf} \\\\\n\\includegraphics[width=3.25in]{QU1DeltaSummary_ver6.pdf}\n\\caption{\\label{fig:DeltaAndIp} (color online) a) Magnitude of the persistent current $\\left|I_q^p\\right|$ as a function of $\\Phi^x_{\\text{ccjj}}$. b) Tunneling energy $\\Delta_q$ between two lowest lying states of the CCJJ rf-SQUID as a function of $\\Phi^x_{\\text{ccjj}}$, as characterized by macroscopic resonant tunneling [MRT] and Landau-Zener [LZ] in the incoherent regime and coupled groundstate persistent current ($\\bra{g}\\hat{I}_q^p\\ket{g}$) in the coherent regime. Solid curves are the predictions of the ideal CCJJ rf-SQUID model using independently calibrated $L_q$, $L_{\\text{ccjj}}$, $I_q^c$ and $C_q$ with no free parameters.}\n\\end{figure}\n\nA summary of experimental values of the qubit parameters $\\left|I_q^p\\right|$ and $\\Delta_q$ versus $\\Phi^x_{\\text{ccjj}}$ is shown in Fig.~\\ref{fig:DeltaAndIp}. 
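To make the $\\Delta_q$ extraction via Eq.~(\\ref{eqn:centralslope}) concrete, the sketch below (our illustration, not the authors' analysis code) implements the right side of that equation and inverts it for $\\Delta_q$ by bisection, using the calibration values quoted earlier. The coupling energy is taken to be $J=M_{\\text{eff}}|I_q^p||I_d^p|$, which is an assumption on our part.

```python
import math

# Physical constants
k_B = 1.380649e-23          # Boltzmann constant [J/K]
h = 6.62607015e-34          # Planck constant [J s]

# Calibration values quoted in the text (assumed inputs)
M_eff = 1.56e-12            # qubit-detector mutual inductance [H]
I_q = 0.72e-6               # source qubit persistent current [A]
I_d = 1.25e-6               # detector qubit persistent current [A]
T = 0.054                   # device temperature [K]
J = M_eff * I_q * I_d       # qubit-detector coupling energy [J] (assumed definition)

def central_slope(Delta):
    """Right-hand side of Eq. (centralslope): d(Phi_d^x)/d(Phi_q^x) - chi."""
    E = math.sqrt((2.0 * J) ** 2 + Delta ** 2)
    return 2.0 * M_eff * I_q ** 2 / E * math.tanh(E / (2.0 * k_B * T))

def delta_from_slope(slope, lo=h * 1e7, hi=h * 1e11, tol=1e-30):
    """Invert Eq. (centralslope) for Delta_q by bisection.

    central_slope(Delta) is monotonically decreasing in Delta, so a
    simple bisection on [lo, hi] suffices.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if central_slope(mid) > slope:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip: a qubit with Delta_q/h = 2.64 GHz produces a slope from
# which the same Delta_q is recovered.
Delta_true = h * 2.64e9
slope = central_slope(Delta_true)
Delta_rec = delta_from_slope(slope)
```

Because the slope is monotonically decreasing in $\\Delta_q$, the bisection converges to the unique solution within the bracket.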
Here, we have taken $\\Delta_q$ from LOMRT and Landau-Zener experiments in the incoherent regime and from the LF flux noise rejecting persistent current procedure discussed above in the coherent regime. The large gap between the three sets of measurements arises for two reasons: First, the relatively low bandwidth of our bias lines does not allow us to perform MRT or Landau-Zener measurements at higher $\\Delta_q$ where the dynamics are faster. Second, while the coherent regime method worked for $\\Delta_q>k_BT$, it proved difficult to reliably extract $\\Delta_q$ in the opposite limit. As such, we cannot make any precise statements regarding the value of $\\Phi^x_{\\text{ccjj}}$ which serves as the delineation between the coherent and incoherent regimes based upon the data shown in Fig.~\\ref{fig:DeltaAndIp}b. Operating the device at lower temperature would assist in extending the utility of the coherent regime method to lower $\\Delta_q$. On the other hand, given that Eq.~(\\ref{eqn:LOMRTFit}) predicts that $\\Gamma\\propto\\Delta_q^2$, one would have to augment the experimental bandwidth by at least two orders of magnitude to gain one order of magnitude in $\\Delta_q$ via either MRT or LZ experiments.\n\nThe solid curves in Fig.~\\ref{fig:DeltaAndIp} were generated with the ideal CCJJ rf-SQUID model using the independently calibrated $L_q=265.4\\,$pH, $L_{\\text{ccjj}}=26\\,$pH, $I_q^c=3.103\\,\\mu$A and $C_q=190\\,$fF. Note that there are no free parameters. It can be seen that the agreement between theory and experiment is quite reasonable. 
Thus we reach the second key conclusion of this article: The CCJJ rf-SQUID can be identified as a flux qubit as the measured $\\left|I_q^p\\right|$ and $\\Delta_q$ agree with the predictions of a quantum mechanical Hamiltonian whose parameters were independently calibrated.\n\n\n\\section{Noise}\n\nWith the identification of the CCJJ rf-SQUID as a flux qubit firmly established, we now turn to assessing the relative quality of this device in comparison to other flux qubits reported upon in the literature. In this section, we present measurements of the low frequency flux and critical current spectral noise densities, $S_{\\Phi}(f)$ and $S_{I}(f)$, respectively. Finally, we provide explicit links between $S_{\\Phi}(f)$ and the free induction (Ramsey) decay time $T^*_{2}$ that would be relevant were this flux qubit to be used as an element in a gate model quantum information processor.\n\n\\subsection{Flux Noise}\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{QU1FluxNoise_ver1.pdf}\n\\caption{\\label{fig:fluxnoise} (color online) Low frequency flux noise in the CCJJ rf-SQUID flux qubit. Data [points] have been fit to Eq.~(\\ref{eqn:1OverF}) [solid curve].}\n\\end{figure}\n\n\nLow frequency ($1\/f$) flux noise is ubiquitous in superconducting devices and is considered a serious impediment to the development of large scale solid state quantum information processors \\cite{1OverF}. We have performed systematic studies of this property using a large number of flux qubits of varying geometry \\cite{1OverFGeometry} and, more recently, as a function of materials and fabrication parameters. These latter studies have aided in the reduction of the amplitude of $1\/f$ flux noise in our devices and will be the subject of a forthcoming publication. Using the methods described in Ref.~\\onlinecite{1OverFGeometry}, we have generated the one-sided flux noise power spectral density $S_{\\Phi}(f)$ shown in Fig.~\\ref{fig:fluxnoise}. 
These data have been fit to the generic form\n\\begin{equation}\n\\label{eqn:1OverF}\nS(f)=\\frac{A^2}{f^{\\alpha}}+w_n\\; ,\n\\end{equation}\n\n\\noindent with best fit parameters $\\alpha=0.95\\pm 0.05$, $\\sqrt{w_n}=9.7\\pm 0.5\\,\\mu\\Phi_0\/\\sqrt{\\text{Hz}}$ and amplitude $A$ such that $\\sqrt{S_{\\Phi}(1\\,\\text{Hz})}=1.3^{+0.7}_{-0.5}\\,\\mu\\Phi_0\/\\sqrt{\\text{Hz}}$. Thus we reach the third key conclusion of this article: We have demonstrated that it is possible to achieve $1\/f$ flux noise levels with Nb wiring that are as low as the best Al wire qubits reported in the literature \\cite{1OverF,1OverFFluxQubit1,1OverFFluxQubit2}. Moreover, we have measured similar spectra from a large number of identical flux qubits, both on the same and different chips, and can state with confidence that the $1\/f$ amplitude reported herein is reproducible. Given the experimentally observed geometric scaling of $S_{\\Phi}(1\\,\\text{Hz})$ in Ref.~\\onlinecite{1OverFGeometry} and the relatively large size of our flux qubit bodies, we consider the prospects of observing even lower $1\/f$ noise in smaller flux qubits from our fabrication facility to be very promising.\n\n\\subsection{Critical Current Noise}\n\nA second noise metric of note is the critical current noise spectral density $S_I(f)$. This quantity has been studied extensively and a detailed comparison of experimental results is presented in Ref.~\\onlinecite{vanHarlingen}. A recent study of the temperature and geometric dependence of critical current noise has been published in Ref.~\\onlinecite{NewCriticalCurrentNoise}. Based upon Eq.~(18) of Ref.~\\onlinecite{vanHarlingen}, we estimate that the $1\/f$ critical current noise from a single $0.6\\,\\mu$m diameter junction, as found in the CCJJ rf-SQUID flux qubit, will have an amplitude such that $\\sqrt{S_I(1\\,\\text{Hz})}\\sim 0.2\\,$pA$\/\\sqrt{\\text{Hz}}$. Unfortunately, we were unable to directly measure critical current noise in the flux qubit. 
While the QFP-enabled readout provided high fidelity qubit state discrimination when the qubits were fully annealed to $\\Phi^x_{\\text{ccjj}}=-\\Phi_0$, this readout mechanism simply lacked the sensitivity required for performing high resolution critical current noise measurements. In lieu of a measurement of $S_I(f)$ from a qubit, we have characterized this quantity for the dc-SQUID connected to the qubit in question. The dc-SQUID had two $0.6\\,\\mu$m junctions connected in parallel. A time trace of the calibrated switching current $I_{\\text{sw}}\\approx I_c$ was obtained by repeating the waveform sequence depicted in Fig.~\\ref{fig:unipolarannealing}b except with $\\Phi^x_{\\text{latch}}=-\\Phi_0\/2$ at all times (QFP disabled, minimum persistent current) and $\\Phi^x_{\\text{ro}}=0$ to provide minimum sensitivity to flux noise. Assuming that the critical current noise from each junction is uncorrelated, the best that we could establish was an upper bound of $\\sqrt{S_I(1\\,\\text{Hz})}\\lesssim 7\\,$pA$\/\\sqrt{\\text{Hz}}$ for a single $0.6\\,\\mu$m diameter junction.\n\nGiven the upper bound cited above for critical current noise from a single junction, we now turn to assessing the relative impact of this quantity upon the CCJJ rf-SQUID flux qubit. It is shown in Appendix B that fluctuations in the critical currents of the individual junctions of a CCJJ generate apparent flux noise in the flux qubit by modulating $\\Phi_q^0$. Inserting critical current fluctuations of magnitude $\\delta I_c\\lesssim 7\\,$pA$\/\\sqrt{\\text{Hz}}$ and a mean junction critical current $I_c=I_q^c\/4\\sim 0.8\\,\\mu$A into Eq.~(\\ref{eqn:4JOffsetFluctuation}) yields qubit degeneracy point fluctuations $\\left|\\delta\\Phi_q^0\\right|\\lesssim 0.1\\,\\mu\\Phi_0\/\\sqrt{\\text{Hz}}$. This final result is at least one order of magnitude smaller than the amplitude of $1\/f$ flux noise inferred from the data in Fig.~\\ref{fig:fluxnoise}. 
As such, we consider the effects of critical current noise in the CCJJ rf-SQUID to be tolerable.\n\n\\subsection{Estimation of $T^*_{2}$}\n \nWhile measurements of noise power spectral densities are the most direct way of reporting upon and comparing between different qubits, our research group is frequently asked what the dephasing time is for our flux qubits. The answer presumably depends very strongly upon bias settings, for recall that we have measured properties of the CCJJ rf-SQUID flux qubit in both the coherent and incoherent regimes. Given that our apparatuses contain only low bandwidth bias lines for enabling AQO, we are unable to measure dephasing within our own laboratory. Collaborative efforts to measure dephasing for our flux qubits are in progress. In the meantime, we provide a rough estimate below for our flux qubits if they were biased to the optimal point, $\\Phi^x_q=\\Phi_q^0$, based upon the measured $S_{\\Phi}(f)$, and subjected to a free induction decay, or Ramsey fringe, experiment. Referring to Eq.~(33a) of Ref.~\\onlinecite{Martinis} and key results from Ref.~\\onlinecite{Schnirman}, the mean squared phase noise for a flux qubit at the optimal point will be given by\n\\begin{equation}\n\\label{eqn:dephasing1}\n\\left<\\phi_n^2(t)\\right>=\\frac{1}{\\hbar^2}\\frac{\\left(2\\left|I_q^p\\right|\\right)^4}{2\\Delta^2}\\int^{\\Delta\/h}_{f_m}\\! df S_{\\Phi^2}(f)\\frac{\\sin^2(\\pi f t)}{(\\pi f)^2} \\; ,\n\\end{equation}\n\n\\noindent where $S_{\\Phi^2}(\\omega)$ represents the quadratic flux noise spectral density and $f_m$ is the measurement cutoff frequency. Assuming that the first order spectral density $S_{\\Phi}(\\omega)=2\\pi A^2\/\\omega$, then $S_{\\Phi^2}(\\omega)$ can be written as \n\\begin{eqnarray}\n\\label{eqn:sphisquared}\nS_{\\Phi^2}(\\omega) & = & \\frac{1}{2\\pi}\\int\\! dt e^{-i\\omega t}\\left<\\Phi_n^2(t) \\Phi_n^2(0)\\right> \\nonumber\\\\\n & = & \\frac{1}{2\\pi}\\int\\! dt e^{-i\\omega t} \\int d\\omega^{\\prime}\\frac{2\\pi A^2}{\\omega^{\\prime}}e^{i\\omega^{\\prime} t}\\int d\\omega^{\\prime\\prime}\\frac{2\\pi A^2}{\\omega^{\\prime\\prime}}e^{i\\omega^{\\prime\\prime} t} \\nonumber\\\\\n & = & \\int d\\omega^{\\prime}\\frac{2\\pi A^2}{\\omega^{\\prime}}\\cdot\\frac{2\\pi A^2}{\\omega-\\omega^{\\prime}} \\nonumber\\\\\n & = & 8\\pi^2 A^4\\frac{\\ln\\left(\\omega\/\\omega_{\\text{ir}}\\right)}{\\omega} \\; ,\n\\end{eqnarray}\n\n\\noindent where the time integral in the second line produces a factor $2\\pi\\delta(\\omega^{\\prime}+\\omega^{\\prime\\prime}-\\omega)$ and $\\omega_{\\text{ir}}\\equiv 2\\pi f_{\\text{ir}}$ denotes an infrared cutoff of the $1\/f$ noise spectral density. Inserting Eq.~(\\ref{eqn:sphisquared}) into Eq.~(\\ref{eqn:dephasing1}) and rendering the integral dimensionless then yields:\n\\begin{equation}\n\\label{eqn:dephasing2}\n\\left<\\phi_n^2(t)\\right>=\\frac{t^2}{\\hbar^2}\\frac{\\left(2\\left|I_q^p\\right| A\\right)^4}{\\pi\\Delta^2}\\int^{\\Delta t\/h}_{f_{\\text{min}}t}\\! dx \\frac{\\ln\\left(x\/f_{\\text{ir}}t\\right)\\sin^2(\\pi x)}{x^3} \\; ,\n\\end{equation}\n\n\\noindent where $f_{\\text{min}}=\\max\\left(f_m,\\,f_{\\text{ir}}\\right)$. We have numerically studied the behavior of the integral in Eq.~(\\ref{eqn:dephasing2}). In the very long measurement time limit the integral is cut off by $f_{\\text{ir}}$ and the integral varies as $1\/t^2$, which then cancels the factor of $t^2$ in the numerator of Eq.~(\\ref{eqn:dephasing2}). This means that the mean squared phase noise eventually reaches a finite limit. However, the more experimentally relevant limit is $f_m\\gg f_{\\text{ir}}$, for which we found empirically that the integral varies roughly as $5\\times\\left[\\ln\\left(f_m\/f_{\\text{ir}}\\right)\\right]^2$ over many orders of magnitude in the argument of the logarithm. 
In this latter limit the result is independent of $t$, so Eq.~(\\ref{eqn:dephasing2}) can be rewritten as $\\left<\\phi_n^2(t)\\right>=t^2\/(T^*_{2})^2$, which then yields the following formula for $T^*_{2}$:\n\\begin{equation}\n\\label{eqn:Tphi}\nT^*_{2}\\approx\\left[\\frac{1}{\\hbar^2}\\frac{\\left(2\\left|I_q^p\\right| A\\right)^4}{\\pi\\Delta^2}5\\left[\\ln\\left(f_m\/f_{\\text{ir}}\\right)\\right]^2\\right]^{-1\/2} \\; .\n\\end{equation}\n\nSince flux noise spectra seem to obey the $1\/f$ form down to at least $0.1\\,$mHz and researchers are generally concerned with dephasing over times of order $1\\,\\mu$s, it is fair to consider $f_m\/f_{\\text{ir}}\\sim 10^{10}$. For a nominal value of $\\Phi^x_{\\text{ccjj}}$ such that the flux qubit is in the coherent regime, say $-0.652\\,\\Phi_0$, the qubit parameters are $\\Delta_q\/h\\approx 2\\,$GHz and $\\left|I_q^p\\right|\\approx 0.7\\,\\mu$A. Substituting these quantities into Eq.~(\\ref{eqn:Tphi}) then yields $T^*_{2}\\sim 150\\,$ns. This estimate of the dephasing time is comparable to that observed in considerably smaller flux qubits with comparable $1\/f$ flux noise levels \\cite{1OverFFluxQubit1,1OverFFluxQubit2}.\n\n\n\\section{Conclusions}\n\nOne can draw three key conclusions from the work presented herein: First, the CCJJ rf-SQUID is a robust and scalable device in that it allows for in-situ correction for parametric variations in Josephson junction critical currents and device inductance, both within and between flux qubits using only static flux biases. Second, the measured flux qubit properties, namely the persistent current $\\left|I_q^p\\right|$ and tunneling energy $\\Delta_q$, agree with the predictions of a quantum mechanical Hamiltonian whose parameters have been independently calibrated, thus justifying the identification of this device as a flux qubit. 
Third, it has been experimentally demonstrated that the low frequency flux noise in this all Nb wiring flux qubit is comparable to the best all Al wiring devices reported upon in the literature. Taken in summation, these three conclusions represent a significant step forward in the development of useful large scale superconducting quantum information processors.\n\nWe thank J.~Hilton, P.~Spear, A.~Tcaciuc, F.~Cioata, M.~Amin, F.~Brito, D.~Averin, A.~Kleinsasser and G.~Kerber for useful discussions. Siyuan Han was supported in part by NSF Grant No. DMR-0325551.\n\n\\begin{appendix}\n\n\\section{CJJ rf-SQUID}\n\nLet the qubit and cjj loop phases be defined as \n\\begin{subequations}\n\\begin{equation}\n\\varphi_q\\equiv\\left(\\varphi_1+\\varphi_2\\right)\/2 \\; ,\n\\end{equation} \n\\begin{equation}\n\\varphi_{\\text{cjj}}\\equiv\\varphi_1-\\varphi_2 \\; ,\n\\end{equation}\n\\end{subequations}\nrespectively. Furthermore, assume that the CJJ loop has an inductance $L_{\\text{cjj}}$ that is divided symmetrically between the two paths. 
Using trigonometric relations, one can write a Hamiltonian for this system in terms of modes in the $q$ and cjj loops that has the following form:\n\\begin{subequations}\n\\begin{eqnarray}\n\\label{eqn:2J2DH}\n{\\cal H} & = & \\sum_n\\left[\\frac{Q_n^2}{2C_n}+U_n\\frac{(\\varphi_n-\\varphi_n^x)^2}{2}\\right] \\nonumber\\\\\n & & -U_q\\beta_+\\cos\\left(\\frac{\\varphi_{\\text{cjj}}}{2}\\right)\\cos\\left(\\varphi_q\\right) \\nonumber\\\\\n & & +U_q\\beta_-\\sin\\left(\\frac{\\varphi_{\\text{cjj}}}{2}\\right)\\sin\\left(\\varphi_q\\right) \\; ;\n\\end{eqnarray}\n\\vspace{-12pt}\n\\begin{equation}\n\\beta_{\\pm}=\\frac{2\\pi L_q\\left(I_{1}\\pm I_{2}\\right)}{\\Phi_0} \\; ,\n\\end{equation}\n\\end{subequations}\n\n\\noindent where the sum is over $n\\in\\left\\{q,\\text{cjj}\\right\\}$, $C_q\\equiv C_1+C_2$, $1\/C_{\\text{cjj}}\\equiv 1\/C_1+1\/C_2$, $U_n\\equiv (\\Phi_0\/2\\pi)^2\/L_n$, $L_q\\equiv L_{\\text{body}}+L_{\\text{cjj}}\/4$ and $[\\Phi_0\\varphi_n\/2\\pi,Q_n]=i\\hbar$. The Josephson potential energy of Hamiltonian (\\ref{eqn:2J2DH}) can be rearranged by defining an angle $\\theta$ such that $\\tan\\theta=(\\beta_-\/\\beta_+)\\tan\\left(\\varphi_{\\text{cjj}}\/2\\right)$. Further trigonometric manipulation then yields Eqs.~(\\ref{eqn:2JHeff})-(\\ref{eqn:2Jbetapm}). 
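The trigonometric rearrangement invoked above amounts to the harmonic addition identity. The following sketch (ours) verifies it numerically for one pair of Josephson terms, with the effective amplitude and phase offset written in the same form as the analogous minor-loop expressions of Appendix B; the specific $\\beta_{\\pm}$ values are illustrative.

```python
import math

def josephson_terms(beta_p, beta_m, phi_cjj, phi_q):
    """The two Josephson terms of Hamiltonian (2J2DH), in units of U_q."""
    return (-beta_p * math.cos(phi_cjj / 2) * math.cos(phi_q)
            + beta_m * math.sin(phi_cjj / 2) * math.sin(phi_q))

def effective_term(beta_p, beta_m, phi_cjj, phi_q):
    """Collapsed form -beta_eff*cos(phi_q - phi_q0) obtained via the
    theta substitution tan(theta) = (beta_-/beta_+) tan(phi_cjj/2)."""
    t = (beta_m / beta_p) * math.tan(phi_cjj / 2)
    beta_eff = beta_p * math.cos(phi_cjj / 2) * math.sqrt(1 + t ** 2)
    phi_q0 = -math.atan(t)
    return -beta_eff * math.cos(phi_q - phi_q0)

# Spot-check the identity on a grid of phases (phi_cjj/2 kept in the
# first quadrant so that the square root carries the correct sign).
checks = []
for phi_cjj in (0.3, 0.9, 1.5):
    for phi_q in (-1.0, 0.2, 2.5):
        lhs = josephson_terms(2.0, 0.03, phi_cjj, phi_q)
        rhs = effective_term(2.0, 0.03, phi_cjj, phi_q)
        checks.append(abs(lhs - rhs))
```

The two forms agree to machine precision, which is the content of the $\\theta$ strategy used throughout the appendices.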
\n\n\\section{CCJJ rf-SQUID}\n\nFollowing the same logic as for the CJJ rf-SQUID, one can define four orthogonal quantum mechanical degrees of freedom as follows:\n\\begin{subequations}\n\\begin{eqnarray}\n\\label{eqn:ccjjphases}\n\\varphi_L & \\equiv & \\varphi_1-\\varphi_2 \\; ;\\\\\n\\varphi_R & \\equiv & \\varphi_3-\\varphi_4 \\; ;\\\\\n\\varphi_{\\text{ccjj}} & \\equiv & \\varphi_{\\ell}-\\varphi_r=\\frac{\\varphi_1+\\varphi_2}{2}-\\frac{\\varphi_3+\\varphi_4}{2} \\; ;\\\\\n\\varphi_q & \\equiv & \\frac{\\varphi_{\\ell}+\\varphi_r}{2}=\\frac{\\varphi_1+\\varphi_2+\\varphi_3+\\varphi_4}{4}\\; .\n\\end{eqnarray}\n\\end{subequations}\n\n\\noindent Using the same strategy as in Appendix A, one can use trigonometric identities to first express the Josephson potential in terms of the $L$ and $R$ loop modes:\n\\begin{subequations}\n\\begin{eqnarray}\n\\label{eqn:4J4DH}\n{\\cal H} & = & \\sum_n\\frac{Q_n^2}{2C_n}+\\sum_mU_m\\frac{(\\varphi_m-\\varphi_m^x)^2}{2} \\nonumber\\\\\n & & -U_q\\beta_{L+}\\cos\\left(\\frac{\\varphi_L}{2}\\right)\\cos\\left(\\varphi_{\\ell}\\right) \\nonumber\\\\\n & & +U_q\\beta_{L-}\\sin\\left(\\frac{\\varphi_L}{2}\\right)\\sin\\left(\\varphi_{\\ell}\\right) \\nonumber \\\\\n & & -U_q\\beta_{R+}\\cos\\left(\\frac{\\varphi_R}{2}\\right)\\cos\\left(\\varphi_{r}\\right) \\nonumber\\\\\n & & +U_q\\beta_{R-}\\sin\\left(\\frac{\\varphi_R}{2}\\right)\\sin\\left(\\varphi_{r}\\right) \\; ;\n\\end{eqnarray}\n\\vspace{-12pt}\n\\begin{equation}\n\\label{eqn:betalrpm}\n\\beta_{L(R),\\pm}\\equiv\\frac{2\\pi L_q\\left(I_{1(3)}\\pm I_{2(4)}\\right)}{\\Phi_0} \\; ,\n\\end{equation}\n\\end{subequations}\n\n\\noindent where the first sum is over $n\\in\\left\\{L,R,\\ell,r\\right\\}$ and the second sum is over closed inductive loops $m\\in\\left\\{L,R,\\text{ccjj},q\\right\\}$. As before, each of the modes obey the commutation relation $[\\Phi_0\\varphi_n\/2\\pi,Q_n]=i\\hbar$. 
Here, $1\/C_{L(R)}=1\/C_{1(3)}+1\/C_{2(4)}$, $C_{\\ell(r)}=C_{1(3)}+C_{2(4)}$ and $U_m=(\\Phi_0\/2\\pi)^2\/L_m$. \n\nWe have found it adequate for our work to assume that $L_{L,R}\/L_q\\ll 1$, which then allows one to reduce the four dimensional system given in Hamiltonian (\\ref{eqn:4J4DH}) to two dimensions. Consequently, we will substitute $\\varphi_{L(R)}=\\varphi^x_{L(R)}$ and ignore the $L$ and $R$ kinetic terms henceforth. Assuming that the inductance of the ccjj loop is divided equally between the two branches one can then write $L_q=L_{\\text{body}}+L_{\\text{ccjj}}\/4$. With these approximations and the $\\theta$ strategy presented in Appendix A, one can rearrange the Josephson potential terms to yield the following:\n\\begin{subequations}\n\\begin{eqnarray}\n\\label{eqn:4J2DHver1}\n{\\cal H} & = & \\sum_n\\frac{Q_n^2}{2C_n}+\\sum_mU_m\\frac{(\\varphi_m-\\varphi_m^x)^2}{2} \\nonumber\\\\\n & & -U_q\\beta_{L}\\cos\\left(\\varphi_{\\ell}-\\varphi_L^0\\right) \\nonumber\\\\\n & & -U_q\\beta_{R}\\cos\\left(\\varphi_{r}-\\varphi_R^0\\right) \\; ;\n\\end{eqnarray}\n\\vspace{-12pt}\n\\begin{eqnarray}\n\\label{eqn:betalr}\n\\beta_{L(R)} & = & \\beta_{L(R),+}\\cos\\left(\\frac{\\varphi_{L(R)}^x}{2}\\right) \\\\\n & & \\times\\sqrt{1+\\left[\\frac{\\beta_{L(R),-}}{\\beta_{L(R),+}}\\tan\\left(\\frac{\\varphi_{L(R)}^x}{2}\\right)\\right]^2} \\; ; \\nonumber\n\\end{eqnarray}\n\\vspace{-12pt}\n\\begin{equation}\n\\label{eqn:4JMinorOffset}\n\\varphi_{L(R)}^0\n =-\\arctan\\left(\\frac{\\beta_{L(R),-}}{\\beta_{L(R),+}}\\tan(\\varphi_{L(R)}^x\/2)\\right)\\; ,\n\\end{equation}\n\\end{subequations}\n\n\\noindent where the first sum is over $n\\in\\left\\{\\ell,r\\right\\}$ and the second sum is over $m\\in\\left\\{\\text{ccjj},q\\right\\}$. The Josephson potential is given by a sum of two cosines, as encountered in the CJJ rf-SQUID derivation of Hamiltonian (\\ref{eqn:2J2DH}) from Hamiltonian (\\ref{eqn:Hphase}). 
These two terms can be rewritten in the same manner by defining $\\beta_{\\pm}=\\beta_L\\pm\\beta_R$. The result, similar to Hamiltonian (\\ref{eqn:2J2DH}), can then be subjected to the $\\theta$ strategy to yield\n\\begin{subequations}\n\\begin{eqnarray}\n\\label{eqn:4J2DHver2}\n{\\cal H} & = & \\sum_n\\left[\\frac{Q_n^2}{2C_n}+U_n\\frac{(\\varphi_n-\\varphi_n^x)^2}{2}\\right] \\nonumber\\\\\n & & -U_q\\beta_{\\text{eff}}\\cos\\left(\\varphi_q-\\varphi_q^0\\right) \\; ,\n \\end{eqnarray}\n \n\\noindent where the sum is over $n\\in\\left\\{q,\\text{ccjj}\\right\\}$ and the capacitances are defined as $C_q=C_1+C_2+C_3+C_4$ and $1\/C_{\\text{ccjj}}=1\/(C_1+C_2)+1\/(C_3+C_4)$. The other parameters are defined as\n\\begin{equation}\n\\label{eqn:4JBeff}\n\\beta_{\\text{eff}}=\\beta_+\\cos\\left(\\frac{\\gamma}{2}\\right)\\sqrt{1+\\left[\\frac{\\beta_-}{\\beta_+}\\tan\\left(\\frac{\\gamma}{2}\\right)\\right]^2} \\; ;\n\\end{equation}\n\\vspace{-12pt}\n\\begin{equation}\n\\label{eqn:4JQOffset}\n\\varphi_q^0=\\frac{\\varphi_L^0+\\varphi_R^0}{2}+\\gamma_0 \\; ;\n\\end{equation}\n\\vspace{-12pt}\n\\begin{equation}\n\\label{eqn:4JCCJJPhase}\n\\gamma\\equiv\\varphi_{\\text{ccjj}}-\\left(\\varphi_L^0-\\varphi_R^0\\right) \\; ;\n\\end{equation}\n\\vspace{-12pt}\n\\begin{equation}\n\\label{eqn:4JCCJJMonkey}\n\\gamma_0\\equiv -\\arctan\\left(\\frac{\\beta_-}{\\beta_+}\\tan(\\gamma\/2)\\right) \\; ;\n\\end{equation}\n\\begin{equation}\n\\label{eqn:betaccjjpm}\n\\beta_{\\pm}\\equiv \\beta_L\\pm\\beta_R \\; .\n\\end{equation}\n\\end{subequations}\n\nHamiltonian (\\ref{eqn:4J2DHver2}) inherits much of its complexity from junction asymmetry both within the minor loops, which gives rise to $\\varphi_{L(R)}^0$, and effective junction asymmetry between the minor loops, which gives rise to $\\gamma_0$. For arbitrary external flux biases and nominal spread in junction critical current, the CCJJ rf-SQUID offers no obvious advantage over the CJJ rf-SQUID. 
However, upon choosing biases $\\Phi_L^x$ and $\\Phi_R^x$ such that \n\\begin{equation}\n\\label{eqn:balanced}\n\\beta_L=\\beta_R \\;\\; ,\n\\end{equation}\n\n\\noindent then $\\beta_-=0$, and consequently $\\gamma_0=0$. With these substitutions, Hamiltonian (\\ref{eqn:4J2DHver2}) yields Hamiltonian (\\ref{eqn:4JHeff}). Note that for $\\beta_{L(R),-}\/\\beta_{L(R),+}\\ll 1$ and modest $\\Phi^x_{L(R)}$, the so-called CCJJ balancing condition given by Eqs.~(\\ref{eqn:betalr}) and (\\ref{eqn:balanced}) can be written approximately as\n\\begin{displaymath}\n\\beta_{L,+}\\cos\\left(\\frac{\\varphi_L^x}{2}\\right) \\approx \\beta_{R,+}\\cos\\left(\\frac{\\varphi_R^x}{2}\\right) \\; ,\n\\end{displaymath}\n\n\\noindent which, upon solving for $\\Phi_L^x$, yields\n\\begin{equation}\n\\label{eqn:balancedapprox}\n\\Phi^x_{L} = \\frac{\\Phi_0}{\\pi}\\arccos\\left[\\frac{\\beta_{R,+}}{\\beta_{L,+}}\\cos\\left(\\frac{\\pi\\Phi^x_{R}}{\\Phi_0}\\right)\\right] \\; .\n\\end{equation}\n\nIt is possible for critical current noise to couple into the $\\varphi_q$ degree of freedom in any compound junction rf-SQUID qubit via modulation of the junction asymmetry-dependent apparent qubit flux offset $\\Phi_q^0$. In the case of the CCJJ rf-SQUID, all three quantities on the right side of Eq.~(\\ref{eqn:4JQOffset}) are ultimately related to the critical currents of the individual junctions. 
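As a quick numerical check of the balancing condition above (our sketch, with representative $\\beta$ values assumed for illustration): solving the approximate condition for $\\varphi_L^x$ and substituting back into the full expression for $\\beta_{L(R)}$ of Eq.~(\\ref{eqn:betalr}) should equalize $\\beta_L$ and $\\beta_R$ up to the neglected ${\\cal O}\\left[(\\beta_-\/\\beta_+)^2\\right]$ corrections.

```python
import math

def beta_full(beta_p, beta_m, phi_x):
    """Eq. (betalr): effective beta of one minor loop, including the
    small junction-asymmetry correction."""
    t = (beta_m / beta_p) * math.tan(phi_x / 2)
    return beta_p * math.cos(phi_x / 2) * math.sqrt(1 + t ** 2)

# Representative parameters (assumed): ~5% spread between the minor
# loops, ~1% asymmetry within each, and a modest flux in the R loop.
beta_Lp, beta_Lm = 2.00, 0.020
beta_Rp, beta_Rm = 1.90, -0.019
phi_Rx = 0.2 * math.pi

# Approximate balancing condition, solved for phi_L^x (phase units):
phi_Lx = 2 * math.acos((beta_Rp / beta_Lp) * math.cos(phi_Rx / 2))

bL = beta_full(beta_Lp, beta_Lm, phi_Lx)
bR = beta_full(beta_Rp, beta_Rm, phi_Rx)
mismatch = abs(bL - bR) / bR
```

For percent-level junction asymmetry the residual imbalance is far below the level of other parametric uncertainties.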
Given typical junction parameter spreads from our fabrication facility,\n\\begin{displaymath}\n\\left|\\frac{\\beta_{L(R),-}}{\\beta_{L(R),+}}\\right|=\\left|\\frac{I_{1(3)}-I_{2(4)}}{I_{1(3)}+I_{2(4)}}\\right|\\sim {\\cal O}(0.01) \\; ,\n\\end{displaymath}\n\n\\noindent so one can write an approximate expression for $\\varphi^0_{L(R)}$ using Eq.~(\\ref{eqn:4JMinorOffset}):\n\\begin{eqnarray}\n\\label{eqn:4JMinorOffsetApprox}\n\\varphi^0_{L(R)} & \\approx & -\\frac{I_{1(3)}-I_{2(4)}}{I_{1(3)}+I_{2(4)}}\\tan\\left(\\frac{\\varphi^x_{L(R)}}{2}\\right) \\nonumber\\\\\n & \\approx & -\\frac{I_{1(3)}-I_{2(4)}}{2I_c}\\tan\\left(\\frac{\\varphi^x_{L(R)}}{2}\\right) \\; ,\n\\end{eqnarray}\n\n\\noindent and for $\\gamma_0$ using Eqs.~(\\ref{eqn:betalr}), (\\ref{eqn:4JCCJJPhase}) and (\\ref{eqn:4JCCJJMonkey}):\n\\begin{eqnarray}\n\\label{eqn:4JCCJJMonkeyApprox}\n\\gamma_0 & \\approx & \\frac{(I_3+I_4)\\cos\\left(\\frac{\\varphi_R^x}{2}\\right)-(I_1+I_2)\\cos\\left(\\frac{\\varphi_L^x}{2}\\right)}{(I_1+I_2)\\cos\\left(\\frac{\\varphi_L^x}{2}\\right)+(I_3+I_4)\\cos\\left(\\frac{\\varphi_R^x}{2}\\right)}\\tan\\left(\\frac{\\gamma}{2}\\right) \\nonumber\\\\\n & \\approx & \\frac{(I_3+I_4)\\cos\\left(\\frac{\\varphi_R^x}{2}\\right)-(I_1+I_2)\\cos\\left(\\frac{\\varphi_L^x}{2}\\right)}{2I_c\\left[\\cos\\left(\\frac{\\varphi_L^x}{2}\\right)+\\cos\\left(\\frac{\\varphi_R^x}{2}\\right)\\right]}\\tan\\left(\\frac{\\gamma}{2}\\right) , \\nonumber\\\\\n & & \n\\end{eqnarray}\n\n\\noindent where $I_c$ represents the mean critical current of a single junction. The CCJJ rf-SQUID is intended to be operated with only small flux biases in the minor loops, thus $\\cos\\left(\\frac{\\varphi_L^x}{2}\\right)\\approx\\cos\\left(\\frac{\\varphi_R^x}{2}\\right)\\approx 1$. 
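The leading-order expansion of Eq.~(\\ref{eqn:4JMinorOffsetApprox}) is easily spot-checked against the exact Eq.~(\\ref{eqn:4JMinorOffset}) (our sketch; the junction currents below are illustrative):

```python
import math

# Illustrative junction critical currents with ~1% asymmetry [A]
I1, I2 = 0.808e-6, 0.792e-6
Ic = (I1 + I2) / 2          # mean junction critical current

def phi0_exact(phi_x):
    """Eq. (4JMinorOffset) for the L loop."""
    return -math.atan(((I1 - I2) / (I1 + I2)) * math.tan(phi_x / 2))

def phi0_approx(phi_x):
    """Leading-order expansion, Eq. (4JMinorOffsetApprox)."""
    return -((I1 - I2) / (2 * Ic)) * math.tan(phi_x / 2)

phi_x = 0.2 * math.pi       # modest minor-loop flux bias
err = abs(phi0_exact(phi_x) - phi0_approx(phi_x))
```

For a 1% asymmetry the linearized offset tracks the exact expression to better than one part in $10^4$ of the offset itself.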
It is also reasonable to assume that $\\gamma\\approx\\varphi^x_{\\text{ccjj}}$\nas the corrections to $\\tan(\\gamma\/2)$ from $\\varphi^0_{L(R)}$ and from the effective two-dimensionality of the rf-SQUID potential will be very small. Inserting Eqs.~(\\ref{eqn:4JMinorOffsetApprox}) and (\\ref{eqn:4JCCJJMonkeyApprox}) into Eq.~(\\ref{eqn:4JQOffset}) then yields\n\\begin{eqnarray}\n\\label{eqn:4JOffsetApprox}\n\\varphi^0_q & \\approx & -\\frac{I_1}{2I_c}\\left[\\tan\\left(\\frac{\\varphi_L^x}{2}\\right)+\\frac{1}{2}\\tan\\left(\\frac{\\varphi^x_{\\text{ccjj}}}{2}\\right)\\right] \\nonumber\\\\\n & & -\\frac{I_2}{2I_c}\\left[-\\tan\\left(\\frac{\\varphi_L^x}{2}\\right)+\\frac{1}{2}\\tan\\left(\\frac{\\varphi^x_{\\text{ccjj}}}{2}\\right)\\right] \\nonumber\\\\\n & & -\\frac{I_3}{2I_c}\\left[\\tan\\left(\\frac{\\varphi_R^x}{2}\\right)-\\frac{1}{2}\\tan\\left(\\frac{\\varphi^x_{\\text{ccjj}}}{2}\\right)\\right] \\nonumber\\\\\n& & -\\frac{I_4}{2I_c}\\left[-\\tan\\left(\\frac{\\varphi_R^x}{2}\\right)-\\frac{1}{2}\\tan\\left(\\frac{\\varphi^x_{\\text{ccjj}}}{2}\\right)\\right] \\; .\n\\end{eqnarray}\n\nFor the typical operating parameters described in this article, $\\Phi^x_{L(R)}\/\\Phi_0\\sim0.1$ and the device acts as a qubit for $\\Phi^x_{\\text{ccjj}}\/\\Phi_0\\sim 0.65$. For these flux biases, the magnitudes of the terms within the square braces in Eq.~(\\ref{eqn:4JOffsetApprox}) are all of order 1. Therefore, for general flux bias conditions, the apparent qubit flux offset is roughly given by\n\\begin{displaymath}\n\\Phi_q^0 \\approx -\\frac{\\Phi_0}{4\\pi}\\frac{(I_1+I_2)-(I_3+I_4)}{I_c} \\; .\n\\end{displaymath}\n\n\\noindent Assume that each junction experiences critical current fluctuations of magnitude $\\delta I_c$. 
If each junction's fluctuations are independent, \nthen the root mean square variation of the qubit degeneracy point $\\left|\\delta\\Phi_q^0\\right|$ will be \n\\begin{equation}\n\\label{eqn:4JOffsetFluctuation}\n\\left|\\delta\\Phi_q^0\\right| \\approx \\frac{\\Phi_0}{2\\pi}\\frac{\\delta I_c}{I_c} \\; .\n\\end{equation}\n\n\\noindent Thus, critical current fluctuations generate apparent flux noise in the CCJJ rf-SQUID flux qubit.\n\n\\end{appendix}\n\n\\section{Introduction} \\label{SecIntro}\nThe discovery of the interstellar object `Oumuamua in 2017 \\citep{MWM} led to a substantial increase in the expected number density of interstellar objects relative to certain earlier estimates \\citep{MTL09}. Recently, the identification of a putative interstellar meteor by \\citet{SL19a} enabled the determination of the flux of extrasolar objects impacting the Earth's atmosphere \\citep{SL19b}. \n\nThere are several avenues to analyze objects which originate beyond the Solar system (in short, extrasolar). First, one can send out spacecrafts to investigate interstellar dust in the neighborhood of Earth \\citep{LBG00}, unbound objects like `Oumuamua \\citep{SL18}, gravitationally captured objects within our Solar system \\citep{LL18}, or even nearby exoplanets like Proxima b.\\footnote{\\url{https:\/\/breakthroughinitiatives.org\/initiative\/3}} A second possibility entails remote sensing studies of interstellar meteors that burn up in Earth's atmosphere \\citep{SL19b} or objects that graze the Sun \\citep{FL19}. We will instead address a third route in this Letter: combing through lunar samples to search for extrasolar material. The same approach is utilizable, in principle, for detecting extrasolar material deposited on the surfaces of asteroids and comets.\n\nIt is very beneficial that extrasolar objects impact not only the Earth but also the Moon. The latter is advantageous from two different standpoints. 
First, as the Moon lacks an atmosphere, there is minimal ablation of small objects relative to Earth, consequently ensuring that they are preserved and do not burn up before impacting the surface. Second, it is well-known that the Moon is geologically inert with respect to the Earth over the past few Gyr \\citep{JHA12}. This feature ensures that the Moon, unlike the Earth, preserves a comprehensive geological record dating back almost to its formation around $4.5$ Ga.\n\nFrom a practical standpoint, the strategy of searching lunar samples has two benefits with respect to the alternatives mentioned earlier. First, the Apollo missions returned $\\sim 400$ kg of lunar material to the Earth, ensuring that it is feasible to examine these samples for extrasolar debris. Second, both the federal and private sectors have expressed an interest in going back to the Moon in the upcoming decade,\\footnote{\\url{https:\/\/www.nasa.gov\/specials\/apollo50th\/back.html}} and potentially establishing lunar bases in the long run.\\footnote{\\url{http:\/\/www.asi.org\/}} There are numerous benefits expected to accrue from the sustained \\emph{in situ} exploration of the Moon in areas as diverse as high-energy physics, medicine, planetary science and astrobiology \\citep{Cock10,CAC12}. We suggest that one should also include the detection of extrasolar material - in particular, the search for the building blocks of extraterrestrial life - to the list of benefits from lunar exploration.\n\nThe outline of the Letter is as follows. We predict the mass and number flux of extrasolar impactors striking the lunar surface in Section \\ref{SecMF}. We estimate the abundances of extrasolar material, organics, and biomolecular building blocks in Section \\ref{SecAbE}. Next, we briefly outline methodologies by which the extrasolar components may be detected in Section \\ref{SecSeaE}. 
Finally, we summarize our central results in Section \\ref{SecConc}.\n\n\\section{Mass flux of Extrasolar Impactors}\\label{SecMF}\nHenceforth, we use the subscript `S' to reference impactors whose origin lies within the Solar system (i.e., intrasolar) and the subscript `E' to denote impactors that originate outside the solar system (i.e., extrasolar). \n\nWe begin by assessing the number flux of extrasolar impactors on the Moon. In order to do so, we note that the contribution from gravitational focusing can be neglected since the correction factor $\\left(1 + v_\\mathrm{esc}^2\/v_\\infty^2\\right)$ is close to unity, where $v_\\mathrm{esc}$ is the escape velocity and $v_\\infty$ represents the excess velocity at a large distance. The probability distribution function for the impact flux is denoted by $\\mathcal{P}_E(m)$, in units of m$^{-2}$ s$^{-1}$ kg$^{-1}$, where $m$ is the mass of the impactor. We will work with a power-law function in the mass range of interest, i.e., $\\mathcal{P}_E(m) = C_E m^{-\\lambda_E}$, where $C_E$ is the proportionality constant and $\\lambda_E$ is the power-law index. This ansatz allows us to determine the number flux of impactors $\\dot{\\mathcal{N}}_E(m)$ with masses $> m$ as follows:\n\\begin{equation}\\label{PhiEI}\n \\dot{\\mathcal{N}}_E(m) = \\int_m^\\infty \\mathcal{P}_E(m')\\,dm' = \\frac{C_E}{|-\\lambda_E + 1|} m^{-\\lambda_E + 1}. \n\\end{equation}\nAs we are interested in extraterrestrial impactors, we will make the assumption that the number flux is roughly the same for the Moon and the Earth's atmosphere. This is fairly valid because the extra contribution from the orbital velocity of the Moon is smaller than $v_\\infty$ and $v_\\mathrm{esc}$ by approximately an order of magnitude. Note that the total number of impacts per unit time is \\emph{not} similar for both worlds, even if the number fluxes are comparable, because of their differing surface areas. 
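The cumulative-flux integral of Eq.~(\\ref{PhiEI}) is elementary, but it is easy to confirm numerically; the sketch below (ours, with an illustrative normalization) compares the closed form against a direct quadrature of the differential law.

```python
# Differential flux P_E(m) = C_E * m**(-lam); cumulative flux, Eq. (PhiEI):
# N_E(>m) = C_E / (lam - 1) * m**(1 - lam)   (valid for lam > 1)
C_E, lam = 5.0e-22, 2.14    # illustrative values consistent with the text

def n_closed(m):
    return C_E / (lam - 1) * m ** (1 - lam)

def n_numeric(m, decades=12, steps_per_decade=2000):
    """Trapezoidal integration of P_E over many decades above m, on a
    log-spaced grid so that the truncated tail is negligible."""
    total = 0.0
    n = decades * steps_per_decade
    for i in range(n):
        a = m * 10 ** (i / steps_per_decade)
        b = m * 10 ** ((i + 1) / steps_per_decade)
        total += 0.5 * (C_E * a ** -lam + C_E * b ** -lam) * (b - a)
    return total

m0 = 1.0
rel_err = abs(n_numeric(m0) - n_closed(m0)) / n_closed(m0)
```

The truncated tail beyond twelve decades contributes at the $10^{-14}$ level for this power law, so the comparison isolates the quadrature error.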
In conjunction with the compiled data from Figure 1 and Section 2 of \\citet{SL19b}, we estimate $\\dot{\\mathcal{N}}_E(m)$ to be\n\\begin{equation}\\label{PhiEEmp}\n \\dot{\\mathcal{N}}_E(m) \\sim 4.4 \\times 10^{-22}\\,\\mathrm{m^{-2}\\,s^{-1}}\\, \\left(\\frac{m}{1\\,\\mathrm{kg}}\\right)^{-1.14}. \n\\end{equation}\nAs a consistency check, if we substitute $m = 10^{-14}$ kg in the above expression, we obtain $\\dot{\\mathcal{N}}_E \\sim 4 \\times 10^{-6}$ m$^{-2}$ s$^{-1}$. This result is in reasonable agreement with the empirical estimate of $\\dot{\\mathcal{N}}_E \\sim 1 \\times 10^{-6}$ m$^{-2}$ s$^{-1}$ based on \\emph{in situ} measurements carried out by the Ulysses and Galileo spacecraft \\citep{LBG00}. In addition, the power-law exponent of $-1.14$ specified in (\\ref{PhiEEmp}) exhibits very good agreement with the empirical value of $-1.1$ from spacecraft observations \\citep{LBG00}. It is straightforward to determine $C_E$ and $\\lambda_E$ from (\\ref{PhiEI}) and (\\ref{PhiEEmp}); for example, we find $\\lambda_E = 2.14$ and, correspondingly, $C_E = (\\lambda_E - 1) \\times 4.4 \\times 10^{-22} \\approx 5.0 \\times 10^{-22}$ in SI units. \n\n\nIn a similar fashion, we can determine the flux of Solar system objects that impact the Moon. We define $\\mathcal{P}_S(m) = C_S m^{-\\lambda_S}$ and thereby compute $\\dot{\\mathcal{N}}_S(m)$ in the same manner as (\\ref{PhiEI}). At very small masses, the values of $C_S$ and $\\lambda_S$ are not tightly constrained. Older empirical measurements of interplanetary dust particles (IDPs) indicated that $\\lambda_S \\approx 2.34$ for $m > 10^{-7}$ kg \\citep{GHS11}, whereas more recent studies based on the Lunar Dust Experiment (LDEX) onboard the Lunar Atmosphere and Dust Environment Explorer (LADEE) concluded that $\\lambda_S \\approx 1.9$ for dust grains with masses $> 10^{-15}$ kg \\citep{SH16}. If we further suppose that the flux at Earth's atmosphere is comparable to that on the Moon, Figure 1 of \\citet{BA06} indicates that $\\lambda_S \\approx 1.9$. Thus, we find that $\\lambda_S$ is not very different from $\\lambda_E$.
We introduce the ansatz\n\\begin{equation}\\label{PhiSEmp}\n \\dot{\\mathcal{N}}_S(m) \\sim 6 \\times 10^{-19}\\,\\mathrm{m^{-2}\\,s^{-1}}\\, \\left(\\frac{m}{1\\,\\mathrm{kg}}\\right)^{-0.9},\n\\end{equation}\nwhere the normalization constant is chosen to preserve consistency with Figure 1 of \\citet{BA06}. For $m = 0.1$ kg, the above formula yields $\\dot{\\mathcal{N}}_S \\sim 4.8 \\times 10^{-18}$ m$^{-2}$ s$^{-1}$. By using the observational data in Figure 4 of \\citet{GHS11}, we obtain $\\dot{\\mathcal{N}}_S \\sim 6.3 \\times 10^{-18}$ m$^{-2}$ s$^{-1}$, indicating that the above ansatz may be a reasonable estimate.\n\n\nAlong the same lines, we can determine the mass flux of impactors within a given mass range of $\\left(m_\\mathrm{min},\\,m_\\mathrm{max}\\right)$. The corresponding mass flux, denoted by $\\dot{\\mathcal{M}}_{E,S}$, is\n\\begin{equation}\n \\dot{\\mathcal{M}}_{E,S} = \\int_{m_\\mathrm{min}}^{m_\\mathrm{max}} m'\\,\\mathcal{P}_{E,S}(m')\\,dm'.\n\\end{equation}\nFor our lower bound, we choose approximately $\\mu$m-sized objects (with $m_\\mathrm{min} = 10^{-15}$ kg) as they represent the smallest particles that may host organic material \\citep{FKF03,Kwok}; under the most favorable circumstances, they might be capable of transporting living or extinct microbes \\citep{Wes10}. Our upper bound of $m_\\mathrm{max} = 10^{15}$ kg is based on the fact that objects with higher masses are unlikely to have impacted the Moon over its current age. The ratio of the two mass fluxes ($\\delta_{ES}$) is defined as\n\\begin{equation}\\label{RatFlux}\n \\delta_{ES} \\equiv \\frac{\\dot{\\mathcal{M}}_E}{\\dot{\\mathcal{M}}_S} \\sim 2.6 \\times 10^{-3},\n\\end{equation}\nwhere the last equality follows from employing the preceding relations. In other words, the mass flux of extrasolar objects striking the Moon is potentially three orders of magnitude smaller than the mass flux of impactors originating from within our Solar system.
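As a purely numerical cross-check (ours, not part of the original analysis), the ratio $\delta_{ES}$ and the small-mass consistency check can be reproduced directly from the two power-law ansatzes and the integration limits quoted above:

```python
# Sanity check (illustrative, not the authors' code): reproduce the
# mass-flux ratio delta_ES from the cumulative number fluxes quoted above,
#   N_E(m) ~ 4.4e-22 * m**-1.14  and  N_S(m) ~ 6e-19 * m**-0.9
# (m in kg, fluxes in m^-2 s^-1), integrated between 1e-15 kg and 1e15 kg.

def mass_flux(norm, index, m_min=1e-15, m_max=1e15):
    """Mass flux for a cumulative number flux N(m) = norm * m**(-index).

    The differential flux is P(m) = norm * index * m**(-index - 1), so the
    mass flux, i.e. the integral of m * P(m) dm, evaluates to
    norm * index / (1 - index) * (m_max**(1 - index) - m_min**(1 - index)).
    """
    return norm * index / (1.0 - index) * (m_max**(1.0 - index) - m_min**(1.0 - index))

M_E = mass_flux(4.4e-22, 1.14)  # extrasolar mass flux, kg m^-2 s^-1
M_S = mass_flux(6.0e-19, 0.9)   # intrasolar mass flux, kg m^-2 s^-1
delta_ES = M_E / M_S

# Small-mass consistency check quoted in the text: N_E(1e-14 kg) ~ 4e-6 m^-2 s^-1
n_check = 4.4e-22 * (1e-14) ** -1.14

print(f'delta_ES ~ {delta_ES:.1e}')  # ~2.6e-03, as quoted
print(f'N_E(1e-14 kg) ~ {n_check:.0e} m^-2 s^-1')  # ~4e-06
```

Both numbers match the estimates quoted in the text; note how the extrasolar integral is dominated by the lower mass limit (index $> 2$ in the differential flux), whereas the intrasolar integral is dominated by the upper limit.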
\n\nWe caution that the scaling relations specified for $\\dot{\\mathcal{N}}_E$ and $\\dot{\\mathcal{N}}_S$ constitute merely heuristic estimates as they are subject to numerous uncertainties (most notably for the flux of extrasolar objects). It is likely that a single power-law function will not suffice, thereby necessitating the use of broken power-laws in future studies. Another simplification introduced herein is that the flux of impactors remains roughly constant over time. While this is approximately correct when it comes to intrasolar objects over the past few Gyr and possibly valid for extrasolar objects, it is \\emph{not} valid for intrasolar objects during the early stages of our Solar system ($\\gtrsim 4.0$ Ga), when the impact rates were a few orders of magnitude higher \\citep{CS92}.\n\n\\section{Abundance of Extrasolar Material on the Moon}\\label{SecAbE}\nThe ratio $\\delta_{ES}$ is valuable because it enables us to calculate the abundance of extrasolar material present near the lunar surface. However, in doing so, we rely upon the assumption that the gardening depths of intrasolar and extrasolar objects are comparable. This is not entirely unreasonable because the specific kinetic energy is proportional to $v_\\mathrm{esc}^2 + v_\\infty^2$, implying that its value for extrasolar objects is conceivably an order of magnitude higher than for intrasolar objects. \n\nIf the variations in gardening depth are ignored, the abundance of extrasolar material by weight ($\\phi_E$) is approximately proportional to $\\dot{\\mathcal{M}}_E$, consequently yielding $\\phi_E \\sim \\delta_{ES} \\phi_S$ with $\\phi_S$ signifying the abundance of (micro)meteoritic material originating from the Solar system. Based on the analysis of lunar samples, it has been estimated that this component makes up $\\sim 1$-$1.5\\%$ (by weight) of the lunar soil and $\\sim 1.28\\%$ of the lunar regolith \\citep{AGKM,MVT15}.
Therefore, by using the above expression for $\\phi_E$, we arrive at $\\phi_E \\sim 30$ ppm, namely, the mass fraction of extrasolar material is $\\sim 3 \\times 10^{-5}$. In comparison, material ejected from Earth and subsequently deposited on the Moon is predicted to occur at an abundance of $\\sim 1$-$2$ ppm at the surface \\citep{Arm10}.\n\nOf this extrasolar material, a fraction of $\\sim 10^{-3}$, amounting to an abundance of $\\sim 30$ ppb, is derived from halo stars \\citep{SL19c}. Another crucial point worth noting before proceeding further is that the preservation of older extrasolar material is feasible in principle because the Moon has been geologically inactive relative to Earth during the past few Gyr \\citep{JHA12}. If we suppose, for instance, that the material is uniformly distributed over time and adequately preserved, we find that $\\sim 10\\%$ of all extrasolar material would have been deposited $> 4$ Ga. In other words, the abundance of such material might be $\\sim 3$ ppm after using the previous result for $\\phi_E$.\n\nHowever, the extrasolar material deposited on the surface will comprise both inorganic and organic components. It is very difficult to estimate the abundance of the latter as we lack precise constraints on the abundance of organics in ejecta expelled from extrasolar systems as well as the likelihood of their survival during transit and impact with the lunar surface. Hence, our subsequent discussion must be viewed with due caution as we operate under the premise that (micro)meteorites and IDPs within the Solar system are not very atypical relative to other planetary systems.\\footnote{This line of reasoning goes by many names, including the Copernican Principle and the Principle of Mediocrity, and is often implicitly invoked in astrobiology.}\n\nWe begin by considering the abundance of extrasolar organic material. Even within the Solar system, the inventory of organic carbon varies widely across meteorites and IDPs.
For instance, it is believed that organic carbon comprises $\\sim 1.5$-$4\\%$ by weight in carbonaceous chondrites \\citep{PS10}, whereas it is lower for other classes of meteorites. When it comes to IDPs, laboratory analyses indicate that they possess $\\sim 10\\%$ carbon by weight on average \\citep{CS92,PCF06}. If we err on the side of caution and choose a mean value of $\\sim 1\\%$ as not all carbon is incorporated in organic material, we find that the abundance of extrasolar organic material ($\\phi_{E,O}$) may be $\\sim 3 \\times 10^{-7}$, namely, we obtain $\\phi_{E,O} \\sim 0.3$ ppm.\n\nHowever, it should be noted that the majority of organic carbon ($> 70\\%$) in carbonaceous chondrites is locked up in the form of insoluble compounds that are ``kerogen-like'' in nature \\citep{PS10,QOB14}. As organics constitute a very broad category, it is more instructive to focus on specific classes. We will henceforth mostly restrict ourselves to amino acids because they are building blocks for proteins and are therefore essential for life-as-we-know-it. Other organic compounds that were identified in meteorites include aliphatic and aromatic hydrocarbons, phosphonic and sulfonic acids, and polyols. \n\nWe begin by considering the abundance of amino acids. Meteorites exhibit different concentrations of amino acids with values ranging from $\\ll 1$ ppm to $\\gtrsim 100$ ppm \\citep{MAO}. The uncertainty for IDPs is even larger because only a few amino acids such as $\\alpha$-amino isobutyric acid have been detected and the average abundance of amino acids in IDPs remains poorly constrained \\citep{MPT04}. Hence, we will resort to an alternative strategy instead. The analysis of lunar samples from the Apollo missions indicates that the concentration of amino acids is $\\sim 0.1$-$100$ ppb with typical values on the order of $\\sim 10$ ppb \\citep{GZK72,HHW71,ECD16}. 
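The abundance chain above amounts to elementary arithmetic; the sketch below (ours, for illustration only) uses just the numbers already quoted in the text, namely $\delta_{ES} \sim 2.6 \times 10^{-3}$, a meteoritic regolith fraction $\phi_S \sim 1.28\%$, and the fiducial $\sim 1\%$ organic-carbon fraction:

```python
# Illustrative arithmetic for the quoted abundances (all inputs from the text).
delta_ES = 2.6e-3   # extrasolar-to-intrasolar mass-flux ratio
phi_S = 1.28e-2     # intrasolar (micro)meteoritic fraction of the regolith
f_org = 1e-2        # adopted organic-carbon mass fraction of the debris

phi_E = delta_ES * phi_S   # extrasolar mass fraction
phi_EO = f_org * phi_E     # extrasolar organic-carbon mass fraction

print(f'phi_E   ~ {phi_E * 1e6:.0f} ppm')   # ~33 ppm, i.e. the ~30 ppm quoted
print(f'phi_E,O ~ {phi_EO * 1e6:.1f} ppm')  # ~0.3 ppm
```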
\n\nEarlier, we determined that the extrasolar mass flux is lower by three orders of magnitude compared to the intrasolar mass flux. Hence, using the value of $\\delta_{ES}$ from (\\ref{RatFlux}), we find that the concentration of extrasolar amino acids is potentially $\\sim 30$ parts-per-trillion (ppt). However, this estimate is an upper bound in all likelihood because it presumes that the fiducial choice of $10$ ppb for amino acids in the lunar regolith arises solely from (micro)meteorite impacts. In actuality, on account of the high enantiomeric excesses detected, it is believed these samples have experienced some terrestrial biological contamination \\citep{ECD16}. \n\nIn analogy with the discovery of carboxylic acids and nucleobases - the building blocks of lipids and nucleic acids, respectively - in meteorites on Earth, it is plausible that these compounds might be found on the Moon. For example, analysis of meteorites has revealed that carboxylic acids may comprise $\\sim 40$-$300$ ppm \\citep{PCF06}. Adopting a fiducial value of $\\sim 10$ ppm for carboxylic acids in extrasolar material by erring on the side of caution, we estimate an abundance of $\\sim 0.3$ ppb for extrasolar carboxylic acids in the lunar regolith after using the prior estimate for $\\phi_E$. A similar analysis can be carried out for nucleobases by employing carbonaceous chondrites as a proxy. Choosing a nucleobase abundance of $\\sim 0.1$ ppm in chondrites \\citep{CSC11}, we obtain an estimate of $\\sim 3$ ppt for extrasolar nucleobases near the lunar surface. \n\nWe reiterate that the numbers described herein are rough estimates because a number of key processes are not tightly constrained. Apart from the direct contribution of extrasolar objects impacting the Moon, it is possible for extrasolar material to be deposited on intrasolar objects that subsequently impact the Moon and thereby deposit this material on the lunar surface. 
It is likely, however, that this contribution will be sub-dominant.\n\n\\section{Searching for Extrasolar Material on the Moon}\\label{SecSeaE}\nHitherto, we have calculated the abundance of extrasolar material deposited on the lunar surface. However, this raises an immediate question: how do we distinguish between material (e.g., micrometeorites and IDPs) derived from within and outside the Solar system?\n\nThe solution may lie, at least partly, in analyzing multiple isotope ratios of samples \\citep{LL18}. Of the various candidates, perhaps the best studied are the oxygen isotope ratios. In the oxygen three-isotope plot, involving the isotope ratios $^{17}$O\/$^{16}$O and $^{18}$O\/$^{16}$O, the terrestrial fractionation line has a slope of approximately $0.5$ whereas carbonaceous chondrites are characterized by a slope of $\\sim 1$\n\\citep{Clay03,KA09}. It should also be noted that the $^{17}$O\/$^{18}$O ratio exhibits a lower value in the Solar system in comparison to the Galactic average \\citep{NG12}. Thus, significant deviations from the Solar system values in the oxygen three-isotope plot might imply that the sample is extrasolar in origin. \n\nApart from oxygen isotopes, other extrasolar flags include carbon and nitrogen isotope ratios, corresponding to $^{12}$C\/$^{13}$C and $^{14}$N\/$^{15}$N, respectively \\citep{Mum11,FM15}. Note, for instance, that enhanced values of the $^{12}$C\/$^{13}$C ratio could arise in extrasolar objects that have traversed through regions in proximity to Young Stellar Objects \\citep{SPY15}. In addition to isotope ratios, anomalies in CN-to-OH ratios as well as the abundances of bulk elements, C$_2$ and C$_3$ molecules might also serve as effective methods for discerning extrasolar material \\citep{LS07,Sch08}. \n\nOnce the identification of extrasolar grains has been achieved, one could attempt to identify the organics present within them. 
A plethora of standard techniques can be employed, such as liquid chromatography-mass spectrometry. Using such procedures, the identification of amino acids, nucleobases and other organic compounds is feasible at sub-ppb concentrations \\citep{GDA06,CSC11,BSE12}. The detection of either nucleobases or amino acids that are prevalent in neither terrestrial nor meteoritic material would lend further credence to the notion that the sample in question may be extrasolar in nature.\\footnote{It is worth appreciating that meteorites contain ``exotic'' organic compounds that are very rare on Earth. For instance, the analysis of carbonaceous meteorites has revealed the existence of nucleobase analogs (e.g., purine) whose abundances are extremely low on Earth \\citep{CSC11}.}\n\nHitherto, we have limited our discussion to extrasolar material and organic compounds. There is yet another scenario worth mentioning, albeit with a potentially much lower probability, namely, the detection of biosignatures corresponding to extinct extraterrestrial life.\\footnote{We have implicitly excluded the prospects for living extraterrestrial organisms because the Moon's habitability ``window'' appears to have come to a close just millions of years after its formation \\citep{SMC18}.} There are a number of methods that may be utilized to search for biomarkers. Some of the measurable characteristics of molecular biosignatures include: (a) enantiomeric excesses stemming from homochirality, (b) preference for certain diastereoisomers and structural isomers, and (c) isotopic heterogeneities at molecular or sub-molecular levels \\citep{SAM08}. A review of numerous life-detection experiments and their efficacy can be found in \\citet{NHV18}.
The ideal scenario arguably entails the discovery of extrasolar microfossils as they would provide clear-cut evidence for extraterrestrial life; on Earth, the oldest microfossils with unambiguous evidence of cell lumens and walls are from the $\\sim 3.4$ Ga Strelley Pool Formation in Western Australia \\citep{WKS11}.\n\n\\section{Conclusion}\\label{SecConc}\nIn light of recent discoveries of interstellar objects, we have studied the deposition of extrasolar material on the lunar surface by estimating the mass fluxes of impactors originating from within and outside our Solar system. Our choice of the Moon is motivated by the fact that it lacks an atmosphere (avoiding ablation of the impactors) and is mostly geologically inactive (allowing for long-lived retention of material). \n\nOur calculations suggest that the abundance of extrasolar material at the surface is $\\sim 30$ ppm, with the abundance of detritus deposited $> 4$ Ga being $\\sim 3$ ppm. Of this material, a small fraction will exist in the form of organic molecules. We estimated that the abundance of extrasolar organic carbon near the lunar surface is $\\sim 0.3$ ppm. Among the various organic compounds, the abundances of carboxylic acids, amino acids and nucleobases are of particular interest as they constitute the building blocks for life-as-we-know-it. Our results indicate that their maximal abundances might be $\\sim 300$ ppt, $\\sim 30$ ppt and $\\sim 3$ ppt, respectively.\n\nWe outlined how the detection of extrasolar debris may be feasible by analyzing lunar samples. A combination of isotope ratios (oxygen in particular), elemental abundances, and other diagnostics might allow us to identify extrasolar material on the Moon. This material can then be subjected to subsequent laboratory experiments to search for organic compounds such as amino acids as well as molecular biosignatures arising from extinct extraterrestrial life.
Altogether, these analyses could provide important new clues for astrobiology.\n\nEven the ``mere'' discovery of inorganic extrasolar material will open up new avenues for research. In particular, by studying the chemical composition of this material, it may be possible to place constraints on planetary formation models, assess the habitability of early planetary systems, gauge the origin and evolution of exo-Oort clouds, and determine the chemical diversity of extrasolar planetary systems. Hence, a new channel for understanding these physical processes, separate from studying unbound interstellar objects such as `Oumuamua \\citep{TRR17,RAV18,MM19}, can be initiated.\n\nThe discovery of extrasolar organics could reveal new complex macromolecules that may possess practical value in medicine and engineering. Furthermore, the detection of such molecules would enable us to gain a deeper understanding of what types of organics were synthesized in other planetary systems, allowing us to gauge the latter's prospects for hosting life. Finally, the discovery of molecular biosignatures confirming the existence of (extinct) extraterrestrial life will indubitably have far-reaching consequences for humankind. 
In view of these potential benefits, we contend that there are additional compelling grounds for sustained \\emph{in situ} exploration of the lunar surface in the upcoming decades.\n\n\\acknowledgments\nThis work was supported in part by the Breakthrough Prize Foundation, Harvard University's Faculty of Arts and Sciences, and the Institute for Theory and Computation (ITC) at Harvard University.\n\n\n\\section{Introduction}\n\\label{s_intro}\n\nExtreme oxygen line ratios (O{\\small 32}\\ $\\equiv$ [\\ion{O}{iii}]$\\lambda 5007$\/[\\ion{O}{ii}]$\\lambda 3727$ $>4$) were proposed recently as a potential tracer of the escape of ionising radiation from galaxies through density-bounded \\ion{H}{ii}\\ regions \\citep{Jaskot13, Nakajima14}. The idea is the following: if a galaxy is leaking ionising photons through a density-bounded region, the ratio of O{\\small 32}\\ can be high if the \\ion{H}{ii}\\ region that we observe is truncated, for example if (part of) the [\\ion{O}{ii}]\\ region is missing. We see deeper into the ionised region and the external layer of [\\ion{O}{ii}]\\ is either nonexistent or thinner than in the classical ionisation-bounded scenario. Given their high O{\\small 32}\\ ratios, \\citet{Jaskot13} discuss the possibility of LyC escape from \"Green Pea\" (GP) galaxies, a population of extremely compact, strongly star-forming galaxies in the local Universe \\citep{Cardamone09, Izotov11, 2016ApJ...820..130Y}. \\citet{Nakajima14} and \\citet{Nakajima16} compare O{\\small 32}\\ ratios of different types of high-redshift galaxies, Lyman Break Galaxies (LBGs), and Lyman Alpha Emitters (LAEs) with GPs and Sloan Digital Sky Survey (SDSS) galaxies: high-redshift galaxies have on average higher O{\\small 32}\\ ratios than SDSS galaxies, but comparable to GPs. Furthermore, the observed O{\\small 32}\\ ratios of LAEs are larger than those of LBGs.
Along the same lines, GPs are also strong LAEs \\citep{Henry15, 2016ApJ...820..130Y, 2017A&A...597A..13V, 2017ApJ...838....4Y}, which is very unusual for galaxies in the local Universe \\citep{Hayes11, Wold14}. \n \nWhile the O{\\small 32}\\ ratio of galaxies that are leaking ionising photons may be enhanced compared to that of galaxies with a LyC escape fraction $f_{\\mathrm{esc}}^{\\mathrm{LyC}}$ equal to zero, there are other situations that can lead to high O{\\small 32}\\ ratios. For example, the O{\\small 32}\\ ratio depends on metallicity: low stellar and nebular metallicities lead to higher O{\\small 32}\\ ratios \\citep{Jaskot13}. A harder ionising spectrum, as investigated in for example \\citet{Pellegrini12}, will also induce higher O{\\small 32}\\ ratios, as will a higher ionisation parameter (e.g. \\citealt{Stasinska15}). Furthermore, shocks could also explain these ratios, as studied in detail in \\citet{Stasinska15}.\n\nDespite intensive searches for LyC emission from galaxies, only a few LyC leakers have been identified over the last decades in the local Universe \\citep{Bergvall06, Leitet13, Borthakur14, Leitherer16}, but most searches resulted in non-detections or upper limits \\citep{Siana15, Mostardi15, Grazian16, Rutkowski16, Rutkowski17}. The discovery of the link between LyC emission and O{\\small 32},\\ however, turned the tide, as demonstrated by, for example, \\citet{Izotov16a,Izotov16b, 2018MNRAS.474.4514I, 2018MNRAS.tmp.1318I}. In these studies, LyC emission was detected from all eleven galaxies at $z \\approx 0.3$ that were selected by their extreme O{\\small 32}\\ ratios (O{\\small 32}\\ > 4), among other criteria such as brightness, compactness, and strong \\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi\\ equivalent widths. Furthermore, a correlation between O{\\small 32}\\ and the escape of ionising photons was found, although the scatter of $f_{\\mathrm{esc}}^{\\mathrm{LyC}}$\\ is large \\citep{2018MNRAS.tmp.1318I}.
At high redshift ($z \\approx 3$), four galaxies with high escape fractions ($> 50$\\%) have been reported \\citep{Vanzella15, 2016A&A...585A..51D, Shapley16, Bian17, 2018MNRAS.476L..15V}, which were selected by similar criteria. Additionally, the recent results from the Lyman Continuum Escape Survey \\citep{2018arXiv180601741F} reveal an average escape fraction of $\\sim20\\%$ for galaxies at $z\\approx3$ with strong [\\ion{O}{iii}]\\ emission, and a weak correlation between $f_{\\mathrm{esc}}^{\\mathrm{LyC}}$ and the [\\ion{O}{iii}]\\ equivalent width for $\\sim20$ galaxies with directly detected LyC emission. Although the combination of these selection criteria has so far resulted in relatively few galaxies with confirmed LyC emission, the detection of extreme O{\\small 32}\\ emission from a local low-mass GP analogue \\citep{2017ApJ...845..165M} might suggest that low-mass extreme O{\\small 32}\\ emitters, and thus possible low-mass LyC emitters, are more common than the bright GP samples indicate. A statistical study of the O{\\small 32}\\ ratios of emission-line selected galaxies over a broad range of stellar masses has, however, not been performed so far. \n\nThe unique capabilities of the Multi-Unit Spectroscopic Explorer (MUSE) \\citep{2010SPIE.7735E..08B} allow us to study the properties of galaxies with extreme O{\\small 32}\\ ratios and how common they are in emission-line selected samples. For this study we combine four MUSE Guaranteed Time Observing (GTO) surveys and collect a sample of mainly emission-line detected galaxies with a high specific star formation rate and stellar masses between $\\sim10^6$ and $\\sim10^{10}$ \\Msun, from which we compute the distribution of O{\\small 32}\\ ratios in a blind survey of star-forming galaxies.
Here we present the properties and occurrences of extreme oxygen emitters spanning the redshift range $0.28 < z < 0.85$, where both lines are in the MUSE spectral range, in the largest statistical sample of emission-line detected galaxies in three-dimensional spectral data.\n\nThis article is organised as follows: in Sect.~\\ref{s_data} we describe the data from different programmes that we used for this study; in Sect.~\\ref{s_sample} we describe the sample selection; in Sect.~\\ref{s_results} we investigate the occurrence of high O{\\small 32}\\ ratios and study potential correlations with stellar mass (\\Mstar), star formation rate (\\ensuremath{\\mathrm{SFR}}), and the metallicity indicator R{\\small 23}\\ line ratio; in Sect.~\\ref{s_discussion} we study the incidence rate of galaxies with high O{\\small 32}\\ ratios as a function of \\Mstar\\ and $z$, and we also discuss how our results compare to nebular models with no escape of ionising photons. We end with a discussion on the most extreme oxygen emitters. Throughout this paper we adopt a cosmology with $H_0$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\\Omega_m$ = 0.3 and $\\Omega_\\Lambda$ = 0.7. \n\n\\section{Data}\n\\label{s_data}\n\\subsection{Different MUSE surveys}\nFor this study we used data taken with MUSE, as part of GTO observations, covering a wavelength range of 4800-9300 $\\AA$. We selected data from four surveys that together span an area of more than 55 arcmin$^2$. Below follows a description of the different surveys. \n\n\\subsubsection{Hubble Ultra Deep Field survey}\nThe MUSE Hubble Ultra Deep (HUDF) survey \\citep{2017A&A...608A...1B} consists of two fields of different size and depth in the original HUDF region. The Medium Deep Mosaic Field (e.g. UDF-mosaic) is a mosaic of nine pointings ($\\approx$3$\\arcmin$ x 3$\\arcmin$) with a depth of approximately ten hours. The UDF-10 Ultra Deep Field (e.g.
UDF-10) is a deeper observation of a single pointing within the UDF-mosaic region, with a depth of approximately 31 hours, covering $\\approx$1.15 arcmin$^2$. The data reduction is described by \\citet{2017A&A...608A...1B} and is based on the MUSE standard pipeline version 1.7dev \\citep{2014ASPC..485..451W}. For the construction of the redshift catalogue \\citep{2017A&A...608A...2I}, Hubble Space Telescope (HST) priors are used, as well as a blind search for emission lines in the datacube using the software ORIGIN (Mary et al. in prep).\n\n\n\\subsubsection{MUSE-Wide survey}\nThe MUSE-Wide survey aims to cover a larger field of view than the HUDF survey with relatively short exposures of one hour per pointing. The data release of the full survey will be presented in Urrutia et al. (in prep). Here we focus on the first 24 pointings of MUSE-Wide, which together cover an area of 22.2 arcmin$^2$. We use the source catalogue that is presented in \\citet{2017A&A...606A..12H}, which is created using the emission-line detection software LSDCat \\citep{2017A&A...602A.111H}, but we do not use their supplied spectra since we extract the spectra of all surveys consistently. \n\n\\subsubsection{MUSE QuBES survey}\nThe MUSE Quasar Blind Emitter Survey (QuBES) consists of 24 individual fields centred on quasars. All datacubes have a minimum total exposure time of two hours, with selected fields observed to a depth of ten hours based on the availability of high-quality archival auxiliary data such as quasi-stellar object (QSO) spectra. The standard data reduction was carried out with the MUSE Data Reduction System (DRS; \\citealt{2014ASPC..485..451W}). Post-processing procedures for additional integral field unit (IFU) normalisation and sky subtraction were carried out using the CubEx package (Cantalupo et al. in prep; see \\citealt{2018ApJ...859...53M} and \\citealt{2016ApJ...831...39B} for details). The survey will be fully described in Straka et al. (in prep) and Segers et al.
(in prep). A subset of 21 fields is used for this study based on the availability of galaxy catalogues. For our purposes, the presence of the QSO in the field is not important. The galaxy catalogues are compiled as follows. First, white light images are created from the MUSE datacubes by summing the entire cube along the wavelength axis. Then, SExtractor \\citep{1996A&AS..117..393B} is used to detect any objects down to 1$\\sigma$. The spectra for each object are extracted by selecting associated pixels in the segmentation maps produced by SExtractor. The spectra of the resulting detections are then inspected by eye in order to determine the galaxy redshifts based on nebular emission lines and stellar absorption features. \n\n\\subsubsection{Galaxy groups survey}\nThe last dataset added to our sample is that of the Galaxy groups survey (Epinat et al. in prep), which targets galaxy groups at intermediate redshift ($z \\approx 0.5-0.7$) from the zCOSMOS 20k group catalogue \\citep{2012ApJ...753..121K}. We selected data from three galaxy groups, namely COSMOS-Gr30, 34, and 84, with MUSE depths of 9.75, 5.25, and 5.25 hours, respectively, each covering \\textasciitilde 1$\\arcmin$ x 1$\\arcmin$. The data reduction followed the same approach as the Hubble Ultra Deep Field and is described in \\citet{2018A&A...609A..40E} for the galaxy group field COSMOS-Gr30. For the construction of the redshift catalogues, galaxies were selected from the COSMOS photometric catalogue by \\citet{2016ApJS..224...24L}, complemented by emission-line detection using ORIGIN for the deepest field COSMOS-Gr30 (see also \\citealt{2018A&A...609A..40E}). \n\n\\subsection{Spectrum extraction and emission-line flux measurements}\nThe spectra of all sources are extracted from the datacubes using the same method in order to make the line flux measurements comparable.
We followed the approach of \\citet{2017A&A...608A...2I} and extracted the spectra using a mask region, which is the HST segmentation map convolved by the MUSE point spread function for all surveys except the MUSE QuBES survey, for which there is no HST coverage. Spectra in this survey are extracted using a mask region that is constructed from the MUSE white light image. We then used the simple unweighted sum of the flux in the mask region and used this as the spectrum of each galaxy. We measured the line fluxes of the galaxies in the catalogues using the software \\textsc{platefit} \\citep{2004ApJ...613..898T, 2008A&A...485..657B}. Since this method is the same as that used by \\citet{2017A&A...608A...2I} to construct the HUDF emission-line catalogue, the line flux measurements that we use here are identical to theirs. \n\n\\subsection{Deriving \\ensuremath{\\mathrm{SFRs}}\\ and dust extinction}\nThe \\ensuremath{\\mathrm{SFRs}}\\ of our galaxies are calculated using the method described in \\citet{2013MNRAS.432.2112B}. In short, we simultaneously fit the \\citet{2001MNRAS.323..887C} models to the brightest (signal-to-noise ratio (S\/N)$>$3) optical emission lines. From this we estimate the \\ensuremath{\\mathrm{SFR}}\\ marginalising over: metallicity, ionisation parameter, dust-to-metal ratio, and the optical depth of the dust attenuation (\\ensuremath{\\mathrm{\\tau}_{V}}). The advantage of using a multi-emission-line approach over using a single Balmer line to calculate the \\ensuremath{\\mathrm{SFR}}\\ is that it is less affected by sky line contamination of a single line and should therefore provide a more robust \\ensuremath{\\mathrm{SFR}} . Also, this method provides an estimate of \\ensuremath{\\mathrm{\\tau}_{V}} , which we adopt to correct the emission-line fluxes for dust extinction. 
\n\n\\subsection{Calculating stellar masses}\nWe obtained stellar masses for the galaxies by performing spectral energy distribution (SED) fitting using the \\textsc{fast} (Fitting and Assessment of Synthetic Templates) algorithm \\citep{2009ApJ...700..221K}, where we used the \\citet{2003MNRAS.344.1000B} library and assumed exponentially declining star formation histories (SFHs) with a \\citet{2000ApJ...533..682C} extinction law and a \\citet{2003ApJ...586L.133C} initial mass function (IMF). To test the influence of our SFH assumption, we compare our stellar masses with those derived by the \\textsc{magphys} code \\citep{2008MNRAS.388.1595D}. For these \\Mstar\\ estimations, the photometry is fitted to stellar population synthesis models assuming random bursts of star formation in addition to an exponentially declining SFH. We find consistent stellar masses; for example, the median difference between \\Mstar\\ derived from \\textsc{fast} and \\Mstar\\ from \\textsc{magphys} equals $0.08$ dex in $\\log \\Mstar\/\\Msun$, with a standard deviation of $0.3$ dex, from which we conclude that the influence of the assumed SFH is small for this sample. For the UDF, we used HST Advanced Camera for Surveys (ACS) and Wide Field Camera 3 (WFC3) photometry from the catalogue of \\citet{2015AJ....150...31R}. The same approach is applied to the MUSE-Wide survey (photometry from \\citealt{2014ApJS..214...24S}) and the Galaxy groups survey (photometry from \\citealt{2016ApJS..224...24L}). Unfortunately, there is no deep photometry available for the MUSE QuBES survey. We therefore used a set of 11 boxcar filters, each 400 $\\AA$ wide, in which emission lines are masked, to compute a pseudo-photometric SED, as will be described in more detail in Segers et al. (in prep). Since not all the bright emission lines lie within the MUSE spectral range for our redshifts, we decided to leave the photometry uncorrected for bright emission lines.
To test the possible effect of strong emission lines on \\Mstar , we compared our stellar masses to those based on the photometry that is corrected for emission lines in the MUSE spectral range. We find that for the bulk of the galaxies in our sample, this effect is negligible; for example, the maximum difference between the emission-line corrected and the non-corrected \\Mstar\\ for the extreme O{\\small 32}\\ emitters, which will be introduced in Sect. \\ref{s_results}, corresponds to $\\log$ \\Mstar\/\\Msun = 0.03.\n\n\\subsection{Redshift distribution}\nFor our main sample, we only select galaxies with spectra of sufficient quality for our study, that is, spectra with S\/N > 3 in the [\\ion{O}{iii}]$\\lambda 5007$\\ line at $0.28 < z < 0.85,$ since for this redshift interval the [\\ion{O}{ii}]\\ and [\\ion{O}{iii}]\\ emission lines fall within the MUSE wavelength range. For galaxies that meet this criterion, but have S\/N([\\ion{O}{ii}] ) < 3, we use the 3-$\\sigma$ lower limit for the O{\\small 32}\\ ratio. We show two examples of spectra that meet these criteria in Fig. \\ref{fig:example_spec}. This leads to a total sample of 815 galaxies, of which the redshift distribution for each survey is shown in Fig. \\ref{fig:galaxies_histogram}. This histogram shows the number of galaxies in the redshift bins, separated based on the survey from which the data originates. Because the depths of the different surveys that we combine here are not uniform, care is required for selecting galaxies for our final sample. \n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\hsize]{example_spec_o32.pdf}\n \\caption{Example spectra of O{\\small 32}\\ emitters in our sample. The upper panel shows the spectrum of the galaxy with the most extreme oxygen ratio (dust-corrected O{\\small 32}\\ = 23) from the UDF-mosaic catalogue with id = 6865 and z = 0.83. 
The orange shaded lines show the \\ifmmode {\\rm H}\\gamma \\else H$\\gamma$\\fi , \\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi,\\ and [\\ion{O}{iii}]$\\lambda\\lambda 4959,5007$\\ lines of neighbouring sources at $z\\approx0.62$. An example spectrum of a galaxy with a lower O{\\small 32}\\ ratio, in this case with dust-corrected O{\\small 32}\\ = 0.25, is shown in the lower panel (UDF-mosaic, id = 892, z = 0.74).}\n \\label{fig:example_spec}\n\\end{figure*}\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{hist_z_allsf_newselection_snhb_new.pdf}\n \\caption{Redshift distribution of the 815 galaxies in total, in the range 0.28 < z < 0.85, with high-confidence, spectroscopically determined MUSE redshifts, in the four surveys that we use for this project.}\n \\label{fig:galaxies_histogram}\n\\end{figure}\n\n\\subsection{Sample selection}\n\\label{s_sample}\n\\label{sec:selection}\nObservations of star-forming galaxies have shown that the star formation rate (\\ensuremath{\\mathrm{SFR}}) and stellar mass (\\Mstar) of these galaxies are tightly related (e.g. \\citealt{2004MNRAS.351.1151B, 2007ApJ...660L..43N}). This relation, also called the star-forming main sequence (SFMS), has been studied intensively in the last decade (e.g. \\citealt{2012ApJ...754L..29W, 2014ApJ...795..104W, 2015ApJ...801...80L, 2017ApJ...847...76S}). 
\\citet{Boogaardetal} have constrained the SFMS for galaxies that are detected in deep MUSE data, modelling the galaxy population as a Gaussian distribution around a three-dimensional (3-D) plane, accounting for, and constraining, the redshift evolution of the \\ensuremath{\\mathrm{SFR}}\\ - \\Mstar\\ relation, resulting in\n\\begin{equation}\n\\label{SFMS}\n\\begin{aligned}\n\\log \\ensuremath{\\mathrm{SFR}}\\ = {} & 0.83^{+0.07}_{-0.06} \\log \\left(\\frac{M_*}{M_0}\\right) -0.83^{+0.05}_{-0.05} \\\\ \n & + 1.74^{+0.66}_{-0.68} \\log \\left(\\frac{1 + z}{1 + z_0}\\right) \\pm 0.44^{+0.05}_{-0.04} \n\\end{aligned}\n,\\end{equation} \nwith $M_0 = 10^{8.5}$ \\Msun\\ and $z_0 = 0.55$. Because this relation is derived for a sample of galaxies with deep photometry and high S\/N Balmer lines from MUSE spectra of galaxies with stellar masses down to $\\log$ \\Mstar \/\\Msun\\ $\\approx$ 7, which is comparable to the mass range of the galaxies in our sample, we use this $z$-dependent \\Mstar - \\ensuremath{\\mathrm{SFR}}\\ relation to select galaxies for our final sample. We calculate how much the \\ensuremath{\\mathrm{SFR}}\\ of a galaxy is offset from the redshift-corrected SFMS, which we will herein refer to as the 'distance to the SFMS' ($\\Delta$ SFMS), given in dex, with a sign such that objects above the SFMS have a positive distance. The distribution of the distances to the SFMS is shown in Fig. \\ref{fig:distr_ssfr} separately for each survey that we use for this study. The dashed line represents the SFMS, with galaxies below the SFMS to its left and galaxies above it to its right. For all four surveys, this distribution peaks within 1-$\\sigma$ from the SFMS. Since our sample is mostly emission-line selected, we expect it to be most complete at high \\ensuremath{\\mathrm{SFRs}} . However, below the SFMS it is likely that a fraction of the galaxies are below the detection threshold. 
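The distance to the SFMS follows directly from Eq. \\ref{SFMS}; a minimal sketch using only the central values of the fit (the quoted uncertainties and the intrinsic-scatter term are ignored):

```python
import numpy as np

# Central values of the z-dependent SFMS fit (Boogaard et al., Eq. 1),
# with pivot points M0 = 10^8.5 Msun and z0 = 0.55.
A, B, C = 0.83, -0.83, 1.74
LOG_M0, Z0 = 8.5, 0.55

def log_sfr_sfms(log_mstar, z):
    """Predicted log SFR on the SFMS at a given stellar mass and redshift."""
    return A * (log_mstar - LOG_M0) + B + C * np.log10((1.0 + z) / (1.0 + Z0))

def delta_sfms(log_sfr, log_mstar, z):
    """Distance to the SFMS in dex; positive values lie above the relation."""
    return log_sfr - log_sfr_sfms(log_mstar, z)

# A galaxy at the pivot point (log M* = 8.5, z = 0.55) with
# log SFR = -0.5 lies 0.33 dex above the relation.
```

Selecting galaxies with `delta_sfms(...) > 0` then reproduces the distance cut applied to the final sample.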
We therefore conservatively select galaxies with a distance > 0 dex from the SFMS, resulting in a final sample of 406 galaxies. Applying such a selection based on a fixed distance from the $z-$dependent SFMS relation also ensures that we select the same fraction of star-forming galaxies at each redshift.\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{ssfr_histogram_mainsdependent_v2_sfms_rs1.pdf}\n \\caption{Distribution of the distance to the redshift-dependent SFMS from \\citet{Boogaardetal} (Equation \\ref{SFMS}) in dex of the four different surveys used for this study. We show the 1-$\\sigma$ variation around the mean $\\mu$ (grey area) calculated from the fitted normal distribution (black line). We included all galaxies with a distance > 0 dex (dashed line), so all galaxies that lie above the main sequence. The number in the upper right corner of the diagram shows the number of galaxies above this threshold. In our final sample we include 406 galaxies in total.} \n \\label{fig:distr_ssfr}\n\\end{figure*}\nFor an overview of the stellar mass and the \\ensuremath{\\mathrm{SFR}}\\ distribution of the four surveys, we refer the reader to Figs. \\ref{fig:app_distr_m} and \\ref{fig:app_distr_sfr} in the Appendix. \n\nBased on the stellar mass distribution, we assume that all the galaxies in the sample are pure star-forming systems, since studies show that the fraction of active galactic nucleus (AGN) host galaxies is low at these masses. For example almost all AGN hosts at $0 < z < 1$ have $\\log$ \\Mstar\/\\Msun\\ $\\gtrsim 10.2$ \\citep{2013A&A...556A..11V} and the fraction of AGNs over all galaxies with $\\log$ \\Mstar\/\\Msun\\ $\\approx 10$ at $z \\lesssim 0.3 $ is $\\sim$1 $\\%$ \\citep{2003MNRAS.346.1055K,2010ApJ...723.1447H}.\n\n\n\\section{Results}\n\\label{s_results}\nIn this section we explore whether there is a correlation between galactic properties and O{\\small 32} . 
We also study the location of these emitters with respect to the SFMS, since \\citet{2017A&A...605A..67C} report that LyC leakers lie further above the SFMS.\n \n\\subsection{Redshift distribution of extreme O{\\small 32}\\ emitters}\nIn Fig. \\ref{fig:redshift_frequency} we show the redshift distribution of our sample after the selection that is described in the previous section. In light grey we show the number density of all the galaxies, while in dark grey we show that of galaxies with O{\\small 32}\\ > 1. For galaxies with S\/N([\\ion{O}{ii}] ) $<$ 3, we use the 3-$\\sigma$ upper limits on [\\ion{O}{ii}] , resulting in 3-$\\sigma$ lower limits on O{\\small 32} . We determined the ratio of the number of galaxies with O{\\small 32}\\ > 1 to the total number of galaxies in each bin. A $\\chi^2$ statistical test against the null hypothesis that this fraction in each bin is independent of redshift gives $\\chi^2$ = 1.9 and a $P$-value of $\\sim$0.99, which indicates that the fraction of galaxies with oxygen ratios above unity does not evolve as a function of redshift. We adopted the same approach for the extreme oxygen emitters with O{\\small 32}\\ > 4, as shown in red, which yielded a comparable result ($\\chi^2$ = 4.4, $P$-value $\\approx$ 0.88). This latter result is not particularly robust, because we only have 15 extreme emitters in our sample. \n\\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{redshift_frequency_msselection_v2_sfms_rs1.pdf}\n \\caption{Redshift frequency of extreme O{\\small 32}\\ emitters with O{\\small 32}\\ > 4 (red). In dark grey we show the redshift distribution of the galaxies with O{\\small 32}\\ > 1 and in light grey the total sample.}\n \\label{fig:redshift_frequency}\n\\end{figure}\n\n\\subsection{O{\\small 32}\\ emitters on the star-formation main sequence}\nThe redshift-corrected distribution in the $\\log$ \\ensuremath{\\mathrm{SFR}}\\ - $\\log$ \\Mstar\\ plane is shown in Fig. \\ref{fig:main_sequence}. 
We normalised our results to $z=0$, and show the SFMS of Eq. \\ref{SFMS} corrected to $z=0$ (dashed line). The same diagram coloured by survey is shown in Fig. \\ref{fig:app_SFMS} in the Appendix. Only galaxies above this relation are selected for the final sample (circles), but for comparison we also show the galaxies that are left out by the SFMS selection (triangles). Galaxies with oxygen ratios larger than unity (blue points) are overall more abundant above the SFMS (104 out of 406 galaxies, 26$\\%$) than below this relation (46 out of 324 galaxies, 14$\\%$). The number of extreme O{\\small 32}\\ emitters deviates even more: 15 versus one galaxy in the extreme O{\\small 32}\\ regime above and below the SFMS, respectively. Considering only galaxies in the final sample, there is no clear correlation between distance from the SFMS and O{\\small 32}\\ ratio. Moreover, we find that galaxies with O{\\small 32}\\ > 1 are more common at low masses ($\\log$\\Mstar\/\\Msun\\ < 9). A two-dimensional plot of O{\\small 32}\\ versus the distance to the SFMS is shown in Fig. \\ref{fig:deltasfms_o32}. We will now turn to a more quantitative discussion of the relation between the oxygen ratio and both \\ensuremath{\\mathrm{SFR}}\\ and \\Mstar .\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{main_sequence_o3o2_zdep_msselection_v2_sfms_rs1.pdf}\n \\caption{ \\ensuremath{\\mathrm{SFR}}\\ - \\Mstar\\ diagram where we have corrected the \\ensuremath{\\mathrm{SFRs}}\\ to $z=0$ using the redshift evolution from Eq. \\ref{SFMS}. The dashed line shows this relation for $z = 0$, above which we selected galaxies in our sample, as described in Sect. \\ref{sec:selection}. The points are coloured by the logarithm of the dust-corrected O{\\small 32}\\ ratio. 
The symbols show galaxies in the final selection (circles), galaxies below the selection threshold (triangles), and galaxies with a lower limit on the O{\\small 32}\\ ratio (squares).}\n \\label{fig:main_sequence}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{DeltaSFMS_o3o2_rs1.pdf}\n \\caption{O{\\small 32}\\ ratio versus the distance to the SFMS. Only galaxies above $\\Delta$ SFMS = 0 (dashed line) are included in the final sample. Lower limits on O{\\small 32}\\ are shown by orange triangles.} \n \\label{fig:deltasfms_o32}\n\\end{figure} \n\n\n\\subsection{O{\\small 32}\\ as a function of stellar mass}\n\\label{subsec:o32M}\nIn Fig. \\ref{fig:m_o32} we plot the $\\log$ O{\\small 32}\\ line ratio versus the stellar mass (black dots). The red squares denote the median values in both the x- and y-directions, in the intervals of $\\log$ O{\\small 32}\\ between -1.0 and 1.0 with a step size of 0.5. Additionally we show 3-$\\sigma$ lower limits of the O{\\small 32}\\ ratio for galaxies without an [\\ion{O}{ii}]\\ detection above our S\/N threshold (orange triangles). The confirmed LyC-leaking galaxies from \\citet{Izotov16a,Izotov16b, 2018MNRAS.474.4514I} are shown by green stars, and the extreme LyC emitter at $z=3.2$ from \\citet{2016A&A...585A..51D} and \\citet{2016ApJ...825...41V} is shown by the green square. The histograms along the y- and x-axes represent the distribution of O{\\small 32}\\ and \\Mstar , respectively. \n\nThe median values in the O{\\small 32}\\ bins show a clear anti-correlation between the oxygen ratio and the stellar mass. However, the Spearman rank correlation coefficient for individual galaxies equals -0.30 ($P$-value $\\approx$ 0), indicating only a weak anti-correlation between $\\log$ O{\\small 32}\\ and $\\log$ \\Mstar . 
The stellar masses of galaxies with O{\\small 32}\\ > 4 are lower than the average stellar mass of the entire population; for example, all the extreme oxygen emitters in our sample have stellar masses below $10^{9}$ \\Msun . Although the O{\\small 32}\\ ratios of the extreme emitters in our sample are similar to those of the confirmed LyC leakers, their stellar masses are smaller than those of most of the leaking galaxies from \\citet{Izotov16a, Izotov16b, 2018MNRAS.474.4514I}, \\citet{2016A&A...585A..51D} and \\citet{2016ApJ...825...41V}. Because these galaxies were selected not only for their extreme O{\\small 32}\\ ratios but also for their brightness, to increase the chance of directly detecting LyC photons at $z \\approx 0.3$, their mass is not necessarily representative of the typical mass of an LyC emitter. For example, \\citet{2017MNRAS.471..548I} show that galaxies at $z<0.1$, which are selected only by extreme O{\\small 32}\\ ratios, all have masses between $10^{6} - 10^{7} \\Msun$. In addition, Mrk 71, a nearby green pea analogue and LyC emitter candidate, has a stellar mass of around $10^{5}$ \\Msun\\ \\citep{2017ApJ...845..165M}, suggesting that the mass of the bulk of LyC emitters might be lower than what is derived from confirmed LyC leakers. The approximately 20 recently discovered galaxies with LyC emission at $z\\approx3$ \\citep{2018arXiv180601741F} show an anti-correlation between the LyC escape fraction and the stellar mass, similar to our results.\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{M_o3o2_withhist_msselection_v2_sfms_rs1.pdf}\n \\caption{$\\log$ O{\\small 32}\\ as a function of stellar mass (black dots) and lower limits on O{\\small 32}\\ (orange triangles). The median per bin with 1-$\\sigma$ errors is shown by the red squares and error bars. We defined the bins by intervals of $\\log$O{\\small 32}\\ between -1.0 and 1.0 with a step size of 0.5. 
Three-$\\sigma$ lower limits on the O{\\small 32}\\ ratio for sources with a detection of [\\ion{O}{ii}]\\ below our S\/N cut are shown by the orange triangles. The green stars indicate the positions of the confirmed Lyman continuum leakers from \\citet{Izotov16a,Izotov16b, 2018MNRAS.474.4514I} and the green square is a LyC emitter at $z$=3.2 \\citep{2016A&A...585A..51D, 2016ApJ...825...41V}. Galaxies above the grey dashed line have extreme O{\\small 32}\\ ratios (O{\\small 32}\\ > 4). }\n \\label{fig:m_o32}\n\\end{figure}\n\n\\subsection{O{\\small 32}\\ as a function of \\ensuremath{\\mathrm{SFR}}}\nIn Fig. \\ref{fig:sfr_o32} we show the oxygen ratio as a function of \\ensuremath{\\mathrm{SFR}}\\ with the same colour code as used in Fig. \\ref{fig:m_o32}. The median values of the \\ensuremath{\\mathrm{SFR}}\\ decrease with increasing O{\\small 32} . For individual galaxies there is again only a weak anti-correlation (Spearman rank correlation coefficient $\\approx$ -0.35, $P$-value $\\approx$ 0). The \\ensuremath{\\mathrm{SFR}}\\ of the confirmed LyC emitters, visualised by the green stars and square, is about two orders of magnitude larger than the median of our galaxies with comparable O{\\small 32}\\ emission.\n\nWhen comparing the O{\\small 32}\\ ratio with the specific star formation rate (\\ensuremath{\\mathrm{sSFR}}\\ = \\ensuremath{\\mathrm{SFR}}\/\\Mstar), we find that these values are also not correlated and that this also holds for the median values in the O{\\small 32}\\ bins. The \\ensuremath{\\mathrm{sSFR}}\\ of the confirmed LyC emitters is on average one order of magnitude larger than the \\ensuremath{\\mathrm{sSFR}}\\ of our sample. \n\\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{SFR_o3o2_withhist_msselection_v2_sfms_rs1.pdf}\n \\caption{O{\\small 32}\\ ratio versus \\ensuremath{\\mathrm{SFR}} . The colours are described in the caption of Fig. \\ref{fig:m_o32}. 
All \\ensuremath{\\mathrm{SFRs}}\\ that are shown here are derived from emission lines, as described in Sect. \\ref{sec:selection}.}\n \\label{fig:sfr_o32}\n\\end{figure}\n\n\\subsection{O{\\small 32}\\ as a function of R{\\small 23}}\n\\label{results:r23}\nR{\\small 23}\\ is a diagnostic to estimate the gas-phase oxygen abundances (later referred to as metallicity, \\Z ) of galaxies and is based on the ratio of [\\ion{O}{ii}]\\ + [\\ion{O}{iii}]\\ over \\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi\\ \\citep{1979A&A....78..200A, 1979MNRAS.189...95P, 1980MNRAS.193..219P, 1991ApJ...380..140M}, given by\n\\begin{equation}\n\\mathrm{R{\\small 23}} = \\frac{ [\\ion{O}{ii}]{\\ensuremath{\\lambda}3727} + [\\ion{O}{iii}]{\\ensuremath{\\lambda}4959} + [\\ion{O}{iii}]{\\ensuremath{\\lambda}5007}}{\\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi} \\; . \n\\end{equation} \nSince it only relies on the blue rest-frame spectrum, this diagnostic is often used when the red emission lines are out of the spectrum. However, the relation between R{\\small 23}\\ and \\Z\\ is degenerate and therefore additional lines are still necessary to constrain the metallicity. In Fig. \\ref{fig:r23} we plot the logarithm of the oxygen line ratio against the logarithm of R{\\small 23} . At high O{\\small 32}\\ ($\\log$ O{\\small 32}\\ $\\gtrsim$ -0.2), we visually identify a trend between O{\\small 32}\\ and R{\\small 23} , which is followed by the LyC leakers of \\citet{Izotov16a,Izotov16b, 2018MNRAS.474.4514I}. However, below $\\log$ O{\\small 32}\\ $\\approx$ -0.2 the data points scatter in the $\\log$ R{\\small 23}\\ direction, as a result of uncertain \\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi\\ measurements, to which we will return in the discussion. 
There we will also discuss how the stellar and gas-phase metallicity influences the oxygen ratio.\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{r23_o3o2_withhist_MSselection_v2_sfms_rs1.pdf}\n \\caption{O{\\small 32}\\ versus R{\\small 23} , which is a proxy for the metallicity. The colours are used in the same way as in the previous plots. The orange triangles are now both lower limits of O{\\small 32}\\ and upper limits of R{\\small 23} . Galaxies with S\/N(\\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi) $< 3$ are shown by open symbols.} \n \\label{fig:r23}\n \\end{figure} \n\n\n\\section{Discussion}\n\\label{s_discussion}\nIn the previous section we gave an overview of the galaxy properties of our sample. Here we aim to determine what processes and properties are responsible for a high oxygen line ratio and under which circumstances extreme O{\\small 32}\\ ratios are likely to occur. \n\\subsection{The occurrence of high O{\\small 32}}\nAlthough in previous studies 'extreme' oxygen emitters are defined as having O{\\small 32}\\ > 4, we use O{\\small 32}\\ > 1 as the threshold for 'high' O{\\small 32}\\ emitters to study their occurrence, which will be justified later in this section. This results in a significant sample of 104 high O{\\small 32}\\ emitters, in contrast to applying the O{\\small 32}\\ > 4 threshold, which would only leave 15 galaxies in the extreme regime. In order to study how our selection criterion influences our results, and to assess their redshift dependence, we constructed a comparison sample from SDSS data. \n\\subsubsection{Creating a comparison sample from SDSS data}\n\\label{section:sdsscomp}\nWe created a comparison sample of star-forming galaxies in SDSS DR7, for which the derivation of the measurements is detailed in \\citet{2004MNRAS.351.1151B} and \\citet{2004ApJ...613..898T}. 
For each galaxy in the MUSE sample we selected a local analogue in SDSS that lies at the same position in the redshift-corrected \\ensuremath{\\mathrm{SFR}}\\ - \\Mstar\\ plane, so that both samples contain the same number of galaxies. However, because the spectra and consequently the emission-line flux measurements of SDSS galaxies are, unlike the MUSE galaxies, biased by aperture effects due to a finite fiber size, we first corrected for this. Assuming that the slope of the SFMS from Eq. \\ref{SFMS} is unaffected, we refitted the relation to the SDSS data, resulting in a shift of +0.2 in the $\\log \\ensuremath{\\mathrm{SFR}} - 2.93 \\log(1+z)$ direction. Each SDSS galaxy in the same redshift-corrected \\ensuremath{\\mathrm{SFR}}\\ - \\Mstar\\ position is a potential analogue of a MUSE galaxy. We therefore selected all SDSS galaxies within the 1-$\\sigma$ error bars of the position of the MUSE galaxy. We used the median value and the 16th and 84th percentiles of the [\\ion{O}{iii}]\\ and [\\ion{O}{ii}]\\ emission-line fluxes of all selected SDSS galaxies as the fluxes and errors of the analogue. \n\nFigure \\ref{fig:oiii_oii_hist_sdss} shows the distribution of O{\\small 32}\\ of our sample (left) and the SDSS comparison sample (middle). We compared the O{\\small 32}\\ of each galaxy in our sample with its counterpart in the SDSS sample. The result of this is shown in the right panel of Fig. \\ref{fig:oiii_oii_hist_sdss}. At $\\Delta \\log$ O{\\small 32}\\ = 0 (black dashed line), the oxygen ratio of the galaxy in our sample is the same as its SDSS analogue. The median value of $\\Delta \\log$ O{\\small 32}\\ is 0.13 dex (blue solid line), which means that the O{\\small 32}\\ ratio of the MUSE galaxies on average exceeds the O{\\small 32}\\ of galaxies in the comparison sample. 
The dashed blue lines, however, indicate that this difference is within the 1-$\\sigma$ error bars.\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\hsize]{oiii_oii_mosaic_sdss_onlysf_newplot_incmedian_errors_MSselection_v2.pdf}\n \\caption{Number of galaxies per $\\log$ O{\\small 32}\\ bin for our sample of 406 galaxies (left panel). The dashed lines correspond to O{\\small 32}\\ = 1 (left) and O{\\small 32}\\ = 4 (right). In the middle panel we show a similar plot for our SDSS comparison sample (see the text for details). The distribution of the difference in O{\\small 32}\\ between each galaxy in our sample and its counterpart in the SDSS comparison sample is shown in the right panel. The median and the 1-$\\sigma$ spread of the distribution are shown by the solid and dashed blue lines, respectively.}\n \\label{fig:oiii_oii_hist_sdss}\n\\end{figure*}\n\n\\subsubsection{Incidence rate of high O{\\small 32}\\ as a function of mass}\n\\label{incidenceratemass}\nWe divided the sample into three subsets based on stellar mass, with $\\log$ \\Mstar \/\\Msun\\ in the ranges [7.0, 8.0], [8.0, 9.0], and [9.0, 10.0]. We then derived the fraction of galaxies with O{\\small 32}\\ > 1 in each mass bin, shown as the black line in Fig. \\ref{fig:f_logM_notZcorr}, where the points indicate the centres of the mass bins. One-$\\sigma$ errors are derived by bootstrapping the sample 10,000 times (grey area). We then applied the same approach to the SDSS comparison sample (see the red points, line, and shaded area).\n\nThe fraction of galaxies in the MUSE sample with O{\\small 32}\\ > 1 decreases with increasing \\Mstar\\ (black line); we find $\\sim$30 $\\%$ for galaxies with stellar masses between $10^{7}$ and $10^{9}$ \\Msun, but $\\sim$10 $\\%$ in the highest mass bin. This trend is comparable to that of the median bins between O{\\small 32}\\ and \\Mstar\\ , which we described in Sect. \\ref{subsec:o32M} and Fig. \\ref{fig:m_o32}. 
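The per-bin incidence rates and their bootstrap uncertainties described above can be sketched as follows; the O32 values in the example are hypothetical, and the resampling count matches the 10,000 draws quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def high_o32_fraction(o32, n_boot=10_000):
    """Fraction of galaxies with O32 > 1 in one mass bin, with a
    1-sigma error from bootstrapping (resampling the bin with
    replacement and taking the standard deviation of the fractions)."""
    o32 = np.asarray(o32)
    frac = np.mean(o32 > 1.0)
    boot = [np.mean(rng.choice(o32, size=o32.size) > 1.0)
            for _ in range(n_boot)]
    return frac, np.std(boot)

# A hypothetical mass bin of eight O32 ratios; four exceed unity:
frac, err = high_o32_fraction([0.3, 0.8, 1.5, 2.0, 0.5, 4.2, 0.9, 1.1])
# frac = 0.5
```

The same routine applied with a metallicity-dependent cut instead of the fixed `> 1.0` comparison gives the corrected incidence rates discussed later.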
The SDSS comparison sample follows a comparable trend, but the fractions are offset towards lower values by 0.05-0.2. Below $\\log$ \\Mstar\/\\Msun\\ $\\approx$ 8 the SDSS sample is, however, incomplete (see Appendix \\ref{app:sdss}), so we caution that the SDSS results in the lowest mass bin are likely to be biased as a result. \n\n \\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{f_logM_noZcorr_msselection_allfields_inclSDSSnew_v2_boxes_rs1.pdf}\n \\caption{Fraction of galaxies with O{\\small 32}\\ > 1 in bins of 1 dex with central $\\log$ \\Mstar\/\\Msun\\ = 7.5, 8.5, and 9.5 of our MUSE sample (black\/grey) and the SDSS comparison sample (red) (see the text for details). The points denote the centres of the stellar mass bins and their size reflects the number of galaxies in the bin. The shaded areas cover the 1-$\\sigma$ errors as calculated by bootstrapping in the y-direction and the bin size in the x-direction.} \n \\label{fig:f_logM_notZcorr}\n \\end{figure} \n\n\\subsubsection{Adopting a metallicity-dependent threshold for the O{\\small 32}\\ ratio}\n\\label{sec:metalthreshold}\nHigh oxygen ratios can also be driven by low-metallicity systems (see for example the nebular models of \\citealt{2016MNRAS.462.1757G}) due to the harder ionising spectrum of low-metallicity stars and less efficient cooling. Here we perform the same analysis as in the previous section, but instead of the fixed threshold at O{\\small 32}\\ = 1, we adopt a metallicity-dependent threshold on O{\\small 32}\\ that we derive as follows. For each galaxy we derive the metallicity \\Z\\ using the redshift-dependent \\Mstar\\ - \\Z\\ relation of \\citet{2014ApJ...791..130Z}. We then set the threshold for this galaxy equal to the O{\\small 32}\\ ratio from photo-ionisation models from \\citet{2016MNRAS.462.1757G} with this metallicity and the ionisation parameter set to $\\log U = -3$. 
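Schematically, this threshold amounts to interpolating a model grid of O32 as a function of metallicity at fixed $\\log U = -3$. The grid values and the interpolation in log \\Z\\ below are purely illustrative placeholders; they do not reproduce the actual \\citet{2016MNRAS.462.1757G} models or the \\citet{2014ApJ...791..130Z} mass-metallicity relation used in the text.

```python
import numpy as np

# Hypothetical grid of O32(Z) at log U = -3, standing in for a real
# photo-ionisation model grid; the numbers are illustrative only.
model_z   = np.array([0.0005, 0.002, 0.006, 0.014, 0.030])  # metallicity Z
model_o32 = np.array([1.7, 1.3, 1.0, 0.7, 0.5])             # placeholder O32

def o32_threshold(z_neb):
    """O32 threshold at log U = -3 for a given gas-phase metallicity,
    linearly interpolated in log Z over the (monotonic) model grid."""
    return np.interp(np.log10(z_neb), np.log10(model_z), model_o32)

# A galaxy whose mass-metallicity relation gives Z = 0.006 would be
# counted as a high-O32 emitter only if its measured O32 exceeds
# o32_threshold(0.006).
```

A galaxy is then flagged as "highly ionised" when its measured O32 exceeds `o32_threshold(Z)` for the metallicity implied by its stellar mass and redshift.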
We show the relation between the metallicity-dependent O{\\small 32}\\ threshold and stellar mass in Fig. \\ref{fig:threshold}, for the minimum and maximum redshifts of our sample ($z = 0.28$ and $z = 0.85$). Our assumption that galaxies are highly ionised above $\\log U = -3$ results in an O{\\small 32}\\ threshold between 0.5 and 1.7. This is the reason for setting the fixed threshold to O{\\small 32}\\ > 1 in the previous section. The incidence rate of galaxies with O{\\small 32}\\ above this metallicity-dependent threshold in each mass bin is shown in Fig. \\ref{fig:f_logM_Zcorr} for the MUSE sample (black\/grey) and the comparison sample (red).\n\nWe see that with the \\Z -dependent threshold there is no longer a strong trend between \\Mstar\\ and the fraction of high O{\\small 32}\\ emitters. For the bin with the most galaxies ($8 < \\log$ \\Mstar\/\\Msun\\ $< 9$, see Table \\ref{table:fractions}), the SDSS and MUSE results are in agreement. This indicates that the relation between \\Mstar\\ and the incidence rate of high O{\\small 32}\\ emitters is most likely the result of the relation between metallicity and oxygen ratio.\n \\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{threshold_M_z.pdf}\n \\caption{Metallicity-dependent O{\\small 32}\\ threshold as a function of stellar mass, for the minimum and maximum redshift of our sample, derived using the redshift-dependent \\Mstar\\ - \\Z\\ relation of \\citet{2014ApJ...791..130Z} and the O{\\small 32}\\ ratio from photo-ionisation models from \\citet{2016MNRAS.462.1757G} (see the text for details).\n }\n \\label{fig:threshold}\n \\end{figure} \n \\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{f_logM_Zcorr_msselection_allfields_inclSDSS_v2_boxes_rs1.pdf}\n \\caption{Fraction of galaxies with O{\\small 32}\\ greater than the metallicity-dependent threshold for our MUSE sample (black\/grey) and the SDSS comparison sample (red) (see the text for details).} \n 
\\label{fig:f_logM_Zcorr}\n \\end{figure} \n\n\n\\subsubsection{Evolution of the fraction of galaxies with high O{\\small 32}}\nWe split the sample into three redshift bins containing equal numbers of galaxies, with median redshifts of $z = 0.42$, $z = 0.63,$ and $z = 0.74$. We then calculated the fractions of galaxies above the metallicity-dependent threshold and show the result in Fig. \\ref{fig:f_logM_Zcorr_zdep}. For comparison we also show the incidence rates of the entire SDSS comparison sample, which has a median redshift of $z = 0.03$. \n\nIn the lowest mass bin there seems to be a weak trend between the incidence rate and the redshift; for example, the fraction in the highest $z$ bin is significantly higher than the fraction of the SDSS comparison sample. However, the lowest redshift subsample in this mass bin is larger (44) than the intermediate- and high-redshift subsamples (26 and 28, respectively; see also Table \\ref{table:fractions}), which may indicate that we only include the most extreme star-forming systems in the high-redshift sample; this could explain the results that we observe in the lowest mass bin. In the two highest mass bins, there is no significant difference in the fraction of galaxies with O{\\small 32}\\ > 1 at different redshifts. In Fig. \\ref{fig:fraction_time} the results for the galaxies with stellar masses between $\\log$ \\Mstar \/\\Msun\\ = 8 and $\\log$ \\Mstar \/\\Msun\\ = 9 are presented against the look-back time and redshift, which we calculated using the median redshifts of the redshift bins. \n \\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{f_logM_Zcorr_msselection_allfields_zdep_inclSDSS_v2_boxes_rs1.pdf}\n \\caption{Incidence rate of galaxies with O{\\small 32}\\ above the metallicity-dependent threshold for three equally sized redshift-selected subsets, with median redshifts of z = 0.42 (green), z = 0.63 (purple), and z = 0.74 (orange) and the entire SDSS comparison sample (red) with median redshift z = 0.03. 
The vertical lines reflect the 1-$\\sigma$ errors in each bin.} \n \\label{fig:f_logM_Zcorr_zdep}\n \\end{figure} \n \\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{fractionversusredshift_v2_rs1.pdf}\n \\caption{Incidence rate of galaxies with high oxygen emission for galaxies with stellar masses between $\\log$ \\Mstar \/\\Msun\\ = 8 and $\\log$ \\Mstar \/\\Msun\\ = 9, versus look-back time and redshift, calculated from the median redshift of the subsets.} \n \\label{fig:fraction_time}\n \\end{figure} \n \nThe incidence rate of high oxygen emitters when controlled for metallicity is thus independent of redshift for our sample. Since we selected the galaxies based on their distance to the redshift-dependent SFMS (Eq. \\ref{SFMS}), we selected the same fraction of star-forming galaxies at each redshift, rather than selecting the same kind (same \\ensuremath{\\mathrm{SFR}}\\ and \\Mstar\\ and thus \\ensuremath{\\mathrm{sSFR}} ) of galaxies over cosmic time. The incidence rates that we derived here may therefore be extrapolated to the entire population of star-forming galaxies, and suggest that the fraction of high O{\\small 32}\\ emitters is constant over time between $z = 0.28$ and $z = 0.85$. \n\nIn the current paradigm, the O{\\small 32}\\ ratios of high-redshift galaxies are believed to be more extreme than those of low-redshift galaxies. Results from the MOSFIRE Deep Evolution Field (MOSDEF) and the Keck Baryonic Structure Survey (KBSS)-MOSFIRE surveys of galaxies at $z \\sim 2.3$ show that their O{\\small 32}\\ ratios are offset towards significantly larger values in comparison to those of local galaxies \\citep{2014ApJ...795..165S, 2016ApJ...816...23S, 2017ApJ...836..164S}. This may be interpreted as the signature of a harder stellar radiation field at fixed mass at higher redshift \\citep{2014ApJ...795..165S, 2017ApJ...836..164S}, or as the result of lower metallicities of high-redshift galaxies at fixed mass \\citep{2016ApJ...816...23S}. 
These results are supported by cosmological simulations of massive galaxies \\citep{2017MNRAS.472.2468H} that show that the ionisation parameter and the [\\ion{O}{iii}]\/\\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi\\ ratio increase with redshift at fixed \\Mstar\\ at $0 < z < 4$. Our results at $0.28 < z < 0.85$, however, support another scenario where the O{\\small 32}\\ ratio is constant over cosmic time. This difference may be the result of the different \\Mstar\\ regime that is probed in the high-redshift surveys (9 $\\lesssim$ $\\log$ \\Mstar\/\\Msun\\ $\\lesssim$ 11) and in the cosmological simulations (9.5 $\\lesssim$ $\\log$ \\Mstar\/\\Msun\\ $\\lesssim$ 11.5) with respect to the stellar mass of the galaxies in this work (7 $\\lesssim$ $\\log$ \\Mstar\/\\Msun\\ $\\lesssim$ 10). \\begin{table*}\n\\caption{Number of galaxies in mass and redshift bins.} \n\\label{table:fractions} \n\\centering \n\\begin{tabular}{c | c c c c} \n\\hline\\hline \n & 7 < $\\log$ \\Mstar \/\\Msun\\ < 8 & 8 < $\\log$ \\Mstar \/\\Msun\\ < 9 & 9 < $\\log$ \\Mstar \/\\Msun\\ < 10 & total \\\\\n \\hline \nall & 98 & 195 & 81 & 406 \\\\\nlow z & 44 & 64 & 11 & 135 \\\\\nintermediate z & 26 & 67 & 29 & 135 \\\\\nhigh z & 28 & 64 & 41 & 136 \\\\\n\\hline \n\\end{tabular}\n\\end{table*}\n\n\\subsubsection{Completeness and robustness of the results}\nOur data sample consists of a combination of several surveys with depths between one and 30 hours. The catalogues for most fields are a mixture of emission-line- and continuum-detected galaxies, resulting in samples of different completeness limits in \\Mstar\\ and emission-line flux. In Appendix \\ref{app:sdss} we study how such a possible incompleteness of our data sample alters our results by simulating different completenesses in flux and \\Mstar\\ of SDSS data, and find that the effect is negligible. 
\n\nWe studied the impact of our \\ensuremath{\\mathrm{SFR}}\\ calibration method on the results and compared them with \\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi -derived \\ensuremath{\\mathrm{SFRs}}\\ that are de-reddened using the \\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi\/\\ifmmode {\\rm H}\\gamma \\else H$\\gamma$\\fi\\ ratio. We also re-analysed the data by adopting different stellar libraries for the \\Mstar\\ calculations and derived similar results. However, for one of the surveys used for this study, the MUSE QuBES, we used the MUSE spectrum for the SED fitting instead of deep photometry, as we did for the data of the other surveys. We are aware that this induces uncertainty on the mass estimates and therefore re-analysed the results in Sect. \\ref{sec:metalthreshold} without the data of the MUSE QuBES survey and acquired comparable results.\n\n\\subsection{Can nebular models with no escape of ionising photons predict the observed O{\\small 32} ?}\nIn Sect. \\ref{results:r23} we discussed the behaviour of our galaxies in the $\\log$ O{\\small 32}\\ versus R{\\small 23}\\ diagram (see also Fig. \\ref{fig:r23}). Here we compare these results with nebular models to study whether they are consistent with each other and whether we can derive the nebular metallicity of our galaxies with the R{\\small 23}\\ method (see Fig. \\ref{fig:r23_discussion}). We show the grids of line ratios calculated from the nebular models of \\citet{2016MNRAS.462.1757G} as the coloured squares, where each colour represents a model of a fixed metallicity \\Z \\ (see the figure legend; metallicity here refers to the combination of nebular and stellar oxygen abundances, since these are kept constant for these models). We connected the grids of models with constant metallicity by dashed lines, where the ionisation parameter $U$ increases towards the upper right from $\\log U = -4$ to $\\log U = -1$. 
For the calculation of these models the ionising photon escape fraction was assumed to be zero. The lower limits on O{\\small 32},\\ which are also upper limits on R{\\small 23},\\ are not shown here because it is difficult to compare these galaxies with the models. \n\nDeriving the nebular metallicity of our galaxies by comparing the data to the model results is not straightforward, due to the degeneracy of the R{\\small 23}--metallicity relation. However, from Fig. \\ref{fig:r23_discussion} it is clear that the R{\\small 23}\\ ratio indicates that the metallicity of the majority of the galaxies in our sample is sub-solar and around 0.006 ($\\approx 1\/3$ \\Zsun). Hence the R{\\small 23}\\ of many of the galaxies exceeds the maximum R{\\small 23}\\ predicted by the models, which can partly be explained by an uncertain \\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi\\ flux measurement (galaxies with an \\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi\\ signal-to-noise ratio lower than three are shown by open squares). \n\nWe have differentiated between models with ionisation parameters between $\\log U$ = -4 and $\\log U$ = -2 (solid line) and those of higher values, since observations show that the bulk of star-forming galaxies have $\\log U$ < -2 (e.g. \\citealt{2014ApJ...787..120S}). The ionisation parameter of galaxies with extreme O{\\small 32}\\ ratios might however exceed those of normal galaxies, as pointed out by \\citet{Stasinska15}. However, if the O{\\small 32}\\ ratio of our galaxies exceeds the predicted ratio of models with $\\log U > -2$, the escape of LyC photons is also a likely scenario. For this reason the galaxy with the highest O{\\small 32}\\ ratio is a promising LyC escape candidate. 
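The two diagnostics used throughout this discussion are simple flux ratios; a minimal sketch with the standard definitions, O{\small 32}\ = [\ion{O}{iii}]$\lambda 5007$\,/\,[\ion{O}{ii}]$\lambda 3727$ and R{\small 23}\ = ([\ion{O}{ii}]$\lambda 3727$ + [\ion{O}{iii}]$\lambda\lambda 4959,5007$)\,/\,\ifmmode {\rm H}\beta \else H$\beta$\fi\ (the flux values below are hypothetical, in arbitrary units):

```python
import math

def o32(f_oiii_5007, f_oii_3727):
    """O32 ionisation diagnostic: [O III]5007 / [O II]3727."""
    return f_oiii_5007 / f_oii_3727

def r23(f_oii_3727, f_oiii_4959, f_oiii_5007, f_hbeta):
    """R23 metallicity diagnostic: ([O II] + [O III]4959,5007) / Hbeta."""
    return (f_oii_3727 + f_oiii_4959 + f_oiii_5007) / f_hbeta

# Hypothetical de-reddened fluxes; note F(5007)/F(4959) ~ 3 is fixed by atomic physics
fluxes = dict(oii=1.0, oiii4959=1.4, oiii5007=4.2, hbeta=1.0)
print("log O32 =", round(math.log10(o32(fluxes["oiii5007"], fluxes["oii"])), 2))
print("log R23 =", round(math.log10(r23(fluxes["oii"], fluxes["oiii4959"],
                                        fluxes["oiii5007"], fluxes["hbeta"])), 2))
```

Both ratios should be computed from de-reddened fluxes, since the lines entering O{\small 32}\ are widely separated in wavelength.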
Although the O{\\small 32}\\ ratios of the other extreme emitters in our sample (with O{\\small 32}\\ > 4) in this diagram are similar to those of the confirmed LyC leakers (green stars), the logarithm of R{\\small 23}\\ of our galaxies scatters within 0.4 dex of the value of the LyC leakers. Most of them, however, imply either low stellar and nebular metallicities, high ionisation parameters, the escape of ionising photons, or a combination of these factors. Comparing the data that lie inside the model grid to these nebular models with no escape of ionising photons is thus not sufficient to determine whether LyC escape is responsible for the extreme oxygen emission. \n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\hsize]{r23_o32_nosdss_newcolors_v2_resubmit_rs1.pdf}\n \\caption{Logarithm of O{\\small 32}\\ versus the logarithm of R{\\small 23} . The black points are galaxies from our sample; open symbols reflect galaxies with S\/N(\\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi ) < 3, while the green stars are the confirmed LyC leakers from \\citet{Izotov16a,Izotov16b, 2018MNRAS.474.4514I}. The squares that are connected by solid ($\\log U < -2$) and dashed coloured lines show the position in this diagram of the nebular models of \\citet{2016MNRAS.462.1757G} for constant gas-phase metallicity (see legend) and with increasing $\\log U$ towards the upper right, with -4 < $\\log U$ < -1 and step size 0.5.} \n \\label{fig:r23_discussion}\n \\end{figure*} \n \n\\subsection{Extreme O{\\small 32}\\ emitters}\nIn the previous sections we discussed how the properties of galaxies with extreme oxygen ratios relate to those of the entire sample. Here we study the robustness of the O{\\small 32}\\ measurement and investigate the electron temperatures of the [\\ion{O}{iii}]\\ regime in extreme galaxies. \n\nTo confirm that the O{\\small 32}\\ ratios are well measured, we compare them to the [\\ion{Ne}{iii}]$\\lambda 3869$\/[\\ion{O}{ii}]\\ ratio. 
This ratio is an alternative diagnostic of the ionisation parameter because of its tight relation to the O{\\small 32}\\ ratio \\citep{2014ApJ...780..100L}. Even though [\\ion{Ne}{iii}]$\\lambda 3869$\\ is more than an order of magnitude fainter than [\\ion{O}{iii}]$\\lambda 5007$ , the [\\ion{Ne}{iii}]\/[\\ion{O}{ii}]\\ ratio is less affected by reddening than the O{\\small 32}\\ ratio because of the small wavelength separation between the two lines. \nThe relationship between [\\ion{O}{iii}]$\\lambda 5007$\/[\\ion{O}{ii}]$\\lambda 3727$\\ and [\\ion{Ne}{iii}]$\\lambda 3869$\/[\\ion{O}{ii}]$\\lambda 3727$\\ for the extreme emitters with a significant [\\ion{Ne}{iii}]$\\lambda 3869$\\ detection ($\\sigma > 3$) is shown in Fig.~\\ref{fig:ne3}. The solid grey line shows the predictions of the Starburst99\/Mappings III photo-ionisation models of \\citet{2014ApJ...780..100L} with \\Z\\ = 0.2 \\Zsun . We offset the models by +0.6 in the $\\log$ [\\ion{O}{iii}]$\\lambda 5007$\/[\\ion{O}{ii}]$\\lambda 3727$\\ direction to take into account the discrepancy between these models and the observations of \\citet{2006A&A...448..955I} and \\citet{Jaskot13}, as discussed by \\citet{2014ApJ...780..100L}. \n\\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{o32_ne3o2_v2_resubmit.pdf}\n \\caption{Log O{\\small 32}\\ versus the $\\log$ [\\ion{Ne}{iii}]$\\lambda 3869$\/[\\ion{O}{ii}]\\ ratio for our extreme emitters (O{\\small 32}\\ > 4) with a [\\ion{Ne}{iii}]$\\lambda 3869$\\ detection of at least 3 $\\sigma$. Galaxies for which we do not have a significant [\\ion{O}{ii}]\\ detection are shown by red circles, which indicate the 3-$\\sigma$ upper limits on both ratios. The rest of the extreme emitter sample is shown by black circles and the most extreme O{\\small 32}\\ emitter by the black diamond. 
The grey line corresponds to the \\citet{2014ApJ...780..100L} relation for \\Z\\ = 0.2 \\Zsun\\ between $\\log$ O{\\small 32}\\ and $\\log$ [\\ion{Ne}{iii}]$\\lambda 3869$\/[\\ion{O}{ii}]\\ , which is offset by +0.6 in the y-direction.} \n \\label{fig:ne3}\n \\end{figure} \n From these results we conclude that the [\\ion{Ne}{iii}]$\\lambda 3869$\/[\\ion{O}{ii}]\\ ratios are consistent with the extreme O{\\small 32}\\ ratios. There are, however, offsets between our observations and the corrected \\citet{2014ApJ...780..100L} relation of up to 0.2 dex, which are most likely caused by the lower significance of the [\\ion{Ne}{iii}]$\\lambda 3869$\\ line and\/or by an offset in the dust attenuation estimate. We note, however, that if we use these offsets to correct the [\\ion{O}{iii}]\\ line fluxes, the O{\\small 32}\\ ratios will still be in the regime of the extreme oxygen emitters.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{O32_O3_gutkin_cl01.pdf}\n \\caption{Log O{\\small 32}\\ versus the [\\ion{O}{iii}]$\\lambda 4363$\/[\\ion{O}{iii}]$\\lambda 5007$\\ ratio (lower x-axis) and the electron temperature in the [\\ion{O}{iii}]\\ regime (upper x-axis) calculated using the ${\\tt nebular.ionic}$ routine in Pyneb \\citep{2015A&A...573A..42L}, assuming $n_e$ = 100 cm$^{-3}$. The coloured lines indicate the predictions from the \\citet{2016MNRAS.462.1757G} models of different metallicity as indicated by the colours, with increasing $\\log U$ towards the upper right, and $\\log U < -2$ (solid lines) and $\\log U > -2$ (dashed lines), and a step size of $\\log U$ = 0.5 between the squares. The colours of the circles are the same as in Fig. \\ref{fig:ne3}. }\n \\label{fig:te}\n \\end{figure} \n For most of the extreme emitters, we have high S\/N measurements (S\/N > 3) of the auroral [\\ion{O}{iii}]$\\lambda 4363$\\ emission line. 
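The auroral-to-nebular [\ion{O}{iii}]\ ratio just mentioned is the classical electron-temperature diagnostic. The analysis uses the ${\tt nebular.ionic}$ routine in Pyneb for this; purely as an illustration of the underlying physics, the low-density analytic approximation of \citet{1989agna.book.....O} can be inverted numerically (the input ratio below is hypothetical, and this sketch is not a substitute for Pyneb):

```python
import math

def te_oiii(ratio_4363_5007, ne=100.0):
    """Electron temperature (K) from the [O III] 4363/5007 ratio.

    Uses the Osterbrock (1989) approximation
    j(4959+5007)/j(4363) = 7.90 exp(3.29e4/T) / (1 + 4.5e-4 ne / sqrt(T)),
    illustrative only (not the PyNeb calculation used in the paper).
    """
    # Convert 4363/5007 to (4959+5007)/4363, with j(5007)/j(4959) ~ 2.98
    big_r = (1.0 + 1.0 / 2.98) / ratio_4363_5007

    def model(t):
        return 7.90 * math.exp(3.29e4 / t) / (1.0 + 4.5e-4 * ne / math.sqrt(t))

    lo, hi = 5.0e3, 3.0e4  # bracket typical H II region temperatures
    for _ in range(60):    # bisection; model(t) decreases with t
        mid = 0.5 * (lo + hi)
        if model(mid) > big_r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical ratio: a larger 4363/5007 ratio implies hotter gas
print(f"Te = {te_oiii(0.01):.0f} K")
```

A hotter nebula emits relatively more of the collisionally excited auroral line, which is why the galaxies with the most extreme O{\small 32}\ ratios also show the highest \Te .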
As such we can use the [\\ion{O}{iii}]$\\lambda 4363$\/[\\ion{O}{iii}]$\\lambda 5007$\\ ratio to diagnose the electron temperatures (\\Te) in the [\\ion{O}{iii}]\\ regime of the \\ion{H}{ii}\\ region (see \\citealt{1989agna.book.....O}), using the ${\\tt nebular.ionic}$ routine in Pyneb \\citep{2015A&A...573A..42L}, assuming an electron density of $n_e = 100$ cm$^{-3}$.\nThe [\\ion{O}{iii}]$\\lambda 4363$\/[\\ion{O}{iii}]$\\lambda 5007$\\ ratios and the corresponding values of \\Te\\ are shown in Fig. \\ref{fig:te}. A complete discussion of this is outside the scope of the paper. However, there is presumably a positive correlation between O{\\small 32}\\ and \\Te\\ since the electron temperatures of the galaxies with the most extreme O{\\small 32}\\ ratios exceed those of the other extreme galaxies. \nFurthermore, by comparing the [\\ion{O}{iii}]$\\lambda 4363$\/[\\ion{O}{iii}]$\\lambda 5007$\\ and O{\\small 32}\\ ratios to the predictions from the \\citet{2016MNRAS.462.1757G} nebular models, the metallicity of the galaxies can be constrained, varying from \\Z$\\approx0.0001$ (\\Z\/\\Zsun$\\approx$0.005) for the galaxy with the highest O{\\small 32}\\ ratio (diamond) to \\Z$\\approx0.002-0.004$ (\\Z\/\\Zsun$\\approx$0.15) for the galaxies with O{\\small 32} $\\approx 4$. Again, for the calculation of these nebular models, the fraction of escaping ionising photons is assumed to equal zero. Although escaping ionising photons would boost the O{\\small 32}\\ ratio, there is no obvious reason to expect an effect on the [\\ion{O}{iii}]$\\lambda 4363$\/[\\ion{O}{iii}]$\\lambda 5007$\\ ratio, and thus the real metallicity of a leaking system would be even lower than what is predicted by the models. Other scenarios, such as a top-heavy initial mass function, might increase the hardness of the ionising spectrum and could also affect the position of a galaxy in this plot. \n \n\\subsubsection*{UDF object 6865}\nIn Fig. 
\\ref{fig:images} we show an HST image of the F775W filter and MUSE images of the [\\ion{O}{ii}]\\ and [\\ion{O}{iii}]\\ lines of the most extreme oxygen emitter in our sample, which has O{\\small 32}\\ = 23. This galaxy is observed in the UDF-mosaic at $z = 0.83$ (identified in the MUSE catalogue of \\citealt{2017A&A...608A...2I} as id = 6865). The spectrum of this object is shown in the upper panel of Fig. \\ref{fig:example_spec}, from which we derived an extinction \\ensuremath{\\mathrm{\\tau}_{V}}\\ = 0.49. The field is very crowded as can be seen in the HST image. The MUSE narrow band images of the [\\ion{O}{ii}]$\\lambda 3727$\\ and [\\ion{O}{iii}]$\\lambda 5007$\\ lines, however, confirm that the measured line fluxes originate from this source in the centre of the images. The $\\log$ R{\\small 23} , $\\log$ [\\ion{Ne}{iii}]\/[\\ion{O}{ii}],\\ and $\\log$ [\\ion{O}{iii}]$\\lambda 4363$\/[\\ion{O}{iii}]$\\lambda 5007$\\ of this object are shown by the diamond in Figs. \\ref{fig:r23_discussion}, \\ref{fig:ne3}, and \\ref{fig:te}. The $\\log$ [\\ion{O}{iii}]$\\lambda 4363$\/[\\ion{O}{iii}]$\\lambda 5007$\\ indicates that the metallicity of this galaxy is extremely low (\\Z$\\approx0.0001$), which deviates somewhat from what is predicted from the R{\\small 23}\\ ratio (\\Z$\\approx0.0005-0.001$), and the ionisation parameter is high ($\\log U \\approx -1.5$). However, all models assume that there is no escape of ionising photons, which may affect the O{\\small 32}\\ and R{\\small 23}\\ ratios. \n\nDue to the combination of the relatively low stellar mass, $\\log$(\\Mstar\/\\Msun)$=8.16^{+0.16}_{-0.06}$, and the redshift, we are not able to precisely derive the size of the object, because the apparent size is comparable to the PSF of the HST image and the source is not resolved in the MUSE data. This, however, indicates that the galaxy is compact, like the objects of \\citet{Izotov16a,Izotov16b, 2018MNRAS.474.4514I}. 
Together with the comparison of its oxygen ratio to those of nebular models, this suggests that this galaxy may be an LyC emitter candidate. \n\\begin{figure} \n \\centering\n \\includegraphics[width=\\hsize]{HST_and_nb_6865_scale.png}\n \\caption{HST F775W image (left), a MUSE narrow band image of the [\\ion{O}{ii}]$\\lambda 3727$\\ line (middle), and a MUSE narrow band image of the [\\ion{O}{iii}]$\\lambda 5007$\\ line (right) of the most extreme oxygen emitter with O{\\small 32}\\ = 23.} \n \\label{fig:images}\n \\end{figure} \n\n\n\\section{Conclusions}\nWe constructed a sample of emission-line galaxies in the redshift range $0.28 < z < 0.85$ that are detected in data from four MUSE GTO surveys. The galaxies are selected based on their position in the \\ensuremath{\\mathrm{SFR}}\\ - \\Mstar\\ plane such that we only included galaxies above the redshift-dependent SFMS from \\citet{Boogaardetal}. In this regime we expect the sample to be independent of selection effects. Our final sample consists of 406 galaxies, of which 104 (26$\\%$) have a high O{\\small 32}\\ ratio (O{\\small 32}\\ > 1) and 15 galaxies (3.7$\\%$) are extreme emitters with O{\\small 32}\\ > 4. We studied the O{\\small 32}\\ ratio as a function of the position in the (redshift-corrected) \\ensuremath{\\mathrm{SFR}}\\ versus \\Mstar\\ diagram, as a function of stellar mass \\Mstar , \\ensuremath{\\mathrm{SFR}} , and the metallicity indicator R{\\small 23} . We then studied the incidence rate of galaxies with high oxygen ratios, defined by either a fixed threshold, O{\\small 32}\\ > 1, or by a metallicity-dependent threshold, as a function of \\Mstar\\ and redshift. The main conclusions of this study are: \n\\begin{itemize}\n\\item Galaxies with a high oxygen ratio are more common at lower masses ($\\log$ \\Mstar \/\\Msun\\ < 9) and above the SFMS. There is no clear correlation between distance from the SFMS and the O{\\small 32}\\ ratio for galaxies in our final sample that are above the SFMS (Fig. 
\\ref{fig:main_sequence}). \\\\\n\\item We find no correlation between the O{\\small 32}\\ ratio and \\Mstar , although the median values in O{\\small 32}\\ bins seem to be anti-correlated (Fig. \\ref{fig:m_o32}).\\\\\n\\item We observe the same trend between the median values of O{\\small 32}\\ and \\ensuremath{\\mathrm{SFR}} , but again no significant correlation for individual galaxies. The \\ensuremath{\\mathrm{SFR}}\\ of most of our extreme emitters is two to three orders of magnitude smaller than those of the confirmed leakers (Fig. \\ref{fig:sfr_o32}).\\\\\n\\item The fraction of galaxies with high O{\\small 32}\\ ratios is independent of stellar mass when we use a metallicity-dependent O{\\small 32}\\ threshold (Fig. \\ref{fig:f_logM_Zcorr}).\\\\\n\\item We find no significant correlation between the fraction of high O{\\small 32}\\ emitters and redshift, suggesting that there is no redshift evolution of the number of high O{\\small 32}\\ emitters in the redshift range $0.28 < z < 0.85$ (Figs. \\ref{fig:f_logM_Zcorr_zdep} and \\ref{fig:fraction_time}). \\\\\n\n\\item Comparing the O{\\small 32}\\ and R{\\small 23}\\ ratios of our galaxies with those of nebular models with no escape of ionising photons, we find that some of the high oxygen emitters can be reproduced by models with a high ionisation parameter ($\\log U \\approx -2$), a very low stellar and nebular metallicity (smaller than $\\sim 1\/3$ \\Zsun), or a combination of both. However, our extreme emitters are in the same regime as the confirmed leakers from \\citet{Izotov16a, Izotov16b, 2018MNRAS.474.4514I} and we therefore cannot exclude the escape of ionising photons from these galaxies. The O{\\small 32}\\ ratio of our most extreme oxygen emitter can only be explained by models with a very high ionisation parameter ($\\log U > -2$), from which we conclude that this galaxy may be an LyC leaker candidate (Fig. \\ref{fig:r23_discussion}). 
\\\\\n\n\\item For galaxies with a significant [\\ion{O}{iii}]$\\lambda 4363$\\ detection, we derived the [\\ion{O}{iii}]$\\lambda 4363$\/[\\ion{O}{iii}]$\\lambda 5007$\\ ratio and the electron temperature and find that these values are similar to or larger than those predicted by nebular models with extremely low metallicity, high ionisation parameters, and constant \\ensuremath{\\mathrm{SFR}}\\ at $t = 3 \\times 10^8$ years. From this we conclude that a part of the extreme O{\\small 32}\\ emitters may have light-weighted ages of $t < 3 \\times 10^8$ years (Fig. \\ref{fig:te}). \\\\\n\\end{itemize}\n\\label{s_conclusions}\n\n\n\\begin{acknowledgements}\nWe thank the referee for a constructive report that helped improve the paper. AV is supported by a Marie Heim-V\\\"{o}gtlin fellowship of the Swiss National Foundation. JB acknowledges support from the Funda\\c{c}\\~{a}o para a Ci\\^{e}ncia e a Tecnologia (FCT) through national funds (UID\/FIS\/04434\/2013) and the Investigador FCT contract IF\/01654\/2014\/CP1215\/CT0003, and by FEDER through COMPETE2020 (POCI-01-0145-FEDER-007672). TC acknowledges support from the ANR FOGHAR (ANR-13-BS05-0010-02), the OCEVU Labex (ANR-11-LABX-0060), and the A*MIDEX project (ANR-11-IDEX-0001-02) funded by the \"Investissements d'avenir\" French government programme. JS and SM acknowledge support from The Netherlands Organisation for Scientific Research (NWO), VICI grant 639.043.409. SC gratefully acknowledges support from Swiss National Science Foundation grant PP00P2$\\_$163824.\n\\end{acknowledgements}\n\n\\bibliographystyle{aa}