{"text":"\n\n\n\n\\subsection{MC40 Cyclotron}\nThe MC40 cyclotron at the University of Birmingham is primarily used\nfor the production of medical isotopes. However, it is regularly used\nfor nuclear physics research and radiation damage studies. The\nconfiguration utilised for high intensity proton irradiations is shown\nin Fig.~\\ref{fig:ATLASChamber}. \nThe beam spot is a square of area $10\\times10\\;$mm\\textsuperscript{2},\nand its position is calibrated before each irradiation session with\ngafchromic film. The sample is isolated from the environment using a\ntemperature controlled chamber, which is mounted on an XY-axis\nrobotic scanning system controlled via LabVIEW. The chamber\ntemperature during irradiation is set to $-27\\degree$C; this value is\nselected to ensure that the sample temperature remains well below $0\\degree$C,\neven when irradiated at the highest dose rate available in the\nfacility, so that there is no significant contribution from\nthermal annealing.\nFurther details of the\nirradiation facility are provided in Ref.~\\cite{Allport:2017ipp}.\n\n\\begin{figure}[htbp]\n \\centering \n \\subfigure[\\label{fig:ATLASChamberLeft}]{\\includegraphics[width=.35\\textwidth]{ATLAS_chamber.jpg}}\n \\subfigure[\\label{fig:ATLASChamberRight}]{\\includegraphics[width=.35\\textwidth]{Isolation_box.jpg}}\n \\caption{\\subref{fig:ATLASChamberLeft} The high intensity area of the MC40 cyclotron with the temperature controlled chamber. \\subref{fig:ATLASChamberRight} The interior of the temperature controlled chamber, viewed from the side. The aluminium plate used to mount the diodes is visible on the right.\\label{fig:ATLASChamber}}\n\\end{figure}\n\nThe irradiated sample consists of an aluminium plate with twelve slots\nfor diodes, mounted in pairs, as shown in\nFig.~\\ref{fig:DiodeMount}. In front of each pair of diodes,\n${}^{57}$Ni foils are installed and their activity after irradiation\nis used to estimate the delivered proton fluence. Inside the\ntemperature controlled chamber, the sample and the foils are placed\nbehind a $350\\;\\mu$m thick sheet of aluminium to block possible low\nenergy components of the beam. The energy of the protons when they\nreach the sample is estimated using a\nGeant4-based~\\cite{Agostinelli:2002hh, Allison:2016lfl} simulation, as\nshown in Fig.~\\ref{fig:28MeV_incidentEnergy}.\n\n\n\\begin{figure}[htbp]\n \\centering \n \\subfigure[\\label{fig:DiodeMountLeft}]{\\includegraphics[width=.35\\textwidth]{Mount_no_Foils.jpg}}\n \\subfigure[\\label{fig:DiodeMountRight}]{\\includegraphics[width=.35\\textwidth]{Mount_with_Foils.jpg}}\n \\caption{\\subref{fig:DiodeMountLeft} Aluminium diode mount with attached diodes; and \\subref{fig:DiodeMountRight} the same mount following placement of ${}^{57}$Ni foils for fluence measurements.\\label{fig:DiodeMount}}\n \\end{figure}\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width = 0.75\\linewidth]{2016-05-14_1000k_Energy.png}\n \\caption{Geant4 simulation of the MC40 cyclotron beam-line showing the incident proton beam energy, the energy at the nickel foils, and the energy at the photodiodes.\\label{fig:28MeV_incidentEnergy}}\n\\end{figure}\n \n\\subsection{IRRAD Proton Facility}\n\nThe IRRAD proton facility at CERN utilises a primary proton beam with\nan energy of $23\\;$GeV, extracted from the Proton\nSynchrotron~\\cite{Cundy:2017ezr}. 
The facility employs a\nremote-controlled stage to adjust the position of the sample, and an\nisolated box for humidity and temperature control down to\napproximately $-20\\degree$C~\\cite{Cindro:2014qca}. The proton fluence\ndetermination for IRRAD is performed with aluminium foils. Figure\n\\ref{fig:IRRAD_tables} shows an image of the IRRAD setup and the\nremote-controlled tables, which can move in the transverse and\nazimuthal directions to align the sample with the beam. This setup\nalso allows for beam scanning, which can be done\nautomatically. Furthermore, there are three groups of three tables\nalong the beam line, each separated by thick blocks of concrete.\n \n \\begin{figure}[htbp]\n \\centering\n \\includegraphics[width = 0.8\\linewidth]{IRRAD_table.jpg}\n \\caption{IRRAD proton facility experimental areas featuring three groups of remote-controlled tables installed along the\nproton beam path, from Ref.~\\cite{Gkotse:2015axt}.\\label{fig:IRRAD_tables}}\n \\end{figure}\n \nFor this study, both BPW34F photodiodes and FZ pad diodes were\nirradiated to the same fluences for NIEL comparisons. Irradiations\ntook place at room temperature and, given the dose rate on the samples,\nno appreciable thermal annealing is expected to have taken place, in\nparticular when compared to the thermal annealing applied during the analysis procedure, see\n\\S~\\ref{sec:Annealing}.\n\n\\subsection{Irradiation Center Karlsruhe}\nThe Irradiation Center Karlsruhe~\\cite{KITsite} accesses a compact\ncyclotron operated by ZAG Zyklotron AG~\\cite{ZAG}, a privately owned\ncompany specialising in radioisotope production for medicine and\nengineering. The cyclotron accelerates protons to $25\\;$MeV and for\nthis study the energy of the protons at the sample was measured to be\n$23\\;$MeV. The set-up utilised for irradiations is shown in\nFig.~\\ref{fig:KIT_setup}. Similarly to the MC40 cyclotron and the\nIRRAD proton facility, the sample can be positioned within an\nisolation box for humidity and temperature control down to\n$-30\\degree$C~\\cite{KITsite}, and the fluences are calculated using\n${}^{57}$Ni foils.\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width = 0.5\\linewidth]{KIT_setup.jpg}\n \\caption{The Karlsruhe proton irradiation setup with the beam pipe visible on the left and the cold box on the right. During irradiation the box moves to allow beam scanning over the whole area of interest.\\label{fig:KIT_setup}}\n\\end{figure}\n \n\n\\subsection{Thermal Annealing Procedure}\n\\label{sec:Annealing}\nSince the degree of thermal annealing significantly affects the\npost-irradiation leakage current of photodiodes, all diodes were\nthermally annealed for 80 minutes at $60\\degree$C in accordance with\nthe guidelines of the RD50 collaboration. This ensured that,\npost-irradiation, all diodes possessed the same thermal history. The\nprocess itself utilised a pre-heated oven, monitored with a NiCr-NiAl\nthermocouple. For the MC40 cyclotron, due to the large number of\ndiodes tested, thermal annealing took place in two sets, with half of\nthe diodes in each set.\n\n\\subsection{Maximum Depletion Voltage}\n\\label{sec:MaxDep}\nTo estimate the voltage at which maximum depletion is\nachieved, the diodes were placed inside an aluminium box for radiation\nshielding, as shown in Fig.~\\ref{fig:CVsetup}, alongside a fan for air\ncirculation. 
The system was then connected to a Wayne-Kerr 6500B\nPrecision Impedance Analyser via a junction box and four coaxial\ncables for capacitance readings at 10~kHz, in accordance with RD50\nguidelines. An external bias was supplied to the diodes by a Keithley\n2410 Sourcemeter. The system was then trimmed to approximately zero\ncapacitance with the diode unconnected before each set of data was\ntaken.\n\n\\begin{figure}[htbp]\n \\centering \n \\subfigure[\\label{fig:CVsetupRight}]{\\includegraphics[width=.35\\textwidth]{CVSetup.jpg}}\n \\subfigure[\\label{fig:CVsetupLeft}]{\\includegraphics[width=.35\\textwidth]{Photodiode_Box.jpg}}\n \\caption{\\subref{fig:CVsetupRight} C--V measurement set-up, with the Wayne-Kerr 6500B\nPrecision Impedance Analyser, the Keithley 2410 Sourcemeter, and the aluminium box; \\subref{fig:CVsetupLeft} Internal view of the aluminium box.\\label{fig:CVsetup}}\n\\end{figure}\n\nFor a p-n junction, before full depletion, the capacitance is\ninversely proportional to the square root of the voltage. Following\nfull depletion, the capacitance becomes independent of the voltage. Using\nthis, it is possible to estimate the voltage at which a diode becomes\nfully depleted, referred to as the maximum depletion voltage. Figure\n\\ref{fig:Diode46_CV_PostAnneal_regions} shows a plot of capacitance as\na function of voltage on logarithmic scales. In region (1), the diode\nis not fully depleted, whilst in region (2), full depletion has been\nachieved. In the latter region, the gradient is not zero as the\nBPW34F diode does not contain a guard ring, and thus lateral depletion\nis still occurring. By fitting the two regions linearly, the maximum\ndepletion voltage is estimated as the intersection of the two lines. This\nestimate was performed for every diode following irradiation and\nannealing.\n\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width = 0.45\\linewidth]{Diode46_CV_PostAnneal_regions.pdf}\n \\caption{Capacitance as a function of reverse bias for a BPW34F photodiode irradiated to $4.33\\times10^{11}$p\/cm\\textsuperscript{2}.\\label{fig:Diode46_CV_PostAnneal_regions}}\n\\end{figure}\n\n\n\\subsection{Quantifying Radiation Damage}\nFigure \\ref{fig:IVsetup} shows the experimental setup used for I--V\nmeasurements. Similarly to the capacitance measurements, the diodes\nwere placed within an aluminium box alongside a fan. A Keithley 2410\nSourcemeter was used to apply a reverse bias across the diode and to\nmeasure and display the corresponding current. A NiCr-NiAl\nthermocouple, placed close to the diode to obtain an accurate reading of its\ntemperature, was used to record the temperature within the box.\n\n \\begin{figure}[htbp]\n \\centering\n \\includegraphics[width = 0.5\\linewidth, trim = {2cm 3.5cm 1.5cm 3.5cm}, clip = true]{IV_Setup.jpg}\n \\caption{The setup utilised for I--V measurements.\\label{fig:IVsetup}}\n \\end{figure}\n\nAlthough the ambient temperature, $T$, during data taking did not\ndeviate substantially from $21\\degree$C, to minimise any effects due to\nthe temperature dependence of the leakage current, all I--V curves\nwere normalised to the reference temperature, $T_R$, of $21\\degree$C,\nfollowing RD50 recommendations. 
The formula used is given in\nEq.~\\ref{eqn:tempscaling}, where $E_a$ is the activation energy, which\nis closely related to the band gap energy of silicon,\nand all other symbols have their usual meanings.\n\n\\begin{equation}\\label{eqn:tempscaling}\n I(T_R) = I(T) \\left(\\frac{T_R}{T}\\right)^2e^{-\\frac{E_a}{2k_B}\\left[\\frac{1}{T_R}-\\frac{1}{T}\\right]}\n\\end{equation}\n \nOver the temperature range relevant for this study, $E_a$ has a value\nof $1.21\\;$eV~\\cite{Chilingarov_2013}. Post-irradiation, the leakage\ncurrent of the diodes increased proportionally to the incident fluence\ndue to induced defects in the silicon~\\cite{Moll:1999kv}. Hence, the\nchange in leakage current pre- and post- irradiation can be used as a\nmeasure of the degree of radiation damage. Assuming that the leakage\ncurrent scales with the NIEL irrespective of the particle species and\nenergy, the change in leakage current can be used to determine the\nhardness factor.\n\n\\subsection{Determination of Hardness Factors}\n\\label{sec:hfdet}\nThe change in leakage current pre- and post- irradiation, $\\Delta I$,\nas a function of fluence, $\\phi$, is:\n\n\\begin{equation}\n\\label{eqn:Delta_I}\n \\Delta I = \\alpha l^2 w \\phi\n\\end{equation}\n \nwhere $\\alpha$ is the current related damage rate, $l^2$ is the active\narea of silicon, and $w$ is the width of the depletion region. For\nBPW34F photodiodes, $l^2 = (0.265\\times 0.265)$\ncm\\textsuperscript{2}~\\cite{BPW34F,Ravotti:835408} and $w = 300\\text{\n }\\mu$m~\\cite{Ravotti:2008vcv}. From equation \\ref{eqn:Delta_I}, and\nthe NIEL scaling hypothesis, it follows that the hardness factor can\nbe written as:\n\\begin{equation}\\label{eqn:kappa}\n\\kappa = \\frac{\\phi _{neq}}{\\phi},\n\\end{equation}\nwhere $\\phi _{neq}$ is the $1\\;$MeV neutron equivalent\nfluence. Combining equations \\ref{eqn:Delta_I} and \\ref{eqn:kappa},\nthe hardness factor can be obtained as:\n\\begin{equation}\\label{eqn::kappaalpha}\n \\kappa = \\frac{\\alpha}{\\alpha _{neq}},\n\\end{equation} \nwhere $\\alpha _{neq}$ is the current related damage rate for 1 MeV\nneutrons. In this study, $\\alpha _{neq} = (3.99\\pm\n0.03){\\times}10^{-17}\\;$Acm\\textsuperscript{-1} was\nused~\\cite{Moll:1999kv}. Thus, by comparing the diode leakage current pre-\nand post-irradiation as a function of fluence, the\nhardness factor of the incident beam is calculated.\n\n\\begin{figure}[htbp]\n \\centering \n \\subfigure[\\label{fig:Diode47Left}]{\\includegraphics[width=.45\\textwidth]{Diode47_unirrad-eps-converted-to.pdf}}\n \\subfigure[\\label{fig:Diode47Right}]{\\includegraphics[width=.45\\textwidth]{Diode47_irrad-eps-converted-to.pdf}}\n \\caption{I--V curves fit with a first order polynomial. \\subref{fig:Diode47Left} Unirradiated diode; \\subref{fig:Diode47Right} Following irradiation at $(1.64\\pm 0.36)\\times 10^{11}$pcm$^{-2}$ and thermal annealing.\\label{fig:Diode47}}\n\\end{figure}\n \nFigure~\\ref{fig:Diode47} shows I--V curves for the same diode pre- and\npost-irradiation and annealing. The data were fit with a first order\npolynomial centred at the minimum voltage value for which the\ndepletion region is maximised, as determined from C--V\nmeasurements. The change in leakage current, evaluated at this\nvoltage, was computed for each diode and plotted as a function of\nfluence.\n\nFinally, the range of validity of Eq.~\\ref{eqn:Delta_I} has been\nconsidered in a dedicated study at the MC40 cyclotron. 
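\n\nBefore turning to that study, we note how the pieces above fit together computationally. The following minimal sketch (not the analysis code used for this work; the numerical inputs are placeholders chosen purely for illustration) applies the temperature normalisation of Eq.~\\ref{eqn:tempscaling} and then extracts the hardness factor via Eqs.~\\ref{eqn:Delta_I} and \\ref{eqn::kappaalpha}.\n\\begin{verbatim}\nimport numpy as np\n\nK_B = 8.617e-5   # Boltzmann constant [eV/K]\nE_A = 1.21       # activation energy of silicon [eV]\n\ndef normalise_current(i_meas, t_meas_c, t_ref_c=21.0):\n    # scale a measured leakage current to the reference temperature, Eq. (tempscaling)\n    t, t_r = t_meas_c + 273.15, t_ref_c + 273.15\n    return i_meas * (t_r / t)**2 * np.exp(-(E_A / (2.0 * K_B)) * (1.0 / t_r - 1.0 / t))\n\narea      = 0.265**2    # BPW34F active area l^2 [cm^2]\nwidth     = 300e-4      # depletion width w [cm]\nalpha_neq = 3.99e-17    # current related damage rate for 1 MeV neutrons [A/cm]\n\n# placeholder currents, temperatures and fluence, for illustration only\ni_post  = normalise_current(5.0e-6, 22.3)   # post-irradiation current at maximum depletion [A]\ni_pre   = normalise_current(1.0e-9, 21.4)   # pre-irradiation current at the same voltage [A]\nfluence = 1.6e11                            # delivered proton fluence [p/cm^2]\n\nalpha = (i_post - i_pre) / (area * width * fluence)   # Eq. (Delta_I)\nkappa = alpha / alpha_neq                             # Eq. (kappaalpha)\nprint(kappa)\n\\end{verbatim}\n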
As shown in\nFig.~\\ref{fig:high_fluence}, the change in leakage current is linear\nin fluence up to approximately\n$10^{14}\\;$pcm$^{-2}$. At low fluences, all charge carriers generated\nby the radiation-induced defects are transported to the electrodes and\ncontribute fully to the leakage current (Shockley-Read-Hall\nmechanism). At high fluences the high defect concentration results in\ncharge carriers becoming trapped. As a result, these are not \ntransported throughout the sensor, which results in the observed plateau of the change in leakage current versus fluence.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.45\\linewidth]{high_fluence.pdf}\n \\caption{Change in leakage current as a function of fluence for the BPW34F diodes. The linearity of response up to approximately $10^{14}$p\/cm\\textsuperscript{2} is demonstrated.\\label{fig:high_fluence}}\n\\end{figure}\n\n\n \n\n\\section{Introduction}\n\\input{Introduction.tex}\n\\section{Irradiations}\n\\label{sec:irradiations}\n\\input{Irradiations.tex}\n\\section{Measurements}\n\\label{sec:measurements}\n\\input{Measurements.tex}\n\\section{Results}\n\\label{sec:results}\n\\input{Results.tex}\n\\section{Conclusions}\n\\label{sec:summary}\n\\input{Conclusion.tex}\n\\section*{Acknowledgements}\n\\input{Acknowledgements.tex}\n\\flushbottom\n\n\\bibliographystyle{ieeetr}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section{Introduction}\nRecently two new methods for calculating the Hawking temperature\n\\cite{hawking} of a Schwarzschild\nblack hole have been put forward. The first is the quasi-classical WKB\nmethod \\cite{padman,kraus,volovik, boyarsky, wilczek}, in which\nthe tunneling rate is obtained by exponentiating the imaginary part of\nthe classical action for particles coming from the vicinity of the horizon. \nDevelopments and refinements of this method\ncan be found in \\cite{vagenas,zerbini,banerjee} and references therein.\nSee also \\cite{brout} for an early paper on WKB methods applied to de Sitter \nspacetime to calculate the Gibbons-Hawking temperature \\cite{gibbons}. More recent\nwork applying the WKB\/tunneling formalism to de Sitter space using\nHamilton-Jacobi methods can be found in \\cite{sekiwa, volovik2, medved}.\nThe WKB\/tunneling method has also been shown to have\nconnections to black hole thermodynamics \\cite{pilling2,majhi,zhang}. \n\nThe second method uses gravitational anomalies \\cite{robinson,das,volovik3}. \nIn this approach one\nplaces a scalar field in a Schwarzschild background and then\ndimensionally reduces the field equations to $1+1$ dimensions near the horizon.\nOne then discards the modes of the scalar field inside the horizon,\nas well as the inward directed modes on the horizon \\cite{isoa,murata,isob},\nsince these are inaccessible to an outside observer. (In the original proposal\nof the anomaly method \\cite{robinson} modes behind the horizon were also\nconsidered. The simpler method of using only near horizon and outside\nhorizon modes was originally proposed in \\cite{isoa,murata,isob}.) \nIn this way one obtains an\neffective chiral field theory near the horizon. Such theories are known to have\ngravitational anomalies \\cite{bertlmann, christensen, witten}. 
The anomaly is\ncancelled and general covariance is restored if one\nhas a flux of particles coming from the horizon with the Hawking temperature.\nHowever one does not recover {\\it directly} the Planckian spectrum from this method.\nIn this Letter we make a critique of, and comparison between, these two\nmethods. We mainly focus on the Rindler spacetime and the \nassociated Unruh radiation \\cite{unruh}. Unruh radiation is\nthe simple, prototypical example of all similar effects such as Hawking radiation and\nGibbons-Hawking radiation. We examine both consistent and covariant anomaly methods for\ntwo different forms of the Rindler metric. In both cases we find that neither anomaly \nmethod gives the correct Unruh temperature. \n\nNext we compare the consistent and covariant anomaly methods for \nobtaining the Gibbons-Hawking temperature of de Sitter spacetime. In this case the consistent method\nyields the correct Gibbons-Hawking temperature while the covariant method does not.\n\nGiven this failure of both anomaly methods\nwe next examine the WKB\/tunneling method for Rindler spacetime. \nHere we find that regardless of the specific form of\nthe metric the WKB method gives the correct temperature for Rindler\nspacetime. However there are subtleties involved in calculating the \ntemperature of the radiation. Here we show that there is a \npreviously unaccounted for temporal contribution \\cite{grf, akhmedova, nakamura} \nin the WKB\/tunneling method which must be taken into account in order to obtain \nthe correct Unruh temperature.\n\n\\section{Gravitational anomaly method}\n\nThe action for a massless scalar field in some background\nmetric $g_{\\mu \\nu}$ can be written as\n\\begin{equation}\nS[\\phi] = - \\frac{1}{2} \\int d^4x \\sqrt{-g} g^{\\mu \\nu}\n\\nabla_\\mu \\phi \\nabla_\\nu \\phi = \\frac{1}{2} \\int d^4x \\; \\phi\\;\n\\partial_\\mu \\left( \\sqrt{-g} g^{\\mu \\nu} \\partial_\\nu \\right) \\phi\n\\end{equation}\nBy integrating out the angular variables this can be reduced to a\n$1+1$ dimensional action \\cite{robinson}\n\\begin{equation}\nS[\\phi] = \\frac{1}{2} \\sum_{mn} \\int d^2x \\; \\phi_{mn} \n\\partial_\\mu \\left(\n\\sqrt{-g} g^{\\mu \\nu} \\partial_\\nu \\right)\n\\phi_{mn}\n\\end{equation}\nwhere we have expanded the scalar field $\\phi$ as\n\\begin{equation}\n\\phi = \\sum_{mn} \\phi_{mn}(t,r) e^{imy} e^{inz}~.\n\\end{equation}\nEliminating the scalar field modes\nbehind the horizon as well as the ingoing modes on the horizon (these\nmodes lead to a singular energy-momentum flux at the horizon) we\nare left with a $1+1$ dimensional effective chiral theory\nnear the horizon\nwhich is connected to a non-chiral theory outside the horizon which\nhas both outgoing and ingoing modes. It is well known that $1+1$ dimensional\nchiral theories exhibit a gravitational anomaly \\cite{bertlmann,\nchristensen, witten},\nso the energy-momentum tensor is no longer covariantly conserved\n(see equation (6.17) in \\cite{bertlmann}):\n\\begin{equation}\n\\label{anomaly}\n\\nabla_\\mu T^\\mu_{(H) \\nu} =\n\\frac{1}{96 \\pi \\sqrt{-g}} \\epsilon^{\\alpha \\mu}\n\\partial_\\mu \\partial_\\beta \\Gamma^\\beta_{\\; \\alpha \\nu}\n\\equiv \\frac{1}{\\sqrt{-g}} \\partial_\\mu N^\\mu_\\nu~.\n\\end{equation}\nThe subscript $(H)$ denotes the energy-momentum tensor on the horizon and\n$g$ is the determinant of the $1+1$ dimensional metric. 
Equation \\eqref{anomaly}\nis the consistent gravitational anomaly.\nNow under general, infinitesimal coordinate transformations the variation of\nthe 1 + 1 dimensional classical action is\n\\begin{equation}\n\\label{variationIntegral}\n\\delta S= - \\int d^2 x \\; \\sqrt{-g}\\lambda^\\nu \\nabla_\\mu T^\\mu_\\nu~.\n\\end{equation}\nHere $\\lambda ^\\nu = (\\lambda ^t, \\lambda ^r)$ is the variational\nparameter. Normally,\nrequiring the vanishing of the variation of the action, $\\delta S =0$,\nwould yield\nenergy-momentum conservation, $\\nabla_\\mu T^\\mu_\\nu = 0$, but the anomaly in\n\\eqref{anomaly} spoils energy-momentum conservation. We now split the\nenergy-momentum tensor into the anomalous part on the horizon and the\nnormal, outside the horizon part, i.e. $T^\\mu_\\nu = T^\\mu_{(H) \\nu} \\Theta_H\n+ T^\\mu_{(O) \\nu} \\Theta_+ $. $\\Theta _+ = \\Theta (r - r_H -\\epsilon)$\nis a step function\nwith $\\Theta _ + =1$ when $r > r_H + \\epsilon$ and zero otherwise.\n$r_H$ is the location of the\nhorizon and $\\epsilon \\ll 1$. $\\Theta _H = 1- \\Theta _+$ and steps down from\n1 when $r_H \\le r < r_H + \\epsilon$. The subscript $(O)$ denotes the\nenergy-momentum tensor\noff the horizon. The covariant derivative of $T^\\mu_\\nu$ is thus given by:\n\\begin{equation}\n\\nabla_\\mu T^\\mu_\\nu = \\frac{1}{\\sqrt{-g}} \\partial_\\mu \\left(\n\\Theta_H N^\\mu_\\nu \\right)\n+ \\left( T^\\mu_{(O) \\nu} - T^\\mu_{(H) \\nu}\n- \\frac{1}{\\sqrt{-g}} N^\\mu_\\nu \\right) \\delta (r - r_H - \\epsilon )~.\n\\end{equation}\nUsing this result and considering only time-independent\nmetric so that the partial time derivative vanishes we find\nthat the variation of the action \\eqref{variationIntegral} becomes:\n\\begin{eqnarray}\n\\label{nonvanish.variation}\n\\delta S &=& - \\int d^2 x \\; \\biggl[ \\lambda^t \\biggl\\{\n\\partial_r \\left( \\Theta_H N^r_t \\right)\n+ \\left( \\sqrt{-g} T^r_{(O) t} - \\sqrt{-g} T^r_{(H) t}\n- N^r_t \\right) \\delta (r-r_H) \\biggr\\} \\nonumber \\\\\n&+& \\lambda^r \\biggl\\{\n\\partial_r \\left( \\Theta_H N^r_r \\right)\n+ \\left( \\sqrt{-g} T^r_{(O) r} - \\sqrt{-g} T^r_{(H) r}\n- N^r_r \\right) \\delta (r-r_H) \\biggr\\} \\biggr]\n\\end{eqnarray}\nFrom this point on we will not explicitly write the $\\epsilon$'s.\nAll the works on the anomaly method drop the\ntotal derivative term -- $\\partial_r \\left( \\Theta_H N^r_\\nu \\right)$\n-- with the justification that it is canceled by the quantum effects\nof the neglected ingoing modes \\cite{isoa, murata}. \nIn this way we find that \\eqref{nonvanish.variation}, gives \nthe following conditions:\n\\begin{equation}\n\\label{flux}\n\\sqrt{-g} T^r_{(O) t} = \\sqrt{-g} T^r_{(H) t} + N^r_t \\: , \\qquad\n\\sqrt{-g} T^r_{(O) r} = \\sqrt{-g} T^r_{(H) r} + N^r_r ~.\n\\end{equation}\nWe will focus on the first condition since it is the one that\ndeals with flux. The second condition deals with pressures and\nfor Rindler one finds that $N^r _r =0$ so that we get just a\ntrivial continuity condition for the radial pressure from the\nsecond condition. On the other hand we will find that for the\nRindler metric the anomaly is not zero i.e. $N^r _t \\ne 0$. Thus\none needs $\\sqrt{-g} T^r_{(O) t} \\ne \\sqrt{-g} T^r_{(H) t}$.\nIn particular the off-horizon flux must be larger by an \namount $\\Phi = N^r_t$ in order to cancel the anomaly and restore \ngeneral covariance.\nBosons with a thermal spectrum at temperature $T$ have a Planckian distribution\ni.e. $J(E) = (e ^{E\/T} -1) ^{-1}$ where we have taken $k_B =1$. 
The\nflux associated with these bosons is given by \\cite{isoa}:\n\\begin{equation}\n\\label{planck}\n\\Phi = \\frac{1}{2 \\pi} \\int_0^\\infty E ~ J(E) ~ dE = \\frac{\\pi}{12} T^2\n\\end{equation}\nIf one assumes that the fluxes in \\eqref{flux} come from a blackbody\nand so have a thermal spectrum one can use \\eqref{planck} to give their \ntemperature via the association $N^r _t = \\Phi$. This is a second, well-known\ncritique of the anomaly method -- one has to assume the spectrum.\nWe are now ready to apply the above results to the Rindler metric. The standard form\nof the Rindler metric for an observer undergoing acceleration $a$ is\n\\begin{equation}\n\\label{rindler}\nds^2 = -(1 + a r)^2 dt^2 + dr^2 ~.\n\\end{equation}\nIn order to calculate the anomaly we need the Christoffel symbols\nfor the metric \\eqref{rindler}. These are given by\n\\begin{equation}\n\\label{christr1}\n\\Gamma ^r _{tt} = a + a^2 r ~, \\qquad \\Gamma ^t _{tr} = \\frac{a}{1+ar}\n\\end{equation}\nStraightforwardly using these Christoffel symbols in $N^\\mu_\\nu = \n\\frac{1}{96 \\pi} \\epsilon^{\\alpha \\mu} \\partial_\\beta \\Gamma^\\beta_{\\; \\alpha \\nu}$ \none arrives at\n\\begin{equation}\nN_t^r=\\frac{\\epsilon^{tr}}{96\\pi}\\partial_r \\Gamma^r_{tt}=\\frac{a^2}{96\\pi}~.\n\\end{equation}\nCombining this with $N^r _t = \\Phi = \\frac{\\pi}{12} T^2$, one finds \na temperature $T = \\frac{a}{2\\sqrt{2} \\pi}$, which is a factor\nof $\\frac{1}{\\sqrt{2}}$ smaller than the correct Unruh temperature \nof $\\frac{a}{2 \\pi}$. The source of the trouble can be traced to the fact that \nthe standard form of the Rindler metric \\eqref{rindler} covers the region in front\nof the horizon ($r=-1\/a$) twice. Thus in effect the flux is spread over\na larger spatial region which leads to a smaller temperature. A similar \nproblem occurs for the Schwarzschild metric in isotropic\ncoordinates \\cite{wu} where one finds twice the correct Hawking\ntemperature using the anomaly method. The reason is the same: isotropic coordinates\ndouble cover the region in front of the horizon and thus the flux is\nspread over an effectively larger region. \nIsotropic radial coordinates, $\\rho$, are related to\nSchwarzschild radial coordinates, $r$, via $r=\\rho \\left( 1 + \\frac{M}{2 \\rho} \\right) ^2$,\nwhere $M$ is the mass of the black hole. From this one can see that the region\n$r \\ge 2M$ is covered twice as $\\rho$ ranges from $0$ to $\\infty$ (the region\n$r \\ge 2M$ is covered once by $\\frac{M}{2} \\le \\rho \\le \\infty$ and once by\n$0 \\le \\rho \\le \\frac{M}{2}$). The reason why the Unruh temperature is reduced\nby $\\frac{1}{\\sqrt{2}}$ while the Hawking temperature is reduced by $\\frac{1}{2}$\nis not clear although it may be related to the fact that the double covering of\nRindler is symmetric (we show this immediately below) while the double covering of\nisotropic coordinates is not.\n\nHowever even the above analysis which leads to the incorrect temperature is\nsuspect. Taking the Christoffel symbols of \\eqref{christr1} and using them in\n\\eqref{anomaly} one finds that the right-hand side is zero, i.e. the anomaly\nvanishes (although $N^r _t$ does not vanish) and thus there is no need to have a\nflux in order to cancel the anomaly at the Rindler horizon. Thus since there is\nno anomaly the Unruh temperature is zero according to this method. 
In any case for the\nstandard form of the Rindler metric \\eqref{rindler} whether one naively uses \n$N ^r _t = \\Phi$ or takes into account the fact that the anomaly is zero one\nfinds that the consistent anomaly method gives the wrong temperature --\neither $T = \\frac{a}{2\\sqrt{2} \\pi}$ or $T=0$.\n\nOne might suspect that the source of the trouble is the fact that $\\det (g) =0$ for the\nform of the Rindler metric given in \\eqref{rindler}. This was suggested as the source\nof the problem for the Schwarzschild metric in isotropic coordinates \\cite{wu}.\nIn order to obtain the correct Unruh temperature via the anomaly method\none should transform to a form of the Rindler metric which covers\nboth regions -- in front and behind the horizon. Such a ``good\" form of the Rindler \nmetric is obtained by applying the following coordinate transformation\n\\begin{eqnarray}\n\\label{coordtrans1}\nT = \\frac{\\sqrt{1+2ar}}{a} \\sinh (at) \\qquad {\\rm and} \\qquad R=\n\\frac{\\sqrt{1+2ar}}{a} \\cosh (at) \\qquad {\\rm for} \\qquad r \\geq -\\frac{1}{2a}~,\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\label{coordtrans2}\nT = \\frac{\\sqrt{| 1+2ar |}}{a} \\cosh (at) \\qquad {\\rm and} \\qquad R=\n\\frac{\\sqrt{| 1+2ar |}}{a} \\sinh (at) \\qquad {\\rm for} \\qquad r \\leq -\\frac{1}{2a}~.\n\\end{eqnarray}\nto the Minkowski metric -- $ds^2 =-dT^2 + dR^2$. In\n\\eqref{coordtrans1} and \\eqref{coordtrans2},\n$a$ is the acceleration of the noninertial observer. The Rindler\nmetric obtained after\nperforming these coordinate transformations is the following:\n\\begin{equation}\n\\label{rindler2}\nds^2 = -(1 + 2\\,a\\,r)dt^2 + (1 + 2\\,a\\,r)^{-1} dr^2 ~.\n\\end{equation}\nNotice that in this final form we have removed the absolute value sign from\naround the factor $1 + 2 \\,a\\,r$. Also unlike the standard form of Rindler in\n\\eqref{rindler} the sign in front of the time part changes when \nthe horizon at $r=-1\/2a$ is crossed. \nThis metric can also be found directly from the standard Rindler\nmetric \\eqref{rindler} by\nperforming the following coordinate transformation\n\\begin{equation}\n\\label{transform}\n(1+ a \\, r_{std})= \\sqrt{| 1+ 2\\,a\\,r |} ~.\n\\end{equation}\nAs $r$ ranges from $+\\infty$ to $-\\infty$ we find that $r_{std}$ runs from $+\\infty$ down to \n$r_{std} = -1\/a$ and then runs back out to $+\\infty$. Using the metric given by \\eqref{rindler2}, the\nChristoffel symbols are\n\\begin{equation}\n\\label{christr2}\n\\Gamma ^r _{tt} = a(1 + 2 a r) ~, \\qquad \\Gamma ^t _{tr} = \\frac{a}{1+2ar} ~, \\qquad \\Gamma ^r _{rr} = -\\frac{a}{1+2ar}~.\n\\end{equation}\nUsing these Christoffel symbols in \\eqref{anomaly} one finds that the anomaly vanishes (i.e.\n$\\nabla _\\mu T^\\mu _\\nu =0$) so that one gets a temperature $T=0$. Note for the Rindler metric in\nthe form \\eqref{rindler2} $\\det (g) = 1$ so we do not have the problems and ambiguity \nof having $\\det (g) =0$ associated with the standard form of the Rindler metric \\eqref{rindler}.\nIf one ignores the fact that the anomaly vanishes and naively applies the formula\n\\begin{equation}\nN_t^r = \\frac{\\epsilon^{tr}}{96\\pi}\\partial_r \\Gamma^r_{tt} =\n\\frac{a^2}{48\\pi} = \\Phi = \\frac{\\pi}{12} T^2~,\n\\end{equation}\none gets a temperature of $T = \\frac{a}{2\\pi}$, which is the correct Unruh temperature.\nHowever given that the anomaly explicitly vanishes we can find no justification for\nthis procedure. 
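\n\nFor readers who wish to reproduce the algebra behind the two temperatures quoted above, the following short symbolic computation (a sketch written for this comparison using the sympy package, not taken from any of the cited works) applies the naive identification $N^r _t = \\Phi$ of Eq.~\\eqref{planck} to both forms of the Rindler metric; it does not address the separate point, emphasised above, that the full anomaly \\eqref{anomaly} vanishes in both cases.\n\\begin{verbatim}\nimport sympy as sp\n\na, r, t = sp.symbols('a r t', positive=True)\ncoords = [t, r]\n\ndef christoffel(g, l, m, n):\n    # Gamma^l_{mn} for a 2x2 static metric g(t, r)\n    ginv = g.inv()\n    return sp.simplify(sum(ginv[l, k] * (sp.diff(g[k, m], coords[n])\n                                         + sp.diff(g[k, n], coords[m])\n                                         - sp.diff(g[m, n], coords[k])) / 2\n                           for k in range(2)))\n\ndef naive_temperature(g):\n    # N^r_t = (1/96 pi) d_r Gamma^r_tt, equated to the thermal flux pi T^2 / 12\n    N_rt = sp.diff(christoffel(g, 1, 0, 0), r) / (96 * sp.pi)\n    return sp.simplify(sp.sqrt(12 * N_rt / sp.pi))\n\ng_std = sp.Matrix([[-(1 + a*r)**2, 0], [0, 1]])              # Eq. (rindler)\ng_alt = sp.Matrix([[-(1 + 2*a*r), 0], [0, 1/(1 + 2*a*r)]])   # Eq. (rindler2)\n\nprint(naive_temperature(g_std))   # equals a/(2*sqrt(2)*pi)\nprint(naive_temperature(g_alt))   # equals a/(2*pi), the Unruh value\n\\end{verbatim}\n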
\n\nSince the above analysis was done using the consistent anomaly, which is \nnon-covariant, one might think that this is the source of the problem. However \nif one uses the covariant anomaly \\cite{isoa,banerjee2} (which as the \nname implies is covariant) it is immediately apparent that in any\ncoordinate system the anomaly method will fail for Rindler spacetime. \nThe covariant anomaly is given by\n\\begin{equation}\n\\label{covariant}\n\\nabla_\\mu T^\\mu_\\nu =\n\\frac{1}{96 \\pi \\sqrt{-g}} \\epsilon_{\\nu \\lambda}\n\\partial^\\lambda R\n\\end{equation}\nwhere $R$ is the Ricci scalar. This method yields zero flux and\nzero temperature for Rindler spacetime, since Rindler has a vanishing Ricci scalar\nregardless of the specific form of the metric. An additional problem with the \ncovariant method is that it gives zero Gibbons-Hawking temperature when applied to \nde Sitter spacetime. The 1+1 dimensional de Sitter metric in static coordinates is\n\\begin{equation}\n\\label{desitter}\nds^2 = - \\left( 1-\\frac{r^2}{\\alpha^2} \\right) dt^2 + \\frac{dr^2}{\\left(\n1-\\frac{r^2}{\\alpha^2} \\right)}\n\\end{equation} \nThe Ricci scalar of the 1+1 de Sitter metric is $R = \\frac{2}{\\alpha^2} = const$. Thus\nthe covariant anomaly in \\eqref{covariant} vanishes and the temperature of the\nGibbons-Hawking radiation is wrongly given as $T=0$. On the other hand the consistent anomaly\nmethod does give the correct Gibbons-Hawking temperature. The Christoffel symbols for\nthe metric \\eqref{desitter} are\n\\begin{equation}\n\\label{christ-des}\n\\Gamma ^r _{tt} = \\frac{r(r^2-\\alpha^2)}{\\alpha^4} ~, \\qquad \\Gamma ^t _{tr} = \\frac{r}{r^2-\\alpha^2} ~, \\qquad \\Gamma ^r _{rr} = -\\frac{r}{r^2-\\alpha^2}~.\n\\end{equation}\nUsing these one finds that the anomaly in \\eqref{anomaly} is not zero and applying\n$N^r _t = \\Phi$ at the horizon, $r=\\alpha$, yields $T = \\frac{1}{2 \\pi \\alpha}$,\nwhich is the correct Gibbons-Hawking temperature.\n\n\\section{WKB-like calculation: Temporal contribution}\n\nIn the previous section we found that the consistent and covariant anomaly methods\ndid not give the correct Unruh temperature for either form\nof the Rindler metric \\eqref{rindler} or \\eqref{rindler2}. (The consistent \nanomaly method did give the correct temperature for the Rindler metric\nin form \\eqref{rindler2} if one ignored the fact that the anomaly was zero and \nnaively applied $N^r _t = \\Phi$).\nIn this section we examine how the WKB method does in \ncalculating the Unruh temperature of Rindler spacetime. \n \nThe Hamilton-Jacobi equations give a simple way to do the WKB-like\ncalculations. For a scalar field of mass $m$ in a gravitational background, \n$g_{\\mu \\nu}$, the Hamilton-Jacobi equations are\n\\begin{equation}\n\\label{hamiltoneq}\ng^{\\mu\\nu}(\\partial_\\mu S)(\\partial_\\nu S) + m^2=0 ~,\n\\end{equation}\nwhere $S(x_\\mu )$ is the action in terms of which the scalar field\nis $\\phi (x) \\propto \\exp [ - \\frac{i}{\\hbar}\\, S(x) + ... ]$.\nFor stationary spacetimes one can split the action into a\ntime and spatial part i.e. $S(x^\\mu)=Et+S_0(\\vec x)$. $E$ is the particle\nenergy and $x^\\mu = (t, \\vec{x})$. Using \\eqref{hamiltoneq} one finds \n\\cite{pilling,akhmedov} that the spatial part of the action\nhas the general solution $S_0=\\int p_r dr$ with $p_r$ being the\nradial, canonical\nmomentum from the Hamiltonian. 
If $S_0$ has an imaginary part this indicates\nthat the spacetime radiates and the temperature of the\nradiation is obtained by equating the Boltzmann\nfactor $\\Gamma\\propto \\exp(\\frac{-E}{T})$, with the quasi-classical\ndecay rate given by\n\\begin{equation}\n\\label{decay}\n\\Gamma\\propto \\exp \\left[ - {\\rm Im} \\Big( \\oint p_r dr \\Big) \\right]=\n\\exp \\left[ - {\\rm Im} \\Big( \\int p_r^{out} dr\\; - \\int p_r^{in} dr\n\\Big) \\right]\n\\end{equation}\nThe closed path in \\eqref{decay} goes across the horizon and comes\nback. The temperature associated\nwith the radiation is thus given by $T=\\frac{\\hbar E}{{\\rm Im} (\\oint p_r dr)}$.\nIn almost all of the WKB\/tunneling literature\n$\\oint p_r dr$ is incorrectly replaced by $\\pm 2 \\int p_r ^{out,in}\ndr$ (the latter is not invariant under canonical transformations). The two\nexpressions are equivalent only if the ingoing and outgoing \nmomenta have the same magnitude. One much\nused set of coordinates for which this is not the case are the Painlev{\\'e}-Gulstrand\ncoordinates. These points are discussed in detail in \\cite{pilling, akhmedov, chowdhury}. \n\nUsing the Hamilton--Jacobi equations\n\\eqref{hamiltoneq} with the alternative form of the Rindler\nmetric \\eqref{rindler2} one finds the following solution for $S_0$\n\\begin{equation}\n\\label{Uneff2}\nS_0 =\\pm \\int_{-\\infty}^\\infty \\frac{\\sqrt{E^2 - m^2(1+2\\,a\\,r)}}{(1+2\\,a\\,r)} ~dr\n\\end{equation}\nwhere (+) is outgoing and (-) ingoing modes. Since\nthe magnitude of the outgoing and ingoing $S_0$ are the same, \nusing either $\\oint p_r dr$ or\n$\\pm 2 \\int p_r ^{Out, In} dr$ gives an equivalent result. \n$S_0$ has an imaginary contribution from the pole at $r=-1\/2a$. \nTo see this explicitly we parameterize the semi-circular contour near \n$r=-1\/2a$ by $r= -\\frac{1}{2a} + \\epsilon e ^{i \\theta}$ where\n$\\epsilon \\ll 1$ and $\\theta$ goes from $0$ to $\\pi$ for the ingoing path and\n$\\pi$ to $2 \\pi$ for the outgoing path. With this parameterization the contribution\nto the integral in \\eqref{Uneff2} coming from the pole is\n\\begin{equation}\n\\label{Uneff2a}\nS_0 = \\pm \\int \\frac{\\sqrt{E^2 - m^2 \\epsilon e^{i \\theta}}}{2 a \\epsilon e^{i \\theta}} ~\ni \\epsilon e^{i \\theta} d \\theta = \\pm \\frac{{\\rm i} \\, \\pi \\, E}{2a} ~.\n\\end{equation}\nIn the second expression we have taken the limit $\\epsilon \\rightarrow 0$.\nUsing this result in \\eqref{decay} apparently gives twice the correct \nUnruh temperature. \n\nAt first glance the standard form of the Rindler metric \\eqref{rindler} appears to give the\ncorrect Unruh temperature. Using the Hamilton-Jacobi equations one finds the following \nsolution for $S_0$\n\\begin{equation}\n\\label{Uneff}\nS_0 = \\pm \\int ^\\infty _{-\\infty} \\frac{dr_{std}}{1 + a\\,r_{std}}\\, \\sqrt{E^2 - m^2\\, (1 +\na\\, r_{std})^2} ~,\n\\end{equation}\nIn this case it appears as if the contour integration of\n\\eqref{Uneff} around the pole at $r=-1\/a$ would yield\nvalue $S_0 = \\pm \\frac{{\\rm i} \\, \\pi \\, E}{a}$. However, since the integrals\nin \\eqref{Uneff2} and \\eqref{Uneff} are related by the coordinate \ntransform \\eqref{transform} (which is just a change of variables) the value of the integral\nshould be the same. In detail using \\eqref{transform} one finds that the\nparameterization of the contour in \\eqref{Uneff2a} becomes\n$1 + a r_{std} = \\sqrt{\\epsilon} e^{i \\theta \/2}$. 
From this one sees\nthat the semi-circular contour of \\eqref{Uneff2a} gets transformed into\na quarter circle (i.e. one must transform both the integrand and\nthe measure). In terms of residue this means that for \\eqref{Uneff2a} one\nhas $i \\pi \\times {\\rm Residue}$ while for \\eqref{Uneff} one has\n$i \\frac{\\pi}{2} \\times {\\rm Residue}$. Thus the imaginary contributions to \n$S_0$ are the same for both \\eqref{Uneff2a} and \\eqref{Uneff} namely $S_0 = \\pm \\frac{{\\rm i} \\, \\pi \\, E}{2a}$.\nThis subtlety in the transformation of the contour is exactly parallel\nto what occurs for the Schwarzschild metric in the Schwarzschild form versus\nthe isotropic form \\cite{pilling,akhmedov}. Thus we have an apparent \nfactor of two discrepancy for calculating the Unruh temperature using the\nWKB\/tunneling method. A possible resolution of this factor of two was given \nin \\cite{mitra} where an integration constant was inserted into expressions like\n\\eqref{Uneff} or \\eqref{Uneff2} and then adjusted so as to obtain the\ndesired answer. This resolution lacked any physical motivation for choosing \nthe specific value of the imaginary part of the integration constant. \n\nThe actual resolution to this discrepancy is that there is a \ncontribution coming from the $E \\, t$ part of\n$S(x_\\mu )$ in addition to the contribution coming from $S_0$\n\\cite{grf, akhmedova, nakamura}.\nThe source of this temporal contribution can be seen by noting that\nupon crossing the horizon at\n$r=-1\/2a$, the $t,r$ coordinates reverse their time-like\/space-like character.\nIn more detail when the horizon is crossed one can see from equations\n\\eqref{coordtrans1} and \\eqref{coordtrans2} that the time coordinate\nchanges as $t\\rightarrow t - \\frac{i\\pi}{2a}$ (along with a factor of\n$i$ coming from the square root). Thus when the horizon is crossed\nthere will be an imaginary contribution coming from the $E \\, t$ term of\n$S(x_\\mu)$ of the form ${\\rm Im}(E\\Delta t)= - \\frac {\\pi E}{2 a}$. For\na round trip one will have a contribution of ${\\rm Im}(E\\Delta\nt)_{round-trip}= - \\frac {\\pi E}{a}$.\nAdding this temporal contribution\nto the spatial contribution from \\eqref{decay} now gives the correct Unruh\ntemperature for all forms of the Rindler metric using the WKB\/tunneling method. \n\nAs a final note in addition to obtaining the Unruh temperature via\n\\eqref{decay} it is also possible to use the detailed balance method of \\cite{padman}\nto obtain the correct Unruh temperature \\cite{banerjee3}. For detailed balance \none sets $P_{emission}\/P_{absorption} = \\exp \\left(-\\frac{E}{T}\\right)$ where\n$P_{emission, absorption} = |\\phi _{out, in} |^2 = \\exp \\left[ - 2 ~ {\\rm Im} \\int p_r^{out, in} dr \\right]$.\nOne should add the temporal part to this but since the temporal part is the same for\noutgoing and ingoing paths (emission and absorption) and since the formula involves\nthe ratio $P_{emission}\/P_{absorption}$ the temporal part will cancel out. This explains \nwhy the detailed balance method was able to apparently give the correct result\nwhile ignoring the temporal part. However, as point out in \\cite{banerjee3} one should have\nthe physical condition, $P_{absorption} =1$, since classically there\nis no barrier for an ingoing particle to cross the horizon. The condition is only\nachieved when one takes into account the temporal contribution. 
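\n\nFor clarity, the bookkeeping behind these statements can be collected in one place. Setting $\\hbar = k_B = 1$ and keeping only the magnitudes of the two imaginary contributions discussed above,\n\\begin{equation}\n{\\rm Im} \\oint p_r dr = \\frac{\\pi E}{a}~, \\qquad \\Big| {\\rm Im}(E\\Delta t)_{round-trip} \\Big| = \\frac{\\pi E}{a}~, \\qquad \\Gamma \\propto e^{-2 \\pi E\/a} = e^{-E\/T} \\;\\; \\Rightarrow \\;\\; T = \\frac{a}{2 \\pi}~,\n\\end{equation}\nwhereas the spatial piece alone would give $T=a\/\\pi$, twice the Unruh temperature.\n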
\n\n\\section{Conclusion}\n\nIn this Letter we have made a comparison and critique of the anomaly and WKB\/tunneling\nmethods of obtaining radiation from a given spacetime. For Rindler spacetime we found that\nboth the consistent and covariant anomaly method gave an incorrect\nUnruh temperature of $T=0$ since in both cases the anomaly vanished. \nIn the case of the consistent anomaly method if one ignored the vanishing of the\nanomaly and naively applied $N^r _t = \\Phi$ one obtained an incorrect Unruh\ntemperature of $T = \\frac{a}{2\\sqrt{2} \\pi}$ for the form of the Rindler metric in\n\\eqref{rindler} and the {\\it correct} Unruh temperature of $T = \\frac{a}{2 \\pi}$ for\nthe form of the Rindler metric in \\eqref{rindler2}. However we cannot\nfind a justification for this naive application of $N^r _t = \\Phi$ in the\ncase of \\eqref{rindler2} since by \\eqref{anomaly} $\\nabla_\\mu T^\\mu_{(H) \\nu} =0$.\nWe also examined a problem with the covariant anomaly method in connection\nwith Gibbons-Hawking radiation of de Sitter spacetime. Since de Sitter spacetime\nhas a constant Ricci scalar the covariant anomaly \\eqref{covariant} is zero. \nThus the covariant anomaly gives a Gibbons-Hawking\ntemperature of zero. On the other hand the consistent anomaly is non-zero\nand gives the correct Gibbons-Hawking temperature.\n\nThe WKB\/tunneling method works for any\nform of the metric for Rindler spacetime, but there are\nsubtle features. In particular there is a temporal\ncontribution to $S(x_\\mu)$ coming from a change in the time\ncoordinate upon crossing the horizon. In addition there is the question of\nwhether one should exponentiate $\\oint p_r dr$ or $\\pm 2 \\int p_r ^{Out, In} dr$ \nto get the correct decay rate.\nThis confusion has led to a wrong factor of two in calculating, for example, the\nHawking temperature \\cite{pilling,akhmedov}. There was an ad hoc \nattempt at resolving this factor of two by inserting an integration \nconstant \\cite{mitra} into expressions like \\eqref{Uneff}\nor \\eqref{Uneff2} and then adjusting to get the expected answer. Physically \nthis resolution lacked motivation. In this Letter we have shown \nthat the arbitrarily adjusted integration constant essentially plays the role of\nthe temporal contribution discussed above. Once this temporal\ncontribution is taken into account one obtains\nthe correct temperature regardless of which form of the metric is used.\nAlthough we have focused on Rindler spacetime and Unruh radiation,\nour results should be extendable to other\nspacetimes which exhibit Hawking-like radiation.\n\nRecently there has been work which attempts to connect the WKB\/tunneling method and\nthe anomaly method \\cite{ghosh}. The idea behind this unification of the two methods\nis that some anomalies can be viewed as the effect of spectral flow of the energy \nlevels. This spectral flow is analogous to tunneling thus giving the\nconnection. In the present work we have shown that the \nboth anomaly methods fail for Rindler spacetime while the\nWKB\/tunneling method recovers the correct Unruh temperature. Further the covariant anomaly method \nfails for de Sitter spacetime while the consistent anomaly method and \nWKB\/tunneling method work. The results of this work indicate that the connection \nbetween the anomaly method and the WKB\/tunneling method is not valid for\nall spacetimes.\n\n\\begin{center}\n\\bf{Acknowledgment}\n\\end{center}\nWe would like to acknowledge discussions with E.T. Akhmedov and R. Banerjee. 
\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section{Introduction}\\label{sec:intro}\n\n\nWe consider the fundamental problem of making inference about an\nunknown function $f$ that predicts an output $Y$ using a $p$\ndimensional vector of inputs $x = (x_1,\\dots,x_p)$ when\n\\begin{equation}\nY=f(x) + \\epsilon, \\qquad \\epsilon \\sim N(0,\\sigma^2).\n\\label{basemodel}\n\\end{equation}\nTo do this, we consider modeling or at least approximating $f(x)\n= E(Y | x)$, the mean of $Y$ given $x$, by a sum of $m$\nregression trees $f(x) \\approx h(x) \\equiv \\sum_{j=1}^m g_j(x)$\nwhere each $g_j$ denotes a regression tree. Thus, we approximate\n(\\ref{basemodel}) by a sum-of-trees model\n\\begin{equation}\nY = h(x) + \\epsilon, \\qquad \\epsilon\n\\sim N(0,\\sigma^2). \\label{sstmodel1}\n\\end{equation}\n\nA sum-of-trees model is fundamentally an additive model with multivariate components.\nCompared to generalized additive models based on sums of low dimensional smoothers,\nthese multivariate components can more naturally incorporate interaction effects.\nAnd compared to a single tree model, the sum-of-trees can more easily\nincorporate additive effects.\n\n\n\nVarious methods which combine a set of tree models, so called\nensemble methods, have attracted much attention. These include\nboosting [\\citeasnoun{FreSch1997}, \\citeasnoun{Fri2001}], bagging [\\citet{Br96a}] and random\nforests [\\citet{Bre2001}], each of which use\ndifferent techniques to fit a linear combination of trees. Boosting\nfits a sequence of single trees, using each tree to fit data\nvariation not explained by earlier trees in the sequence. Bagging\nand random forests use randomization to\ncreate a large number of independent trees, and then reduce\nprediction variance by averaging predictions across the trees. Yet\nanother approach that results in a linear combination of trees is\nBayesian model averaging applied to the posterior arising from a\nBayesian single-tree model as in \\citeasnoun{ChipGeorMcCu1998a}\n(hereafter CGM98), \\citeasnoun{DeniMallSmit1998}, \\citeasnoun{Blan2004} and\n\\citeasnoun{WuTjeWes2007}. Such model averaging uses posterior\nprobabilities as weights for averaging the predictions from\nindividual trees.\n\n\nIn this paper we propose a Bayesian approach called BART (Bayesian\nAdditive Regression Trees) which uses a sum of trees to model or\napproximate $f(x) = E(Y | x)$. The\nessential idea is to elaborate the sum-of-trees model\n(\\ref{sstmodel1}) by imposing a prior that regularizes the fit\nby keeping the individual tree effects small. In effect, the $g_j$'s become\na dimensionally adaptive random basis of ``weak learners,'' to\nborrow a phrase from the boosting literature. By weakening the $g_j$ effects,\nBART ends up with a sum of trees, each of which explains a small and\ndifferent portion of $f$. Note that BART is not equivalent to posterior averaging\nof single tree fits of the entire function $f$.\n\nTo fit the sum-of-trees model, BART uses a tailored version of\nBayesian backfitting MCMC [\\citet{HastTibs2000}] that\niteratively constructs and fits successive residuals. Although\nsimilar in spirit to the gradient boosting approach of\n\\citeasnoun{Fri2001}, BART differs in both how it weakens the\nindividual trees by instead using a prior, and how it performs the\niterative fitting by instead using Bayesian backfitting on a fixed\nnumber of trees. 
Conceptually, BART can be viewed as a Bayesian\nnonparametric approach that fits a parameter rich model using\na strongly influential prior distribution.\n\nInferences obtained from BART are based on successive iterations of the backfitting algorithm\nwhich are effectively an MCMC sample from the induced posterior over the sum-of-trees model space.\nA single posterior mean estimate of $f(x) = E(Y | x)$ at\nany input value $x$ is obtained by a simple average of these successive\nsum-of-trees model draws evaluated at $x$. Further, pointwise uncertainty intervals for $f(x)$ are\neasily obtained from the corresponding quantiles of the sample of draws.\nPoint and interval estimates are similarly obtained for functionals of $f$, such as partial dependence functions which reveal the marginal effects\nof the $x$ components.\nFinally, by keeping track of the relative frequency with\nwhich $x$ components appear in the sum-of-trees model iterations, BART can be used to identify\nwhich components are more important for explaining\nthe variation of $Y$. Such variable selection information is model-free in the sense that it is not based on the usual assumption of an encompassing parametric model.\n\nTo facilitate the use of the BART methods described in this paper,\nwe have provided open-source software implementing BART as a stand-alone package or with an\ninterface to R, along with full documentation and examples. It is\navailable as the \\texttt{BayesTree} library in R at \\url{http:\/\/cran.r-project.org\/}.\n\nThe remainder of the paper is organized as follows. In Section\n\\ref{sec:model} the BART model is outlined. This consists of the\nsum-of-trees model combined with a regularization prior. In\nSection \\ref{sec:postcalc} a Bayesian backfitting MCMC algorithm\nand methods for inference are described. In Section \\ref{sec:classification}\nwe describe a probit extension of BART for classification of binary $Y$.\nIn Section \\ref{sec:examples} examples, both simulated and real, are used to\ndemonstrate the potential of BART. Section \\ref{sec:executiontime} provides studies of execution time.\nSection \\ref{sec:related} describes extensions and\na variety of recent developments and applications of BART based on an early version of\nthis paper. Section~\\ref{sec:disc} concludes with a discussion.\n\n\\section{The BART model}\\label{sec:model}\n\nAs described in the \\hyperref[sec:intro]{Introduction}, the BART model consists of two parts: a sum-of-trees model\nand a regularization prior on the parameters of that model. We describe each of these in detail in the following subsections.\n\n\\subsection{A sum-of-trees model}\n\nTo elaborate the form of the sum-of-trees mod\\-el~(\\ref{sstmodel1}), we begin by\nestablishing notation for a single tree model. Let $T$ denote a\nbinary tree consisting of a set of interior node decision rules\nand a set of terminal nodes, and let $M =\n\\{\\mu_1,\\mu_2,\\ldots,\\mu_b\\}$ denote a set of parameter values\nassociated with each of the $b$ terminal nodes of $T$. The\ndecision rules are binary splits of the predictor space of the\nform $\\{x \\in A\\}$ vs $\\{x \\notin A\\}$ where $A$ is a subset of\nthe range of~$x$. 
These are typically based on the single components of\n$x = (x_1,\\ldots,x_p)$ and are of the\nform $\\{x_i \\le c\\}$ vs $\\{x_i > c\\}$ for continuous $x_i$.\nEach $x$ value is associated with a single\nterminal node of $T$ by the sequence of decision rules from top to\nbottom, and is then assigned the $\\mu_i$ value associated with\nthis terminal node. For a given $T$ and $M$, we use $g(x; T,M)$ to\ndenote the function which assigns a $\\mu_i \\in M$ to $x$.\nThus,\n\\begin{equation}\\label{stmodel}\nY = g(x; T,M) + \\epsilon,\\qquad\n \\epsilon \\sim N(0,\\sigma^2)\n\\end{equation}\nis a single tree model of the form considered by CGM98. Under (\\ref{stmodel}),\nthe conditional mean of $Y$ given $x$, $E(Y | x)$ equals the\nterminal node parameter $\\mu_i$ assigned by $g(x;T,M)$.\n\nWith this notation, the sum-of-trees model (\\ref{sstmodel1}) can\nbe more explicitly expressed as\n\\begin{equation}\\label{sstmodel2}\nY = \\sum_{j=1}^m g(x; T_j,M_j) +\n\\epsilon,\\qquad\n \\epsilon \\sim N(0,\\sigma^2),\n\\end{equation}\nwhere for each binary regression tree $T_j$ and its associated terminal node parameters $M_j$, $g(x; T_j,M_j)$ is the function which assigns $\\mu_{ij} \\in M_j$ to $x$.\nUnder (\\ref{sstmodel2}), $E(Y | x)$ equals the sum of all the terminal node\n$\\mu_{ij}$'s assigned to $x$ by the $g(x; T_j,M_j)$'s.\nWhen the number of trees $m > 1$, each $\\mu_{ij}$ here is merely a part of $E(Y | x)$, unlike the single tree model (\\ref{stmodel}). Furthermore, each such $\\mu_{ij}$ will represent a main effect when $g(x;\nT_j,M_j)$ depends on only one component of $x$ (i.e., a single\nvariable), and will represent an interaction effect when $g(x;\nT_j,M_j)$ depends on more than one component of $x$ (i.e., more than\none variable). Thus, the sum-of-trees model can incorporate both\nmain effects and interaction effects. And because (\\ref{sstmodel2})\nmay be based on trees of varying sizes, the interaction effects may\nbe of varying orders. In the special case where every terminal node\nassignment depends on just a single component of $x$, the\nsum-of-trees model reduces to a simple additive function, a sum of step functions of the\nindividual components of $x$.\n\nWith a large number of trees, a sum-of-trees model gains increased\nrepresentation flexibility which, as we'll see, endows BART with\nexcellent predictive capabilities. This\nrepresentational flexibility is obtained by rapidly increasing the\nnumber of parameters. Indeed, for fixed $m$, each sum-of-trees model\n(\\ref{sstmodel2}) is determined by $(T_1,M_1),\\ldots,(T_m,M_m)$ and\n$\\sigma$, which includes all the bottom node parameters as well as\nthe tree structures and decision rules. Further, the\nrepresentational flexibility of each individual tree leads to\nsubstantial redundancy across the tree components. Indeed, one can\nregard $\\{g(x; T_1,M_1),\n \\ldots , g(x; T_m,M_m)\\}$ as an ``overcomplete\nbasis'' in the sense that many different choices of $(T_1,M_1),\\ldots,(T_m,M_m)$ can lead to an identical function $\\sum_{j=1}^m g(x; T_j,M_j)$.\n\n\\subsection{A regularization prior} \\label{sec:prior}\n\nWe complete the BART model specification by imposing a prior over all\nthe parameters of the sum-of-trees model, namely,\n$(T_1,M_1),\\ldots,(T_m,M_m)$ and $\\sigma$. As discussed below, we\nadvocate specifications of this prior that effectively regularize the\nfit by keeping the individual tree effects from being unduly\ninfluential. 
Without such a regularizing influence, large tree\ncomponents would overwhelm the rich structure of (\\ref{sstmodel2}),\nthereby limiting the advantages of the additive representation both in\nterms of function approximation and computation.\n\nTo facilitate the easy implementation of BART in practice, we recommend\nautomatic default specifications below which appear to be remarkably\neffective, as demonstrated in the many examples of Section\n\\ref{sec:examples}. Basically we proceed by first reducing the prior\nformulation problem to the specification of just a few interpretable\nhyperparameters which govern priors on $T_j$, $M_j$ and $\\sigma$. Our\nrecommended defaults are then obtained by using the observed variation\nin $y$ to gauge reasonable hyperparameter values when external\nsubjective information is unavailable. Alternatively, one can use the\nconsiderations below to specify a range of plausible hyperparameter\nvalues and then use cross-validation to select from these values. This\nwill of course be computationally more demanding. We should also\nmention that although we sacrifice Bayesian coherence by using the data\nto calibrate our priors, our overriding concern is to make sure that\nour priors are not in severe conflict with the data.\n\n\n\\subsubsection{Prior independence and symmetry}\\label{sec:simplification}\n\n\nSpecification of our regularization prior is vastly simplified by restricting attention to\npriors for which\n\\begin{eqnarray}\\label{indep1}\np((T_1,M_1),\\ldots,(T_m,M_m),\\sigma)\n & = & \\biggl[\\prod_j p(T_j,M_j) \\biggr] p(\\sigma)\n \\nonumber\\\\[-8pt]\\\\[-8pt]\n & = & \\biggl[\\prod_j p(M_j | T_j) p(T_j) \\biggr]\\nonumber\n p(\\sigma)\n\\end{eqnarray}\nand\n\\begin{equation}\\label{indep2}\np(M_j | T_j) = \\prod_i{p(\\mu_{ij} | T_j)},\n\\end{equation}\nwhere $\\mu_{ij} \\in M_j$. Under such\npriors, the tree components $(T_j,M_j)$ are independent of each other and of\n $\\sigma$, and the terminal node parameters of every tree are\nindependent.\n\n\nThe independence restrictions above simplify the prior specification\nproblem to the specification of forms for just $p(T_j),\np(\\mu_{ij} | T_j)$ and $p(\\sigma)$, a specification which we further simplify by\nusing identical forms for all $p(T_j)$ and for all\n$p(\\mu_{ij} | T_j)$. As described in the ensuing\nsubsections, for these we use the same prior forms proposed by CGM98 for\nBayesian CART. In addition to their valuable computational\nbenefits, these forms are controlled by just a few interpretable\nhyperparameters which can be calibrated using the data to yield\neffective default specifications for regularization of the sum-of-trees model.\nHowever, as will be seen, considerations for the choice of these hyperparameter\nvalues for BART are markedly different than those for Bayesian CART.\n\n\n\\subsubsection{The $T_j$ prior}\\label{sec:treeprior}\n\nFor $p(T_j)$, the form recommended by CGM98 is easy to specify and\ndovetails nicely with calculations for the backfitting MCMC\nalgorithm described later in Section \\ref{sec:mcmc}. It is\nspecified by three aspects: (i) the probability that a node at\ndepth $d$ ($= 0, 1,2,\\ldots$) is nonterminal, given by\n\\begin{equation}\\label{treeprior}\n\\alpha (1+d)^{-\\beta},\\qquad\n\\alpha \\in (0,1), \\beta \\in [0, \\infty),\n\\end{equation}\n(ii) the distribution on the splitting variable assignments at\neach interior node, and (iii) the distribution on the splitting\nrule assignment in each interior node, conditional on the splitting\nvariable. 
For (ii) and (iii) we use the simple defaults used by
CGM98, namely, the uniform prior on available variables for (ii)
and the uniform prior on the discrete set of available splitting
values for (iii). Although not strictly coherent from the
Bayesian point of view, this last choice has the appeal of invariance
under monotone transformations of the splitting variables.

\iffalse
# R calculations for the prior on tree size when alpha=.95, beta=2...
pvec <- .95/(1:4)^2   # pvec[d+1] = prob. that a node at depth d is nonterminal
p1 <- pvec[1]
p2 <- pvec[2]
p3 <- pvec[3]
p4 <- pvec[4]
# 1 terminal node (the root never splits): 1 - p1 = .05
# 2 terminal nodes
p1*(1-p2)^2
# [1] 0.5523359
# 3 terminal nodes
2*p1*p2*(1-p2)*(1-p3)^2
# [1] 0.2752731
# 4 terminal nodes
4*p1*p2*p3*(1-p2)*(1-p3)*(1-p4)^2 + p1*p2*p2*(1-p3)^4
# [1] 0.09178265
# remainder (5 or more terminal nodes)
1-.05-0.5523359-0.2752731-0.09178265
# [1] 0.03060835
\fi

In a single tree model (i.e., $m =1$), a tree with many terminal
nodes may be needed to model a complicated structure. However, for
a sum-of-trees model, especially with $m$ large, we want the
regularization prior to keep the individual tree components small.
In our examples in Section \ref{sec:examples}, we do so by using
$\alpha=0.95$ and $\beta=2$ in (\ref{treeprior}). With this choice,
trees with 1, 2, 3, 4 and $\geq 5$ terminal nodes receive prior
probability of 0.05, 0.55, 0.28, 0.09 and 0.03, respectively.
Note that even with this prior, which puts most probability on
tree sizes of 2 or 3, trees with many terminal nodes can
be grown if the data demands it. For example, in one of our
simulated examples with this prior, we observed considerable
posterior probability on trees of size 17 when we set $m=1$.

\subsubsection{The $\mu_{ij} | T_j$ prior}\label{sec:mprior}

For $p(\mu_{ij} | T_j)$, we use the conjugate normal distribution
$N(\mu_\mu, \sigma_\mu^2)$ which offers tremendous computational
benefits because $\mu_{ij}$ can be marginalized out. To guide the
specification of the hyperparameters $\mu_\mu$ and $\sigma_\mu$, note
that $E(Y | x)$ is the sum of $m$ $\mu_{ij}$'s under the sum-of-trees
model, and because the $\mu_{ij}$'s are a priori i.i.d., the induced
prior on $E(Y | x)$ is $N(m \mu_\mu, m \sigma_\mu^2)$. Note also that
it is highly probable that $E(Y | x)$ is between $y_{\min}$ and
$y_{\max}$, the observed minimum and maximum of $Y$ in the data. The
essence of our strategy is then to choose $\mu_\mu$ and $\sigma_\mu$ so
that $N(m \mu_\mu, m \sigma_\mu^2)$ assigns substantial probability
to the interval $(y_{\min}, y_{\max})$. This can be conveniently done
by choosing $\mu_\mu$ and $\sigma_\mu$ so that $m \mu_\mu - k
\sqrt{m} \sigma_\mu = y_{\min}$ and $m \mu_\mu + k \sqrt{m}
\sigma_\mu = y_{\max}$ for some preselected value of~$k$.
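Solving these two equations for $(\mu_\mu, \sigma_\mu)$ is immediate. The
following R fragment is a hedged sketch of the calculation (the helper
function and its name are ours, not part of any BART implementation):
\begin{verbatim}
## Solve  m*mu.mu - k*sqrt(m)*sig.mu = y.min  and
##        m*mu.mu + k*sqrt(m)*sig.mu = y.max  for (mu.mu, sigma.mu).
calibrate.mu.prior <- function(y, m, k = 2) {
  y.min <- min(y); y.max <- max(y)
  c(mu.mu    = (y.min + y.max) / (2 * m),
    sigma.mu = (y.max - y.min) / (2 * k * sqrt(m)))
}
calibrate.mu.prior(y = c(-3, 5, 1, 2), m = 200, k = 2)
\end{verbatim}
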
For example,\n$k=2$ would yield a 95\\% prior probability that $E(Y | x)$ is in the\ninterval $(y_{\\min}, y_{\\max})$.\n\nThe strategy above uses an aspect of the observed data, namely, $y_{\\min}$ and $y_{\\max}$, to\ntry to ensure that the implicit prior for $E(Y | x)$ is in the right ``ballpark.''\nThat is to say, we want it to assign substantial probability to the entire region of\nplausible values of $E(Y | x)$ while avoiding overconcentration and overdispersion.\nWe have found that, as long as this goal is met, BART is very robust to changes in the exact specification.\nSuch a data-informed prior approach is especially useful in our problem, where reliable subjective information about $E(Y | x)$ is likely to be unavailable.\n\n\nFor convenience, we implement our specification strategy by first shifting and rescaling\n$Y$ so that the observed transformed $y$ values range from $y_{\\min}= -0.5$ to $y_{\\max}= 0.5$,\nand then treating this transformed $Y$ as our dependent variable.\nWe then simply center the prior for $\\mu_{ij}$ at zero $\\mu_\\mu\n= 0$ and choose $\\sigma_\\mu$ so that\n$ k \\sqrt{m} \\sigma_{\\mu} = 0.5$ for\na suitable value of $k$, yielding\n\\begin{equation}\\label{eq:muprior}\n\\mu_{ij} \\sim N(0,\\sigma_{\\mu}^2)\\qquad \\mbox{where }\n\\sigma_{\\mu} = 0.5\/k \\sqrt{m}.\n\\end{equation}\n\nThis prior has the effect of shrinking the tree parameters\n$\\mu_{ij}$ toward zero, limiting the effect of the individual\ntree components of (\\ref{sstmodel2}) by keeping them small. Note\nthat as $k$ and\/or the number of trees $m$ is increased, this\nprior will become tighter and apply greater shrinkage to the\n$\\mu_{ij}$'s. Prior shrinkage on the $\\mu_{ij}$'s is the\ncounterpart of the shrinkage parameter in\nFriedman's (\\citeyear{Fri2001}) gradient boosting algorithm. The prior\nstandard deviation $\\sigma_{\\mu}$ of $\\mu_{ij}$ here and the gradient\nboosting shrinkage parameter there both serve to ``weaken'' the\nindividual trees so that each is constrained to play a smaller\nrole in the overall fit. For the choice of $k$, we have found\nthat values of $k$ between 1 and 3 yield good results, and we\nrecommend $k = 2$ as an automatic default choice. Alternatively,\nthe value of $k$ may be chosen by cross-validation from a range of\nreasonable choices.\n\nAlthough the calibration of this prior is based on a simple linear\ntransformation of $Y$, it should be noted that there is no need to\ntransform the predictor variables. This is a consequence of the\nfact that the tree splitting rules are invariant to monotone\ntransformations of the $x$ components. The simplicity of\nour prior for $\\mu_{ij}$ is an appealing feature of BART.\nIn contrast, methods like neural nets that use linear combinations\nof predictors require standardization choices for each predictor.\n\n\n\\subsubsection{The $\\sigma$ prior}\\label{sec:sigmaprior}\n\nFor $p(\\sigma)$, we also use a conjugate prior, here the inverse chi-square distribution\n$\\sigma^2 \\sim \\nu \\lambda\/\\chi_{\\nu}^2.$ To guide the specification\nof the hyperparameters $\\nu$ and $\\lambda$, we again use a data-informed prior approach,\nin this case to assign substantial probability to the entire region of\nplausible values of $\\sigma$ while avoiding overconcentration and overdispersion.\nEssentially, we calibrate the prior df $\\nu$ and scale $\\lambda$ for this purpose\nusing a ``rough data-based overestimate''\n$\\hat{\\sigma}$ of $\\sigma$. 
Two natural choices for $\hat{\sigma}$ are (1) the ``naive''
specification, in which we take $\hat{\sigma}$ to be the sample
standard deviation of $Y$, or (2) the ``linear model''
specification, in which we take $\hat{\sigma}$ as the residual
standard deviation from a least squares linear regression of $Y$
on the original $X$'s. We then pick a value of
$\nu$ between 3 and 10 to get an appropriate shape, and a value of
$\lambda$ so that the $q$th quantile of the prior on $\sigma$ is located at
$\hat{\sigma}$, that is, $P(\sigma < \hat{\sigma}) = q.$ We consider values of $q$ such as
0.75, 0.90 or 0.99 to center the distribution below $\hat{\sigma}$.


\begin{figure}

\includegraphics{285f01.eps}

\caption{Three priors on $\sigma$ based on $\mathrm{df} = \nu$ and $\mathrm{quantile} = q$
when $\hat \sigma = 2$.}\label{fig:sigmaprior}
\end{figure}

Figure \ref{fig:sigmaprior} illustrates priors corresponding to
three $(\nu, q)$ settings when the rough overestimate is
$\hat{\sigma}=2$. We refer to these three settings, $(\nu, q) =
(10, 0.75)$, $(3, 0.90)$, $(3, 0.99)$, as conservative, default
and aggressive, respectively. The prior mode moves toward
smaller $\sigma$ values as $q$ is increased. We recommend against
choosing $\nu < 3$ because it seems to concentrate too much mass
on very small $\sigma$ values, which leads to overfitting. In our
examples, we have found these three settings to work very well and
yield similar results. For automatic use, we recommend the
default setting $(\nu, q) = (3, 0.90)$ which tends to avoid
extremes. Alternatively, the values of $(\nu, q)$ may be chosen by
cross-validation from a range of
reasonable choices.


\subsubsection{The choice of $m$}\label{sec:numtrees}

A major difference between BART and boosting methods is that for a
fixed number of trees $m$, BART uses an iterative backfitting
algorithm (described in Section \ref{sec:mcmc}) to cycle over and over
through the $m$ trees. If BART is to be used for estimating $f(x)$ or
predicting $Y$, it might be reasonable to treat $m$ as an unknown
parameter, putting a prior on $m$ and proceeding with a fully Bayes
implementation of BART. Another reasonable strategy might be to select
a ``best'' value for $m$ by cross-validation from a range of reasonable
choices. However, both of these strategies substantially increase
computational requirements.

To avoid the computational costs of these strategies, we have found it
fast and expedient for estimation and prediction to begin with a
default of $m = 200$, and then perhaps to check whether one or two other
choices make any difference. Our experience has been that as $m$ is
increased, starting with $m = 1$, the predictive performance of BART
improves dramatically until at some point it levels off and then
begins to degrade very slowly for large values of $m$. Thus, for
prediction, it seems only important to avoid choosing $m$ too small.
As will be seen in Section \ref{sec:examples}, BART yielded excellent
predictive performance on a wide variety of examples with the simple
default $m = 200$.
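Pulling the default specification together, the recommended hyperparameter
values are $(\nu, q, k, m) = (3, 0.90, 2, 200)$, and the only remaining
data-dependent quantity in the $\sigma$ prior is the scale $\lambda$. A hedged
R sketch of its calibration (our own helper, not taken from any package) is
\begin{verbatim}
## Find lambda so that P(sigma < sigma.hat) = q under sigma^2 ~ nu*lambda/chisq_nu,
## using P(sigma < sigma.hat) = P(chisq_nu > nu*lambda/sigma.hat^2) = q.
calibrate.lambda <- function(sigma.hat, nu = 3, q = 0.90) {
  sigma.hat^2 * qchisq(1 - q, df = nu) / nu
}
calibrate.lambda(sigma.hat = 2)  # default (nu, q) = (3, 0.90) with sigma.hat = 2
\end{verbatim}
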
Finally, as we shall see later in Sections\n\\ref{sec:estpd} and \\ref{sec:examples}, other considerations for\nchoosing $m$ come into play when BART is used for variable selection.\n\n\n\\section{Extracting information from the posterior}\\label{sec:postcalc}\n\n\\subsection{A Bayesian backfitting MCMC algorithm} \\label{sec:mcmc}\n\nGiven the observed data $y$, our Bayesian setup induces a\nposterior distribution\n\\begin{equation} \\label{posterior}\np((T_1,M_1), \\ldots,(T_m,M_m),\\sigma | y)\n\\end{equation}\non all the unknowns that determine a sum-of-trees model\n(\\ref{sstmodel2}). Although the sheer size of the parameter\nspace precludes exhaustive calculation, the following backfitting\nMCMC algorithm can be used to sample from this posterior.\n\nAt a general level, our algorithm is a Gibbs sampler. For\nnotational convenience, let $T_{(j)}$ be the set of all trees in\nthe sum \\textit{except} $T_j$, and similarly define $M_{(j)}$. Thus,\n$T_{(j)}$ will be a set of $m-1$ trees, and $M_{(j)}$ the\nassociated terminal node parameters. The Gibbs sampler here\nentails $m$ successive draws of $(T_j,M_j)$ conditionally on\n$(T_{(j)}, M_{(j)}, \\sigma)$:\n\\begin{equation}\\label{draw1}\n(T_j,M_j) | T_{(j)}, M_{(j)}, \\sigma, y,\n\\end{equation}\n$j = 1,\\ldots,m$, followed by a draw of $\\sigma$ from the full conditional:\n\\begin{equation}\\label{draw2}\n\\sigma | T_1, \\ldots, T_m, M_1, \\ldots, M_m, y .\n\\end{equation}\n\\citeasnoun{HastTibs2000} considered a similar application of\nthe Gibbs sampler for posterior sampling for additive and\ngeneralized additive models with $\\sigma$ fixed, and showed how it\nwas a stochastic generalization of the backfitting algorithm for\nsuch models. For this reason, we refer to our algorithm as\nbackfitting MCMC.\n\nThe draw of $\\sigma$ in (\\ref{draw2}) is simply a draw from an\ninverse gamma distribution and so can be easily obtained by\nroutine methods. More challenging is how to implement the\n$m$ draws of $(T_j,M_j)$ in (\\ref{draw1}). This can be done by\ntaking advantage of the following reductions. First, observe that\nthe conditional distribution $p(T_j,M_j | T_{(j)}, M_{(j)},\n\\sigma , y)$ depends on $(T_{(j)}, M_{(j)}, y)$ only through\n\\begin{equation}\nR_j \\equiv y - \\sum_{k \\neq j} g(x;T_k,M_k),\n\\end{equation}\nthe $n$-vector of partial residuals based on a fit that excludes\nthe $j$th tree. Thus, the $m$ draws of $(T_j,M_j)$ given $(T_{(j)},\nM_{(j)}, \\sigma , y)$ in (\\ref{draw1}) are equivalent to $m$ draws\nfrom\n\\begin{equation} \\label{newdraw}\n(T_j,M_j) | R_j, \\sigma,\n\\end{equation}\n$j = 1,\\ldots,m$.\n\nNow (\\ref{newdraw}) is formally equivalent to the posterior of the\nsingle tree model $R_j = g(x; T_j,M_j) + \\epsilon$ where $R_j$\nplays the role of the data $y$. Because we have used a conjugate\nprior for $M_j$,\n\\begin{equation}\\label{Mmarg}\np(T_j | R_j,\\sigma) \\propto p(T_j) \\int p(R_j | M_j,T_j, \\sigma)\np(M_j|T_j,\\sigma) \\,dM_j\n\\end{equation}\ncan be obtained in closed form up to a norming constant. This\nallows us to carry out each draw from (\\ref{newdraw})\nin two successive steps as\n\\begin{eqnarray} \\label{tdraw}\n&T_j | R_j,\\sigma,&\n\\\\ \\label{mdraw}\n&M_j| T_j, R_j, \\sigma .&\n\\end{eqnarray}\n\nThe draw of $T_j$ in (\\ref{tdraw}), although somewhat elaborate,\ncan be obtained using the Metropolis--Hastings (MH) algorithm of\nCGM98. This algorithm proposes a new tree based on the current\ntree using one of four moves. 
The moves and their associated\nproposal probabilities are as follows: growing a terminal node (0.25),\npruning a pair of terminal nodes (0.25), changing a nonterminal\nrule (0.40), and swapping a rule between parent and child (0.10).\nAlthough the grow and prune moves change the number of\nterminal nodes,\nby integrating out $M_j$ in (\\ref{Mmarg}), we\navoid the complexities associated with reversible jumps between\ncontinuous spaces of varying dimensions [\\citet{Gre1995}].\n\nFinally, the draw of $M_j$ in (\\ref{mdraw}) is simply a set of\nindependent draws of the terminal node $\\mu_{ij}$'s from a normal\ndistribution. The draw of $M_j$ enables the calculation of the\nsubsequent residual $R_{j+1}$ which is critical for the next draw\nof $T_j$. Fortunately, there is again no need for a complex\nreversible jump implementation.\n\nWe initialize the chain with $m$ simple single node trees, and then\niterations are repeated until satisfactory convergence is\nobtained. At each iteration, each tree may increase or decrease\nthe number of terminal nodes by one, or change one or two decision\nrules. Each $\\mu$ will change (or cease to exist or be born), and\n$\\sigma$ will change. It is not uncommon for a tree to grow large\nand then subsequently collapse back down to a single node as the\nalgorithm iterates. The sum-of-trees model, with its abundance of\nunidentified parameters, allows for ``fit'' to be freely\nreallocated from one tree to another. Because each move makes only\nsmall incremental changes to the fit, we can imagine the algorithm\nas analogous to sculpting a complex figure by adding and\nsubtracting small dabs of clay.\n\nCompared to the single tree model MCMC approach of CGM98, our\nbackfitting MCMC algorithm mixes dramatically better. When only\nsingle tree models are considered, the MCMC algorithm tends to\nquickly gravitate toward a single large tree and then gets stuck\nin a local neighborhood of that tree. In sharp contrast, we have\nfound that restarts of the backfitting MCMC algorithm give\nremarkably similar results even in difficult problems.\nConsequently, we run one long chain with BART rather than multiple\nstarts. Although mixing does not appear to be an issue, the\nrecently proposed modifications of \\citeasnoun{Blan2004} and \\citeasnoun{WuTjeWes2007}\nmight well provide additional benefits.\n\n\\subsection{Posterior inference statistics}\\label{sec:estpd}\n\n\nThe backfitting algorithm described in the previous section is ergodic,\ngenerating a sequence of draws of $(T_1,M_1),\\break \\ldots,(T_m, M_m),\\sigma$\nwhich is converging (in distribution) to the posterior\\break $p((T_1,M_1),\n\\ldots,(T_m, M_m),\\sigma | y)$. The induced sequence of sum-of-trees\nfunctions\n\\begin{equation}\\label{fstar}\nf^*(\\cdot) = \\sum_{j=1}^m g(\\cdot;T_j^*,M_j^*),\n\\end{equation}\nfor the sequence of draws $(T_1^*,M_1^*), \\ldots,(T_m^*,M_m^*)$, is\nthus converging to $p(f | y)$, the posterior distribution on the\n``true'' $f(\\cdot)$. Thus, by running the algorithm long enough after\na suitable burn-in period, the sequence of $f^*$ draws, say,\n$f^*_1,\\dots,f^*_K$, may be regarded as an approximate, dependent\nsample of size $K$ from $p(f | y)$. Bayesian inferential quantities\nof interest can then be approximated with this sample as indicated\nbelow. 
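For instance, if the $K$ retained draws of $f^*$, evaluated at $n$ points of
interest, are stored as a $K \times n$ matrix, these approximations are simple
column-wise summaries. The fragment below is a self-contained sketch in which a
synthetic draws matrix stands in for actual BART output:
\begin{verbatim}
## Posterior mean and 90% interval for f(x) at each of n points, given a
## K x n matrix 'fstar' of f* draws (synthetic here so the fragment runs alone).
set.seed(1)
K <- 1000; n <- 50
f.true <- sin(seq(0, 2 * pi, length.out = n))
fstar  <- matrix(rnorm(K * n, rep(f.true, each = K), 0.2), K, n)
f.hat  <- colMeans(fstar)                        # approximates E(f(x) | y)
lower  <- apply(fstar, 2, quantile, probs = 0.05)
upper  <- apply(fstar, 2, quantile, probs = 0.95)
\end{verbatim}
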
Although the number of iterations needed for reliable\ninferences will of course depend on the particular application, our\nexperience with the examples in Section \\ref{sec:examples} suggests\nthat the number of iterations required is relatively modest.\n\nTo estimate $f(x)$ or predict $Y$ at a particular $x$, in-sample or\nout-of-sample, a~natural choice is the average of the after burn-in\nsample $f^*_1,\\dots,f^*_K$,\n\\begin{equation}\\label{eq:fhat}\n\\frac{1}{K} \\sum_{k=1}^K f^*_k(x),\n\\end{equation}\nwhich approximates the posterior mean $E( f(x) | y)$. Another good\nchoice would be the median of $f^*_1(x),\\dots,f^*_K(x)$ which\napproximates the posterior median of $f(x)$. Posterior uncertainty\nabout $f(x)$ may be gauged by the variation of\n$f^*_1(x),\\dots,f^*_K(x)$. For example, a natural and convenient\n$(1-\\alpha)\\%$ posterior interval for $f(x)$ is obtained as the\ninterval between the upper and lower $\\alpha\/2$ quantiles of\n$f^*_1(x),\\dots,f^*_K(x)$. As will be seen, these uncertainty intervals\nbehave sensibly, for example, by widening at $x$ values far from the\ndata.\n\nIt is also straightforward to use $f^*_1(x),\\dots,f^*_K(x)$ to estimate\nother functionals of $f$. For example, a functional of particular interest is the\npartial dependence function [\\citet{Fri2001}], which summarizes the\nmarginal effect of one (or more) predictors on the response. More precisely,\nletting $f(x) = f(x_s,x_c)$ where $x$ has been partitioned into the predictors of interest, $x_s$\nand the complement $x_c=x\\setminus x_s$, the partial dependence function is defined as\n\\begin{equation}\\label{eq:pdsum}\nf(x_s) = \\frac{1}{n}\\sum_{i=1}^n f(x_s,x_{ic}),\n\\end{equation}\nwhere $x_{ic}$ is the $i$th observation of $x_c$ in the data. Note\nthat $(x_s,x_{ic})$ will not generally be one of the observed data\npoints. A draw from the induced BART posterior $p(f(x_s) | y)$ at any value of $x_s$ is obtained by\nsimply computing $f^*_k(x_s)= \\frac{1}{n}\\sum_i f^*_k(x_s,x_{ic})$. The average of $f^*_1(x_s),\\dots,f^*_K(x_s)$ then yields an\nestimate of $f(x_s)$, and the upper and lower $\\alpha\/2$ quantiles provide endpoints\nof $(1-\\alpha)\\%$ posterior intervals for $f(x_s)$.\n\n\nFinally, as mentioned in Section \\ref{sec:intro}, BART can also be used\nfor variable selection by selecting those variables that appear most\noften in the fitted sum-of-trees models. Interestingly, this strategy\nis less effective when $m$ is large because the redundancy offered by\nso many trees tends to mix many irrelevant predictors in with the\nrelevant ones. However, as $m$ is decreased and that redundancy is\ndiminished, BART tends to heavily favor relevant predictors for its\nfit. In a sense, when $m$ is small the predictors compete with each\nother to improve the fit.\n\nThis model-free approach to variable selection is accomplished by\nobserving what happens to the $x$ component usage frequencies in a\nsequence of MCMC samples $f^*_1,\\dots,f^*_K$ as the number of trees $m$\nis set smaller and smaller. More precisely, for each simulated\nsum-of-trees model $f^*_k$, let $z_{ik}$ be the proportion of all\nsplitting rules that use the $i$th component of $x$. 
Then\n\\begin{equation}\\label{eq:xfreq}\nv_i \\equiv \\frac {1}{K} \\sum_{k=1}^K z_{ik}\n\\end{equation}\nis the average use per splitting rule for the $i$th component of $x$.\nAs $m$ is set smaller and smaller, the sum-of-trees models tend to more\nstrongly favor inclusion of those $x$ components which improve\nprediction of $y$ and exclusion of those $x$ components that are\nunrelated to $y$. In effect, smaller $m$ seems to create a bottleneck\nthat forces the $x$ components to compete for entry into the\nsum-of-trees model. As we shall see illustrated in Section\n\\ref{sec:examples}, the $x$ components with the larger $v_i$'s will\nthen be those that provide the most information for predicting $y$.\nFinally, it might be useful to consider alternative ways of measuring\ncomponent usage in (\\ref{eq:xfreq}) such as weighting variables by the\nnumber of data points present in the node, thereby giving more weight\nto the importance of initial node splits.\n\n\n\\section{BART probit for classification}\\label{sec:classification}\n\nOur development of BART up to this point has pertained to setups where\nthe output of interest $Y$ is a continuous variable. However, for\nbinary $Y$ ($= 0$ or 1), it is straightforward to extend BART to the\nprobit model setup for classification\n\\begin{equation}\\label{eq:cmodel}\np(x) \\equiv P[Y = 1 | x] = \\Phi[G(x)],\n\\end{equation}\nwhere\n\\begin{equation}\nG(x) \\equiv \\sum_{j=1}^m g(x; T_j,M_j)\n\\end{equation}\nand $\\Phi[\\cdot]$ is the standard normal c.d.f. Note that each\nclassification probability $p(x)$ here is obtained as a function of\n$G(x)$, our sum of regression trees. This contrasts with the often\nused aggregate classifier approaches which use a majority or an average\nvote based on an ensemble of classification trees, for example, see\n\\citeasnoun{AmitGema1997} and \\citeasnoun{Bre2001}.\n\nFor the BART extension to (\\ref{eq:cmodel}), we need to impose a\nregularization prior on $G(x)$ and to implement a Bayesian backfitting\nalgorithm for posterior computation. Fortunately, these are obtained\nwith only minor modifications of the methods in Sections\n\\ref{sec:model} and \\ref{sec:postcalc}. As opposed to\n(\\ref{sstmodel2}), the model (\\ref{eq:cmodel}) implicitly assumes\n$\\sigma = 1$ and so only a prior on $(T_1,M_1),\\ldots,(T_m,M_m)$ is\nneeded. Proceeding exactly as in Section \\ref{sec:simplification}, we\nconsider a prior of the form\n\\begin{equation}\np((T_1,M_1),\\ldots,(T_m,M_m))= \\prod_j \\biggl[p(T_j) \\prod_i p(\\mu_{ij} | T_j) \\biggr],\n\\end{equation}\nwhere each tree prior $p(T_j)$ is the choice recommended in Section\n\\ref{sec:treeprior}. For the choice of $p(\\mu_{ij} | T_j)$ here, we\nconsider the case where the interval $(\\Phi[-3.0], \\Phi[3.0])$ contains\nmost of the $p(x)$ values of interest, a case which will often be of\npractical relevance. Proceeding similarly to the motivation of\n(\\ref{eq:muprior}) in Section \\ref{sec:mprior}, we would then recommend\nthe choice\n\\begin{equation}\\label{eq:muprior2}\n\\mu_{ij} \\sim N(0,\\sigma_{\\mu}^2)\\qquad \\mbox{where }\n\\sigma_{\\mu} = 3.0\/k \\sqrt{m} ,\n\\end{equation}\nwhere $k$ is such that $G(x)$ will with high probability be in the interval $(-3.0,3.0)$.\nJust as for (\\ref{eq:muprior}), this prior has the effect of shrinking the tree parameters\n$\\mu_{ij}$ toward zero, limiting the effect of the individual\ntree components of $G(x)$. 
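A quick numerical check of what (\ref{eq:muprior2}) implies may be helpful;
the values below are for the illustrative choice $k = 2$ and $m = 50$ and are
not tied to any particular implementation:
\begin{verbatim}
## Induced prior on G(x) under (eq. muprior2) with k = 2 and m = 50 trees.
k <- 2; m <- 50
sigma.mu <- 3.0 / (k * sqrt(m))          # prior sd of each terminal-node mu
sd.G     <- sqrt(m) * sigma.mu           # prior sd of G(x), equal to 3/k
pnorm(3, 0, sd.G) - pnorm(-3, 0, sd.G)   # prior P(-3 < G(x) < 3), about 0.95
pnorm(c(-3, 3))                          # implied range (Phi[-3], Phi[3]) for p(x)
\end{verbatim}
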
As $k$ and/or the number of trees $m$ is increased, this
prior will become tighter and apply greater shrinkage to the
$\mu_{ij}$'s. For the choice of $k$, we have found
that values of $k$ between 1 and 3 yield good results, and we
recommend $k = 2$ as an automatic default choice. Alternatively,
the value of $k$ may be chosen by cross-validation.

By shrinking $G(x)$ toward 0, the prior (\ref{eq:muprior2}) has the effect of shrinking
$p(x) = \Phi[G(x)]$ toward $0.5$. If it is of interest to shrink toward
a value $p_0$ other than $0.5$, one can simply replace $G(x)$ by $G_c = G(x)+ c$ in
(\ref{eq:cmodel}) with the offset $c = \Phi^{-1}[p_0]$. Note also that if an interval other than $(\Phi[-3.0], \Phi[3.0])$
is of interest for $p(x)$, suitable modification of (\ref{eq:muprior2}) is straightforward.

Turning to posterior calculation, the essential features of the
backfitting algorithm in Section \ref{sec:mcmc} can be implemented by
using the augmentation idea of \citeasnoun{AlbeChib1993}. The key idea
is to recast the model (\ref{eq:cmodel}) by introducing independent latent
variables $Z_1,\ldots,Z_n$, with $Z_i \sim N(G(x_i), 1)$, such that $Y_i = 1$
if $Z_i > 0$ and $Y_i = 0$ if $Z_i \le 0$. Note that under this
formulation, $Z_i | [y_i = 1] \sim \max\{N(G(x_i), 1), 0\}$ and $Z_i
| [y_i = 0] \sim \min\{N(G(x_i), 1), 0\}$. Incorporating simulation of
the latent $Z_i$ values into the backfitting algorithm, the Gibbs
sampler iterations here entail $n$ successive draws of $Z_i | y_i$,
$i = 1,\dots, n$, followed by $m$ successive draws of $(T_j,M_j) |T_{(j)}, M_{(j)}, z_1,\ldots,z_n$, $j = 1,\ldots,m$, as spelled out in
Section \ref{sec:mcmc}. The induced sequence of classification probability functions
\begin{equation}\label{pstar}
p^*(\cdot) = \Phi \Biggl[\sum_{j=1}^m g(\cdot;T_j^*,M_j^*) \Biggr],
\end{equation}
for the sequence of draws $(T_1^*,M_1^*), \ldots,(T_m^*,M_m^*)$, is
thus converging to the posterior distribution on the ``true''
$p(\cdot)$. After a suitable burn-in period, the sequence of $p^*$
draws, say, $p^*_1,\dots,p^*_K$, may be regarded as an approximate,
dependent sample from this posterior which can be used to draw
inference about $p(\cdot)$ in the same way that $f^*_1,\dots,f^*_K$ was
used in Section \ref{sec:estpd} to draw inference about $f(\cdot)$.


\section{Applications}\label{sec:examples}

In this section we demonstrate the application of BART on several
examples. We begin in Section \ref{sec:bakeoff} with a predictive
cross-validation performance comparison of BART with competing methods
on 42 different real data sets. We next, in Section
\ref{sec:simex:friedman}, evaluate and illustrate BART's capabilities
on simulated data used by \citeasnoun{Frie1991}. Finally, in Section
\ref{sec:drugdisc} we apply the BART probit model to a drug discovery
classification problem.
All of the BART calculations throughout this\nsection can be reproduced with the \\texttt{BayesTree} library at\n\\url{http:\/\/cran.r-project.org\/}.\n\n\n\n\n\\subsection{Predictive comparisons on 42 data sets}\\label{sec:bakeoff}\n\nOur first illustration is a ``bake-off,'' a predictive\nperformance comparison of BART with competing methods on 42 different real data sets.\nThese data sets (see Table \\ref{tab:datasets}) are a subset of 52 sets considered by \\citeasnoun{KimLohShiCha2007}.\nTen data sets were excluded either because Random Forests was unable to use over 32\ncategorical predictors, or because a single train\/test split was used\nin the original paper. All data sets correspond to regression setups with\nbetween 3 and 28 numeric predictors and 0 to 6 categorical predictors.\nCategorical predictors were converted into 0\/1 indicator variables\ncorresponding to each level. Sample sizes vary from 96 to 6806 observations.\nIn each of the 42 data sets, the response was minimally preprocessed,\napplying a log or square root transformation if this made the histogram\nof observed responses more bell-shaped. In about half the cases, a log\ntransform was used to reduce a right tail. In one case (Fishery) a square\nroot transform was most appropriate.\n\n\\begin{table}\n\\tabcolsep=0pt\n\\caption{The 42 data sets used in the bake-off}\\label{tab:datasets}\n\\begin{tabular*}{\\textwidth}{@{\\extracolsep{\\fill}}lc@{\\qquad}lc@{\\qquad}lc@{\\qquad}lc@{\\qquad}lc@{}}\n\\hline\n\\textbf{Name} & $\\bolds{n}$ &\n\\textbf{Name} & $\\bolds{n}$ &\n\\textbf{Name} & $\\bolds{n}$ &\n\\textbf{Name} & $\\bolds{n}$ &\n\\textbf{Name} & $\\bolds{n}$ \\\\\n\\hline\nAbalone & 4177 & Budget & 1729 & Diamond & \\hphantom{0}308 & Labor & 2953 & Rate & 144 \\\\\nAis & \\hphantom{0}202 & Cane & 3775 & Edu & 1400 & Laheart & \\hphantom{0}200 & Rice & 171 \\\\\nAlcohol & 2462 & Cardio & \\hphantom{0}375 & Enroll & \\hphantom{0}258 & Medicare & 4406 & Scenic & 113 \\\\\nAmenity & 3044 & College & \\hphantom{0}694 & Fame & 1318 & Mpg & \\hphantom{0}392 & Servo & 167 \\\\\nAttend & \\hphantom{0}838 & Cps & \\hphantom{0}534 & Fat & \\hphantom{0}252 & Mumps & 1523 & Smsa & 141 \\\\\nBaseball & \\hphantom{0}263 & Cpu & \\hphantom{0}209 & Fishery & 6806 & Mussels & \\hphantom{0}201 & Strike & 625 \\\\\nBaskball & \\hphantom{00}96 & Deer & \\hphantom{0}654 & Hatco & \\hphantom{0}100 & Ozone & \\hphantom{0}330 & Tecator & 215 \\\\\nBoston & \\hphantom{0}506 & Diabetes& \\hphantom{0}375 & Insur & 2182 & Price & \\hphantom{0}159 & Tree & 100 \\\\\nEdu & 1400 & Fame & 1318 & \\\\\n\\hline\n\\end{tabular*}\n\\end{table}\n\n\nFor each of the 42 data sets, we created 20 independent train\/test splits by\nrandomly selecting $5\/6$ of the data as a training set and\nthe remaining $1\/6$ as a test set. Thus, $42 \\times 20 = 840$\ntest\/train splits were created. Based on each training set,\neach method was then used to predict the corresponding test set\nand evaluated on the basis of its predictive RMSE.\n\n\nWe considered two versions of BART: BART-cv where the prior hyperparameters\n$(\\nu, q, k, m)$ were treated as operational parameters to\nbe tuned via cross-validation, and BART-default where\nwe set $(\\nu, q, k, m)$ to the defaults $(3, 0.90, 2, 200)$.\nFor both BART-cv and BART-default, all specifications of the quantile\n$q$ were made relative to the least squares linear regression estimate\n$\\hat{\\sigma}$, and the number of burn-in steps and MCMC iterations used\nwere determined by inspection of a single long run. 
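As a concrete illustration of how the BART-default settings translate into a
fitted model, the following sketch runs a single train/test evaluation on the
Boston data (one of the 42 sets). It assumes the \texttt{bart()} interface of
the \texttt{BayesTree} package as we understand it from the package
documentation; argument names and returned components should be checked
against the package itself.
\begin{verbatim}
library(MASS)        # Boston housing data
library(BayesTree)   # assumed installed; provides bart()
set.seed(1)
n <- nrow(Boston)
test <- sample(n, round(n / 6))                  # 1/6 test, 5/6 train
x <- Boston[, names(Boston) != "medv"]
y <- Boston$medv
fit <- bart(x.train = x[-test, ], y.train = y[-test], x.test = x[test, ],
            sigdf = 3, sigquant = 0.90, k = 2, ntree = 200,
            ndpost = 1000, nskip = 200)
yhat.test <- colMeans(fit$yhat.test)             # posterior mean prediction per test x
sqrt(mean((y[test] - yhat.test)^2))              # test-set RMSE
\end{verbatim}
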
Typically, 200 burn-in\nsteps and 1000 iterations were used. For BART prediction at each $x$, we used the\nposterior mean estimates given by (\\ref{eq:fhat}).\n\nAs competitors, we considered linear regression with L1 regularization\n(the Lasso) [\\citet{EfrHasJohTib2004}] and three black-box models:\ngradient boosting [\\citet{Fri2001}, implemented as \\texttt{gbm} in\nR by \\citeasnoun{Rid2004}], random forests [\\citet{Bre2001},\nimplemented as \\texttt{randomforest} in R] and neural networks with one\nlayer of hidden units [implemented as \\texttt{nnet} in R by\n\\citeasnoun{VenaRipl2002}]. These competitors were chosen because, like\nBART, they are black box predictors. Trees, Bayesian CART (CGM98) and\nBayesian treed regression [\\citet{ChipGeorMcCu2002a}] models were not\nconsidered, since they tend to sacrifice predictive performance for\ninterpretability.\n\nWith the exception of BART-default (which requires no tuning),\nthe operational parameters\nof every method were chosen via 5-fold cross-validation\nwithin each training set. The parameters considered and potential\nlevels are given in Table~\\ref{tab:parameters}. In particular, for BART-cv, we\nconsidered the following:\n\\begin{itemize}\n\\item three settings $(3,0.90)$ (default), $(3,0.99)$ (aggressive) and\n$(10,0.75)$ (conservative) as shown in Figure \\ref{fig:sigmaprior}\nfor the $\\sigma$ prior hyperparameters $(\\nu, q)$,\n\\item four values $k= 1,2,3,5$ reflecting moderate to heavy shrinkage for the $\\mu$ prior hyperparameter, and\n\\item two values $m = 50, 200$ for the number of trees,\n\\end{itemize}\na total of $3*4*2 = 24$ potential choices for $(\\nu, q, k, m)$.\n\n\\begin{table}[b]\n\\caption{Operational parameters for the various competing models}\\label{tab:parameters}\n\\begin{tabular*}{\\textwidth}{@{\\extracolsep{\\fill}}llc@{}}\n\\hline\n \\textbf{Method} & \\textbf{Parameter} & \\textbf{Values considered} \\\\\n \\hline\n BART-cv & Sigma prior: $(\\nu, q)$\ncombinations & (3, 0.90), (3, 0.99), (10, 0.75)\\\\\n& \\# trees $m$ & 50, 200 \\\\\n & $\\mu$ prior: $k$ value for $\\sigma_\\mu$\n & 1, 2, 3, 5\\\\[3pt]\nLasso & Shrinkage (in range 0--1) & $0.1, 0.2, \\ldots, 1.0$ \\\\[3pt]\nGradient boosting & \\# of trees & 50, 100, 200 \\\\\n & Shrinkage (multiplier of each tree added)& 0.01, 0.05, 0.10, 0.25 \\\\\n & Max depth permitted for each tree & 1, 2, 3, 4 \\\\[3pt]\nNeural nets & \\# hidden units & see text \\\\\n & Weight decay & 0.0001, 0.001, 0.01, 0.1, 1, 2, 3\\\\[3pt]\nRandom forests & \\# of trees & 500 \\\\\n & \\% variables sampled to grow each node & 10, 25, 50, 100\\\\\n\\hline\n\\end{tabular*}\n\\end{table}\n\nAll the levels in Table~\\ref{tab:parameters} were chosen with a\nsufficiently wide range so that the selected value was not at an extreme of\nthe candidate values in most problems. Neural networks are the only model\nwhose operational parameters need additional explanation. In that case,\nthe number of hidden units was chosen in terms of the implied number\nof weights, rather than the number of units. This design choice was\nmade because of the widely varying number of predictors across problems,\nwhich directly impacts the number of weights. 
A number of hidden units\nwere chosen so that there was a total of roughly $u$ weights, with $u=\n50, 100, 200, 500$ or $800$.\nIn all cases, the number of hidden units was further constrained to fall\nbetween 3 and~30.\nFor example, with 20 predictors we used 3, 8 and 21 as candidate\nvalues for the number of hidden units.\n\n\n\n\nTo facilitate performance comparisons across data sets, we considered\nrelative RMSE (RRMSE), which we defined as the RMSE divided by the\nminimum RMSE obtained by any method for each test\/train split. Thus, a\nmethod obtained an RRMSE of 1.0 when that method had the minimum RMSE\non that split. As opposed to the RMSE, the RRMSE provides meaningful\ncomparisons across data sets because of its invariance to location and\nscale transformations of the response variables. Boxplots of the 840\ntest\/train split RRMSE values for each method are shown in\nFigure~\\ref{fig:boxplot}, and the (50\\%, 75\\%) RRMSE quantiles (the\ncenter and rightmost edge of each box in Figure~\\ref{fig:boxplot}) are\ngiven in Table~\\ref{tab:perf}. (The Lasso was left off the boxplots\nbecause its many large RRMSE values visually overwhelmed the other\ncomparisons.)\n\n\\begin{figure}[b]\n\n\\includegraphics{285f02.eps}\n\n\\caption{Boxplots of the RRMSE values for each method across the 840 test\/train splits.\nPercentage RRMSE values larger than 1.5 for each method (and not plotted)\nwere the following: random forests 16.2\\%, neural net 9.0\\%, boosting 13.6\\%, BART-cv\n9.0\\% and BART-default 11.8\\%. The Lasso (not plotted because of too many\nlarge RRMSE values) had 29.5\\% greater than 1.5.}\\label{fig:boxplot}\n\\end{figure}\n\n\\begin{table}\n\\tablewidth=170pt\n\\caption{(50\\%, 75\\%) quantiles of relative RMSE values for each method across the 840 test\/train splits}\\label{tab:perf}\n\\begin{tabular*}{170pt}{@{\\extracolsep{\\fill}}lc@{}}\n\\hline\n\\textbf{Method} & \\textbf{(50\\%, 75\\%)} \\\\\n\\hline\nLasso & (1.196, 1.762)\\\\\nBoosting & (1.068, 1.189)\\\\\nNeural net & (1.055, 1.195)\\\\\nRandom forest & (1.053, 1.181)\\\\\nBART-default & (1.055, 1.164)\\\\\nBART-cv & (1.037, 1.117)\\\\\n\\hline\n\\end{tabular*}\n\\end{table}\n\nAlthough relative performance in Figure~\\ref{fig:boxplot} varies widely across the different problems, it is clear from the distribution of RRMSE values that BART-cv tended to more often obtain smaller RMSE than any of its competitors. Also notable is the overall performance of BART-default which was arguably second best. This is especially impressive since neural nets, random forests and gradient boosting all relied here on cross-validation for control parameter tuning. By avoiding the need for hyperparameter specification, BART-default is vastly easier and faster to use. For example, a single implementation of BART-cv here requires\nselection among the 24 possible hyperparameter values with 5 fold cv,\nfollowed by fitting the best model, for a total of $24*5 + 1 = 121$\napplications of BART. For those who want a computationally inexpensive method ready for\neasy ``off the shelf'' use, BART-default is the winner in this experiment.\n\n\n\\subsection{Friedman's five dimensional test function}\\label{sec:simex:friedman}\n\n\nWe next proceed to illustrate various features of BART on simulated data where\nwe can gauge its performance against the true underlying signal. For this purpose,\nwe constructed data by\nsimulating values of $x = (x_1,x_2,\\ldots,x_p)$ where\n\\begin{equation}\\label{eq:fri-xs}\nx_1,x_2,\\ldots,x_p \\mbox{ i.i.d. 
} \\sim \\operatorname{Uniform}(0,1),\n\\end{equation}\nand $y$ given $x$ where\n\\begin{equation}\\label{eq:fri-ys}\ny = f(x) + \\epsilon = 10 \\sin(\\pi x_1 x_2) + 20 (x_3-0.5)^2 + 10\nx_4 + 5 x_5 + \\epsilon,\n\\end{equation}\nwhere $\\epsilon \\sim N(0,1)$. Because $y$ only depends on\n$x_1,\\ldots,x_5$, the predictors $x_6,\\ldots, x_p$ are irrelevant.\nThese added variables together with the interactions and\nnonlinearities make it more challenging to find $f(x)$ by\nstandard parametric methods. \\citeasnoun{Frie1991} used this setup\nwith $p = 10$ to illustrate the potential of\nmultivariate adaptive regression\nsplines (MARS).\n\n\nIn Section \\ref{sec:friedman-simple} we illustrate various basic\nfeatures of BART. We illustrate point and interval estimation of\n$f(x)$, model-free variable selection and estimation of partial\ndependence functions. We see that the BART MCMC burns-in quickly and\nmixes well. We illustrate BART's robust performance with respect to\nvarious hyperparameter settings. In Section \\ref{sec:friedman-finding}\nwe increase the number of irrelevant predictors in the data to show\nBART's effectiveness at detecting a low dimensional structure in a high\ndimensional setup. In Section \\ref{sec:friedman-train-test} we compare\nBART's out-of-sample performance with the same set of competitors used\nin Section \\ref{sec:bakeoff} with $p$ equal to 10, 100 and 1000. We\nfind that BART dramatically outperforms the other methods.\n\n\n\\subsubsection{A simple application of BART}\\label{sec:friedman-simple}\n\nWe begin by illustrating the basic features of BART on a single simulated data set\nof the Friedman function (\\ref{eq:fri-xs}) and (\\ref{eq:fri-ys})\nwith $p= 10$ $x's$ and $n = 100$ observations. For simplicity, we\napplied BART with the default setting $(\\nu, q, k,m) = (3,0.90,2,200)$\ndescribed in Section \\ref{sec:prior}.\nUsing the backfitting MCMC algorithm, we generated 5000 MCMC draws\nof $f^*$ as in (\\ref{fstar}) from the posterior after skipping\n1000 burn-in iterations.\n\nTo begin with, for each value of $x$, we obtained\nposterior mean estimates $\\hat{f}(x)$ of $f(x)$ by averaging the\n5000 $f^*(x)$ values as in (\\ref{eq:fhat}). Endpoints of 90\\% posterior intervals for\neach $f(x)$ were obtained as the 5\\% and 95\\% quantiles of the\n$f^*$ values. Figure~\\ref{fig:friedman}(a)\nplots $\\hat{f}(x)$ against\n$f(x)$ for the $n= 100$ in-sample values of $x$ from\n(\\ref{eq:fri-xs}) which were used to generate the $y$ values using\n$(\\ref{eq:fri-ys})$. Vertical lines indicate the 90\\% posterior\nintervals for the $f(x)$'s. Figure~\\ref{fig:friedman}(b) is the\nanalogous plot at 100 randomly selected out-of-sample $x$ values.\nWe see that in-sample the $\\hat{f}(x)$ values correlate very well with the\ntrue $f(x)$ values and the intervals tend to cover the true\nvalues. Out-of sample, there is a slight degradation of the correlation and\nwider intervals indicating\ngreater uncertainty about $f(x)$ at new $x$ values.\n\n\\begin{figure}[b]\n\n\\includegraphics{285f03.eps}\n\n \\caption{Inference about Friedman's $f(x)$ in $p=10$ dimensions.}\\label{fig:friedman}\n\\end{figure}\n\nAlthough one would not expect the 90\\% posterior intervals to exhibit\n90\\% frequentist coverage, it may be of interest to note that 89\\%\nand 96\\% of the intervals in Figures~\\ref{fig:friedman}(a) and (b)\ncovered the true $f(x)$ value, respectively. 
In fact, in over\n200 independent replicates of this example we found average\ncoverage rates of 87\\% (in-sample) and 93\\% (out-of-sample).\nIn real data settings where $f$ is unknown,\nbootstrap and\/or cross-validation methods might be helpful to get similar\ncalibrations of frequentist coverage. It should be\nnoted, however, that for extreme $x$ values, the prior may exert more shrinkage toward 0, leading\nto lower coverage frequencies.\n\n\nThe lower sequence in Figure~\\ref{fig:friedman}(c) is the sequence\nof $\\sigma$ draws over the entire 1000 burn-in plus 5000\niterations (plotted with *). The horizontal line is drawn at the\ntrue value $\\sigma = 1$. The Markov chain here appears to reach\nequilibrium quickly, and although there is autocorrelation, the\ndraws of $\\sigma$ nicely wander around the true value $\\sigma = 1$,\nsuggesting that we have fit but not overfit. To further\nhighlight the deficiencies of a single tree model, the upper\nsequence (plotted with $\\cdot$) in Figure~\\ref{fig:friedman}(c) is\na sequence of $\\sigma$ draws when $m =1$, a single tree model, is used. The sequence\nseems to take longer to reach equilibrium and remains\nsubstantially above the true value $\\sigma = 1$. Evidently a\nsingle tree is inadequate to fit this data.\n\nMoving beyond estimation and inference about the values of $f(x)$, BART\nestimates of the partial dependence functions $f(x_i)$ in\n(\\ref{eq:pdsum}) reveal the marginal effects of the individual $x_i$'s\non $y$. Figure \\ref{fig:friedman-pdplot} shows the plots of point and\ninterval estimates of the partial dependence functions for\n$x_1,\\ldots,x_{10}$ from the 5000 MCMC samples of $f^*$. The nonzero\nmarginal effects of $x_1,\\ldots,x_5$ and the zero marginal effects of\n$x_6,\\ldots,x_{10}$ seem to be completely consistent with the form of\n$f$ which of course would be unknown in practice.\n\n\\begin{figure}[b]\n\n\\includegraphics{285f04.eps}\n\n\\caption{Partial dependence plots for the 10 predictors in the Friedman data.}\\label{fig:friedman-pdplot}\n\\end{figure}\n\nAs described in Section \\ref{sec:estpd}, BART can also be used to\nscreen for variable selection by identifying as promising those\nvariables that are used most frequently in the sum-of-trees model $f^*$\ndraws from the posterior. To illustrate the potential of this\napproach here, we recorded the average use measure $v_i$ in\n(\\ref{eq:xfreq}) for each $x_i$ over 5000 MCMC draws of $f^*$ for each\nof various values of $m$, based on a sample of $n = 500$ simulated\nobservations of the Friedman function (\\ref{eq:fri-xs}) and\n(\\ref{eq:fri-ys}) with $p = 10$. Figure \\ref{fig:friedman-varsel}\nplots these $v_i$ values for $x_1,\\ldots,x_{10}$ for $m =\n10$, 20, 50, 100, 200. Quite dramatically, as the number of trees $m$ is\nmade smaller, the fitted sum-of-trees models increasingly incorporate\nonly those $x$ variables, namely, $x_1,\\ldots,x_5$, that are needed to\nexplain the variation of $y$. Without making use of any assumptions or\ninformation about the actual functional form of $f$ in\n(\\ref{eq:fri-ys}), BART has here exactly identified the subset of\nvariables on which $f$ depends.\n\n\\begin{figure}\n\n\\includegraphics{285f05.eps}\n\n\\caption{Average use per splitting rule for variables $x_1,\\ldots,x_{10}$ when $m =\n10$, 20, 50, 100, 200.} \\label{fig:friedman-varsel}\n\\end{figure}\n\n\nYet another appealing feature of BART is its apparent\nrobustness to small changes in the prior and to the\nchoice of $m$, the number of trees. 
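Before turning to that robustness, we note that the usage measure $v_i$ used
above is simple bookkeeping over the retained draws. The sketch below makes
the computation in (\ref{eq:xfreq}) explicit, with synthetic splitting-rule
records standing in for actual BART output (in draw $k$, the vector records
which component of $x$ each splitting rule uses):
\begin{verbatim}
set.seed(1)
p <- 10; K <- 1000
draws <- replicate(K,
                   sample(1:p, size = sample(20:40, 1), replace = TRUE,
                          prob = c(rep(4, 5), rep(1, 5))),   # x1-x5 favored
                   simplify = FALSE)
z <- t(sapply(draws, function(vars) tabulate(vars, nbins = p) / length(vars)))
v <- colMeans(z)    # v_i: average proportion of splitting rules using x_i
round(v, 3)
\end{verbatim}
With that bookkeeping in hand, we return to BART's robustness to the prior
settings and to the choice of $m$.
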
This robustness is illustrated in
Figures~\ref{fig:friedmanmrun1}(a) and (b) which display the in-
and out-of-sample RMSE obtained by BART over 5000 MCMC samples of $f^*$ for various choices of $(\nu,
q, k, m)$. In each plot of RMSE versus $m$, the plotted text
indicates the values of $(\nu, q, k)$: $k = 1$, 2 or 3 and $(\nu,
q) = \mathrm{d}$, a or c (default/aggressive/conservative). Three striking
features of the plot are apparent: (i) a very small number of
trees ($m$ very small) gives poor RMSE results, (ii) as long as $k>1$,
very similar results are obtained from different prior settings,
and (iii) increasing the number of trees well beyond the number
needed to capture the fit degrades performance only slightly.

\begin{figure}

\includegraphics{285f06.eps}

 \caption{BART's robust RMSE performance as $(\nu, q, k, m)$ is varied
[a/d/c correspond to aggressive/default/conservative prior on $\sigma$,
black/red/green correspond to $k=(1,2,3)$]: \textup{(a)} in-sample RMSE
comparisons and \textup{(b)} out-of-sample RMSE comparisons. (Horizontal
jittering of points has been used to improve readability.)}\label{fig:friedmanmrun1}
\end{figure}



As Figure~\ref{fig:friedmanmrun1} suggests, the BART fitted values
are remarkably stable as the settings are varied. Indeed, in this
example, the correlations between out-of-sample fits turn out to
be very high, almost always greater than 0.99. For example, the
correlation between the fits from the $(\nu,q,k,m)=(3,0.9,2,100)$
setting (a reasonable default choice) and the $(10,0.75,3,100)$
setting (a very conservative choice) is 0.9948. Replicate runs with
different seeds are also stable: The correlation between fits from
two runs with the $(3,0.9,2,200)$ setting is 0.9994. Such stability
enables the use of one long MCMC run. In contrast, some models
such as neural networks require multiple starts to ensure a good
optimum has been found.


\subsubsection{Finding low dimensional structure in high dimensional data}\label{sec:friedman-finding}

Of the $p$ variables $x_1,\ldots,x_p$ from (\ref{eq:fri-xs}), $f$
in (\ref{eq:fri-ys}) is a function of only the first five, $x_1,\ldots,x_5$.
Thus, the problem we have been considering is one of drawing
inference about a five dimensional signal embedded in a $p$
dimensional space. In the previous subsection we saw that when $p
= 10$, the setup used by \citeasnoun{Frie1991}, BART could easily
detect and draw inference about this five dimensional signal with
just $n = 100$ observations. We now consider the same problem with
substantially larger values of $p$ to illustrate the extent to
which BART can find low dimensional structure in high dimensional
data. For this purpose, we repeated the analysis displayed in
Figure~\ref{fig:friedman} with $p = 20$, 100 and 1000 but again
with only $n = 100$ observations. We used BART with the same
default setting of $(\nu, q, k) = (3,0.90,2)$ and $m = 100$ with
one exception: we used the naive estimate $\hat{\sigma}$ (the
sample standard deviation of $Y$) rather than the least squares
estimate to anchor the $q$th prior quantile to allow for data with
$p \ge n$.
Note that because the naive $\\hat{\\sigma}$ is very\nlikely to be larger than the least squares estimate, it would also\nhave been reasonable to use a more aggressive prior setting for\n$(\\nu,q)$.\n\n\n\\begin{figure}\n\n\\includegraphics{285f07.eps}\n\n\\caption{Inference about Friedman's function in $p = 20$, 100, 1000 dimensions.}\\label{fig:friedmanbigp}\n\\end{figure}\n\nFigure~\\ref{fig:friedmanbigp} displays the in-sample and\nout-of-sample BART inferences for the larger values $p = 20$, 100\nand 1000. The in-sample estimates and 90\\% posterior intervals for\n$f(x)$ are remarkably good for every $p$. As would be expected,\nthe out-of-sample plots show that extrapolation outside the data\nbecomes less reliable as $p$ increases. Indeed, the estimates\nare shrunk toward the mean more, especially when $f(x)$ is near an extreme, and the\nposterior intervals widen (as they should). Where there is less\ninformation, it makes sense that BART pulls toward the center\nbecause the prior takes over and the $\\mu$'s are shrunk toward\nthe center of the $y$ values. Nonetheless, when the dimension\n$p$ is so large compared to the sample size $n = 100$, it is\nremarkable that the BART inferences are at all reliable,\nat least in the middle of the data.\n\nIn the third column of Figure~\\ref{fig:friedmanbigp}, it is\ninteresting to note what happens to the MCMC sequence of $\\sigma$\ndraws. In each of these plots, the solid line at $\\sigma = 1$ is\nthe true value and the dashed line at $\\hat\\sigma = 4.87$ is the\nnaive estimate used to anchor the prior. In each case, the\n$\\sigma$ sequence repeatedly crosses $\\sigma = 1$. However, as $p$\ngets larger, it increasingly tends to stray back toward larger\nvalues, a reflection of increasing uncertainty. Last, note that\nthe sequence of $\\sigma$ draws in Figure~\\ref{fig:friedmanbigp}\nis systematically higher than the $\\sigma$ draws in\nFigure~\\ref{fig:friedman}(c). This may be due in part to the fact that\nthe regression $\\hat\\sigma$ rather than the naive $\\hat\\sigma$ was\nused to anchor the prior in Figure~\\ref{fig:friedman}. Indeed, if\nthe naive $\\hat\\sigma$ was instead used for\nFigure~\\ref{fig:friedman}, the $\\sigma$ draws would similarly\nrise.\n\nA further attractive feature of BART is that it appears to avoid\nbeing misled by pure noise. To gauge this, we simulated $n = 100$\nobservations from (\\ref{eq:fri-xs}) with $f \\equiv 0$ for\n$p=10$, 100, 1000 and ran BART with the same settings as above. With\n$p=10$ and $p=100$ all intervals for $f$ at both in-sample and\nout-of-sample $x$ values covered or were close to 0, clearly\nindicating the absence of a relationship. At $p=1000$ the data\nbecomes so uninformative that our prior, which suggests that there\nis some fit, takes over and some in-sample intervals are far from\n0. 
However, the out-of-sample intervals still tend to cover 0 and\nare very large so that BART still indicates no evidence of a\nrelationship between $y$ and $x$.\n\n\\subsubsection{Out-of-sample comparisons with competing methods}\\label{sec:friedman-train-test}\n\nTo gauge how well BART performs on the Friedman setup,\nwe compared its out-of-sample performance with random forests, neural nets and gradient boosting.\nWe dropped the Lasso since it has no hope of uncovering the nonlinear structure without\nsubstantial modification of\nthe approach we used in Section~\\ref{sec:bakeoff}.\nIn the spirit of Section~\\ref{sec:friedman-finding}, we consider the case of estimating $f$\nwith just $n = 100$ observations when $p = 10$, 100 and 1000.\nFor this experiment we based both the BART-default and BART-cv estimates on 3000 MCMC\niterations obtained after 1000 burn-in draws.\n\nFor each value of $p$, we simulated 100 data sets of $n = 100$ observations each.\nAs in Section~\\ref{sec:bakeoff}, we used 5-fold cross-validation to choose tuning parameters.\nBecause $f$ is known here, there was no need to simulate test set data.\nRather, for each method's $\\hat{f}$ based on each data set, we randomly drew 1000 independent $x$ values\nand assessed the fit using\n$\\mathrm{RMSE} = \\sqrt{\\frac{1}{1000} \\sum_{i=1}^{1000} (\\hat{f}(x_i) - f(x_i))^2}$.\nFor each method we thus obtained 100 such RMSE values.\n\nFor $p=10$, we used the same parameter values given in\nTable~\\ref{tab:parameters} for all the methods. For $p = 100$ and\n1000, as in Section \\ref{sec:friedman-finding}, we based the BART\nprior for $\\sigma$ on the sample standard deviation of $y$ rather than\non the least squares estimate. For $p=100$, we changed the settings\nfor neural nets. We considered either 3 or 6 hidden units and decay\nvalues of 0.1, 1, 2, 3, 5, 10 or 20. With the larger value of $p$, neural\nnets use far more parameters so we had to limit the number of units and\nincrease the shrinkage in order to avoid consistently hitting a\nboundary. At $p = 1000$, computational difficulties forced us to drop\nneural nets altogether.\n\n\n\\begin{figure}\n\n\\includegraphics{285f08.eps}\n\n\\caption{Out-of-sample predictive comparisons in the Friedman simulated example for\n(from top to bottom) BART-default, BART-cv, boosting and random forests.\nEach boxplot represents 100 RMSE values.} \\label{fig:friedman-out}\n\\end{figure}\n\nFigure~\\ref{fig:friedman-out} displays boxplots and\nTable~\\ref{tab:friedman-perf} provides the 50\\% and 75\\% quantiles of\nthe 100 RMSE values for each method for $p = 10$, 100 and 1000. (Note\nthat these are not relative RRMSE values as we had used in Figure\n\\ref{fig:boxplot}.) With $p = 10$, the two BART approaches are clearly\nthe best and very similar. However, as $p$ increases, BART-cv degrades\nrelatively little, whereas BART-default gets much worse. Indeed, when\n$p = 1000$, BART-cv is much better than the other methods and the\nperformance of BART-default is relatively poor.\n\nEvidently, the default prior is not a good choice for the Friedman simulation when $p$ is large.\nThis can be seen by noting that\nin the cross-validation selection of tuning parameters for BART-cv,\nthe setting with $m = 50$ trees and the aggressive prior on $\\sigma$ ($\\mathrm{df}=3$, $\\mathrm{quantile}=0.99$) is\nchosen 60\\% of the time when $p = 100$ or 1000. 
Because of a high signal-to-noise ratio here,
the default $\sigma$ prior settings are apparently not aggressive enough
when the sample standard deviation of $y$ is used to anchor the quantile. Furthermore,
since only five of the variables actually matter, $m = 50$ trees is adequate to fit
the complexity of the true $f$, whereas using more trees may inhibit the stochastic
search in this very high dimensional problem.


\iffalse
From a Bayesian perspective, the ``big $p$, small $n$'' scenario cries out for
the use of prior information.
In Bayesian variable selection for linear models, this would be expressed
directly as a prior on the number of variables.
In BART, this prior information is quite naturally needed on $\sigma$ directly
and on the number of variables through the choice of $m$.
\fi


\begin{table}[b]
\tablewidth=290pt
\caption{(50\%, 75\%) quantiles of RMSE values for each method when $p = 10$, 100, 1000}\label{tab:friedman-perf}
\begin{tabular*}{290pt}{@{\extracolsep{\fill}}lccc@{}}
\hline
\textbf{Method} & $\bolds{p = 10}$ & $\bolds{p = 100}$ & $\bolds{p = 1000}$ \\
\hline
Random forests & (1.25, 1.31) & (1.46, 1.52) & (1.62, 1.68) \\
Neural net & (1.01, 1.32) & (1.71, 2.11) & unavailable \\
Boosting & (0.99, 1.07) & (1.03, 1.14) & (1.08, 1.33) \\
BART-cv & (0.90, 0.95) & (0.93, 0.98) & (0.99, 1.06) \\
BART-default & (0.89, 0.94) & (1.02, 1.10) & (1.48, 1.66)\\
\hline
\end{tabular*}
\end{table}




\subsection{Classification: A drug discovery application}\label{sec:drugdisc}

Our last example illustrates an application of the BART probit approach
of Section \ref{sec:classification} to a drug discovery
classification problem. In such problems, the goal is to predict
the ``activity'' of a compound using
predictor variables that characterize the molecular structure of the
compound. By ``activity,'' one typically means the ability to effect
a desired outcome against some biological target, such as inhibiting or
killing a certain virus.

The data we consider describe $p = 266$ molecular characteristics
of $n = 29\mbox{,}374$ compounds, of which 542 were classified as active.
These predictors represent topological aspects of molecular structure.
This data set was collected by the National Cancer Institute, and is
described in \citeasnoun{Feng2003}. Designating the activity of a compound
by a binary variable ($Y=1$ if active and $Y=0$ otherwise), BART probit can be applied here
to obtain posterior mean estimates of $P[Y = 1 | x]$ for each $x$ vector of the 266
molecular predictor values.

To get a feel for the extent to which BART's $P[Y = 1 | x]$ estimates
can be used to identify promising drugs, we randomly split the data
into nonoverlapping train and test sets, each with 14,687 compounds of
which 271 were active. We then applied BART probit to the training set
with the default settings $m = 50$ trees and mean shrinkage $k = 2$
(recall $\nu$ and $q$ have no meaning for the probit model). To gauge
MCMC convergence, we performed four independent repetitions of 250,000
MCMC iterations and obtained essentially the same results each time.

Figure~\ref{intervals} plots the 20 largest $P[Y = 1 | x]$ estimates
for the train and the test sets. Also provided are the 90\% posterior
intervals, which convey uncertainty, and an indication of whether each
compound was in fact active ($y = 1$) or not ($y = 0$).
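A ranking of this kind is again a column-wise computation on the posterior
draws of $p(x)$; the sketch below uses a synthetic draws matrix in place of
actual BART probit output purely so that it runs stand-alone:
\begin{verbatim}
## Pick the 20 compounds with the highest posterior mean P[Y=1|x] and report
## 90% posterior intervals; 'pstar' is a K x n matrix of draws of p(x).
set.seed(1)
K <- 1000; n <- 2000
p.true <- rbeta(n, 0.2, 10)                      # synthetic activity probabilities
pstar  <- matrix(pmin(pmax(rnorm(K * n, rep(p.true, each = K), 0.05), 0), 1), K, n)
p.hat  <- colMeans(pstar)                        # posterior mean estimates
top20  <- order(p.hat, decreasing = TRUE)[1:20]
cbind(estimate = p.hat[top20],
      lower    = apply(pstar[, top20], 2, quantile, 0.05),
      upper    = apply(pstar[, top20], 2, quantile, 0.95))
\end{verbatim}
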
The true positive
rates in both the train and test sets for these 20 largest estimates
are $16/20 = 80$\% (there are 4 inactives in each plot), an impressive
gain over the $271/14\mbox{,}687 = 1.85$\% base rate. It may be of interest to
note that the test set intervals are slightly wider, with an average
width of 0.50 compared to 0.47 for the training intervals.

\begin{figure}

\includegraphics{285f09.eps}

\caption{BART posterior intervals for the 20 compounds with highest
predicted activity, using train~\textup{(a)} and test \textup{(b)} sets.}\label{intervals}
\end{figure}


To gauge the predictive performance of BART probit on this data, we
compared its out-of-sample performance with boosted trees, neural
networks and random forests (using \texttt{gbm}, \texttt{nnet} and
\texttt{randomforest}, as in Section \ref{sec:bakeoff}) and with
support vector machines [using \texttt{svm} in the \texttt{e1071}
package of \citeasnoun{e1071}]. L1-penalized logistic regression was
excluded due to numeric difficulties. For this purpose, we randomly
split the data into training and test sets, each containing 271
randomly selected active compounds. The remaining inactive compounds
were then randomly allocated to create a training set of 1000 compounds
and a test set of 28,374 observations. The training set was
deliberately made smaller so that a comparative experiment
with 20 replications would be feasible.

For this experiment we considered both BART-default and BART-cv based
on 10,000 MCMC iterations. For BART-default, we used the same default
settings as above, namely, $m=200$ trees and $k=2$. For BART-cv, we
used 5-fold cross-validation to choose from among $k=0.25$, 0.5, 1, 2,
3 and $m = 100$, 200, 400 or 800. For all the competitors, we also
used 5-fold cross-validation to select tuning parameters as in Section
\ref{sec:bakeoff}. However, the large number of predictors led to some
different ranges of tuning parameters. Neural networks utilized a
skip layer and 0, 1 or 2 hidden units, with possible decay values of
0.0001, 0.1, 0.5, 1, 2, 5, 10, 20 and 50. Even with 2 hidden units,
the neural network model has over 800 weights. In random forests, we
considered 2\% variable sampling in addition to 10\%, 25\%, 50\% and
100\%. For support vector machines, two parameters, $C$, the cost of a
constraint violation, and $\gamma$ [\citet{CC01a}], were chosen by
cross-validation, with possible values $C=2^a, a=-6, -5, \ldots, 0$ and
$\gamma = 2^b, b=-7, -6, -5, -4$.

In each of 20 replicates, a different train/test split was generated.
Test set performance for this classification problem was measured by area under the
Receiver Operating Characteristic (ROC) curve, via the ROCR package of
\citeasnoun{ROCR}. To generate a ROC curve, each method must produce a rank
ordering of cases by predicted activity. All models considered generate a
predicted probability of activity, though other rank orderings could be
used. Larger AUC values indicate superior performance, with an AUC of 0.50
corresponding to the expected performance of a method that randomly
orders observations by their predictions. A classifier's AUC value
is the probability that it will rank a randomly chosen $y = 1$ example
higher than a randomly chosen $y = 0$ example.

\begin{table}[b]
\caption{Classifier performance for the drug discovery problem, measured as
AUC, the area under a ROC curve.
Results are averages over 20 replicates.\nThe corresponding standard error is 0.0040, based~on an ANOVA of\nAUC scores with a block effect for replicates}\\label{tab:AUC}\n\\begin{tabular*}{6cm}{@{\\extracolsep{\\fill}}lc@{}}\n\\hline\n\\textbf{Method} & \\textbf{AUC} \\\\\n\\hline\nRandom forests & 0.7680\\\\\nBoosting & 0.7543\\\\\nBART-cv & 0.7483\\\\\nSupport vector & 0.7417\\\\\nBART & 0.7245\\\\\nNeural network & 0.7205\\\\\n\\hline\n\\end{tabular*}\n\\end{table}\n\nThe area under curve (AUC) values in Table~\\ref{tab:AUC} indicate that for\nthis data set, BART is very competitive with all the methods. Here\nrandom forests provides the best performance, followed closely by\nboosting, BART-cv and then support vector machines. The default version of BART\nand neural networks score slightly lower. Although the differences in AUC between\nthese three groups are statistically significant (based on a 1-way ANOVA with a\nblock effect for each replicate), the practical differences are not appreciable.\nWe remark again that by avoiding the cross-validated selection of tuning\nparameters, BART-default is much faster and easier to implement than the other methods here.\n\n\n\nFinally, we turn to the issue of variable selection and demonstrate\nthat by decreasing the number of trees $m$, BART probit can be used,\njust as BART in Section~\\ref{sec:friedman-simple}, to identify those predictors which have\nthe most influence on the \\mbox{response}. For this purpose, we modify the\ndata setup as follows: instead of holding out a test set, all 542 active\ncompounds and a subsample of 542 inactives were used to build a model.\nFour independent chains, each with 1,000,000 iterations, were used. The\nlarge number of iterations was used to ensure stability in the\n``percent usage'' variable selection index (\\ref{eq:xfreq}).\nBART probit with $k=2$ and\nwith $m = 5, 10, 20$ trees were considered.\n\nAs Figure~\\ref{fig:usage} shows,\nthe same three variables are selected as most\nimportant for all three choices of $m$.\nConsidering that $1\/266 \\approx 0.004$, percent usages of 0.050 to\n0.100 are quite a bit larger than one would expect if all variables were\nequally important. As expected, variable usage is most concentrated in the\ncase of a small ensemble (i.e., $m=5$ trees).\n\n\\begin{figure}\n\\begin{tabular}{@{}cc@{}}\n\n\\includegraphics{285f10a.eps}\n&\\includegraphics{285f10b.eps}\\\\\n(a)&(b)\n\\end{tabular}\n \\caption{Variable importance measure, drug discovery example. Values are\ngiven for 5, 10 and 20 trees in the ensemble, for all 266 variables \\textup{(a)} and\nthe 25 variables with the highest mean usage~\\textup{(b)}. Vertical lines in \\textup{(a)}\nindicate variables whose percent usage exceeds the 95th percentile. The\n95th percentile is indicated by a horizontal line.}\\label{fig:usage}\n\\end{figure}\n\n\\section{Execution time considerations}\\label{sec:executiontime}\n\nIn this section we study BART's execution time on various simulations of the Friedman data\n in order to shed light on how it depends on the sample size $n$ and\nnumber of predictors $p$, and on how it compares to the execution time\nof random forests, gradient boosting and neural nets.\n\nTo study the dependence of execution time on sample size $n$, we fixed\n$p = 50$ and varied $n$ from 100 to 10,000. 
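\n\nThe data-generation and timing loop for this experiment can be sketched in a few lines of Python. The snippet below is only an illustration and not the code used here: it relies on scikit-learn's implementation of the Friedman test function and times a random forest fit as a stand-in, since timing BART itself would require a separate BART implementation.\n\\begin{verbatim}\nimport time\nfrom sklearn.datasets import make_friedman1\nfrom sklearn.ensemble import RandomForestRegressor\n\np = 50\nfor n in [100, 500, 1000, 2500, 5000, 7500, 10000]:\n    # Friedman data: only the first 5 of the p predictors matter\n    X, y = make_friedman1(n_samples=n, n_features=p,\n                          noise=1.0, random_state=0)\n    t0 = time.perf_counter()\n    RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)\n    print(n, round(time.perf_counter() - t0, 2), 'seconds')\n\\end{verbatim}\n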
For each $n$, we ran both\na short version (no burn-in iterations, 2 sampling iterations, $m =\n200$ trees) and the default version (100 burn-in iterations, 1000\nsampling iterations, $m = 200$ trees) of BART 10 times. The execution\ntimes of these 10 replicates for each $n$ are displayed in\nFigures~\\ref{fig:times}(a) and (b). (We used the R \\texttt{system.time}\ncommand to time each run). Replicate variation is negligible. Because\nBART's main computational task is the calculation of residuals in\n(\\ref{newdraw}) and the evaluation of log-likelihood in the\nMetropolis--Hastings proposal, both of which involve iterating over\neither all $n$ observations or all observations contained in a node, we\nanticipated that execution time would increase linearly with $n$. This\nlinearity was indeed borne out by the short version of BART in\nFigure~\\ref{fig:times}(a).\n\nHowever, for the longer default version of BART, this dependence\nbecomes quadratic as is evidenced in Figure~\\ref{fig:times}(b).\nApparently, this nonlinear dependence is due to the adaptive nature of\nBART. For larger $n$, BART iterations tend toward the use of larger\ntrees to exploit finer structure, and these larger trees require more\ntree-based operations to generate the predictions required for residual\nand likelihood evaluation. Indeed, in a separate experiment using $m =\n50$ trees, we found that for $n = 100$, BART trees had up to 4 terminal\nnodes with an average size of 2.52 terminal nodes, whereas for $n =\n10\\mbox{,}000$, BART trees had as many as 10 terminal nodes with an average\nsize of 3.34. In contrast, the short version BART effectively keeps\ntree sizes small by limiting iterations, so that its execution time\nscales linearly with $n$.\n\n\n\\begin{figure}\n\n\\includegraphics{285f11.eps}\n\n\\caption{\\textup{(a)} For $p = 50$, execution times of the short\nversion of BART for $n= 100$, 500, 1000, 2500, 5000, 7500, 10,000, with a linear\nregression overlaid. \\textup{(b)} For $p = 50$, execution times\nof the default version of BART for $n= 100$, 500, 1000, 2500, 5000, 7500, 10,000, with a quadratic\nregression overlaid.\n\\textup{(c)} Execution times for the default version of BART\nwhen $p = 10$, 25, 50, 75, 100 for each $n= 100$, 500, 1000, 2500, 5000, 7500, 10,000.}\\label{fig:times}\n\\end{figure}\n\nTo study the dependence of execution time on the number of predictors\n$p$, we replicated the above experiment for the default version of BART\nvarying $p$ from 10 to 100 for each $n$. The execution times,\ndisplayed in Figure~\\ref{fig:times}(c), reveal that in all cases,\nBART's execution time is close to independent of $p$, especially as\ncompared to its dependence on $n$. Note, however, that, in practice,\nthe time to run BART may depend on the complexity of the underlying\nsignal which may require a longer burn-in period and a longer set of\nruns to fully explore the posterior. Larger values of $p$ may lead to\nsuch complexity.\n\nFinally, we compared BART's execution time to that of random forests,\ngradient boosting and neural nets, where execution of each method\nentails generating predictions for the training set. As in our first\nexperiment above, we fixed $p=50$ and varied $n$ from 100 to 10,000.\nTwo versions of BART were run: the default version considered above and\na minimal version (20 burn-in iterations, 10 sampling iterations, $m =\n50$ trees). 
Even with such a small number of iterations, the fits\nprovided by this minimal version were virtually indistinguishable from\nthe default version for the Friedman data with $n=100$ and $p=10$. For\nthe other models, tuning parameters were held fixed at the ``typical''\nvalues: \\texttt{mtry = 10} and \\texttt{ntree = 500} for\n\\texttt{RandomForest}; \\texttt{shrinkage = 0.1},\n\\texttt{interaction.depth = 3} and \\texttt{n.tree = 100} for\n\\texttt{gbm}; \\texttt{size = 6} and \\texttt{decay = 1.0} for\n\\texttt{nnet}.\n\n\\begin{figure}[b]\n\n\\includegraphics{285f12.eps}\n\n\\caption{Execution time comparisons of various methods, with $\\log_{10}$ seconds plotted versus sample\nsize $n= 100$, 500, 1000, 2500, 5000, 7500, 10,000.}\n\\label{fig:compare}\n\\end{figure}\n\nExecution times as a function of $n$ for each of the methods are\ndisplayed in Figure~\\ref{fig:compare}. The execution time of BART is\nseen to be comparable with that of the other algorithms, and all the\nalgorithms scale in a similar fashion. The minimal version of BART is\nfaster than all the other algorithms, while the default version is the\nslowest. Of course, execution times under actual use should take into\naccount the need to select tuning parameters, typically by\ncross-validation. By being competitive while avoiding this need, as\nwas illustrated in Section~\\ref{sec:bakeoff}, the default version of\nBART compares most favorably with these other methods.\n\n\n\n\n\n\\section{Extensions and related work}\\label{sec:related}\n\nAlthough we have framed BART as a stand alone procedure, it can\nalso be incorporated into larger statistical models, for example,\nby adding other components such as linear terms or linear random\neffects. For instance,\none might consider a model of the form\n\\begin{equation}\\label{h2}\nY = h_1(x) + h_2(z) + \\epsilon, \\qquad \\epsilon\\sim N(0,\\sigma^2),\n\\end{equation}\nwhere $h_1(x)$ is a sum of trees as in (\\ref{sstmodel1}) and $h_2(z)$ is a parametric form involving~$z$, a second vector of predictors.\nOne can also extend the sum-of-trees model to a\nmultivariate framework such as\n\\begin{equation}\\label{multivarBART}\nY_i = h_i(x_i) + \\epsilon_i,\\qquad\n(\\epsilon_1,\\epsilon_2,\\ldots,\\epsilon_p) \\sim N(0,\\Sigma),\n\\end{equation}\nwhere each $h_i$ is a sum of trees and $\\Sigma$ is a $p$ dimensional\ncovariance matrix. If all the $x_i$ are the same, we have a\ngeneralization of multivariate regression. If the $x_i$ are\ndifferent, we have a generalization of Zellner's SUR model\n[\\citet{Zell1962}]. The modularity of the BART MCMC algorithm in\nSection \\ref{sec:mcmc} easily allows for such incorporations and\nextensions. Implementation of linear terms or random effects in a\nBART model would only require a simple additional MCMC step to draw\nthe associated parameters. The multivariate version of BART\n(\\ref{multivarBART}) is easily fit by drawing each $h^*_i$ given\n$\\{h^*_j\\}_{j \\ne i}$ and $\\Sigma$, and then drawing $\\Sigma$ given\nall the $h^*_i$.\n\n\nThe framework for variable selection developed in Section\n\\ref{sec:postcalc} and illustrated in Section \\ref{sec:examples}\nappears quite promising for model-free identification of important\nfeatures. Modification of the prior hyperparameters may further\nenhance this approach. For instance, in the tree prior\n(\\ref{treeprior}), the default $\\alpha=0.95$ puts only 5\\% prior\nprobability on a single node tree. This may encourage splits even in\nsituations where predictive gains are modest. 
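\n\nAs a small numerical illustration, assuming the usual splitting-probability form $\\alpha(1+d)^{-\\beta}$ for a node at depth $d$ (the value $\\beta=2$ below is an assumed default, not taken from the text), the following Python lines show how the prior mass on very small trees grows as $\\alpha$ is decreased:\n\\begin{verbatim}\n# Prior probability that a node at depth d splits, assuming the\n# form alpha * (1 + d) ** (-beta); beta = 2 is an assumed value.\ndef p_split(d, alpha, beta=2.0):\n    return alpha * (1.0 + d) ** (-beta)\n\nfor alpha in [0.95, 0.50, 0.25]:\n    # P(tree is a single root node) = 1 - P(root splits) = 1 - alpha\n    print(alpha, 1.0 - p_split(0, alpha), p_split(1, alpha))\n\\end{verbatim}\n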
Putting more mass on\nsmall trees (via smaller values of $\\alpha$) might lead to a posterior\nin which ``every split counts,'' offsetting the tendency of BART to\ninclude spurious splits. Although such spurious splits do not affect\npredictive accuracy, they do tend to inflate variable usage\nfrequencies, thereby making it more difficult to distinguish the\nimportant variables. Prior specifications for variable selection via\nBART are part of our ongoing research.\n\n\nAn early version of our work on BART [\\citet{ChipGeorMcCu2007}] was\npublished in the proceedings of the conference Advances in Neural\nInformation Processing Systems 2006. Based on this and other\npreliminary technical reports of ours, a variety of extensions and\napplications of BART have begun to appear. \\citeasnoun{ZhaShiMul2007}\nproposed SBART, an extension of BART obtained by adding a spatial\ncomponent along the lines of (\\ref{h2}). Applied to the problem of\nmerging data sets, they found that SBART improved over the conventional\ncensus-based method. For the predictive modeling problem of TF-DNA\nbinding in genetics,\n\\citeasnoun{ZhoLiu2008} considered a variety of learning methods,\nincluding stepwise linear regression, MARS, neural networks, support\nvector machines, boosting and BART. Concluding that ``the BART method\nperformed best in all cases,'' they noted BART's ``high predictive\npower, its explicit quantification of uncertainty and its\ninterpretability.'' By keeping track of the per sample inclusion\nrates, they successfully used BART to identify some unusual predictors.\n\\citeasnoun{ZhaHae2007} independently developed a probit extension\nof BART, which they call BACT, and applied it to credit risk data to\npredict the insolvency of firms. They found BACT to outperform the\nlogit model, CART and support vector machines.\n\\citeasnoun{AbuNapWan2008} also independently discovered the probit\nextension of BART, which they call CBART, and applied it for the\nautomatic detection of phishing emails. They found CBART to outperform\nlogistic regression, random forests, support vector machines, CART,\nneural networks and the original BART. \\citeasnoun{AbreMcCu2006}\napplied BART to hockey game penalty data and found evidence of referee\nbias in officiating. Without exception, these papers provide further\nevidence for the remarkable potential of BART.\n\n\\section{Discussion}\\label{sec:disc}\n\nThe essential components of BART are the sum-of-trees model, the\nregularization prior and the backfitting MCMC algorithm. As opposed to the\nBayesian approaches of CGM98 and \\citeasnoun{DeniMallSmit1998},\nwhere a single tree is used to explain all the variation in $y$,\neach of the trees in BART accounts for only part of the overall fit. This is accomplished with\na regularization prior that shrinks the tree effects toward a simpler fit.\nTo facilitate the implementation of BART, the prior is formulated in terms of\nrapidly computable forms that are controlled by interpretable hyperparameters,\nand which allow for a highly effective default version for\nimmediate ``off-the-shelf'' use. 
Posterior calculation is carried out by a\ntailored backfitting MCMC algorithm that appears to converge quickly, effectively\nobtaining a (dependent) sample from the posterior distribution over the space of sum-of-trees models.\nA variety of inferential quantities of interest can be obtained directly from this sample.\n\nThe application of BART to a wide variety of data sets and a simulation\nexperiment (Section \\ref{sec:examples}) served to demonstrate many of\nits appealing features. In terms of out-of sample predictive RMSE\nperformance, BART compared favorably with boosting, the lasso, MARS,\nneural nets and random forests. In particular, the computationally\ninexpensive and easy to use default version of BART performed extremely\nwell. In the simulation experiments, BART obtained reliable posterior\nmean and interval estimates of the true regression function as well as\nthe marginal predictor effects. BART's performance was seen to be\nremarkably robust to hyperparameter specification, and remained\neffective when the regression function was buried in ever higher\ndimensional spaces. BART was also seen to be a new effective tool for\nmodel-free variable selection. Finally, a straightforward probit\nextension of BART for classification of binary $Y$ was seen to be an\neffective, competitive tool for discovering promising drugs on the\nbasis of their molecular structure.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section*{I}\n\n\\bigskip \\noindent\n7\/28\/2007\n\n\\bigskip \\noindent\nDear Qubitzers,\n\n\\bigskip \\noindent\nGR=QM? Well why not? Some of us already accept ER=EPR \\cite{Maldacena:2013xja}, so why not follow it to its logical conclusion?\n\n\\bigskip \\noindent\n\nIt is said that general relativity and quantum mechanics are separate subjects that don't fit together comfortably. There is a tension, even a contradiction between them---or so one often hears.\n I take exception to this view. I think that exactly the opposite is true. It may be too strong to say that gravity and quantum mechanics are exactly the same thing, but those of us who are paying attention, may already sense that the two are inseparable, and that neither makes sense without the other.\n\nTwo things make me think so. The first is ER=EPR, the equivalence between quantum entanglement and spatial connectivity. In its strongest form ER=EPR holds not only for black holes but for any entangled systems---even empty space%\n\\footnote{Empty space can be divided by Rindler horizons, so that the two sides are entangled \\cite{VanRaamsdonk:2010pw}\\cite{Ryu:2006bv}.}.\n If the entanglement between two spatial regions is somehow broken, the regions become disconnected \\cite{VanRaamsdonk:2010pw}; and conversely, if regions are entangled, they must be connected \\cite{Maldacena:2013xja}. The most basic property of space---its connectivity---is due to the most quantum property of quantum mechanics: entanglement. Personally I think ER=EPR is more than half way to GR=QM.\n\nThe second has to do with the dynamics of space, in particular its tendency to expand. One sees this in cosmology, but also behind the horizons of black holes. The expansion is thought to be connected with the tendency of quantum states to become increasingly complex. Adam Brown and I called this tendency \\it the second law of quantum complexity\\rm \\ \\cite{Brown:2017jil}. 
If one pushes these ideas to their logical limits, quantum entanglement of any kind implies the existence of hidden Einstein-Rosen bridges which have a strong tendency to grow, even in situations which one naively would think have nothing to do with gravity.\n\n To summarize this viewpoint in a short slogan:\n\n\\bigskip \\noindent\n\\it Where there is quantum mechanics, there is also gravity.\\rm\n\n\\bigskip \\noindent\nI suggest that this is true in a very strong sense; even for systems that are deep into the non-relativistic range of parameters---the range in which the Newton constant is negligibly small, and the speed of light is much larger than any laboratory velocity. \nThis may sound like a flight of fantasy, but I believe it is an inevitable consequence of things we already accept. \n\n\n\\bigskip \\noindent\n\nLet's suppose that a laboratory exists, containing a large spherical shell made of some more or less ordinary material, that is well described by non-relativistic quantum mechanics. That's all there is in the lab except for a few simple devices like strain gauges, squids to measure magnetic field, and a light hammer to tap on the shell. One other thing: Alice and Bob to do some experiments. Obviously---you say---quantum gravity is completely irrelevant in this lab: a perfect counterexample to my claim that ``where there is quantum mechanics there is also gravity.\" Equally obviously, I don't agree.\n\n\n The shell has been engineered to be at a quantum critical point, where the excitations are described by a conformal field theory having a holographic bulk%\n\\footnote{By bulk I mean the AdS-like geometry dual to the CFT. The space in the laboratory will just be called \\it the lab\\rm. The bulk should not be confused with the ordinary interior of the shell which is part of the lab. \n\\ \n\nBecause a real shell has a finite number of atoms the CFT is a cutoff version, which means that the bulk geometry terminates on some physical boundary finitely far from the center of the bulk.}\n dual. \nAssume that the shell has a signal velocity much less than the laboratory speed of light, and that the gravitational constant in the lab is so small that gravitational effects on the shell are negligible. Experiments on the shell would be limited only by the laws of quantum mechanics, and not by the speed of light or by gravitational back reaction. \n\nYou can probably see where this is going, but you may be tempted to respond:\n ``You are cleverly simulating a system that has a bulk dual, but it's just a simulation, not real quantum gravity.\" Again, I disagree.\n\nI argue that the bulk with its gravitons, black holes, and bulk observers is just as real as the laboratory itself. It can be probed, entered, measured, and the results communicated to observers in the lab---even in the limit that the laboratory laws of physics become arbitrarily close to non-relativistic quantum mechanics.\n\n\\bigskip \\noindent\n\n\nFrom the holographic AdS\/CFT correspondence we may assume that observers, perhaps with human-like cognitive abilities, are possible in the bulk. Can a laboratory observer confirm the presence of such a bulk observer by doing experiments on the shell? Current theory seems to say yes. By tapping on the shell the laboratory observer can perturb the CFT, exciting low dimension operators. In the gravity dual these perturbations source bulk gravitational waves and other light fields. 
An observer floating at the center of AdS would detect these signals with a bulk LIGO detector. In that way the laboratory observer can communicate with the bulk observer.\n\nSimilarly the bulk observer can send messages to the lab. Gravitational signals emitted in the bulk will reach the boundary, and be detected as disturbances of the shell by means of strain gauges, squids, or other devices. The bulk and laboratory observers can communicate with one another and even carry on a conversation. Responding to questions from the lab, the bulk observer may report the existence of massless spin-2 particles. In that way the lab observer learns that quantum gravity exists in the bulk.\n\nIt is possible that the CFT describing the shell does not have a weakly coupled standard gravitational dual. If the central charge (number of independent CFT field degrees of freedom) is not large, the entire radius of the AdS universe would not be large in bulk Planck units. Or if the CFT coupling were too weak other microscopic bulk scales (such as the size of strings) would be large; a single string could fill the bulk space. This would not mean that there is no bulk; it would mean the bulk laws are in some sense messy. Theories with weakly coupled gravity duals are special limiting cases, but bulk duals for the other cases should not be ruled out. Pushing this to its logical conclusion: any quantum system has a gravitational dual; even a pair of entangled electron spins has a tiny quantum wormhole through which a single qubit can be teleported.\n\n\n\nReturning to shells that have weakly coupled gravitational duals, what if the bulk of such a shell has no observer? In principle the lab observer can create one by applying appropriate perturbations to the shell. In fact there is nothing to prevent her from merging her own quantum state with the shell and entering into the bulk. \n\n\n\\bigskip \\noindent\n\nThere is an apparent contradiction between this view and bulk locality%\n\\footnote{I thank Ying Zhao for pointing this out.}.\n\nFigure \\ref{quick} illustrates the point.\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[scale=.3]{quick}\n\\caption{}\n\\label{quick}\n\\end{center}\n\\end{figure}\n\\bigskip \\noindent\nIn the bulk a signal originates at point $\\bf a $ and travels outward toward the AdS boundary. Alice plucks the signal out of the bulk and quickly runs to the opposite side of the shell where she re-inserts the signal. The signal then propagates from the boundary to point $\\bf b$, the whole trip taking less time than for light, traveling through the bulk, to go from $\\bf a$ to $\\bf b$.\n\nAll this is true, but it is not a contradiction with bulk locality. No one in the bulk sees a signal move past them with a local velocity faster than light. What is true is that the boundary conditions (at the AdS boundary) are not of the usual reflecting type. For example non-local boundary conditions in AdS\/CFT can be induced by double-trace bilocal operators added to the CFT action. The bulk observers may be astonished by how fast signals can propagate across the boundary, but they do not see a bulk violation of the speed of light. \n\n\\bigskip \\noindent\n\n\n\n\n\n \n\nWhat about bulk black holes? Given some energy the bulk observer can create a small black hole and report its properties to the lab. Alternately the lab observer can heat the shell and create a large black hole. 
But the most interesting questions about black holes might remain unanswered; namely what happens beyond the horizon? Since no signal can get to the boundary from behind the horizon, there would seem to be no way for the lab observer to confirm the existence of a black hole interior. \n\n\nI believe this conclusion is too pessimistic.\nLet's suppose that the lab contained two well-separated identical shells, entangled in the thermofield-double state. The bulk dual, sometimes called an eternal black hole \\cite{Maldacena:2001kr}, is really two entangled black holes. According to our usual understanding of gauge-gravity duality the bulk description contains an Einstein-Rosen bridge (ERB) connecting the two entangled black holes.\n\nOn the other hand, since the laboratory is part of a world governed by quantum mechanics and by gravity, we may apply the principles of quantum gravity to the lab. In particular we may assume ER=EPR, \\bf \\underline{not} \\rm for the bulk theory, but for the laboratory itself.\nIt would imply that the two entangled shells are connected by an ERB of some sort. Are these two the same---the ERB of the bulk dual and the ERB of the entangled laboratory shells? \n\nI believe they are.\n\n\n\nAssume that the two entangled shells are controlled by laboratory technicians Alice and Bob. In principle Alice should be able to merge a laboratory system---call it Tom-the-teleportee---with her shell, so that effectively Tom is injected into the bulk. In the bulk description Tom may fall into Alice's black hole and crash into the singularity. Bob may also inject a system---Tina---into his shell, and if conditions are right, Tom and Tina can meet in the wormhole before they are destroyed at the singularity. But because they can't send a signal to the boundary from behind the horizon, Tom and Tina can't tell Alice and Bob about their experiences. \nOne might come to the conclusion that the world behind the horizon is a figment of the imagination, with no operational significance or possibility of being falsified.\n\n\nBut is this conclusion warranted? I don't think so. To see why, recall that an entangled system can be used to mediate quantum teleportation---in this case the teleportation of Tom from Alice to Bob. \nCarrying this out would require Alice to transfer a certain number of classical bits from her shell to Bob's. The transfer of classical information takes place through the laboratory space%\n\\footnote{Since the speed of signal propagation in the shell may be much smaller than the speed of light, from the bulk point of view the classical exchange takes essentially no time. },\nnot through the wormhole, but the classical information is completely random and uncorrelated with Tom's quantum state. In fact by the monogamy of entanglement, there can be no trace of Tom's quantum state in the lab. \n\nThen how did Tom's qubits get from Alice to Bob if they did not pass through the lab? The usual quantum theorist's answer is that quantum information is non-locally distributed in the entangled state; it doesn't make sense to ask where it is localized. But ER=EPR suggests another answer \\cite{Susskind:2014yaa}\\cite{Susskind:2016jjb}: Quantum teleportation is \\it teleportation through the wormhole. 
\\rm\nIn other words teleported information is transferred between entangled systems by passing through the Einstein-Rosen bridge connecting them.\n\nThis conclusion has recently gained significant credibility due to the work of Gao, Jafferis, and Wall \\cite{Gao:2016bin}, and the subsequent followup by Maldacena, Stanford, and Yang \\cite{Maldacena:2017axo}. What these authors show is that the same conditions that allow quantum teleportation, have a dynamical back-reaction on the wormhole that renders it traversable. \n\n\n What makes the protocol for teleportation through the wormhole special---what makes it, \\it through the wormhole\\rm---is that once Tom has merged with Alice's shell, enough time is allowed to elapse so that his information becomes scrambled with the horizon degrees of freedom. This takes place before the rest of the protocol is executed \\cite{Susskind:2017nto}. To put it another way, the first step of the protocol is to allow Tom \n to fall through the horizon of Alice's black hole. \n\nIt is especially interesting that if Tom encounters objects during his passage from Alice's side to Bob's side, his experiences may be recorded in his memory%\n\\footnote{This is especially clear in \\cite{Gao:2016bin}\n and \\cite{Maldacena:2017axo} }. This would allow him to report the conditions in the wormhole to laboratory observers. \n \nQuantum teleportation through the wormhole is a real game-changer; it provides a direct way to observe the interior geometry of a wormhole. One can no longer claim that life behind the horizon is unphysical, meaningless, unobservable, or scientifically unfalsifiable.\n\n\\bigskip \\noindent\n\nCan laboratory experiments of this type be carried out? I don't see why not. Instead of shells supporting conformal field theories, a more practical alternative might be quantum computers simulating the CFTs. Entangling two identical quantum computers into a thermofield double state should be feasible. \nTo teleport a genuine sentient Tom through the wormhole would require an enormous number of qubits, but with a few hundred logical qubits one can teleport a register, composed of say ten qubits---enough for a primitive memory. By varying the initial entangled state one can vary the environment in the wormhole. In turn these variations will couple to the register and be recorded, later to be communicated to the lab\n\\footnote{Note that quantum teleportation of a single qubit through a channel of a single bell pair is already an experimental reality. ER=EPR allows it to be interpreted as teleportation through a Planckian wormhole.}.\n\nThe operations needed for this kind of teleportation are fairly complex (in the computational sense) \\cite{Susskind:2017nto} and are therefore difficult, but I don't see anything forbidding them once quantum computers become available.\n\n\\bigskip \\noindent\n\nOne thing that I want to emphasize is that there is no need for \nPlanckian energy in the laboratory in order to exhibit these quantum gravity effects. The Planck scale in the bulk is not related to the Planck scale in the lab, but rather to central charge of the CFT. \n\n\\bigskip \\noindent\n\nLet me dispel one concern---a counter argument to the claim that black holes dual to thermally excited non-relativistic shells are real. The argument goes as follows:\nAccording to Lloyd \\cite{Lloyd} and to my own papers with Brown, Roberts, Swingle, and Zhao \n\\cite{Brown:2015bva}\\cite{Brown:2015lvg}, black holes are the fastest possible computers. 
Because the speed of propagation in non-relativistic shells is much smaller than the laboratory speed of light, the \\it simulated \\rm black holes are nowhere near saturating Lloyd's bound on computational speed. The laboratory observer can easily tell the difference between the slow simulated black holes and real black holes.\n\nThe error in this argument is the classic logical fallacy: \\it All politicians are liars. Therefore Pinocchio is a politician. \\rm The correct statement about black holes is that all things that saturate the Lloyd bound are black holes---not that all black holes saturate the Lloyd bound. In \\cite{Brown:2015lvg} an example of a black hole encased in a static ``Dyson sphere\" showed that black holes do not generally saturate the Lloyd bound.\n\n\n\\bigskip \\noindent\n\n\\bigskip \\noindent\n\nIf there is anything new here\n\\footnote{The ideas in this letter are not particularly original. On several occasions I've discussed similar things with Juan Maldacena, Joe Polchinski, Don Marolf, and Aron Wall, among others. Related concepts appear in \\cite{Maldacena:2001kr}\\cite{Heemskerk:2012mn}\\cite{Marolf:2012xe}\\cite{Maldacena:2013xja}\\cite{Gao:2016bin}\\cite{Maldacena:2017axo} and probably a great number of other places.}\n it is the idea that information may pass from a laboratory environment to the degrees of freedom of a physical realization of a CFT, thereby bridging the gap between the lab and the bulk. One can enter the bulk, observe it, and go back to the lab. From the laboratory point of view this gives operational meaning to the bulk and to whatever it contains. Figure \\ref{moon} is a cartoonish attempt to illustrate how an observer can be merged with a quantum-circuit computer. The first step would be to act on the observer with a unitary operation that transfers his quantum state to a set of qubits. The qubits can then be combined with the circuit. An appropriate protocol, similar to quantum teleportation, can move the observer back to the lab at a later time \\cite{Maldacena:2017axo}.\n\n\n\n\n\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[scale=.2]{moon}\n\\caption{}\n\\label{moon}\n\\end{center}\n\\end{figure}\n\n\\bigskip \\noindent\n\n\\bigskip \\noindent\n\nNow let's turn to quantum complexity and how it governs the growth of space \\cite{Susskind:2014rva}\\cite{Stanford:2014jda}\\cite{Susskind:2015toa}\\cite{Brown:2015bva}\\cite{Brown:2015lvg}. Einstein-Rosen bridges are not static objects. They grow with time in a manner that loosely resembles the cosmological expansion of space. This growth of geometry has a quantum dual description, namely the statistical growth of quantum computational complexity. \n\nWhy do ERBs grow? From the viewpoint of classical general relativity, the Einstein field equations in combination with positive energy conditions predict such growth. Classically the growth continues forever, but quantum mechanically the finite number of quantum states limits the time of growth to be exponential in the entropy.\n\nOn the other side of the duality the growth of complexity reflects a general quantum-statistical law that parallels the second law of thermodynamics. The second law of thermodynamics is about the increase of entropy as systems tend toward thermal equilibrium. The second law of complexity \\cite{Brown:2017jil} is about the growth of quantum complexity, which eventually leads to complexity equilibrium after \nan exponential time. \n\nThe second laws, both of entropy and complexity, are very general. 
They apply to black holes, shells of matter, and quantum circuits. The match between the growth of complexity for generic quantum systems, and the expansion of Einstein-Rosen bridges, is remarkably detailed \\cite{Stanford:2014jda}\\cite{Roberts:2014isa}. It extends to all kinds of out-of-equilibrium situations including the gravitational back reaction to violent shock waves. The pattern is so general that laboratory observers, monitoring the growth of complexity, will see behaviors that are completely consistent with the relativistic evolution of Einstein-Rosen bridges.\n\nBut complexity is a notoriously difficult thing to measure, even if one could do repeated experiments and collect statistics. There are more direct things that would indicate a growth of the wormhole. One more or less standard way would be to measure correlation functions between fields at opposite ends of the wormhole. The magnitude of the correlation is a direct measure of the distance between the ends. \n\nIf the two systems (shells, quantum computers, black holes) were unentangled, the relevant distance governing correlations would be the ordinary exterior distance, i.e., the distance outside the horizon separating two black holes, or the laboratory distance separating the two shells or computers. In general any correlation will fall to zero with increased separation. On the other hand if the systems are entangled the correlation will not decrease with laboratory separation. No matter how far the shells are separated from one another, the correlations between them will behave as if there is a short connection whose length is independent of the exterior distance. In other words the correlation functions behave as if there is a wormhole connecting the systems.\n\n\nBecause wormholes grow with time, the correlation functions should decrease in a characteristic way%\n\\footnote{Ordinary local field correlations decrease until noisy fluctuations dominate the falling signal. This happens after a polynomial time \\cite{Maldacena:2001kr}. There are various ways to filter out the noise and construct correlators that continue to decrease for an exponential time.}.\nIf the dynamics of the laboratory systems is generic the evolution of correlations between Alice's and Bob's systems will also be time dependent, despite the fact that both systems are in thermal equilibrium. One can determine the correlation\nfunctions by making measurements on both systems and collecting statistics. It would not be hard to show that the correlations decrease with time the same way as they would for ``real\" black holes with growing ERBs. \n\nThe lab observers could interpret this in two ways. They could say that the correlations decrease because of time-dependence of the phases in the wave function of the combined system, or they could be bold and say the wormhole that mediates the correlations grows with time. The results would be the same. \n\n One might ask if there is a more detailed relation between gravitational dynamics and the properties of the evolution of complexity---a relation which goes beyond the overall expansion of space? The answer is probably yes, and to support that claim I would point to the close relation between complexity of quantum states and the Einstein-Hilbert action of certain regions of bulk space-time \\cite{Brown:2015bva}\\cite{Brown:2015lvg}. 
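\n\nReturning to the two-sided correlators discussed above, a minimal toy sketch (not taken from the cited works) of the kind of quantity the laboratory observers would track is the following: it prepares a thermofield-double-like state of two copies of a random four-level system and follows the cross-correlation between the two sides as one side evolves in time. The Hamiltonian, the probe operator, and the simplified (unconjugated) doubling convention are all illustrative assumptions.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nd, beta = 4, 1.0\nH = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))\nH = 0.5 * (H + H.conj().T)            # toy 'shell' Hamiltonian\nE, V = np.linalg.eigh(H)\n\nw = np.exp(-0.5 * beta * E)\nw = w * (np.linalg.norm(w) ** -1.0)   # Boltzmann weights of the doubled state\ntfd = np.zeros(d * d, dtype=complex)\nfor n in range(d):\n    tfd += w[n] * np.kron(V[:, n], V[:, n])   # simplest doubling convention\n\nO = np.diag(np.arange(d, dtype=float))        # probe operator on one side\n\ndef cross_corr(t):\n    # <TFD| O(t) tensor O |TFD>, with O(t) Heisenberg-evolved on the left copy\n    U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T\n    Ot = U.conj().T @ O @ U\n    return np.real(np.vdot(tfd, np.kron(Ot, O) @ tfd))\n\nfor t in [0.0, 1.0, 2.0, 4.0, 8.0]:\n    print(t, cross_corr(t))\n\\end{verbatim}\nThe value of this cross-correlation is set by the entanglement between the two copies, not by how far apart they sit in the lab, and tracking its time dependence is exactly the sort of measurement described above.\n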
It seems possible that gravitational dynamics can be recovered from the generic behavior of quantum complexity.\n\n\n\n\\bigskip \\noindent\n\nA skeptic may argue that all of this can be explained without ever invoking bulk gravity or wormholes; plain old quantum mechanics and some condensed matter physics or quantum circuitry is enough. This is absolutely true, but I think it misses the point: Theories with gravity are always holographic and require a lower dimensional non-gravitational description%\n\\footnote{Of the two, the bulk description is often much simpler than the strongly coupled holographic description.}. \nThis does not mean the bulk world is not real. \n\n\\bigskip \\noindent\n\n\n\n \n\n \nFinally, in what way is gravity special? Suppose instead of a shell of matter with a CFT description, we construct a large block of matter engineered to have the \\it standard model \\rm (without gravity) as its excitations; or a quantum computer composed of a three-dimensional array of interacting qubits. \nIs the world in the block or the computer real? Sure it is; the block and its excitations are certainly real, and if the standard model was well simulated it may support observers who could communicate with laboratory observers. \n\nBut there are some big differences.\nIn the case of the block, the bulk space really is the three-dimensional volume of the block. It exists in ordinary laboratory space. \nBy contrast, in the case of CFT-supporting shells, something much more subtle is at work. The bulk is not part of ordinary space: it is not the shell: it is not the hollow space inside the shell. These are all part of the lab. \nThe bulk space is a pure manifestation of entanglement and complexity. \n\n\\bigskip \\noindent\n\nAll of this leads me to the conjecture that where there is quantum mechanics there is also gravity, or more succinctly, GR=QM. I'll also bet that once quantum computers become cheap, easy to build, and easy to control, experiments similar to the ones I've described will become common, and the idea of quantum gravity in the lab will seem much less crazy. In fact such experiments are already on the drawing board \\cite{Swingle:2016var} \\cite{Yao:2016ayk}. \n\n\nBest regards,\n\nLenny\n\n\\section*{P.S.}\n\n Hrant Gharibyan has pointed out that \nthe question of whether teleportation can reveal events behind the horizon is a bit tricky. Consider the event labeled $\\bf a$ in figure \\ref{behind}. The region behind the future horizon is shaded and the event $\\bf a$ is behind the horizon.\n\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[scale=.35]{behind}\n\\caption{The event $\\bf a$ is behind the horizon and cannot be seen from the boundary.}\n\\label{behind}\n\\end{center}\n\\end{figure}\n\nNow let us apply the protocol of \n\\cite{Gao:2016bin}\\cite{Maldacena:2017axo} in order to make the ERB traversable. This is shown in figure \\ref{not-behind}.\n\\begin{figure}[H]\n\\begin{center}\n\\includegraphics[scale=.3]{not-behind}\n\\caption{The protocol of \n\\cite{Gao:2016bin}\\cite{Maldacena:2017axo} makes the wormhole traversable so that $\\bf a$ becomes visible from the boundaries. The orange curve represents the classical information sent from Alice to Bob, required to carry out the protocol.}\n\\label{not-behind}\n\\end{center}\n\\end{figure}\n\n\\bigskip \\noindent\nThe event $a$ has been exposed so that Tom can witness it as he passes through the ERB. 
In this sense the protocol has made visible an event behind the horizon.\n\nBut in another sense the horizon has not been breached; it has been moved to the new shaded region. The rule is that teleportation allows us to see what would be behind the horizon had we not applied teleportation. But we can also say that the very attempt to see behind the horizon moves the horizon to where we cannot see behind it.\n\n\n\n\n\\section*{Acknowledgements}\n\nI am grateful for many discussions over the years which shaped these views. I especially recall discussions with Juan Maldacena, Joe Polchinski, Don Marolf, Aron Wall, Steve Giddings, Douglas Stanford, Dan Harlow, Ying Zhao, Hrant Gharibyan, and Ben Freivogel. I've probably forgotten many others and for that I apologize. \n\nSupport came through NSF Award Number 1316699.\n\n\\setcounter{equation}{0}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section{Introduction}\n\n\nThe uncertainty principle rules out the possibility of obtaining\nprecise measurement outcomes for both of two incompatible observables\nmeasured at the same time. Since the uncertainty\nrelation satisfied by position and momentum was established\n\\cite{Heisenberg1927}, various uncertainty relations have been\nextensively investigated\n\\cite{Dammeier2015njp,Li2015,Guise2018pra,Giorda2019pra,Xiao2019pra,Sponar2020pra}.\nOn the occasion of celebrating the 125th anniversary of the academic\njournal ``Science'', the magazine listed 125 challenging scientific\nproblems \\cite{Seife2005}. The 21st problem asks: Do deeper\nprinciples underlie quantum uncertainty and nonlocality? As\nuncertainty relations play significant roles in entanglement\ndetection\n\\cite{Hofman2003pra,Guhne2004prl,Guhne2009pra,Schwonnek2017prl,Qian2018qip,Zhao2019prl}\nand quantum nonlocality \\cite{Oppenheim2010}, among many other areas, it is\ndesirable to explore the mathematical structures and physical\nimplications of uncertainties in more detail from various\nperspectives.\n\nThe state-dependent Robertson-Schr\\\"{o}dinger uncertainty relation\n\\cite{Kennard1927,Weyl1928,Robertson1929,Schrodinger1930} is of the form:
\n\\begin{eqnarray*}\n(\\Delta_\\rho\\boldsymbol{A})^2(\\Delta_\\rho\\boldsymbol{B})^2\\geqslant\\frac14\\Br{\n(\\langle\\set{\\boldsymbol{A},\\boldsymbol{B}}\\rangle_\\rho -\n2\\langle\\boldsymbol{A}\\rangle_\\rho\\langle\\boldsymbol{B}\\rangle_\\rho)^2+\\abs{\\langle[\\boldsymbol{A},\\boldsymbol{B}]\\rangle_\\rho}^2},\n\\end{eqnarray*}\nwhere $\\set{\\boldsymbol{A},\\boldsymbol{B}}:=\\boldsymbol{A}\\boldsymbol{B}+\\boldsymbol{B}\\boldsymbol{A}$,\n$[\\boldsymbol{A},\\boldsymbol{B}]:=\\boldsymbol{A}\\boldsymbol{B}-\\boldsymbol{B}\\boldsymbol{A}$, and\n$(\\Delta_\\rho\\boldsymbol{X})^2:=\\Tr{\\boldsymbol{X}^2\\rho}-\\Tr{\\boldsymbol{X}\\rho}^2$ is the\nvariance of $\\boldsymbol{X}$ with respect to the state $\\rho$, for $\\boldsymbol{X}=\\boldsymbol{A},\\boldsymbol{B}$.\n\nRecently, state-independent uncertainty relations have been investigated \\cite{Guhne2004prl,Schwonnek2017prl},\nwhich have direct applications to entanglement detection.\nIn order to get state-independent uncertainty relations, one considers the sum\nof the variances and solves the following optimization problems:\n\\begin{eqnarray}\n\\var_\\rho(\\boldsymbol{A})+\\var_\\rho(\\boldsymbol{B})&\\geqslant&\n\\min_{\\rho\\in\\rD(\\mC^d)}\\Pa{\\var_\\rho(\\boldsymbol{A})+\\var_\\rho(\\boldsymbol{B})},\\label{eq:1}\\\\\n\\Delta_\\rho\\boldsymbol{A}+\\Delta_\\rho\\boldsymbol{B}&\\geqslant&\n\\min_{\\rho\\in\\rD(\\mC^d)}\\Pa{\\Delta_\\rho\\boldsymbol{A}+\\Delta_\\rho\\boldsymbol{B}},\\label{eq:2}\n\\end{eqnarray}\nwhere $\\var_\\rho(\\boldsymbol{X})=(\\Delta_\\rho\\boldsymbol{X})^2$ is the variance of the\nobservable $\\boldsymbol{X}$ associated with the state $\\rho\\in\\rD(\\mC^d)$.\n\nEfforts have been devoted to\nproviding quantitative uncertainty bounds for the above inequalities \\cite{Busch2019}.\nHowever, searching for such uncertainty bounds may not be the best way to\nobtain new uncertainty relations \\cite{Zhang2018}. Recently, Busch\nand Reardon-Smith proposed to consider the \\emph{uncertainty\nregion} \\cite{Busch2019} of two observables $\\boldsymbol{A}$ and $\\boldsymbol{B}$,\ninstead of finding bounds based on some particular choice of uncertainty\nfunctional, typically the product or sum of uncertainties\n\\cite{Maccone2014}.
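\n\nAs a quick numerical sanity check of the state-dependent relation recalled above (an illustration only, with randomly drawn qubit observables and a random mixed state), one may verify the inequality directly:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\n\ndef rand_herm(d=2):\n    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))\n    return 0.5 * (M + M.conj().T)\n\ndef rand_state(d=2):\n    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))\n    rho = G @ G.conj().T\n    return rho * (np.trace(rho).real ** -1.0)\n\nA, B, rho = rand_herm(), rand_herm(), rand_state()\nev = lambda X: np.trace(X @ rho).real\nvarA = ev(A @ A) - ev(A) ** 2\nvarB = ev(B @ B) - ev(B) ** 2\nanti = ev(A @ B + B @ A)                 # <{A,B}>\ncomm = np.trace((A @ B - B @ A) @ rho)   # <[A,B]>, purely imaginary\nlhs = varA * varB\nrhs = 0.25 * ((anti - 2 * ev(A) * ev(B)) ** 2 + abs(comm) ** 2)\nprint(lhs >= rhs - 1e-12, lhs, rhs)\n\\end{verbatim}\n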
Once we can identify the structures of\nthe uncertainty regions, we can infer specific information about the\nstates with minimal uncertainty in some sense.\nIn view of this, the above two optimization problems\n\\eqref{eq:1} and \\eqref{eq:2} become\n\\begin{eqnarray*}\n\\min_{\\rho\\in\\rD(\\mC^d)}\\Pa{\\var_\\rho(\\boldsymbol{A})+\\var_\\rho(\\boldsymbol{B})} &=&\n\\min\\Set{x^2+y^2:(x,y)\\in\\mathcal{U}^{(\\text{m})}_{\\Delta\\boldsymbol{A},\\Delta\\boldsymbol{B}}},\\\\\n\\min_{\\rho\\in\\rD(\\mC^d)}\\Pa{\\Delta_\\rho\\boldsymbol{A}+\\Delta_\\rho\\boldsymbol{B}} &=&\n\\min\\Set{x+y:(x,y)\\in\\mathcal{U}^{(\\text{m})}_{\\Delta\\boldsymbol{A},\\Delta\\boldsymbol{B}}},\n\\end{eqnarray*}\nwhere $\\mathcal{U}^{(\\text{m})}_{\\Delta\\boldsymbol{A},\\Delta\\boldsymbol{B}}$ is the so-called\nuncertainty region of the two observables $\\boldsymbol{A}$ and $\\boldsymbol{B}$, defined by\n\\begin{eqnarray*}\n\\mathcal{U}^{(\\text{m})}_{\\Delta\\boldsymbol{A},\\Delta\\boldsymbol{B}} =\n\\Set{(\\Delta_\\rho\\boldsymbol{A},\\Delta_\\rho\\boldsymbol{B})\\in\\mR^2_+:\n\\rho\\in\\rD(\\mC^d)}.\n\\end{eqnarray*}\n\nRandom matrix theory and probability theory are powerful tools in\nquantum information theory. Recently, the non-additivity of quantum\nchannel capacity \\cite{Hastings2009} has been cracked via probabilistic\ntools. The Duistermaat-Heckman measure on the moment polytope has been used to\nderive the probability distribution density of one-body quantum\nmarginal states of multipartite random quantum states\n\\cite{Christandl2014,Dartois2020} and that of classical probability\nmixtures of random quantum states \\cite{Zhang2018pla,Zhang2019jpa}.\nAs a function of random quantum pure states, the probability density\nfunction (PDF) of the quantum expectation value of an observable has also been\ncalculated analytically \\cite{Venuti2013}. Motivated by these works,\nwe investigate the joint probability density functions\nof the uncertainties of observables. By doing so, we find that it is\nnot necessary to solve directly for the uncertainty regions of\nobservables. It is sufficient to identify the support of such a PDF,\nbecause the PDF vanishes exactly outside the uncertainty region; the sampling sketch below illustrates this point for a pair of qubit observables.
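\n\nAs a brute-force illustration (not part of the analytical treatment that follows), one can sample Haar-random pure qubit states and record the standard deviations of the Pauli observables $\\sigma_x$ and $\\sigma_z$; the sampled points fill out the corresponding pure-state uncertainty region and never land outside it:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(2)\nsx = np.array([[0, 1], [1, 0]], dtype=complex)\nsz = np.array([[1, 0], [0, -1]], dtype=complex)\n\ndef delta(X, psi):\n    # standard deviation of X in the pure state psi\n    m = np.vdot(psi, X @ psi).real\n    m2 = np.vdot(psi, X @ X @ psi).real\n    return np.sqrt(max(m2 - m * m, 0.0))\n\npts = []\nfor _ in range(20000):\n    psi = rng.normal(size=2) + 1j * rng.normal(size=2)\n    psi = psi * (np.linalg.norm(psi) ** -1.0)   # Haar-random pure qubit state\n    pts.append((delta(sx, psi), delta(sz, psi)))\n\npts = np.array(pts)\nprint(pts.min(axis=0), pts.max(axis=0))   # both deviations lie in [0, 1]\nprint((pts ** 2).sum(axis=1).min())       # pure states obey dx**2 + dz**2 >= 1\n\\end{verbatim}\n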
Thus\nall the problems are reduced to computing the PDFs of the uncertainties of\nobservables, since all information concerning uncertainty regions and\nstate-independent uncertainty relations is encoded in such PDFs.\nIn \\cite{Zhang2021preprint} we studied such PDFs for random mixed quantum\nstate ensembles, where all problems concerning qubit observables were completely solved: analytical formulae for the PDFs of the uncertainties were obtained, and a characterization was given of\nthe uncertainty regions over which the optimization problems for the\nstate-independent lower bound on the sum of variances are performed. In this\npaper, we focus on the same problem for random pure quantum\nstate ensembles.\n\nLet $\\delta(x)$ be the delta function \\cite{Hoskins2009}, heuristically defined by\n\\begin{eqnarray*}\n\\delta(x)=\\begin{cases} +\\infty,&\\text{if }x=0;\\\\0,&\\text{if\n}x\\neq0.\\end{cases}\n\\end{eqnarray*}\nOne has $\\Inner{\\delta}{f}:=\\int_\\mathbb{R} f(x)\\delta(x)\\mathrm{d}\nx=f(0)$. Denote by $\\delta_a(x):=\\delta(x-a)$. Then\n$\\Inner{\\delta_a}{f} = f(a)$.\nLet $Z(g):=\\Set{x\\in D(g):g(x)=0}$ be the zero set of a function $g(x)$ with domain $D(g)$. We will use the following definition.\n\n\\begin{definition}[\\cite{lz2020ijtp,Zuber2020}] If $g:\\mathbb{R}\\to\\mathbb{R}$ is a smooth function\n(the first derivative $g'$ is a continuous function) such that\n$Z(g)\\cap Z(g')=\\emptyset$, then the composite $\\delta\\circ g$ is\ndefined by:\n\\begin{eqnarray*}\n\\delta(g(x)) = \\sum_{x\\in Z(g)} \\frac1{\\abs{g'(x)}}\\delta_x.\n\\end{eqnarray*}\n\\end{definition}
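\n\nAs a quick illustration of this definition, take $g(x)=x^2-a^2$ with $a\\neq0$: then $Z(g)=\\{a,-a\\}$, $\\abs{g'(\\pm a)}=2\\abs{a}$, and\n\\begin{eqnarray*}\n\\delta(x^2-a^2)=\\frac{1}{2\\abs{a}}\\left(\\delta_{a}+\\delta_{-a}\\right).\n\\end{eqnarray*}\n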
\n\\section{Uncertainty regions of observables}\n\nWe can extend the notion of the uncertainty region of two\nobservables $\\boldsymbol{A}$ and $\\boldsymbol{B}$, put forward in \\cite{Busch2019}, to\nthat of multiple observables.\n\n\\begin{definition}\nLet $(\\boldsymbol{A}_1,\\ldots,\\boldsymbol{A}_n)$ be an $n$-tuple of qudit observables\nacting on $\\mC^d$. The \\emph{uncertainty region} of such an $n$-tuple\n$(\\boldsymbol{A}_1,\\ldots,\\boldsymbol{A}_n)$, for the mixed quantum state ensemble, is\ndefined by\n\\begin{eqnarray*}\n\\mathcal{U}^{(\\text{m})}_{\\Delta\\boldsymbol{A}_1,\\ldots,\\Delta\\boldsymbol{A}_n}:=\\Set{(\\Delta_\\rho\\boldsymbol{A}_1,\\ldots,\\Delta_\\rho\\boldsymbol{A}_n)\\in\\mR^n_+:\\rho\\in\\rD(\\mC^d)}.\n\\end{eqnarray*}\nSimilarly, the \\emph{uncertainty region} of such an $n$-tuple\n$(\\boldsymbol{A}_1,\\ldots,\\boldsymbol{A}_n)$, for the pure quantum state ensemble, is\ndefined by\n\\begin{eqnarray*}\n\\mathcal{U}^{(\\text{p})}_{\\Delta\\boldsymbol{A}_1,\\ldots,\\Delta\\boldsymbol{A}_n}:=\\Set{(\\Delta_\\psi\\boldsymbol{A}_1,\\ldots,\\Delta_\\psi\\boldsymbol{A}_n)\\in\\mR^n_+:\\ket{\\psi}\\in\\mC^d}.\n\\end{eqnarray*}\nClearly,\n$\\mathcal{U}^{(\\text{p})}_{\\Delta\\boldsymbol{A}_1,\\ldots,\\Delta\\boldsymbol{A}_n}\\subset\\mathcal{U}^{(\\text{m})}_{\\Delta\\boldsymbol{A}_1,\\ldots,\\Delta\\boldsymbol{A}_n}$.\n\\end{definition}\nNote that our definition of the uncertainty region is different from\nthe one given in \\cite{Dammeier2015njp}. 
In the above definition, we\nuse the standard deviation instead of the variance.\n\nNext we will show that\n$\\mathcal{U}^{(\\text{m})}_{\\Delta\\boldsymbol{A}_1,\\ldots,\\Delta\\boldsymbol{A}_n}$ is contained\nin a hypercube in $\\mR^n_+$. To this end, we study the following\nsets $\\mathscr{P}(\\boldsymbol{A})=\\Set{\\text{Var}_\\psi(\\boldsymbol{A}): \\ket{\\psi}\\in\\mC^d}$ and\n$\\mathscr{M}(\\boldsymbol{A})=\\Set{\\text{Var}_\\rho(\\boldsymbol{A}): \\rho\\in\\density{\\mC^d}}$ for\na qudit observable $\\boldsymbol{A}$ acting on $\\mC^d$. The relationship\nbetween the two sets $\\mathscr{P}(\\boldsymbol{A})$ and $\\mathscr{M}(\\boldsymbol{A})$ is summarized in the\nfollowing proposition.\n\\begin{prop}\\label{prop:convx}\nIt holds that\n\\begin{eqnarray*}\n\\mathscr{P}(\\boldsymbol{A})=\\mathscr{M}(\\boldsymbol{A})=\\mathrm{conv}(\\mathscr{P}(\\boldsymbol{A}))\n\\end{eqnarray*}\nis a closed interval $[0,\\max_\\psi\\mathrm{Var}_\\psi(\\boldsymbol{A})]$.\n\\end{prop}\n\n\\begin{proof}\nNote that\n$\\mathscr{P}(\\boldsymbol{A})\\subset\\mathscr{M}(\\boldsymbol{A})\\subset\\mathrm{conv}(\\mathscr{P}(\\boldsymbol{A}))$. 
The first inclusion is obvious; the second inclusion follows
immediately from a result obtained in \cite{Petz2012}: for any
density matrix $\rho\in\density{\mC^d}$ and any qudit observable
$\bsA$, there is a pure state ensemble decomposition $\rho=\sum_j
p_j\proj{\psi_j}$ such that
\begin{eqnarray}\label{eq:vardecom}
\mathrm{Var}_\rho(\bsA) = \sum_j p_j\mathrm{Var}_{\psi_j}(\bsA).
\end{eqnarray}
Since every pure state on $\mC^d$ can be generated from a fixed
$\psi_0$ by the unitary group $\U(d)$, it follows that
\begin{eqnarray*}
\sP(\bsA) = \mathrm{im}(\Phi),
\end{eqnarray*}
where the mapping $\Phi:\U(d)\to \sP(\bsA)$ is defined by
$\Phi(\bsU)=\mathrm{Var}_{\bsU\psi_0}(\bsA)$. This mapping $\Phi$ is
surjective and continuous. Since $\U(d)$ is a compact Lie group,
$\Phi$ attains its maximal and minimal values over the unitary group
$\U(d)$. In fact, $\min_{\U(d)}\Phi=0$: it suffices to take some
$\bsU$ such that $\bsU\ket{\psi_0}$ is an eigenvector of $\bsA$. Since
$\U(d)$ is also connected, $\mathrm{im}(\Phi)=\Phi(\U(d))$ is
connected, and thus $\mathrm{im}(\Phi)=[0,\max_{\U(d)}\Phi]$. In other
words, $\sP(\bsA)$ is the closed interval $[0,\max_{\U(d)}\Phi]$,
which means that $\sP(\bsA)$ is a compact and convex set, i.e.,
\begin{eqnarray*}
\sP(\bsA) =\mathrm{conv}(\sP(\bsA)).
\end{eqnarray*}
Therefore
\begin{eqnarray*}
\sP(\bsA)
=\sM(\bsA)=\mathrm{conv}(\sP(\bsA))=[0,\max_{\U(d)}\Phi]=[0,\max_{\psi}\mathrm{Var}_\psi(\bsA)].
\end{eqnarray*}
This completes the proof.
\end{proof}
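The decomposition in \eqref{eq:vardecom} depends on the observable; a
quick qubit illustration, using only the standard Pauli matrix
$\sigma_3=\mathrm{diag}(1,-1)$, may be helpful. For the maximally
mixed state $\rho=\frac12\mathbb{1}$ one has
$\mathrm{Var}_\rho(\sigma_3)=1$, and the decomposition
$\rho=\frac12\proj{\psi_+}+\frac12\proj{\psi_-}$ with
$\ket{\psi_\pm}=(\ket{0}\pm\ket{1})/\sqrt2$ satisfies
\eqref{eq:vardecom}, since $\mathrm{Var}_{\psi_\pm}(\sigma_3)=1$. By
contrast, the eigenbasis decomposition
$\rho=\frac12\proj{0}+\frac12\proj{1}$ gives
$\sum_jp_j\mathrm{Var}_{\psi_j}(\sigma_3)=0$, so \eqref{eq:vardecom}
holds for a suitably chosen decomposition rather than for every one.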
Next, we determine
$\max_{\rho\in\density{\mathbb{C}^d}}\var_\rho(\bsA)$ for an
observable $\bsA$. To this end, we recall the
$(d-1)$-dimensional probability simplex, which is defined by
\begin{eqnarray*}
\Delta_{d-1}:=\Set{\bsp=(p_1,\ldots,p_d)\in\mathbb{R}^d:p_k\geqslant0\,(\forall
k\in[d]),\sum_jp_j=1}.
\end{eqnarray*}
The interior of $\Delta_{d-1}$ is denoted by $\Delta^\circ_{d-1}$:
\begin{eqnarray*}
\Delta^\circ_{d-1}:=\Set{\bsp=(p_1,\ldots,p_d)\in\mathbb{R}^d:p_k>0\,(\forall
k\in[d]),\sum_jp_j=1}.
\end{eqnarray*}
Hence a point $\bsx$ in the boundary $\partial\Delta_{d-1}$ must have
at least one vanishing component, $x_i=0$ for some $i\in[d]$.
Now we decompose the boundary $\partial\Delta_{d-1}$ into the union
of the following subsets:
\begin{eqnarray*}
\partial\Delta_{d-1} =\bigcup^{d}_{j=1} F_j,
\end{eqnarray*}
where $F_j:=\Set{\bsx\in\partial\Delta_{d-1}: x_j=0}$. Although the
following result has been known since 1935 \cite{Bhatia2000}, we
include a proof for completeness.
\begin{prop}\label{prop:varmax}
Assume that $\bsA$ is an observable acting on $\mathbb{C}^d$. Denote
the vector consisting of the eigenvalues of $\bsA$ by $\lambda(\bsA)$,
with components $\lambda_1(\bsA)\leqslant\cdots\leqslant
\lambda_d(\bsA)$. It holds that
\begin{eqnarray*}
\max\Set{\var_\rho(\bsA):\rho\in\density{\mathbb{C}^d}}
=\frac14\Pa{\lambda_{\max}(\bsA)-\lambda_{\min}(\bsA)}^2.
\end{eqnarray*}
Here $\lambda_{\min}(\bsA)=\lambda_1(\bsA)$ and
$\lambda_{\max}(\bsA)=\lambda_d(\bsA)$.
\end{prop}

\begin{proof}
Assume that $\bsa:=\lambda(\bsA)$, where $a_j:=\lambda_j(\bsA)$.
Note that
$\var_\rho(\bsA)=\Tr{\bsA^2\rho}-\Tr{\bsA\rho}^2=\Inner{\bsa^2}{\bsD_{\bsU}\lambda(\rho)}-\Inner{\bsa}{\bsD_{\bsU}\lambda(\rho)}^2$,
where $\bsa=(a_1,\ldots,a_d)^\t$, $\bsa^2=(a^2_1,\ldots,a^2_d)^\t$,
$\bsD_{\bsU}=\overline{\bsU}\circ\bsU$ (here $\circ$ stands for the
Schur product, i.e., the entrywise product), and
$\lambda(\rho)=(\lambda_1(\rho),\ldots,\lambda_d(\rho))^\t$. Denote
\begin{eqnarray*}
\bsx:=\bsD_{\bsU}\lambda(\rho)\in\Delta_{d-1}:=\Set{\bsp=(p_1,\ldots,p_d)\in\mathbb{R}^d_+:\sum_jp_j=1},
\end{eqnarray*}
the $(d-1)$-dimensional probability simplex. Then
\begin{eqnarray*}
\var_\rho(\bsA)=\Inner{\bsa^2}{\bsx}-\Inner{\bsa}{\bsx}^2=\sum^d_{j=1}a^2_jx_j
- \Pa{\sum^d_{j=1}a_jx_j}^2=:f(\bsx).
\end{eqnarray*}

(i) If $d=2$,
\begin{eqnarray*}
f(x_1,x_2)&=&a_1^2x_1+a_2^2x_2-(a_1x_1+a_2x_2)^2\notag\\
&=&a_1^2x_1+a_2^2(1-x_1)-(a_1x_1+a_2(1-x_1))^2\notag\\
&=&(a_2-a_1)^2\Br{\frac14-\Pa{x_1-\frac12}^2}\leqslant
\frac14(a_2-a_1)^2,
\end{eqnarray*}
implying that $f_{\max}=\frac14(a_2-a_1)^2$ when $x_1=x_2=\frac12$.

(ii) If $d\geqslant3$, without loss of generality, we assume that
$a_1< a_2<\cdots<a_d$. For a fixed mean $m=\sum_ja_jx_j$, the second
moment $\sum_ja^2_jx_j$ can only increase if the weight placed on any
intermediate eigenvalue $a_j$ ($1<j<d$) is redistributed onto $a_1$
and $a_d$ so as to preserve $m$, since $t\mapsto t^2$ is convex. Hence
the maximum of $f$ over $\Delta_{d-1}$ is attained at a point
supported on $\set{a_1,a_d}$ only, and the problem reduces to the case
$d=2$ applied to the pair $(a_1,a_d)$, giving
$f_{\max}=\frac14(a_d-a_1)^2$.
\end{proof}
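For example, for a qubit observable with eigenvalues
$\lambda_1(\bsA)=a_0-a$ and $\lambda_2(\bsA)=a_0+a$ (the
parameterization used below), Proposition~\ref{prop:varmax} gives
$\max_\rho\var_\rho(\bsA)=\frac14(2a)^2=a^2$, attained by any state
with $\Tr{\bsA\rho}=a_0$, e.g.\ the maximally mixed state or an equal
superposition of the two eigenvectors.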
\begin{prop}
Let $\bsA$ be a qudit observable acting on $\mC^d$ with eigenvalues
$\lambda_1(\bsA),\ldots,\lambda_d(\bsA)$. The probability density
function of $\langle\bsA\rangle_\psi$, where $\psi$ is a
Haar-distributed random pure state on $\mC^d$, is given by
\begin{eqnarray}\label{eq:qubitexp}
f^{(d)}_{\langle\bsA\rangle}(r)
=(-1)^{d-1}(d-1)\sum^d_{i=1}\frac{H(r-\lambda_i(\bsA))(r-\lambda_i(\bsA))^{d-2}}{\prod_{j\in\hat
i}(\lambda_i(\bsA)-\lambda_j(\bsA))},
\end{eqnarray}
where $\hat i:=\Set{1,\ldots,d}\backslash\set{i}$ and $H$ is the
Heaviside function, defined by $H(t)=1$ if $t>0$, and $0$
otherwise. Thus the support of $f^{(d)}_{\langle\bsA\rangle}(r)$ is
the closed interval $[\lambda_1(\bsA),\lambda_d(\bsA)]$. In
particular, for $d=2$, we have
\begin{eqnarray*}
f^{(2)}_{\langle\bsA\rangle}(r)=\frac1{\lambda_2(\bsA)-\lambda_1(\bsA)}(H(r-\lambda_1(\bsA))-H(r-\lambda_2(\bsA))).
\end{eqnarray*}
\end{prop}

\begin{proof}
By performing the Laplace transformation $(r\to s)$ of
$f^{(d)}_{\langle\bsA\rangle}(r)$, and using the fact that for a
Haar-distributed $\psi$ the vector of overlaps
$r_i=\abs{\langle\lambda_i|\psi\rangle}^2$ with the eigenbasis of
$\bsA$ is uniformly distributed on the probability simplex with
density $\Gamma(d)$, we get that
\begin{eqnarray*}
\sL(f^{(d)}_{\langle\bsA\rangle})(s) = \Gamma(d)\int
\exp\Pa{-s\sum^d_{i=1}\lambda_i(\bsA)r_i}\delta\Pa{1-\sum^d_{i=1}r_i}\prod^d_{i=1}\mathrm{d}
r_i.
\end{eqnarray*}
Let
$$
F_s(t) :=\Gamma(d)\int
\exp\Pa{-s\sum^d_{i=1}\lambda_i(\bsA)r_i}\delta\Pa{t-\sum^d_{i=1}r_i}\prod^d_{i=1}\mathrm{d}
r_i.
$$
Performing the Laplace transformation $(t\to x)$ of $F_s(t)$ gives
\begin{eqnarray*}
\sL(F_s)(x) &=&\Gamma(d)\int
\exp\Pa{-s\sum^d_{i=1}\lambda_i(\bsA)r_i}\exp\Pa{-x\sum^d_{i=1}r_i}\prod^d_{i=1}\mathrm{d}
r_i\\
&=&\Gamma(d)\prod^d_{i=1}\int^\infty_0 \exp\Pa{-(s\lambda_i(\bsA)+x)r_i}\mathrm{d} r_i\\
&=&\frac{\Gamma(d)}{\prod^d_{i=1}(s\lambda_i(\bsA)+x)},
\end{eqnarray*}
implying that \cite{zhang2018qip}
\begin{eqnarray*}
F_s(t) =
\Gamma(d)\sum^d_{i=1}\frac{\exp\Pa{-\lambda_i(\bsA)st}}{(-s)^{d-1}\prod_{j\in\hat
i}(\lambda_i(\bsA)-\lambda_j(\bsA))},
\end{eqnarray*}
where $\hat i:=\Set{1,\ldots,d}\backslash\set{i}$.
Thus
\begin{eqnarray*}
\sL(f^{(d)}_{\langle\bsA\rangle})(s)=F_s(1) =
\Gamma(d)\sum^d_{i=1}\frac{\exp\Pa{-\lambda_i(\bsA)s}}{(-s)^{d-1}\prod_{j\in\hat
i}(\lambda_i(\bsA)-\lambda_j(\bsA))}.
\end{eqnarray*}
Therefore, we get that
\begin{eqnarray*}
f^{(d)}_{\langle\bsA\rangle}(r)
=(-1)^{d-1}(d-1)\sum^d_{i=1}\frac{H(r-\lambda_i(\bsA))(r-\lambda_i(\bsA))^{d-2}}{\prod_{j\in\hat
i}(\lambda_i(\bsA)-\lambda_j(\bsA))},
\end{eqnarray*}
where $H(r-\lambda_i(\bsA))$ is the Heaviside function, defined by
$H(t)=1$ if $t>0$, and $0$ otherwise. The support of this pdf is the
closed interval $[l,u]$, where
$$
l=\min\Set{\lambda_i(\bsA):i=1,\ldots,d},\quad
u=\max\Set{\lambda_i(\bsA):i=1,\ldots,d}.
$$
The normalization of $f^{(d)}_{\langle\bsA\rangle}(r)$ (i.e.,
$\int_\mathbb{R} f^{(d)}_{\langle\bsA\rangle}(r)\mathrm{d} r=1$) can be checked
by assuming $\lambda_1<\lambda_2<\cdots<\lambda_d$; then
$[l,u]=[\lambda_1,\lambda_d]$, and the general case follows since
$f^{(d)}_{\langle\bsA\rangle}(r)$ is symmetric in the $\lambda_i$'s.
\end{proof}
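For instance, for $d=3$ with
$\lambda_1(\bsA)<\lambda_2(\bsA)<\lambda_3(\bsA)$, a direct
substitution shows that the above formula reduces to a triangular
density: $f^{(3)}_{\langle\bsA\rangle}(r)=
2(r-\lambda_1(\bsA))/[(\lambda_2(\bsA)-\lambda_1(\bsA))(\lambda_3(\bsA)-\lambda_1(\bsA))]$
for $r\in[\lambda_1(\bsA),\lambda_2(\bsA)]$ and
$f^{(3)}_{\langle\bsA\rangle}(r)=
2(\lambda_3(\bsA)-r)/[(\lambda_3(\bsA)-\lambda_2(\bsA))(\lambda_3(\bsA)-\lambda_1(\bsA))]$
for $r\in[\lambda_2(\bsA),\lambda_3(\bsA)]$, which peaks at
$\lambda_2(\bsA)$ with height $2/(\lambda_3(\bsA)-\lambda_1(\bsA))$
and manifestly integrates to one.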
\subsection{The case for one qubit observable}

Let us now turn to qubit observables. Any qubit observable
$\bsA$ may be parameterized as
\begin{equation}\label{AA}
\bsA=a_0\mathbb{1}+\bsa\cdot\boldsymbol{\sigma},\quad (a_0,\bsa)\in\mathbb{R}^4,
\end{equation}
where $\mathbb{1}$ is the identity matrix on the qubit Hilbert space
$\mathbb{C}^2$, and $\boldsymbol{\sigma}=(\sigma_1,\sigma_2,\sigma_3)$
is the vector of the standard Pauli matrices:
\begin{eqnarray*}
\sigma_1=\Pa{\begin{array}{cc}
 0 & 1 \\
 1 & 0
 \end{array}
},\quad \sigma_2=\Pa{\begin{array}{cc}
 0 & -\mathrm{i} \\
 \mathrm{i} & 0
 \end{array}
},\quad \sigma_3=\Pa{\begin{array}{cc}
 1 & 0 \\
 0 & -1
 \end{array}
}.
\end{eqnarray*}
Without loss of generality, we assume that our qubit observables have
simple eigenvalues; otherwise the problem is trivial. Thus the
two eigenvalues of $\bsA$ are
\begin{eqnarray*}
\lambda_k(\bsA)=a_0+(-1)^ka, \qquad k=1,2,
\end{eqnarray*}
with $a:=\abs{\bsa}=\sqrt{a_1^2+a_2^2+a_3^2}>0$ being the length of
the vector $\bsa =(a_1,a_2,a_3)\in \mathbb{R}^3$.
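It is convenient to record the standard deviation explicitly in terms
of the Bloch vector (an elementary computation, included here for
later reference): writing a pure state as
$\proj{\psi}=\frac12(\mathbb{1}+\boldsymbol{n}\cdot\boldsymbol{\sigma})$
with $\abs{\boldsymbol{n}}=1$, one has
$\langle\bsA\rangle_\psi=a_0+\Inner{\bsa}{\boldsymbol{n}}$ and
$\langle\bsA^2\rangle_\psi=a_0^2+a^2+2a_0\Inner{\bsa}{\boldsymbol{n}}$,
so that
\begin{eqnarray*}
\Delta_\psi\bsA=\sqrt{a^2-\Inner{\bsa}{\boldsymbol{n}}^2}\in[0,a].
\end{eqnarray*}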
Thus \\eqref{eq:qubitexp}\nbecomes\n\\begin{eqnarray*}\nf^{(2)}_{\\langle\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}\\rangle}(r) =\n\\frac1{\\lambda_2(\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E})-\\lambda_1(\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E})}[H(r-\\lambda_1(\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}))-H(r-\\lambda_2(\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}))].\n\\end{eqnarray*}\n\n\\begin{thrm}\\label{th:vardis}\nFor the qubit observable $\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}$ defined by Eq.~\\eqref{AA}, the\nprobability density function of $\\Delta_\\psi \\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}$, where $\\psi$ is\na Haar-distributed random pure state on $\\mathbb{C}^2$, is given by\n\\begin{eqnarray*}\n f^{(2)}_{\\Delta\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}}(x) = \\frac x{\\abs{\\boldsymbol{a}}\\def\\bsb{\\boldsymbol{b}}\\def\\bsc{\\boldsymbol{c}}\\def\\bsd{\\boldsymbol{d}}\\def\\bse{\\boldsymbol{e}}\\sqrt{\\abs{\\boldsymbol{a}}\\def\\bsb{\\boldsymbol{b}}\\def\\bsc{\\boldsymbol{c}}\\def\\bsd{\\boldsymbol{d}}\\def\\bse{\\boldsymbol{e}}^2-x^2}},\\qquad\nx\\in[0,\\abs{\\boldsymbol{a}}\\def\\bsb{\\boldsymbol{b}}\\def\\bsc{\\boldsymbol{c}}\\def\\bsd{\\boldsymbol{d}}\\def\\bse{\\boldsymbol{e}}).\n\\end{eqnarray*}\n\\end{thrm}\n\n\\begin{proof}\nNote that\n\\begin{eqnarray}\\label{eq:2nddelta}\n\\delta(r^2-r^2_0)=\\frac1{2\\abs{r_0}}\\Br{\\delta(r-r_0)+\\delta(r+r_0)}.\n\\end{eqnarray}\nFor $x\\geqslant0$, because\n\\begin{eqnarray*}\n\\delta(x^2-\\Delta_\\psi \\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}^2) = \\frac1{2x}\\Br{\\delta(x+\\Delta_\\psi\n\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E})+\\delta(x-\\Delta_\\psi \\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E})}=\\frac1{2x}\\delta(x-\\Delta_\\psi\n\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}),\n\\end{eqnarray*}\nwe see that\n\\begin{eqnarray*}\nf^{(2)}_{\\Delta\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}}(x) = \\int\\delta(x-\\Delta_\\psi\n\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E})\\mathrm{d}\\mu(\\psi) = 2x\\int\\delta\\Pa{x^2-\\Delta^2_\\psi\n\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}}\\mathrm{d}\\mu(\\psi).\n\\end{eqnarray*}\nFor any complex $2\\times 2$ matrix 
$\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}$,\n$\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}^2=\\Tr{\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}}\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}-\\det(\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E})\\mathbb{1}$. Then $\\Delta^2_\\psi\n\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}=(\\langle\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}\\rangle_\\psi-\\lambda_1(\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}))(\\lambda_2(\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E})-\\langle\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}\\rangle_\\psi)$\n\\begin{eqnarray*}\n\\delta\\Pa{x^2-\\Delta^2_\\psi \\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}} &=&\n\\delta\\Pa{x^2-(\\langle\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}\\rangle_\\psi-\\lambda_1(\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}))(\\lambda_2(\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E})-\\langle\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}\\rangle_\\psi)}.\n\\end{eqnarray*}\nIn particular, we see that\n\\begin{eqnarray*}\nf^{(2)}_{\\Delta\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}}(x) &=&\n2x\\int^{\\lambda_2(\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E})}_{\\lambda_1(\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E})}\n\\mathrm{d} r\\delta\\Pa{x^2-(r-\\lambda_1(\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}))(\\lambda_2(\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E})-r)}\\int_{\\mathbb{C}^2}\\delta(r-\\langle\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}\\rangle_\\psi)\\mathrm{d}\\mu(\\psi)\\notag\\\\\n&=& 2x\\int^{\\lambda_2(\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E})}_{\\lambda_1(\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E})} 
Denote $f_x(r)=x^2-(r-\lambda_1(\bsA))(\lambda_2(\bsA)-r)$. Thus
$\partial_rf_x(r)=2r-\lambda_1(\bsA)-\lambda_2(\bsA)$. Then
$f_x(r)=0$ has two distinct roots in
$[\lambda_1(\bsA),\lambda_2(\bsA)]$ if and only if
$x\in\left[0,\frac{V_2(\lambda(\bsA))}2\right)$, where
$V_2(\lambda(\bsA))=\lambda_2(\bsA)-\lambda_1(\bsA)$.
Now the roots are given by
\begin{eqnarray*}
r_\pm(x)=\frac{\lambda_1(\bsA)+\lambda_2(\bsA)\pm\sqrt{V_2(\lambda(\bsA))^2-4x^2}}2.
\end{eqnarray*}
Thus
\begin{eqnarray*}
\delta\Pa{f_x(r)}=\frac1{\abs{\partial_{r=r_+(x)}
f_x(r)}}\delta_{r_+(x)}+\frac1{\abs{\partial_{r=r_-(x)}
f_x(r)}}\delta_{r_-(x)},
\end{eqnarray*}
implying that
\begin{eqnarray*}
f^{(2)}_{\Delta \bsA}(x) =
\frac{4x}{V_2(\lambda(\bsA))\sqrt{V_2(\lambda(\bsA))^2-4x^2}}.
\end{eqnarray*}

Now for $\bsA=a_0\mathbb{1}+\bsa\cdot\boldsymbol{\sigma}$, we have
$V_2(\lambda(\bsA))=2\abs{\bsa}$. Substituting this into the above
expression, we get the desired result:
\begin{eqnarray*}
f^{(2)}_{\Delta \bsA}(x) = \frac
x{\abs{\bsa}\sqrt{\abs{\bsa}^2-x^2}},
\end{eqnarray*}
where $x\in[0,\abs{\bsa})$.
\end{proof}
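As a quick consistency check, take $\bsA=\sigma_3$ (so $a_0=0$ and
$\abs{\bsa}=1$): Theorem~\ref{th:vardis} gives
$f^{(2)}_{\Delta\sigma_3}(x)=x/\sqrt{1-x^2}$ on $[0,1)$, which is
properly normalized since
\begin{eqnarray*}
\int^{\abs{\bsa}}_0\frac x{\abs{\bsa}\sqrt{\abs{\bsa}^2-x^2}}\,\mathrm{d} x
=\Br{-\frac{\sqrt{\abs{\bsa}^2-x^2}}{\abs{\bsa}}}^{\abs{\bsa}}_0=1,
\end{eqnarray*}
and the density diverges (integrably) at the right endpoint
$x=\abs{\bsa}$.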
\subsection{The case for two qubit observables}

Let $\bsA=a_0\mathbb{1}+\bsa\cdot\boldsymbol{\sigma}$ and
$\bsB=b_0\mathbb{1}+\bsb\cdot\boldsymbol{\sigma}$ be two qubit
observables. The joint probability density function of
$(\langle\bsA\rangle_\psi,\langle\bsB\rangle_\psi)$ is
\begin{eqnarray*}
&& f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle}(r,s) =
\int\delta(r-\Innerm{\psi}{\bsA}{\psi})\delta(s-\Innerm{\psi}{\bsB}{\psi})\mathrm{d}\mu(\psi)\\
&&=\frac1{(2\pi)^2}\int_{\mathbb{R}^2}\mathrm{d}\alpha\mathrm{d}\beta
\exp\Pa{\mathrm{i}(r\alpha+s\beta)}\int
\exp\Pa{-\mathrm{i}\Innerm{\psi}{\alpha\bsA+\beta\bsB}{\psi}}\mathrm{d}\mu(\psi),
\end{eqnarray*}
where
\begin{eqnarray*}
&&\int
\exp\Pa{-\mathrm{i}\Innerm{\psi}{\alpha\bsA+\beta\bsB}{\psi}}\mathrm{d}\mu(\psi)
=
\int^{\lambda_+(\alpha\bsA+\beta\bsB)}_{\lambda_-(\alpha\bsA+\beta\bsB)}
\exp\Pa{-\mathrm{i}t}f_2(t)\mathrm{d} t\\
&&=\frac1{2\abs{\alpha\bsa+\beta\bsb}}\int^{\lambda_+(\alpha\bsA+\beta\bsB)}_{\lambda_-(\alpha\bsA+\beta\bsB)}
\exp\Pa{-\mathrm{i}t}\mathrm{d}
t=\exp\Pa{-\mathrm{i}(a_0\alpha+b_0\beta)}\frac{\sin\abs{\alpha\bsa+\beta\bsb}}{\abs{\alpha\bsa+\beta\bsb}},
\end{eqnarray*}
for
$$
\lambda_\pm(\alpha\bsA+\beta\bsB)=\alpha a_0+\beta b_0 \pm
\abs{\alpha\bsa+\beta\bsb}.
$$
Noting that
\begin{eqnarray*}
f^{(2)}_{\langle\alpha\bsA+\beta\bsB\rangle}(r)
=\frac1{\lambda_2-\lambda_1}(H(r-\lambda_1)-H(r-\lambda_2)),
\end{eqnarray*}
we therefore have
\begin{eqnarray*}
f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle}(r,s)=\frac1{(2\pi)^2}\int_{\mathbb{R}^2}\mathrm{d}\alpha\mathrm{d}\beta
\exp\Pa{\mathrm{i}((r-a_0)\alpha+(s-b_0)\beta)}\frac{\sin\abs{\alpha\bsa+\beta\bsb}}{\abs{\alpha\bsa+\beta\bsb}}.
\end{eqnarray*}
(i) If $\set{\bsa,\bsb}$ is linearly independent, then the following
matrix $\bsT_{\bsa,\bsb}$ is invertible, and thus we may change
variables as
\begin{eqnarray*}
\Pa{\begin{array}{c}
 \tilde\alpha \\
 \tilde\beta
 \end{array}
} = \bsT^{\frac12}_{\bsa,\bsb}\Pa{\begin{array}{c}
 \alpha \\
 \beta
 \end{array}
},\quad\text{where }\bsT_{\bsa,\bsb} = \Pa{\begin{array}{cc}
 \Inner{\bsa}{\bsa} & \Inner{\bsa}{\bsb} \\
 \Inner{\bsa}{\bsb} & \Inner{\bsb}{\bsb}
 \end{array}
},
\end{eqnarray*}
and we set $(\tilde r-\tilde a_0,\tilde s-\tilde
b_0)^\t:=\bsT^{-\frac12}_{\bsa,\bsb}(r-a_0,s-b_0)^\t$ so that the
phase is preserved. Thus we see that
\begin{eqnarray*}
&&f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle}(r,s)=\frac1{(2\pi)^2\sqrt{\det(\bsT_{\bsa,\bsb})}}\int_{\mathbb{R}^2}\mathrm{d}\tilde\alpha\mathrm{d}\tilde\beta
\exp\Pa{\mathrm{i}((\tilde r-\tilde a_0)\tilde\alpha+(\tilde s-\tilde
b_0)\tilde
\beta)}\frac{\sin\sqrt{\tilde\alpha^2+\tilde\beta^2}}{\sqrt{\tilde\alpha^2+\tilde\beta^2}}\\
&&=\frac1{(2\pi)^2\sqrt{\det(\bsT_{\bsa,\bsb})}}\int^\infty_0\mathrm{d} t
\sin t \int^{2\pi}_0\mathrm{d}\theta \exp\Pa{\mathrm{i}t((\tilde r-\tilde
a_0)\cos\theta+(\tilde s-\tilde b_0)\sin\theta)}\\
&&=\frac1{(2\pi)\sqrt{\det(\bsT_{\bsa,\bsb})}}\int^\infty_0\mathrm{d}
t\sin t J_0\Pa{t\sqrt{(\tilde r-\tilde a_0)^2+(\tilde s-\tilde
b_0)^2}},
\end{eqnarray*}
b_0)\\sin\\theta)}\\\\\n&&=\\frac1{(2\\pi)\\sqrt{\\det(\\bsT_{\\boldsymbol{a}}\\def\\bsb{\\boldsymbol{b}}\\def\\bsc{\\boldsymbol{c}}\\def\\bsd{\\boldsymbol{d}}\\def\\bse{\\boldsymbol{e},\\bsb})}}\\int^\\infty_0\\mathrm{d}\nt\\sin t J_0\\Pa{t\\sqrt{(\\tilde r-\\tilde a_0)^2+(\\tilde s-\\tilde\nb_0)^2}},\n\\end{eqnarray*}\nwhere $J_0(z)$ is the so-called Bessel function of first kind,\ndefined by\n\\begin{eqnarray*}\nJ_0(z)=\\frac1\\pi\\int^\\pi_0\\cos(z\\cos\\theta)\\mathrm{d}\\theta.\n\\end{eqnarray*}\nTherefore\n\\begin{eqnarray*}\nf^{(2)}_{\\langle\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}\\rangle,\\langle\\bsB\\rangle}(r,s)\n=\\frac1{2\\pi\\sqrt{\\det(\\bsT_{\\boldsymbol{a}}\\def\\bsb{\\boldsymbol{b}}\\def\\bsc{\\boldsymbol{c}}\\def\\bsd{\\boldsymbol{d}}\\def\\bse{\\boldsymbol{e},\\bsb})}}\\int^{+\\infty}_0\\mathrm{d} t\\sin\ntJ_0\\Pa{t\\cdot\\sqrt{(r-a_0,\ns-b_0)\\bsT_{\\boldsymbol{a}}\\def\\bsb{\\boldsymbol{b}}\\def\\bsc{\\boldsymbol{c}}\\def\\bsd{\\boldsymbol{d}}\\def\\bse{\\boldsymbol{e},\\bsb}^{-1}\\Pa{\\begin{array}{c}\n r-a_0 \\\\\n s-b_0\n \\end{array}\n}}},\n\\end{eqnarray*}\nwhere\n\\begin{eqnarray*}\n\\int^{\\infty}_0J_0(\\lambda t)\\sin(t)\\mathrm{d} t =\n\\frac1{\\sqrt{1-\\lambda^2}}H(1-\\abs{\\lambda}).\n\\end{eqnarray*}\nTherefore\n\\begin{eqnarray*}\nf^{(2)}_{\\langle\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}\\rangle,\\langle\\bsB\\rangle}(r,s)\n=\\frac{H(1-\\omega_{\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E},\\bsB}(r,s))}{2\\pi\\sqrt{\\det(\\bsT_{\\boldsymbol{a}}\\def\\bsb{\\boldsymbol{b}}\\def\\bsc{\\boldsymbol{c}}\\def\\bsd{\\boldsymbol{d}}\\def\\bse{\\boldsymbol{e},\\bsb})(1-\\omega^2_{\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E},\\bsB}(r,s))}},\n\\end{eqnarray*}\nwhere\n\\begin{eqnarray}\\label{eq:omegaAB}\n\\omega_{\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E},\\bsB}(r,s)=\\sqrt{(r-a_0,\ns-b_0)\\bsT_{\\boldsymbol{a}}\\def\\bsb{\\boldsymbol{b}}\\def\\bsc{\\boldsymbol{c}}\\def\\bsd{\\boldsymbol{d}}\\def\\bse{\\boldsymbol{e},\\bsb}^{-1}\\Pa{\\begin{array}{c}\n r-a_0 \\\\\n s-b_0\n \\end{array}\n}}.\n\\end{eqnarray}\n(ii) If $\\set{\\boldsymbol{a}}\\def\\bsb{\\boldsymbol{b}}\\def\\bsc{\\boldsymbol{c}}\\def\\bsd{\\boldsymbol{d}}\\def\\bse{\\boldsymbol{e},\\bsb}$ is linearly dependent, without loss of\ngenerality, let $\\bsb=\\kappa\\cdot\\boldsymbol{a}}\\def\\bsb{\\boldsymbol{b}}\\def\\bsc{\\boldsymbol{c}}\\def\\bsd{\\boldsymbol{d}}\\def\\bse{\\boldsymbol{e}$ for some nonzero\n$\\kappa\\neq0$, then\n\\begin{eqnarray*}\nf^{(2)}_{\\langle\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}\\rangle,\\langle\\bsB\\rangle}(r,s)=\\frac1{(2\\pi)^2}\\int_{\\mR^2}\\mathrm{d}\\alpha\\mathrm{d}\\beta\n\\exp\\Pa{\\mathrm{i}((r-a_0)\\alpha+(s-b_0)\\beta)}\\frac{\\sin\n(a\\abs{\\alpha+\\beta\\kappa})}{a\\abs{\\alpha+\\beta\\kappa}}.\n\\end{eqnarray*}\nHere $a=\\abs{\\boldsymbol{a}}\\def\\bsb{\\boldsymbol{b}}\\def\\bsc{\\boldsymbol{c}}\\def\\bsd{\\boldsymbol{d}}\\def\\bse{\\boldsymbol{e}}$. 
We perform the change of variables
$(\alpha,\beta)\to(\alpha',\beta')$, where
$\alpha'=\alpha+\kappa\beta$ and $\beta'=\beta$. Its Jacobian is given by
\begin{eqnarray*}
\det\Pa{\frac{\partial(\alpha',\beta')}{\partial(\alpha,\beta)}}=\abs{\begin{array}{cc}
 1 & \kappa \\
 0 &
 1
 \end{array}
}=1\neq0.
\end{eqnarray*}
Thus
\begin{eqnarray*}
f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle}(r,s)&=&\frac1{(2\pi)^2}\iint\mathrm{d}\alpha'\mathrm{d}\beta'
\exp\Pa{\mathrm{i}((r-a_0)(\alpha'-\kappa\beta')+(s-b_0)\beta')}\frac{\sin
(a\abs{\alpha'})}{a\abs{\alpha'}}\\
&=&\frac1{2\pi}\int
\exp\Pa{\mathrm{i}((s-b_0)-\kappa(r-a_0))\beta'}\mathrm{d}\beta'\times\frac1{2\pi}\int\mathrm{d}\alpha'
\exp\Pa{\mathrm{i}(r-a_0)\alpha'}\frac{\sin(
a\abs{\alpha'})}{a\abs{\alpha'}}\\
&=&\delta((s-b_0)-\kappa(r-a_0))f^{(2)}_{\langle\bsA\rangle}(r).
\end{eqnarray*}

\begin{prop}\label{prop:expdis}
For a pair of qubit observables
$\bsA=a_0\mathbb{1}+\bsa\cdot\boldsymbol{\sigma}$ and
$\bsB=b_0\mathbb{1}+\bsb\cdot\boldsymbol{\sigma}$: (i) if $\set{\bsa,\bsb}$
is linearly independent, then the pdf of
$(\langle\bsA\rangle_\psi,\langle\bsB\rangle_\psi)$, where
$\psi\in\mathbb{C}^2$ is a Haar-distributed pure state, is given by
\begin{eqnarray*}
f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle}(r,s)
=\frac{H(1-\omega_{\bsA,\bsB}(r,s))}{2\pi\sqrt{\det(\bsT_{\bsa,\bsb})(1-\omega^2_{\bsA,\bsB}(r,s))}}.
\end{eqnarray*}
(ii) If $\set{\bsa,\bsb}$ is linearly dependent, without loss of
generality, let $\bsb=\kappa\cdot\bsa$; then
\begin{eqnarray*}
f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle}(r,s)=\delta((s-b_0)-\kappa(r-a_0))f^{(2)}_{\langle\bsA\rangle}(r).
\end{eqnarray*}
\end{prop}
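For example, for the orthogonal pair $\bsA=\sigma_1$ and
$\bsB=\sigma_3$ (so that $a_0=b_0=0$ and $\bsT_{\bsa,\bsb}$ is the
$2\times2$ identity matrix), Proposition~\ref{prop:expdis} gives
\begin{eqnarray*}
f^{(2)}_{\langle\sigma_1\rangle,\langle\sigma_3\rangle}(r,s)=\frac{H(1-\sqrt{r^2+s^2})}{2\pi\sqrt{1-r^2-s^2}},
\end{eqnarray*}
i.e., the joint distribution of two Bloch-vector components of a
Haar-random pure qubit state, supported on the closed unit disk.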
Proposition~\\ref{prop:expdis}, we can directly infer the\nresults obtained in \\cite{Gutkin2013,Gallay2012}.\n\nWe now turn to a pair of qubit observables\n\\begin{eqnarray}\\label{AB}\n\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E} =a_0\\mathbb{1}+\\boldsymbol{a}}\\def\\bsb{\\boldsymbol{b}}\\def\\bsc{\\boldsymbol{c}}\\def\\bsd{\\boldsymbol{d}}\\def\\bse{\\boldsymbol{e}\\cdot\\boldsymbol{\\sigma}, \\quad \\bsB\n=b_0\\mathbb{1}+\\bsb\\cdot\\boldsymbol{\\sigma} , \\quad (a_0,\\boldsymbol{a}}\\def\\bsb{\\boldsymbol{b}}\\def\\bsc{\\boldsymbol{c}}\\def\\bsd{\\boldsymbol{d}}\\def\\bse{\\boldsymbol{e}),(b_0,\\bsb)\\in\n\\mathbb{R}^4,\n\\end{eqnarray}\nwhose uncertainty region\n\\begin{eqnarray}\\label{2DUR}\n\\mathcal{U}}\\def\\cV{\\mathcal{V}}\\def\\cW{\\mathcal{W}}\\def\\cX{\\mathcal{X}}\\def\\cY{\\mathcal{Y}_{\\Delta\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E},\\Delta\\bsB}:=\\Set{(\\Delta_\\psi \\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E},\\Delta_\\psi\n\\bsB)\\in\\mathbb{R}^2_+: \\ket{\\psi}\\in \\mathbb{C}^2}\n\\end{eqnarray}\nwas proposed by Busch and Reardon-Smith \\cite{Busch2019} in the\nmixed state case. We consider the probability distribution density\n\\begin{eqnarray*}\nf^{(2)}_{\\Delta\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E},\\Delta\\bsB}(x,y) := \\int\\delta(x-\\Delta_\\psi\n\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E})\\delta(y-\\Delta_\\psi \\bsB)\\mathrm{d}\\mu (\\psi),\n\\end{eqnarray*}\non the uncertainty region defined by Eq.~\\eqref{2DUR}. 
Denote
\begin{eqnarray*}
\bsT_{\bsa,\bsb}:=\Pa{\begin{array}{cc}
 \Inner{\bsa}{\bsa} & \Inner{\bsa}{\bsb} \\
 \Inner{\bsb}{\bsa} & \Inner{\bsb}{\bsb}
 \end{array}
}.
\end{eqnarray*}

\begin{thrm}\label{th:varAvarB}
The joint probability distribution density of the uncertainties
$(\Delta_\psi \bsA,\Delta_\psi \bsB)$ for a pair of qubit
observables defined by Eq.~\eqref{AB}, where $\psi$ is a
Haar-distributed random pure state on $\mathbb{C}^2$, is given by
\begin{eqnarray*}
f^{(2)}_{\Delta\bsA,\Delta\bsB}(x,y) = \frac{2xy\sum_{j\in\set{\pm}}
f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle}(r_+(x),s_j(y))}{\sqrt{(a^2-x^2)(b^2-y^2)}},
\end{eqnarray*}
where $a=\abs{\bsa}>0$, $b=\abs{\bsb}>0$,
$r_\pm(x)=a_0\pm\sqrt{a^2-x^2}$, and $s_\pm(y)=b_0\pm\sqrt{b^2-y^2}$.
\end{thrm}

\begin{proof}
Note that in the proof of Theorem~\ref{th:vardis}, we have already
obtained that
\begin{eqnarray*}
\delta(x^2-\Delta^2_\psi\bsA) =
\delta(x^2-(r-\lambda_1(\bsA))(\lambda_2(\bsA)-r)) =\delta(f_x(r)),
\end{eqnarray*}
where $f_x(r):=x^2-(r-\lambda_1(\bsA))(\lambda_2(\bsA)-r)$.
Similarly,
\begin{eqnarray*}
\delta(y^2-\Delta^2_\psi\bsB) = \delta(g_y(s)),
\end{eqnarray*}
where $g_y(s)=y^2-(s-\lambda_1(\bsB))(\lambda_2(\bsB)-s)$.

Again, by using \eqref{eq:2nddelta}, we get that
\begin{eqnarray*}
f^{(2)}_{\Delta\bsA,\Delta\bsB}(x,y) &=&
4xy\int\delta(x^2-\Delta^2_\psi\bsA)\delta(y^2-\Delta^2_\psi\bsB)\mathrm{d}\mu(\psi)\\
&=& 4xy\iint\mathrm{d} r\,\mathrm{d} s\,
f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle}(r,s)\delta(f_x(r))\delta(g_y(s)),
\end{eqnarray*}
r\\mathrm{d}\nsf^{(2)}_{\\langle\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}\\rangle,\\langle\\bsB\\rangle}(r,s)\\delta(f_x(r))\\delta(g_y(s)),\n\\end{eqnarray*}\nwhere $f^{(2)}_{\\langle\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}\\rangle,\\langle\\bsB\\rangle}(r,s)$ is\ndetermined by Proposition~\\ref{prop:expdis}. Hence\n\\begin{eqnarray*}\n\\delta\\Pa{f_x(r)}&=&\\frac1{\\abs{\\partial_{r=r_+(x)}\nf_x(r)}}\\delta_{r_+(x)}+\\frac1{\\abs{\\partial_{r=r_-(x)} f_x(r)}}\\delta_{r_-(x)},\\\\\n\\delta\\Pa{g_y(s)}&=&\\frac1{\\abs{\\partial_{s=s_+(y)}\ng_y(s)}}\\delta_{s_+(y)}+\\frac1{\\abs{\\partial_{s=s_-(y)}.\ng_y(s)}}\\delta_{s_-(y)}.\n\\end{eqnarray*}\nFrom the above, we have already known that\n\\begin{eqnarray*}\n\\delta(f_x(r))\\delta(g_y(s)) =\n\\frac{\\delta_{(r_+,s_+)}+\\delta_{(r_+,s_-)}+\\delta_{(r_-,s_+)}+\\delta_{(r_-,s_-)}}{4\\sqrt{(a^2-x^2)(b^2-y^2)}}.\n\\end{eqnarray*}\nBased on this observation, we get that\n\\begin{eqnarray*}\nf^{(2)}_{\\Delta\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E},\\Delta\\bsB}(x,y) =\n\\frac{xy}{\\sqrt{(a^2-x^2)(b^2-y^2)}}\\sum_{i,j\\in\\set{\\pm}}f_{\\langle\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}\\rangle,\\langle\\bsB\\rangle}(r_i(x),s_j(y)).\n\\end{eqnarray*}\nIt is easily checked that $\\omega_{\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E},\\bsB}(\\cdot,\\cdot)$, defined\nin \\eqref{eq:omegaAB}, satisfies that\n\\begin{eqnarray*}\n\\omega_{\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E},\\bsB}(r_+(x),s_+(y))=\\omega_{\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E},\\bsB}(r_-(x),s_-(y)),\\quad\n\\omega_{\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E},\\bsB}(r_+(x),s_-(y))=\\omega_{\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E},\\bsB}(r_-(x),s_+(y)).\n\\end{eqnarray*}\nThese lead to the fact that\n\\begin{eqnarray*}\n\\sum_{i,j\\in\\set{\\pm}}f^{(2)}_{\\langle\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}\\rangle,\\langle\\bsB\\rangle}(r_i(x),s_j(y))=2\\sum_{j\\in\\set{\\pm}}f_{\\langle\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}\\rangle,\\langle\\bsB\\rangle}(r_+(x),s_j(y)).\n\\end{eqnarray*}\nTherefore\n\\begin{eqnarray*}\nf^{(2)}_{\\Delta\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E},\\Delta\\bsB}(x,y) =\n\\frac{2xy\\sum_{j\\in\\set{\\pm}}f^{(2)}_{\\langle\\boldsymbol{A}}\\def\\bsB{\\boldsymbol{B}}\\def\\bsC{\\boldsymbol{C}}\\def\\bsD{\\boldsymbol{D}}\\def\\bsE{\\boldsymbol{E}\\rangle,\\langle\\bsB\\rangle}(r_+(x),s_j(y))}{\\sqrt{(a^2-x^2)(b^2-y^2)}}.\n\\end{eqnarray*}\nWe get the desired result.\n\\end{proof}\n\n\n\\subsection{The case for three qubit observables}\\label{app:ABC}\n\nWe now turn to the case where there are three 
\subsection{The case for three qubit observables}\label{app:ABC}

We now turn to the case of three qubit observables
\begin{eqnarray}\label{ABC}
\bsA =a_0\mathbb{1}+\bsa\cdot\boldsymbol{\sigma}, \quad \bsB
=b_0\mathbb{1}+\bsb\cdot\boldsymbol{\sigma}, \quad \bsC
=c_0\mathbb{1}+\bsc\cdot\boldsymbol{\sigma}, \quad
(a_0,\bsa),(b_0,\bsb),(c_0,\bsc)\in \mathbb{R}^4,
\end{eqnarray}
whose uncertainty region is
\begin{eqnarray}\label{3DUR}
\cU_{\Delta\bsA,\Delta\bsB,\Delta\bsC}:=\Set{(\Delta_\psi
\bsA,\Delta_\psi \bsB,\Delta_\psi \bsC)\in\mathbb{R}^3_+: \ket{\psi}\in
\mathbb{C}^2} .
\end{eqnarray}
We define the probability distribution density
\begin{eqnarray*}
f^{(2)}_{\Delta\bsA,\Delta\bsB,\Delta\bsC}(x,y,z) :=
\int\delta(x-\Delta_\psi \bsA)\delta(y-\Delta_\psi
\bsB)\delta(z-\Delta_\psi \bsC)\mathrm{d}\mu (\psi),
\end{eqnarray*}
on the uncertainty region defined by Eq.~\eqref{3DUR}. Denote
\begin{eqnarray*}
\bsT_{\bsa,\bsb,\bsc}:=\Pa{\begin{array}{ccc}
 \Inner{\bsa}{\bsa} & \Inner{\bsa}{\bsb} & \Inner{\bsa}{\bsc} \\
 \Inner{\bsb}{\bsa} & \Inner{\bsb}{\bsb} & \Inner{\bsb}{\bsc} \\
 \Inner{\bsc}{\bsa} & \Inner{\bsc}{\bsb} & \Inner{\bsc}{\bsc}
 \end{array}
}.
\end{eqnarray*}
Again note that $\bsT_{\bsa,\bsb,\bsc}$ is a positive semidefinite
matrix. We find that $\op{rank}(\bsT_{\bsa,\bsb,\bsc})\leqslant3$.
There are three possible cases: $\op{rank}(\bsT_{\bsa,\bsb,\bsc})=1$, $2$, or $3$.
Moreover, $\bsT_{\bsa,\bsb,\bsc}$ is invertible (i.e.,
$\op{rank}(\bsT_{\bsa,\bsb,\bsc})=3$) if and only if
$\set{\bsa,\bsb,\bsc}$ is linearly independent. In this case, we write
\begin{eqnarray*}
\omega_{\bsA,\bsB,\bsC}(r,s,t):=\sqrt{(r-a_0,s-b_0,t-c_0)\bsT^{-1}_{\bsa,\bsb,\bsc}(r-a_0,s-b_0,t-c_0)^\t}.
\end{eqnarray*}
In order to calculate $f^{(2)}_{\Delta\bsA,\Delta\bsB,\Delta\bsC}$,
we essentially need to derive the joint probability distribution
density of
$(\langle\bsA\rangle_\psi,\langle\bsB\rangle_\psi,\langle\bsC\rangle_\psi)$,
which is defined by
\begin{eqnarray*}
f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}(r,s,t)
:=\int\delta(r-\langle\bsA\rangle_\psi)\delta(s-\langle\bsB\rangle_\psi)\delta(t-\langle\bsC\rangle_\psi)\mathrm{d}\mu(\psi).
\end{eqnarray*}
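For a single observable the corresponding density is elementary: writing
$a:=\abs{\bsa}$ (assumed nonzero), the Bloch representation gives
$\langle\bsA\rangle_\psi=a_0+\Inner{\bsu}{\bsa}$ with $\bsu$ uniformly
distributed on the unit sphere, and by the Archimedes (hat-box) projection
property of the uniform sphere measure the expectation value is uniform on
its range,
\begin{eqnarray*}
f^{(2)}_{\langle\bsA\rangle}(r)=\frac1{2a},\qquad r\in[a_0-a,a_0+a],
\end{eqnarray*}
and vanishes otherwise. This marginal density is the one appearing in case
(iii) of the proposition below.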
We have the following result:

\begin{prop}\label{prop:ABC}
For three qubit observables given by Eq.~\eqref{ABC}: (i) if
$\op{rank}(\bsT_{\bsa,\bsb,\bsc})=3$, i.e., $\set{\bsa,\bsb,\bsc}$ is
linearly independent, then the joint probability distribution
density of
$(\langle\bsA\rangle_\psi,\langle\bsB\rangle_\psi,\langle\bsC\rangle_\psi)$,
where $\psi$ is a Haar-distributed random pure state on
$\mathbb{C}^2$, is given by
\begin{eqnarray}\label{eq:meanABC}
f^{(2)}_{\langle \bsA\rangle ,\langle \bsB\rangle , \langle
\bsC\rangle }(r,s,t)
=\frac1{4\pi\sqrt{\det(\bsT_{\bsa,\bsb,\bsc})}}\delta(1-\omega_{\bsA,\bsB,\bsC}(r,s,t)).
\end{eqnarray}
(ii) If $\op{rank}(\bsT_{\bsa,\bsb,\bsc})=2$, i.e.,
$\set{\bsa,\bsb,\bsc}$ spans a two-dimensional subspace, we may assume
without loss of generality that $\set{\bsa,\bsb}$ is linearly
independent and $\bsc=\kappa_{\bsa}\bsa+\kappa_{\bsb}\bsb$
for some $\kappa_{\bsa}$ and $\kappa_{\bsb}$ with
$\kappa_{\bsa}\kappa_{\bsb}\neq0$; then
\begin{eqnarray*}
f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}(r,s,t)=
\delta((t-c_0)-\kappa_{\bsa}(r-a_0)-\kappa_{\bsb}(s-b_0))f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle}(r,s).
\end{eqnarray*}
(iii) If $\op{rank}(\bsT_{\bsa,\bsb,\bsc})=1$, i.e.,
$\set{\bsa,\bsb,\bsc}$ spans a one-dimensional subspace, we may assume
without loss of generality that $\bsa$ is nonzero and
$\bsb=\kappa_{\bsb\bsa}\bsa$, $\bsc=\kappa_{\bsc\bsa}\bsa$
for some $\kappa_{\bsb\bsa}$ and $\kappa_{\bsc\bsa}$
with
$\kappa_{\bsb\bsa}\kappa_{\bsc\bsa}\neq0$; then
\begin{eqnarray*}
f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}(r,s,t)
=\delta((s-b_0)-\kappa_{\bsb\bsa}(r-a_0))\delta((t-c_0)-\kappa_{\bsc\bsa}(r-a_0))
f^{(2)}_{\langle\bsA\rangle}(r).
\end{eqnarray*}
\end{prop}
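Before turning to the proof, we illustrate case (i) with the Pauli triple
$\bsA=\sigma_1$, $\bsB=\sigma_2$, $\bsC=\sigma_3$: here
$\bsT_{\bsa,\bsb,\bsc}=\mathbb{1}_3$, $\det(\bsT_{\bsa,\bsb,\bsc})=1$ and
$\omega_{\bsA,\bsB,\bsC}(r,s,t)=\sqrt{r^2+s^2+t^2}$, so
Eq.~\eqref{eq:meanABC} reduces to
\begin{eqnarray*}
f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}(r,s,t)=\frac1{4\pi}\delta\Pa{1-\sqrt{r^2+s^2+t^2}},
\end{eqnarray*}
i.e.,
$(\langle\sigma_1\rangle_\psi,\langle\sigma_2\rangle_\psi,\langle\sigma_3\rangle_\psi)$
is uniformly distributed over the unit (Bloch) sphere, as expected for a
Haar-distributed pure qubit state.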
\begin{proof}
(i) If $\op{rank}(\bsT_{\bsa,\bsb,\bsc})=3$, then
$\bsT_{\bsa,\bsb,\bsc}$ is \emph{invertible}. In the Bloch
representation,
$\proj{\psi}=\frac12(\mathbb{1}_2+\bsu\cdot\boldsymbol{\sigma})$ with
$\abs{\bsu}=1$. Then for
$(r,s,t)=(\langle\bsA\rangle_\psi,\langle\bsB\rangle_\psi,\langle\bsC\rangle_\psi)=(a_0+\Inner{\bsu}{\bsa},b_0+\Inner{\bsu}{\bsb},c_0+\Inner{\bsu}{\bsc})$,
we see that
\begin{eqnarray*}
(r-a_0,s-b_0,t-c_0) =
(\Inner{\bsu}{\bsa},\Inner{\bsu}{\bsb},\Inner{\bsu}{\bsc}).
\end{eqnarray*}
Denote $\bsQ:=(\bsa,\bsb,\bsc)$, which is a $3\times 3$ invertible
real matrix due to the fact that $\set{\bsa,\bsb,\bsc}$ is linearly
independent. Then $\bsT_{\bsa,\bsb,\bsc}=\bsQ^\t\bsQ$ and
$(r-a_0,s-b_0,t-c_0) =\bra{\bsu}\bsQ$, which means that
\begin{eqnarray*}
\omega_{\bsA,\bsB,\bsC}(r,s,t)=\sqrt{\bra{\bsu}\bsQ(\bsQ^\t\bsQ)^{-1}\bsQ^\t\ket{\bsu}}
= \abs{\bsu}=1,
\end{eqnarray*}
since $\bsQ(\bsQ^\t\bsQ)^{-1}\bsQ^\t=\mathbb{1}_3$ for invertible $\bsQ$.
This shows that
$(\langle\bsA\rangle_\psi,\langle\bsB\rangle_\psi,\langle\bsC\rangle_\psi)$
lies on the boundary surface of the ellipsoid
$\omega_{\bsA,\bsB,\bsC}(r,s,t)\leqslant1$, i.e.,
$\omega_{\bsA,\bsB,\bsC}(r,s,t)=1$. This indicates that the PDF of
$(\langle\bsA\rangle_\psi,\langle\bsB\rangle_\psi,\langle\bsC\rangle_\psi)$
satisfies
\begin{eqnarray*}
f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}(r,s,t)\propto\delta(1-\omega_{\bsA,\bsB,\bsC}(r,s,t)).
\end{eqnarray*}
Next we calculate the following integral:
\begin{eqnarray*}
\int_{\mathbb{R}^3}\delta(1-\omega_{\bsA,\bsB,\bsC}(r,s,t))\mathrm{d} r\mathrm{d}
s\mathrm{d} t=4\pi\sqrt{\det(\bsT_{\bsa,\bsb,\bsc})}.
\end{eqnarray*}
Clearly,
\begin{eqnarray*}
\int_{\mathbb{R}^3}\delta(1-\omega_{\bsA,\bsB,\bsC}(r,s,t))\mathrm{d} r\mathrm{d}
s\mathrm{d} t=
\int_{\mathbb{R}^3}\delta\Pa{1-\sqrt{\Innerm{\bsx}{\bsT^{-1}_{\bsa,\bsb,\bsc}}{\bsx}}}[\mathrm{d}
\bsx],
\end{eqnarray*}
where $\bsx=(r-a_0,s-b_0,t-c_0)$ and $[\mathrm{d}\bsx]=\mathrm{d} r\mathrm{d} s\mathrm{d} t$.
Indeed, by the spectral decomposition theorem for real symmetric
matrices, there is an orthogonal matrix $\bsO\in\O(3)$ such that
$\bsT_{\bsa,\bsb,\bsc}=\bsO^\t\mathrm{diag}(\lambda_1,\lambda_2,\lambda_3)\bsO$,
where $\lambda_k>0$ $(k=1,2,3)$. Thus
\begin{eqnarray*}
\omega_{\bsA,\bsB,\bsC}(r,s,t)=\sqrt{\Innerm{\bsO\bsx}{\mathrm{diag}(\lambda^{-1}_1,\lambda^{-1}_2,\lambda^{-1}_3)}{\bsO\bsx}}=\sqrt{\Innerm{\bsy}{\mathrm{diag}(\lambda^{-1}_1,\lambda^{-1}_2,\lambda^{-1}_3)}{\bsy}},
\end{eqnarray*}
where $\bsy=\bsO\bsx$. Hence
\begin{eqnarray*}
\int_{\mathbb{R}^3}\delta(1-\omega_{\bsA,\bsB,\bsC}(r,s,t))\mathrm{d} r\mathrm{d}
s\mathrm{d} t=
\int_{\mathbb{R}^3}\delta\Pa{1-\sqrt{\Innerm{\bsy}{\mathrm{diag}(\lambda^{-1}_1,\lambda^{-1}_2,\lambda^{-1}_3)}{\bsy}}}[\mathrm{d}
\bsy].
\end{eqnarray*}
Let
$\boldsymbol{z}=\mathrm{diag}(\lambda^{-1/2}_1,\lambda^{-1/2}_2,\lambda^{-1/2}_3)\bsy$.
Then
$[\mathrm{d}\boldsymbol{z}]=\frac1{\sqrt{\lambda_1\lambda_2\lambda_3}}[\mathrm{d}\bsy]=\frac1{\sqrt{\det(\bsT_{\bsa,\bsb,\bsc})}}[\mathrm{d}\bsy]$
and
\begin{eqnarray*}
\int_{\mathbb{R}^3}\delta(1-\omega_{\bsA,\bsB,\bsC}(r,s,t))\mathrm{d} r\mathrm{d}
s\mathrm{d} t=
\sqrt{\det(\bsT_{\bsa,\bsb,\bsc})}\int_{\mathbb{R}^3}\delta\Pa{1-\abs{\boldsymbol{z}}}[\mathrm{d}
\boldsymbol{z}]=4\pi \sqrt{\det(\bsT_{\bsa,\bsb,\bsc})}.
\end{eqnarray*}
Finally we get that
\begin{eqnarray*}
f^{(2)}_{\langle \bsA\rangle ,\langle \bsB\rangle , \langle
\bsC\rangle }(r,s,t) =
\frac1{4\pi\sqrt{\det(\bsT_{\bsa,\bsb,\bsc})}}\delta(1-\omega_{\bsA,\bsB,\bsC}(r,s,t)).
\end{eqnarray*}
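For completeness, the elementary spherical integral invoked above evaluates,
in spherical coordinates, as
\begin{eqnarray*}
\int_{\mathbb{R}^3}\delta\Pa{1-\abs{\boldsymbol{z}}}[\mathrm{d}\boldsymbol{z}]
=\int_0^{\infty}\delta(1-\rho)\,4\pi\rho^2\,\mathrm{d}\rho=4\pi.
\end{eqnarray*}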
(ii) If $\op{rank}(\bsT_{\bsa,\bsb,\bsc})=2$, then
$\set{\bsa,\bsb,\bsc}$ is linearly dependent. Without loss of
generality, we assume that $\set{\bsa,\bsb}$ is linearly independent. Now
$\bsc=\kappa_{\bsa}\bsa+\kappa_{\bsb}\bsb$ for some
$\kappa_{\bsa},\kappa_{\bsb}\in\mathbb{R}$ with
$\kappa_{\bsa}\kappa_{\bsb}\neq0$. Thus
\begin{eqnarray*}
t-c_0&=&\langle\bsC\rangle_\psi-c_0
=\Innerm{\psi}{\bsc\cdot\boldsymbol{\sigma}}{\psi}
=\kappa_{\bsa}\Innerm{\psi}{\bsa\cdot\boldsymbol{\sigma}}{\psi}+\kappa_{\bsb}\Innerm{\psi}{\bsb\cdot\boldsymbol{\sigma}}{\psi}\\
&=&\kappa_{\bsa}(r-a_0)+\kappa_{\bsb}(s-b_0).
\end{eqnarray*}
Therefore we get that
\begin{eqnarray*}
f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}(r,s,t)=
\delta((t-c_0)-\kappa_{\bsa}(r-a_0)-\kappa_{\bsb}(s-b_0))f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle}(r,s).
\end{eqnarray*}
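For instance, if $\bsC=\bsA+\bsB$ (with $\bsa$ and $\bsb$ linearly
independent), then $c_0=a_0+b_0$, $\bsc=\bsa+\bsb$ and
$\kappa_{\bsa}=\kappa_{\bsb}=1$, so the above density is supported on the
plane $t=r+s$:
\begin{eqnarray*}
f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}(r,s,t)=\delta(t-r-s)f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle}(r,s),
\end{eqnarray*}
in accordance with the linearity
$\langle\bsC\rangle_\psi=\langle\bsA\rangle_\psi+\langle\bsB\rangle_\psi$.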
(iii) If $\op{rank}(\bsT_{\bsa,\bsb,\bsc})=1$, then
$\set{\bsa,\bsb,\bsc}$ is linearly dependent. Without loss of
generality, we assume that $\bsa$ is nonzero and
$\bsb=\kappa_{\bsb\bsa}\bsa$, $\bsc=\kappa_{\bsc\bsa}\bsa$
for some $\kappa_{\bsb\bsa}$ and $\kappa_{\bsc\bsa}$ with
$\kappa_{\bsb\bsa}\kappa_{\bsc\bsa}\neq0$. Then we get the desired
result by mimicking the proof of (ii).
\end{proof}

\begin{thrm}\label{th:ABC2}
The joint probability distribution density of $(\Delta_\psi
\bsA,\Delta_\psi\bsB,\Delta_\psi \bsC)$ for a triple of qubit
observables defined by Eq.~\eqref{ABC}, where $\ket{\psi}$ is a
Haar-distributed random pure state on $\mathbb{C}^2$, is given by
\begin{eqnarray*}
f^{(2)}_{\Delta\bsA,\Delta\bsB,\Delta\bsC}(x,y,z)
=\frac{2xyz}{\sqrt{(a^2-x^2)(b^2-y^2)(c^2-z^2)}}\sum_{j,k\in\set{\pm}}
f^{(2)}_{\langle \bsA\rangle ,\langle \bsB\rangle,\langle
\bsC\rangle}(r_+(x),s_j(y),t_k(z)),
\end{eqnarray*}
where $f^{(2)}_{\langle \bsA\rangle,\langle \bsB\rangle, \langle
\bsC\rangle}(r,s,t)$ is the joint probability distribution density
of the expectation values $(\langle \bsA\rangle_\psi,\langle
\bsB\rangle_\psi,\langle \bsC\rangle_\psi)$, which is determined in
Proposition~\ref{prop:ABC}; and
\begin{eqnarray*}
r_\pm(x):=a_0\pm\sqrt{a^2-x^2},\quad
s_\pm(y):=b_0\pm\sqrt{b^2-y^2},\quad t_\pm(z):=c_0\pm\sqrt{c^2-z^2}.
\end{eqnarray*}
\end{thrm}
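As an illustration of the theorem, consider again the Pauli triple
$\bsA=\sigma_1$, $\bsB=\sigma_2$, $\bsC=\sigma_3$, for which
$a_0=b_0=c_0=0$ and $a=b=c=1$. Each summand then equals
$\frac1{4\pi}\delta\Pa{1-\sqrt{(1-x^2)+(1-y^2)+(1-z^2)}}$, independently of
the signs $j,k$, so that
\begin{eqnarray*}
f^{(2)}_{\Delta\bsA,\Delta\bsB,\Delta\bsC}(x,y,z)
=\frac{2xyz}{\pi\sqrt{(1-x^2)(1-y^2)(1-z^2)}}\delta\Pa{1-\sqrt{3-x^2-y^2-z^2}},
\end{eqnarray*}
which is concentrated on the sphere $x^2+y^2+z^2=2$; this is consistent
with $\Delta^2_\psi\sigma_k=1-\langle\sigma_k\rangle^2_\psi$ and
$\sum_{k}\langle\sigma_k\rangle^2_\psi=1$ for pure states.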
\begin{proof}
Note that
\begin{eqnarray*}
f^{(2)}_{\Delta\bsA,\Delta\bsB,\Delta\bsC}(x,y,z)&=&\int\delta(x-\Delta_\psi
\bsA)\delta(y-\Delta_\psi
\bsB)\delta(z-\Delta_\psi\bsC)\mathrm{d}\mu(\psi) \\
&=& 8xyz\int\delta\Pa{x^2-\Delta^2_\psi
\bsA}\cdot\delta\Pa{y^2-\Delta^2_\psi
\bsB}\cdot\delta\Pa{z^2-\Delta^2_\psi\bsC}\mathrm{d}\mu(\psi).
\end{eqnarray*}
Again using the method in the proof of Theorem 1, we have
\begin{eqnarray*}
\delta\Pa{x^2-\Delta^2_\psi \bsA}\cdot\delta\Pa{y^2-\Delta^2_\psi
\bsB}\cdot\delta\Pa{z^2-\Delta^2_\psi
\bsC}=\delta(f_x(r))\delta(g_y(s))\delta(h_z(t)),
\end{eqnarray*}
where
\begin{eqnarray*}
f_x(r)&:=&x^2-(r-\lambda_1(\bsA))(\lambda_2(\bsA)-r),\\
g_y(s)&:=&y^2-(s-\lambda_1(\bsB))(\lambda_2(\bsB)-s),\\
h_z(t)&:=&z^2-(t-\lambda_1(\bsC))(\lambda_2(\bsC)-t).
\end{eqnarray*}
Then
\begin{eqnarray*}
f^{(2)}_{\Delta\bsA,\Delta\bsB,\Delta\bsC}(x,y,z)&=&8xyz\iiint
\delta(f_x(r))\delta(g_y(s))\delta(h_z(t))f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}(r,s,t)\mathrm{d}
r\mathrm{d} s\mathrm{d} t.
\end{eqnarray*}
Furthermore we have
\begin{eqnarray*}
\delta(f_x(r))\delta(g_y(s))\delta(h_z(t)) =
\frac{\sum_{i,j,k\in\set{\pm}}\delta_{(r_i(x),s_j(y),t_k(z))}}{8\sqrt{(a^2-x^2)(b^2-y^2)(c^2-z^2)}}.
\end{eqnarray*}
Based on this observation, we get that
\begin{eqnarray*}
f^{(2)}_{\Delta\bsA,\Delta\bsB,\Delta\bsC}(x,y,z)&=&\frac{xyz}{\sqrt{(a^2-x^2)(b^2-y^2)(c^2-z^2)}}\sum_{i,j,k\in\set{\pm}}\Inner{\delta_{(r_i(x),s_j(y),t_k(z))}}{f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}}.
\end{eqnarray*}
Thus
\begin{eqnarray*}
f^{(2)}_{\Delta\bsA,\Delta\bsB,\Delta\bsC}(x,y,z) =
\frac{xyz\sum_{i,j,k\in\set{\pm}}f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}(r_i(x),s_j(y),t_k(z))}{\sqrt{(a^2-x^2)(b^2-y^2)(c^2-z^2)}}.
\end{eqnarray*}
It is easily seen that
\begin{eqnarray*}
f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}(r_+(x),s_+(y),t_+(z))=f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}(r_-(x),s_-(y),t_-(z)),\\
f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}(r_+(x),s_+(y),t_-(z))=f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}(r_-(x),s_-(y),t_+(z)),\\
f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}(r_+(x),s_-(y),t_+(z))=f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}(r_-(x),s_+(y),t_-(z)),\\
f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}(r_+(x),s_-(y),t_-(z))=f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}(r_-(x),s_+(y),t_+(z)).
\end{eqnarray*}
From these observations, we can reduce the above expression to the
following:
\begin{eqnarray*}
f^{(2)}_{\Delta\bsA,\Delta\bsB,\Delta\bsC}(x,y,z) =
\frac{2xyz\sum_{j,k\in\set{\pm}}f^{(2)}_{\langle\bsA\rangle,\langle\bsB\rangle,\langle\bsC\rangle}(r_+(x),s_j(y),t_k(z))}{\sqrt{(a^2-x^2)(b^2-y^2)(c^2-z^2)}}.
\end{eqnarray*}
The desired result is obtained.
\end{proof}

Note that the PDFs of the uncertainties of more than three qubit
observables reduce to the three situations above, as shown
in \cite{Zhang2021preprint}; we omit the details here.

\section{PDF of uncertainty of a single qudit observable}

Assume $\bsA$ is a non-degenerate positive matrix acting on
$\mathbb{C}^d$ $(d>1)$, with eigenvalues $\lambda(\bsA)=(a_1,\ldots,a_d)$,
where $a_1<\cdots<a_d$. Denote
$V_d(\bsa):=\prod_{1\leqslant i<j\leqslant d}(a_j-a_i)$.