diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzizx" "b/data_all_eng_slimpj/shuffled/split2/finalzizx" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzizx" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{intro}\nThe search for a naturally occurring condensed matter system other than helium, capable of displaying the stunning phenomenon of superfluidity, has motivated decades of experimental and theoretical investigation of condensed parahydrogen ({\\it p}-H$_2$). One naturally thinks of it as a potential second superfluid, for its elementary constituents, namely {\\it p}-H$_2$ molecules, are composite bosons of spin $S=0$ with a mass equal to one half of that of a $^4$He atom.\\\\ Considering fluid {\\it p}-H$_2$ as a non-interacting gas, Ginzburg and Sobyanin \\cite{gs} proposed that Bose-Einstein Condensation ought to occur at a temperature $T\\sim 6$ K. Such a simple model yields an equivalent temperature for $^4$He of $\\sim$ 3 K, remarkably close to that at which it is observed experimentally, with the concurrent onset of superfluidity; it seems thus plausible that the same physical behavior might occur in {\\it p}-H$_2$.\n\\\\ \\indent\nExperimental investigation spanning a few decades \\cite{bretz,maris,rail,schindler,sokol,chan}, however, has failed to observe the putative superfluid (SF) phase of {\\it p}-H$_2$, which, unlike $^4$He, solidifies at a temperature of $T=13.8$ K, i.e., well above that at which the superfluid transition should take place.\n\\\\ \\indent \nNo controversy exists at the theoretical level about the fact that the different behavior of {\\it p}-H$_2$ and $^4$He is a direct consequence of the very different relative importance of interparticle interactions, imparting to {\\it p}-H$_2$ a strong propensity to crystallize \\cite{boninsegni04,boninsegni13}. 
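The ideal-gas estimates quoted above are easy to reproduce. The sketch below evaluates the Bose-Einstein condensation temperature $T_c = (2\pi\hbar^2/m k_B)\,(n/\zeta(3/2))^{2/3}$ for $^4$He and H$_2$; the liquid number densities used here are illustrative values we assume for the purpose of the estimate, not numbers taken from this paper.

```python
import math

# Ideal Bose gas condensation temperature:
# T_c = (2 pi hbar^2 / (m k_B)) * (n / zeta(3/2))^(2/3)
hbar = 1.054571817e-34      # J s
k_B  = 1.380649e-23         # J / K
zeta_3_2 = 2.6124           # Riemann zeta(3/2)

def t_bec(n_per_A3, mass_kg):
    """Ideal-gas BEC temperature (K) for a number density in angstrom^-3."""
    n = n_per_A3 * 1e30     # convert A^-3 -> m^-3
    return (2 * math.pi * hbar**2 / (mass_kg * k_B)) * (n / zeta_3_2) ** (2.0 / 3.0)

m_He = 6.6465e-27   # kg, helium-4 atom
m_H2 = 3.3476e-27   # kg, H2 molecule (half the He mass, as noted in the text)

# Illustrative liquid number densities (assumed, not from this paper)
Tc_He = t_bec(0.0218, m_He)   # ~3 K, close to the observed lambda point
Tc_H2 = t_bec(0.023,  m_H2)   # ~6 K, the Ginzburg-Sobyanin scale
```

The lighter mass of H$_2$ roughly doubles the estimate relative to $^4$He, which is the whole basis of the hope for a second superfluid.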
Molecular localization, and the consequent absence of superfluidity, is predicted even in confinement \\cite{omiyinka,screw}, in disorder \\cite{turnbull}, or in thin films intercalated with a regular array of impurities \\cite{boninsegni05,boninsegni16}. It seems fair to state that no credible scenario of {\\em bulk} {\\it p}-H$_2$ superfluidity is presently being investigated, or even discussed; the only quantitative prediction of SF behavior of {\\it p}-H$_2$ has been made for small clusters (a few tens of {\\it p}-H$_2$ molecules), at temperatures of the order of 1 K \\cite{sindzingre,kwon,fabio,fabio2,boninsegni20}; some experimental evidence of SF behavior of these clusters has actually been obtained \\cite{grebenev,li,raston}. \n\\\\ \\indent\nOne might wonder ``how far'', so to speak, bulk {\\it p}-H$_2$ is from turning SF; is it ``on the verge'' of undergoing such a transition, one clever idea or experimental trick away from circumventing solidification, or is it intrinsically prevented from displaying superfluidity, due to a particular combination of particle mass and interaction? \nThere are strong indications in favor of the latter scenario, one that no amount of effort or ingenuity on the part of experimenters may overcome. Besides the above-mentioned experimental failure to detect a SF response under very different experimental conditions, no theoretical evidence of a possible bulk metastable (super)fluid phase has been reported so far \\cite{boninsegni04,boninsegni18}. Solidification occurs in {\\it p}-H$_2$ as a direct consequence of the suppression of quantum-mechanical exchanges, which are known to play a crucial role in the crystallization of Lennard-Jones-like Bose systems \\cite{role}. 
Exchanges are hindered in {\\it p}-H$_2$ by the relatively large value of the hard core diameter of the intermolecular interaction \\cite{boninsegni18}, {\\em not} by the depth of its attractive well (a long-held, erroneous belief).\n\\\\ \\indent\nIn a recent experimental paper, however, the assertion has been made that the top layer of a thin (few layers thick) {\\it p}-H$_2$ film adsorbed on a glass substrate is, in fact, in the near proximity of a phase transition to a SF phase \\cite{ikr}. This conclusion is based on measurements of the elastic response of films of {\\it p}-H$_2$ (as well as HD and D$_2$) adsorbed inside gelsil -- a porous glass which can be regarded as a network of interconnected cylindrical channels\\footnote{Theoretical evidence suggests that the properties of an adsorbed layer of {\\it p}-H$_2$ on the inner surface of a cylinder of such a large diameter, are essentially identical with those on a flat substrate. See, for instance, Ref. \\cite{omiyinka}.} of average diameter $\\sim 40$ \\AA. Specific anomalies in the observed elastic response are interpreted in Ref. \\cite{ikr} as signaling the onset of different physical regimes of diffusion of surface molecules, culminating in their freezing into a localized state at a temperature $T\\sim 1$ K, whereupon the layer crystallizes.\nThe implication is that the top layer of the film may remain in a liquid-like phase, in which molecules experience a rather high mobility, down to a temperature fairly close to that at which a Berezinskii-Kosterlitz-Thouless SF transition ought to occur, based on the well-known universal jump condition \\cite{nk}, assuming a two-dimensional (2D) density equal to the equilibrium value \\cite{boninsegni04}. \n\\\\ \\indent\nThe scenario laid out in Ref. 
\\cite{ikr}, while certainly intriguing, is at variance with first-principles theoretical studies of {\\it p}-H$_2$ films on various substrates, including weakly attractive ones, showing that adsorption takes place through completion of successive {\\em solid} adlayers, whose melting temperature is close to 7 K \\cite{melting}, i.e., significantly higher than that implied in Ref. \\cite{ikr}. Those studies have yielded no indication of any liquid-like behavior of the top layer. \\\\ \\indent\nIn order to provide a theoretical check of the predictions made in Ref. \\cite{ikr}, as well as to gain additional theoretical insight, we have carried out first-principles computer simulations at low temperature (down to $T$=0.5 K) of a thin (up to two layers) {\\it p}-H$_2$ film adsorbed on a glass substrate. Our simulations are based on standard Quantum Monte Carlo (QMC) techniques. \nWe made use of an accepted pair potential to describe the interaction between {\\it p}-H$_2$ molecules, while for the interaction between a {\\it p}-H$_2$ molecule and the substrate we utilized the simple ``3-9'' potential, with coefficients adjusted by starting from the commonly adopted values for helium, and modifying them to describe {\\it p}-H$_2$, using the Lorentz-Berthelot combining rules, as done in previous work \\cite{omiyinka}. \\\\ Despite its relative crudeness, this microscopic model turns out to be quite effective in reproducing important experimental observations, such as the values of coverage at which the first and second adsorbed layers are completed. However, our results lend no support to the contention of Ref. \\cite{ikr}. Rather, we arrive at the same conclusions as in all similar computational studies of condensed {\\it p}-H$_2$, which have shaped our current understanding of the system. 
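The universal-jump argument invoked above is easy to make quantitative. At the transition, $T_{\rm BKT}=\pi\hbar^2 n_s/(2 m k_B)$ for an ideal 2D superfluid; the sketch below evaluates this upper bound for a {\it p}-H$_2$ monolayer. The areal density of 0.067 \AA$^{-2}$ is an illustrative equilibrium value assumed here (not quoted from this paper), and the full density is assumed to participate in the superfluid response.

```python
import math

hbar = 1.054571817e-34    # J s
k_B  = 1.380649e-23       # J / K
m_H2 = 3.3476e-27         # kg, mass of an H2 molecule

def t_bkt(n_s_per_A2, mass):
    """Universal-jump BKT temperature (K) for an areal superfluid
    density n_s given in inverse square angstroms."""
    n_s = n_s_per_A2 * 1e20          # convert A^-2 -> m^-2
    return math.pi * hbar**2 * n_s / (2 * mass * k_B)

# Assumed equilibrium 2D density of a p-H2 monolayer (illustrative value)
T_upper = t_bkt(0.067, m_H2)         # ~2.5 K upper bound
```

The resulting $\approx$ 2.5 K is an upper bound; any normal (non-superfluid) fraction would lower it toward the $\sim$ 1 K scale discussed in Ref. \cite{ikr}, which is why the claim is tantalizing.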
\n\\\\ \\indent\nQuantum-mechanical exchanges, both among molecules in different layers as well as in the same layer, are all but non-existent, all the way down to the lowest temperature considered here. This makes the entire contention that the top adlayer may be ``on the verge'' of turning superfluid downright untenable. Rather, we find both the monolayer and the second layer to be in an insulating crystalline phase, at temperatures as high as 6 K. The structure and energetics of the adsorbed film display little or no change as the temperature is lowered from 6 to 0.5 K. In summary, our simulations, based on the currently accepted microscopic model of condensed {\\it p}-H$_2$, one that accurately accounts for a wide range of observed properties of the bulk phase (see, for instance, Ref. \\cite{dusseault}), are at variance with the interpretation proposed in Ref. \\cite{ikr} of the elastic anomalies observed therein. More generally, they reaffirm the notion that the study of adsorbed {\\it p}-H$_2$ films is scarcely a promising avenue to the observation of superfluidity.\n\\\\ \\indent\nThe remainder of this manuscript is organized as follows: in sec. \\ref{model} we describe the microscopic model adopted in this study and offer a brief description of the computational methodology; we illustrate our results in sec. \\ref{else}, and outline our conclusions in sec. \\ref{concl}. \n\n\n\n\n\n\\section{Model and methodology}\\label{model}\nWe consider an ensemble of $N$ {\\it p}-H$_2$ molecules, regarded as point-like spin-zero bosons, moving in the presence of a smooth, flat substrate. For the coverages and temperatures considered here, up to two {\\it p}-H$_2$ layers form. \nThe system is enclosed in a simulation cell shaped as a cuboid, with periodic boundary conditions in all directions (but the length of the cell in the $z$ direction can be considered infinite for all practical purposes). 
The flat (glass) substrate occupies the $z=0$ face of the cuboid, whose area is $A$. The nominal coverage $\\theta$ is given by $N\/A$, while $N_1\/A$, $N_2\/A$ are the 2D densities in the two layers.\nThe quantum-mechanical many-body Hamiltonian reads as follows:\n\\begin{eqnarray}\\label{u}\n\\hat H = -\\lambda\\sum_{i}\\nabla^2_{i}+\\sum_{i}U({z}_{i})+\\sum_{i<j}V(r_{ij}),\n\\end{eqnarray}\n\n\\begin{figure}\n\\centerline{\n\\hspace{0.3cm}\n\\psfig{figure=f2.pdf,height=0.4\\textwidth} \n}\n\\caption{Sound wave power (top) and integrated acoustic energy (bottom) as functions of radius, measured in plasma with $\\mu_1 <$ 10$^{-6}$. While the peak power of the sound waves decreases with radius (top) the total integrated energy remains relatively constant over the range 1.5 $\\leq$ $r$ $\\leq$ 7.0 $r_0$ (bottom). Sound waves are dispersive within an atmosphere, a consequence of the RHS of Equation~\\ref{eq:11}. \n}\n\\label{fig:power_energy}\n\\end{figure}\n\nSound wave power is measured at 20 evenly-spaced measurement radii in the range 0.5 $\\leq$ $r$ $\\leq$ 10 $r_0$. We integrate the sound wave flux according to Equation~\\ref{eq:16} to determine $\\dot{E}_{\\mathrm{Acu}} (R_S,t)$, the acoustic power at a radius $R_S$. The acoustic energy is determined by integrating $\\dot{E}_{\\mathrm{Acu}} (R_S,t)$ over all time $t$ using a second-order accurate midpoint method. Results are presented in Figure~\\ref{fig:power_energy}.\n\nA single jet injects an energy of\n\\begin{equation} \\label{eq:21}\n\tE_{\\mathrm{Jet}} = \\frac{1}{2} \\rho_J 2 \\pi r_{\\mathrm{in}}^2 \\left(1 - \\cos{\\theta_J} \\right) v_J^3 t_J,\n\\end{equation}\ninto the domain. Because the jets are high Mach number flows, small numerical errors in the jet injection can become significant, especially if the internal Mach number, $\\sqrt{\\rho_J\/\\rho_0}(v_J\/ c_s)$, is low. The jet energy is computed numerically by integrating the total energy change in the domain immediately after the jet turns off. 
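To make the energy bookkeeping concrete, the sketch below evaluates Equation~\ref{eq:21} in code units ($\rho_0=c_s=r_0=1$) for the fiducial parameters ($\rho_J=0.01$, $v_J=100$, $\theta_J=15^{\circ}$, $t_J=0.5$), and applies a second-order time integration to a sampled power series. The inner radius $r_{\rm in}=0.1$ is an illustrative assumption (the text does not quote it), and averaging adjacent samples is our stand-in for the midpoint rule.

```python
import math

def jet_energy(rho_j, r_in, theta_j_deg, v_j, t_j):
    """Injected kinetic energy of a single jet, Equation (21):
    E_Jet = (1/2) rho_J * 2 pi r_in^2 * (1 - cos theta_J) * v_J^3 * t_J."""
    theta = math.radians(theta_j_deg)
    return 0.5 * rho_j * 2.0 * math.pi * r_in**2 * (1.0 - math.cos(theta)) * v_j**3 * t_j

def integrate_power(power, dt):
    """Second-order time integration of a uniformly sampled power series
    (average of adjacent samples; a stand-in for the midpoint rule)."""
    return sum(0.5 * (p0 + p1) * dt for p0, p1 in zip(power[:-1], power[1:]))

# Fiducial parameters in code units; r_in = 0.1 r_0 is an assumed inner radius.
E_jet = jet_energy(rho_j=0.01, r_in=0.1, theta_j_deg=15.0, v_j=100.0, t_j=0.5)

# Self-test: an exactly linear power ramp P(t) = 2t on [0, 1] integrates to 1.
dt = 0.01
power = [2.0 * k * dt for k in range(101)]
E_acu = integrate_power(power, dt)
```

The quadratic dependence on $r_{\rm in}$ and cubic dependence on $v_J$ make the injected energy very sensitive to these choices, which is why the measured value is cross-checked against the analytic formula.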
For our fiducial simulations, we find this measured injected energy agrees with Equation~\\ref{eq:21} to within $\\approx$ 3.4\\%.\n\n\\begin{figure}\n\\centerline{\n\\hspace{0.3cm}\n\\psfig{figure=f3.pdf,height=0.4\\textwidth} \n}\n\\caption{Acoustic energy as a function of radius. Sound waves are launched at larger radii, leading to a depression in acoustic energy for $r$ $<$ 1.5 $r_0$. The acoustic energy remains relatively flat out to a radius $r$ = 7 $r_0$, with a slight increase of about 3.5\\% $E_{\\mathrm{Jet}}$, possibly due to sound waves being driven from the bubble as it rises through the atmosphere. The drop-off at $r$ = 7 $r_0$ of $\\approx$ 10\\% $E_{\\mathrm{Acu}}$ is consistent with the resolution-dependent drop-off seen in tests of a single large-scale eigenmode (see Appendix~\\ref{eigenmode}). The shaded region represents the \\citetalias{Tang2017} limit.}\n\\label{fig:efficiency}\n\\end{figure}\n\nMuch of the challenge of measuring sound waves in a jet simulation comes from trying to separate sound waves from fluctuations in the jet plasma. This separation is accomplished using the tracer fluid $\\mu_1$, injected with the jet at the inner boundary. In Figure~\\ref{fig:power_energy}, we omit the jet material by only measuring perturbations in plasma with $\\mu_1 <$ 10$^{-6}$. Figure~\\ref{fig:efficiency} demonstrates the effect of the choice of jet omission method on the sound wave measurement. In general, as long as the threshold for $\\mu_1$ is less than 10$^{-4}$, the sound wave measurement is accurate. \n\nIn Figure~\\ref{fig:efficiency}, we include a calculation of the sound wave energy for $\\mu_1 <$ 10$^{-6}$ where we only count positive power in the integral in Equation~\\ref{eq:16}, $ \\delta P (\\textbf{r},t)$ $\\times$ $\\delta v_r (\\textbf{r},t)$ $>$ 0 (shown as the black line). By lowering the threshold for $\\mu_1$, we are removing ``negative'' acoustic power, i.e. 
uncorrelated fluctuations between the pressure and radial velocity in the jet plasma. The lower the threshold for $\\mu_1$, the closer the lines get to the black line; removing the jet increases the measured acoustic energy.\n\nThe high resolution fiducial simulations provide immediate clues to the sound wave driving process. Figure~\\ref{fig:efficiency} indicates that sound waves must be launched from large radii with $r \\geq$ 1.0 $r_0$; the dominant source of sound waves is not the initial shock driven when the jet enters the ICM; otherwise, this shock wave (which would geometrically diverge into a weak shock\/ sound wave) would be measured at all radii with equal energy. Instead, the measured wavelengths are correlated with the size of the cocoon.\n\nThree dashed lines from the calculation of acoustic energy (Figure~\\ref{fig:efficiency}) show the effect of omitting the jet cone, $\\theta \\leq \\theta_J$, without explicitly specifying a threshold for removing the jet. Note the decrease in energy with radius---a geometric effect as sound waves pass into the jet cone at larger radii, since the cocoon is elongated along the jet axis. While this omission method cannot capture the approximately 5\\% of $E_{\\mathrm{Jet}}$ contained in the jet cone, it underscores an important feature of the sound wave driving process: the majority of the acoustic energy ($\\gtrsim$~20\\%~$E_{\\mathrm{Jet}}$) is driven outside of the jet cone, pointing to the cocoon as the source of acoustic energy.\n\nBecause we include no explicit dissipation, the acoustic energy should be conserved with radius. Instead of constant energy, we see a slight increase of 3.5\\% $E_{\\mathrm{Jet}}$. This increase is likely due to sound waves being driven off the bubbles as they rise through the atmosphere. Because the bubbles are composed of low-density plasma in a higher density background medium, they are Rayleigh-Taylor unstable. 
Disturbances in the bubble-atmosphere interface drive sound waves which would be measured at larger radii but not near the core. Furthermore, the turbulent jet seeds the bubbles with powerful vortices which can vibrate the bubble membrane, driving sound waves \\citep{Sternberg2009}. \n\nFinally, we note that not all ``negative'' sound power is unphysical. Indeed, sound waves driven off the bubble can propagate backwards toward the core. Because we compute a sound wave \\textit{flux}, these inward-propagating sound waves have negative power, lowering the overall measured sound wave energy after integrating over time. After integrating over the full spherical shells, only net positive powers are displayed in Figures~\\ref{fig:power_energy} and \\ref{fig:efficiency}; however, the inclusion of inward-propagating sound waves has little to no effect on the overall computed energy.\n\n\\subsection{Physics of Jet-Driven Sound Waves}\\label{jet_physics}\n\nIn this section, we describe the jet-ICM interaction, focusing on the relevant energy flows and how the nonlinear dynamics drives powerful sound waves. We parallel the review of \\cite{BBR1984} (hereafter \\citetalias{BBR1984}) as well as \\citetalias{Reynolds2002} which provide extensive discussions of the jet physics. This section provides a theoretical argument for why jets are able to efficiently produce sound waves while a spherical explosion cannot. \n\nAt the beginning of the active phase, the jet enters the ICM highly supersonic, with an internal Mach number of 10 and a velocity of 100 $c_s$. The conical inflow is focused by the pressure of the surrounding medium, forcing the jet into a series of oblique ``recollimation'' shocks which form Mach diamonds throughout the base of the jet channel. These shocks are likely an insignificant source of sound waves; the shock waves will dissipate before leaving the interaction region. 
A strong annular shock forms at the working surface of the jet against the ambient medium and is carried at the velocity of the jet head $v_h$. The jet velocity can be highly supersonic while $v_h$ is only mildly transonic.\n\nJet material is strongly heated as it passes through this shock, forming a ``hot spot,'' a common feature observed in radio galaxies. This hot plasma is now over-pressurized and expands into the ambient ICM supersonically. The expansion of the shocked plasma pushes against higher-density ambient material, and the interface of this hot plasma and the ambient ICM comes into pressure balance, forming a contact discontinuity. The hot spot pressure is balanced by the ram pressure of the ambient medium in the jet frame.\n\n\\begin{figure}\n\\centerline{\n\\hspace{0.3cm}\n\\psfig{figure=f4.pdf,height=0.44\\textwidth} \n}\n\\caption{Constructive interference at the bow shock. Here, we plot a cut-through of the acoustic flux density along $\\theta = 90^{\\circ}$, perpendicular to the jet, for times 0.5 $r_0\/c_s$ $\\leq$ $t$ $\\leq$ 2.5 $r_0\/c_s$. Note how small-scale harmonics ``catch up'' to the leading bow shock. As the bow shock dissipates through shock-heating, small-scale sound waves constructively interfere at the forward shock, reinforcing the bow shock and forming a powerful, long-wavelength sound wave which can carry energy to large distances.}\n\\label{fig:interference}\n\\end{figure}\n\nThe supersonic expansion of the hot spot is a driver of sound waves, but it may be subdominant. This expansion acts like a spherical explosion, driving a bow shock into the surrounding medium which dissipates via shock heating and weakens due to geometrical divergence. The bow shock forms the initial envelope around the jet, a structure visible as a density enhancement surrounding the jet in Figure~\\ref{fig:TRho_SAcu}. 
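The ram-pressure balance described above admits a simple one-dimensional estimate. Equating the jet's momentum flux to the ambient ram pressure in the frame of the head gives $v_h \approx v_J\sqrt{\eta}/(1+\sqrt{\eta})$ with $\eta=\rho_J/\rho_0$; the sketch below (a standard textbook estimate, not a formula from this paper) evaluates it for the fiducial parameters, along with the internal Mach number defined earlier.

```python
import math

def head_speed(v_jet, eta):
    """1D ram-pressure balance at the working surface,
    rho_J (v_J - v_h)^2 = rho_0 v_h^2, with eta = rho_J / rho_0."""
    return v_jet * math.sqrt(eta) / (1.0 + math.sqrt(eta))

def internal_mach(v_jet_over_cs, eta):
    """Internal Mach number sqrt(rho_J / rho_0) * (v_J / c_s)."""
    return math.sqrt(eta) * v_jet_over_cs

# Fiducial jet: v_J = 100 c_s, rho_J = 0.01 rho_0 (values from the text)
M_int = internal_mach(100.0, 0.01)   # 10, as quoted for the fiducial jet
v_h = head_speed(100.0, 0.01)        # ~9 c_s from the 1D balance
```

In practice the working surface is wider than the jet, so the momentum flux is spread over a larger area and the true head speed is lower still, consistent with the mildly transonic head described above.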
The most robust driver of sound waves is the ``cocoon,'' the roiling billow of shock-heated material enveloping the central jet channel. Jet ejecta passing through the strong annular shock behind the working surface is diverted into a wide fan, slowing the material's radial velocity in the lab frame and forming ``backflows'' in the jet frame. \n\nThe interaction between the backflowing material and the jet channel fragments the fluid flow into vigorous Kelvin-Helmholtz instabilities which saturate by forming a turbulent cocoon of shocked plasma. These turbulent motions drive a broad spectrum of sound waves by diverting directed jet energy into vortices which are supersonic at the cocoon-ICM contact discontinuity. Fluid motions slow through shock heating, driving the cocoon towards equipartition while producing small-scale shock waves. These waves diverge into sound waves which propagate rapidly through the shocked ICM, accumulating at the bow shock. Constructive interference erases the small-scale structure of the sound waves, partitioning energy into powerful, large-amplitude sound waves with wavelengths comparable to the cocoon size, which carry energy from the cocoon into the ambient ICM (Figure~\\ref{fig:interference}). \n\nAfter the jet phase, buoyant evolution determines the dynamics. A rarefaction wave \\citep{Guo2018} emanates from the collapsing cocoon, generating the second peak in Figure~\\ref{fig:power_energy}. Rayleigh-Taylor instabilities develop at the cocoon-ICM interface, driving weak sound waves. The cocoon plasma forms back-to-back plumes which rise near the sound speed through the atmosphere, producing sound waves at the bubble-ICM interface.\n\n\\subsection{Tracking the Energy} \\label{track_energy}\n\n\\begin{figure}\n\\centerline{\n\\hspace{0.3cm}\n\\psfig{figure=f5.pdf,height=0.45\\textwidth} \n}\n\\caption{Volume-integrated energies of the core of the atmosphere ($r$ $<$ 10 $r_0$). 
The jet-ICM interaction inflates bubbles of shocked plasma which rise through the atmosphere near the sound speed. Plasma entrained in the bubbles is lifted higher in the gravitational potential, converting bubble enthalpy (internal energy) to gravitational energy. The $\\approx$ 25\\% of energy which did not go into the bubbles or the ICM is available as kinetic energy, partitioned between sound waves and bubble motions.}\n\\label{fig:energetics}\n\\end{figure}\n\nThis work is a study of energy partitioning. Three energy channels are available: kinetic energy $E_{\\mathrm{Kin}}$, internal (thermal) energy $E_{\\mathrm{Int}}$, and gravitational potential energy $E_{\\mathrm{Grav}}$. Jet energy is distributed among these channels and divided between the jet ejecta and the ICM.\n\nOur isothermal atmosphere is convectively stable according to the \\cite{Schwarzschild1958} criterion. We define an entropy threshold $s_0$ as the entropy at our final measurement radius, $r$ = 10 $r_0$, a value of $s_0$ = 6.03 $c_s^2 \\rho_0^{-\\gamma}$. Any material which is shocked to an entropy at or above this threshold will buoyantly rise to at least this radius, forming the ``bubbles'' in the core of the cluster. We consider this high-entropy plasma to be ``jet'' material, i.e., material that originated as jet ejecta or received significant shock heating. The remaining plasma is considered ``ICM.'' Figure~\\ref{fig:energetics} presents the energy evolution of these components (see also \\citetalias{Reynolds2002} Figure 4).\n\n\\begin{figure*}\n\\hbox{\n\\psfig{figure=f6a.pdf,width=0.495\\textwidth}\n\\psfig{figure=f6b.pdf,width=0.495\\textwidth}\n}\n\\caption{Time series of pressure perturbations (left) and velocity perturbations (right) in the fiducial simulation. 
We find values of $\\delta P\/P_0$ $\\approx$ 0.3 and $\\delta v_r\/c_s$ $\\approx$ 0.15 at the central measurement radius of 5 $r_0$, indicating that the assumption of linearity which underpins Equation~\\ref{eq:16} is reasonable. The assumption that the background pressures and velocities remain constant throughout the simulation is also valid, with state variables returning to their initial values at measurement radii beyond 2 $r_0$.}\n\\label{fig:perturbations}\n\\end{figure*}\n\nJet energy is rapidly thermalized in the hot spot and recollimation shocks. The total kinetic energy drops by $\\approx$ 5\\% $E_{\\mathrm{Jet}}$ after $t$ = 0.5 $r_0\/c_s$ when the jet is shut off; jet material slows due to the ram pressure of the atmosphere and the bow shock detaches from the cocoon. Rarefied plasma behind the bow shock drives a rarefaction wave, transferring weakly shocked ICM thermal energy back to kinetic energy. Thermal energy in the jet ejecta is used to inflate bubbles, raising the enthalpy of the ambient ICM. Bubbles rapidly convert internal energy to gravitational energy as they rise higher into the atmosphere. Approximately 50\\% of the energy resides in $E_{\\mathrm{Grav}}(\\mathrm{ICM})$: ambient ICM is swept up by the weak bow shock, which mediates adiabatic expansion of the atmosphere. The remaining $\\approx$ 25\\% is kinetic energy, shared between sound waves and bulk motions of the bubbles.\n\n\\subsection{Nature and Power Spectra of Perturbations} \\label{perturbations}\n\nOur method (Section~\\ref{method}) assumes that at large distances ($r$ $\\geq$ $r_0$), the fluid pressure is well-described as a constant background pressure $P_0$ plus a perturbation $\\delta P$, where $\\delta P\/ P_0$ $\\ll$ 1. Figure~\\ref{fig:perturbations} demonstrates the validity of these assumptions. Perturbations we refer to as ``sound waves'' are in reality weak shocks even at $r$ = 5 $r_0$ (see Appendix \\ref{weak_shocks}); however, our measurement method remains valid. 
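The linear flux measurement these numbers rest on can be illustrated with a single plane wave, for which $\delta v_r = \delta P/(\rho_0 c_s)$ and the time-averaged flux density is $\delta P_{\rm rms}^2/(\rho_0 c_s)$. The snippet below is a schematic self-test of the $\delta P\,\delta v_r$ correlation in code units; it does not reproduce the full shell integral of Equation~\ref{eq:16}.

```python
import math

def mean_flux(dP, dvr, dt, T):
    """Time-averaged acoustic flux density <dP * dv_r> from sampled
    pressure and radial-velocity perturbations over a duration T."""
    return sum(p * v for p, v in zip(dP, dvr)) * dt / T

rho0, cs, amp = 1.0, 1.0, 0.1        # code units; 10% pressure perturbation
n = 1000
T = 2.0 * math.pi                    # one full wave period
dt = T / n
dP  = [amp * math.sin(k * dt) for k in range(n)]
dvr = [p / (rho0 * cs) for p in dP]  # plane-wave (acoustic impedance) relation

flux = mean_flux(dP, dvr, dt, T)
expected = amp**2 / (2.0 * rho0 * cs)   # since <sin^2> = 1/2
```

Uncorrelated $\delta P$ and $\delta v_r$ average toward zero flux under the same estimator, which is exactly why the jet plasma's incoherent fluctuations must be masked out with the tracer threshold.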
\n\n\\begin{figure}\n\\centerline{\n\\hspace{0.3cm}\n\\psfig{figure=f7.pdf,height=0.56\\textwidth} \n}\n\\caption{Power spectra of pressure fluctuations at 7 different radii and 3 different angles, normalized to the measured acoustic power at each radius. An $\\omega^{-5\/3}$ \\cite{Kolmogorov1941} scaling is shown by the black dashed line. Ringing due to sharp shock features is reduced by convolving the spectra with a flat sliding window. The spectra soften with radius as power is concentrated in the large-scale waves associated with the cocoon scale (dotted line).}\n\\label{fig:powerspec}\n\\end{figure}\n\nAt 5 $r_0$, the ratio $\\delta P\/P_0$ $\\approx$ 0.3 while $\\delta v_r\/c_s$ $\\approx$ 0.15. Contributions to the energy from higher-order perturbations are suppressed by a factor of $\\sim$ 10. Independent of errors imposed by the grid resolution, our method is accurate to within 10\\%. Thus, we report our fiducial measurement of $E_{\\mathrm{Acu}}$ $\\approx$ 28\\% $E_{\\mathrm{Jet}}$ as $E_{\\mathrm{Acu}}$ $\\gtrsim$ 25\\% $E_{\\mathrm{Jet}}$.\n\nSimilarly, the assumption of a constant background is reasonable (see inset plots). Beyond 2 $r_0$ the pressures and velocities return to their equilibrium values, with minor fluctuations due to a combination of weak, small-scale sound waves driven by fall-back of the cocoon onto the inner boundary as well as insignificant grid heating. The situation in the inner radii is more complicated. When the jet is shut off, the ICM collapses onto the evacuated channel, leading to large-scale backflows into the low-pressure region. These negative radial velocities and pressure perturbations are measured as an outgoing sound wave flux. Jet ejecta has already passed through the region, so we are unable to omit this spurious energy; however, the contribution of these correlations is negligible.\n\n\\begin{figure*}\n\\hbox{\n\\psfig{figure=f8.pdf,width=1.0\\textwidth}\n}\n\\caption{Summary of parameter scan results. 
Color indicates acoustic efficiency, $E_{\\mathrm{Acu}}\/E_{\\mathrm{Jet}}$, measured at $r$ = 5 $r_0$, and the values on the heat maps correspond to measured efficiencies for the given set of parameters. Numbers displayed in white are below the \\citetalias{Tang2017} threshold while those in black are above this limit. Larger opening angles, higher velocities, and mid-range densities tend to produce sound waves more efficiently, while the smallest opening angles are inefficient, with values far below the \\citetalias{Tang2017} limit. High density jets propagate too rapidly through the ICM, driving sound waves beyond the measurement radius, while narrow opening angle jets are not spatially resolved enough to form the annular shocks which lead to the development of large cocoons. High Mach number, wide angle jets form strong annular shocks and drive vigorous Kelvin-Helmholtz instabilities, producing large cocoons and powerful sound waves.}\n\\label{fig:parameter_scan}\n\\end{figure*}\n\nFigure~\\ref{fig:powerspec} displays power spectra of the pressure perturbations normalized to acoustic power at a given radius. The spectra show an approximate $\\omega^{-5\/3}$ scaling (or $k^{-5\/3}$ since $\\omega^2$ = $c_s^2 k^2$ for a dispersionless wave), indicating that the sound waves inherit the turbulent structure of the cocoon. Spectra soften at larger radii as the shock structures dissipate, diverge, and disperse. Power is concentrated at the largest scales, with frequencies of 1\/2~$c_s\/r_0$, consistent with a cocoon size $\\approx$ $r_0$.\n\n\\subsection{Parameter Scan} \\label{parameter_scan}\n\nWe now present the results from a scan over 125 different combinations of jet half-opening angle ($\\theta_J$), velocity ($v_J$), and density ($\\rho_J$). By varying these parameters, we can explore the universality of efficient sound wave driving by AGN jets. 
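Before turning to the scan requirements, note that the $\omega^{-5/3}$ scaling reported in the previous subsection can be verified mechanically: synthesize a signal whose Fourier amplitudes follow $|X(\omega)|^2 \propto \omega^{-5/3}$ with random phases, then recover the slope from a log-log fit to its periodogram. The sketch below is such a self-test on purely synthetic data (no simulation output is involved).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
freqs = np.fft.rfftfreq(n, d=1.0)[1:]        # drop the zero-frequency bin

# One-sided spectrum with |X(f)|^2 = f^(-5/3) and random phases
amps = freqs ** (-5.0 / 6.0)
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
X = np.concatenate(([0.0 + 0.0j], amps * np.exp(1j * phases)))
signal = np.fft.irfft(X, n)

# Periodogram of the synthesized signal and log-log slope fit
# (the Nyquist bin is excluded: irfft forces it to be real)
P = np.abs(np.fft.rfft(signal))[1:] ** 2
slope = np.polyfit(np.log(freqs[:-1]), np.log(P[:-1]), 1)[0]
```

The same fit applied to simulation spectra would pick up the low-frequency flattening at the cocoon scale, which is why the measured spectra are compared against the dashed reference line rather than fit globally.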
We find that hydrodynamic jet simulations must satisfy three requirements to efficiently produce sound waves at a given radius: 1) opening angles must be wide enough to properly resolve the annular shocks and recollimation shocks which produce large-scale cocoons, 2) velocities must be high enough to power vigorous Kelvin-Helmholtz instabilities, and 3) jet densities must be low enough to avoid ballistically propagating beyond the measurement radius, at which point the ambient medium is filled with shocked plasma.\n\nThe first condition is the numerical constraint of \\citetalias{Reynolds2002}. If simulations lack the spatial resolution to properly evolve the formation of the initial strong annular shock and the ensuing recollimation shocks, jet thrust is not diverted and a backflowing cocoon is not formed. Rather, the jet develops into a ``drill'' \\citep{Scheuer1974} that rapidly bores through the cluster, focusing acoustic energy in the jet cone as a bow shock without reinforcement from cocoon-driven sound waves.\n\nA condition on velocity is really a condition on jet power since the power scales as $v_J^3$; high velocity jets are powerful jets. Weak jets are unable to produce significant cocoons since their initial interaction with the ICM does not form strong shocks. Powerful jets drive especially vigorous Kelvin-Helmholtz instabilities \\citep{Vernaleo2007}. The growth rate of the instability is \n\n\\begin{equation} \\label{eq:22}\n\t\\Gamma_{\\mathrm{KHI}} = k \\sqrt{ \\frac{\\rho_J \\rho_{\\mathrm{amb}} \\left(v_{J} - v_{\\mathrm{amb}} \\right)^2}{\\left(\\rho_J + \\rho_{\\mathrm{amb}} \\right)^2} },\n\\end{equation} \nwhere $k$ is the wavenumber of a perturbation to the jet-ICM surface and ``amb'' denotes the ambient medium \\citep{Chandrasekhar1961}. 
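Equation~\ref{eq:22} can be evaluated directly for the fiducial contrast. The snippet below computes the growth rate in code units; the perturbation wavenumber $k = 2\pi/r_0$ is our illustrative choice, not a value specified in the text.

```python
import math

def gamma_khi(k, rho_j, rho_amb, dv):
    """Kelvin-Helmholtz growth rate, Equation (22):
    Gamma = k * sqrt(rho_J rho_amb (v_J - v_amb)^2 / (rho_J + rho_amb)^2)."""
    return k * math.sqrt(rho_j * rho_amb * dv**2 / (rho_j + rho_amb) ** 2)

# Fiducial contrast: rho_J = 0.01 rho_0 against ambient rho_0 = 1,
# shear ~ v_J = 100 c_s; k = 2 pi / r_0 is an assumed perturbation scale.
k = 2.0 * math.pi
rate = gamma_khi(k, 0.01, 1.0, 100.0)
tau = 1.0 / rate    # e-folding time, in units of r_0 / c_s
```

With these numbers $\tau \approx 0.016\,r_0/c_s$, so a surface perturbation e-folds dozens of times during the $t_J = 0.5\,r_0/c_s$ active phase, consistent with the vigorous instabilities driven by powerful jets.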
The stronger the velocity shear, the more vigorous the instability.\n\n\\begin{figure*}\n\\hbox{\n\\psfig{figure=Jet_Evolution_small.png,width=1.0\\textwidth}\n}\n\\caption{Time evolution of entropy (in units of $c_s^2 \\rho_0^{-\\gamma}$) and acoustic flux density (in units of $\\dot{E}_{\\mathrm{Jet}}$) for 5 simulations presented in this paper. Each row is a different simulation, while each column represents times $t$ = 2, 5, 10, 30, and 50 $r_0\/c_s$ respectively. Row 1: Fiducial jet ($\\theta_J$ = 15$^{\\circ}$, $v_J$ = 100$c_s$, $\\rho_J$ = 0.01 $\\rho_0$, $E_{\\mathrm{Acu}}$ = 27\\% $E_{\\mathrm{Jet}}$) at parameter scan resolution. Row 2: Narrow jet ($\\theta_J$ = 5$^{\\circ}$, $v_J$ = 100$c_s$, $\\rho_J$ = 0.01 $\\rho_0$, $E_{\\mathrm{Acu}}$ = 9\\% $E_{\\mathrm{Jet}}$). Row 3: High velocity jet ($\\theta_J$ = 15$^{\\circ}$, $v_J$ = 10$^{2.5} c_s$, $\\rho_J$ = 0.01 $\\rho_0$, $E_{\\mathrm{Acu}}$ = 29\\% $E_{\\mathrm{Jet}}$). Row 4: High density jet ($\\theta_J$ = 15$^{\\circ}$, $v_J$ = 100$c_s$, $\\rho_J$ = 0.1 $\\rho_0$, $E_{\\mathrm{Acu}}$ = 28\\% $E_{\\mathrm{Jet}}$). Row 5: Pulsed jet with active time $t_J$ = 0.05 $r_0\/c_s$ and 10 active phases ($\\theta_J$ = 15$^{\\circ}$, $v_J$ = 100$c_s$, $\\rho_J$ = 0.01 $\\rho_0$, $E_{\\mathrm{Acu}}$ = 21\\% $E_{\\mathrm{Jet}}$). Note the lack of a significant cocoon and rarefaction wave in the narrow jet (Row 2). The high velocity jet (Row 3) has the largest power, producing a significant cocoon. Similarly, the high density jet (Row 4) produces a smaller yet substantial cocoon. The pulsed jet (Row 5) begins by producing 10 distinct sound waves; however, the waves accumulate at larger radii, forming the same 2 peak structure as the single jets (see Section~\\ref{pulsed}). Acoustic efficiencies are reported for $r$ = 5 $r_0$. 
}\n\\label{fig:jet_evolution}\n\\end{figure*}\n\nThe final condition demands that AGN act as a thermostat, carefully regulating the temperature at a given radius by launching outflows which deposit their energy at that location. If AGN launched high-momentum jets containing significant mass ($\\rho_J$ $\\sim$ 0.1 $\\rho_0$), the jets would propagate ballistically through the cluster, depositing their energy at large radii beyond the cool core. Because AGN in cool core clusters tend to operate in the weaker Fanaroff-Riley Type I mode \\citep{FR1974}, this third condition is likely satisfied in real clusters.\n\nFigure~\\ref{fig:jet_evolution} provides insight into the morphology of sound waves in the parameter scan simulations. Sound waves begin as a narrow band concentrated around the cocoon, focused in the leading bow shock when the jet enters the ICM (Column 1). Once the jet turns off (Column 2), the shock detaches and sound waves from cocoon instabilities rush outward, reinforcing the bow shock. A rarefaction wave is launched (Columns 3 and 4) as the cocoon falls back into the core. Finally, the large scale sound waves propagate throughout the cool core (Column 5), passing the measurement radius and dispersing due to gravity. \n\nEntropy maps in Figure~\\ref{fig:jet_evolution} display bubbles, the remnants of the cocoon. When the cocoon collapses, material is forced along the jet axis and into the low density bubble regions, causing the bubbles to expand into quasi-spherical, elongated cavities. Our bubbles require supersonic expansion to clear a low-density cavity.\n\n\\subsection{Pulsed Jets} \\label{pulsed}\n\n \\begin{figure}\n\\centerline{\n\\hspace{0.3cm}\n\\psfig{figure=f10.pdf,height=0.4\\textwidth} \n}\n\\caption{Pulsed jet results. Decreasing the outburst duration tends to decrease the overall acoustic efficiency by $\\approx$ 5\\% $E_{\\mathrm{Jet}}$. Backflows are driven most strongly with a continuous source of energy. 
By pulsing the jet, we allow each cocoon to expand away from the core, releasing a bow shock and rarefaction wave without strong reinforcement from instability-driven sound waves. Even short pulses remain at an efficiency of $\\approx$ 20\\%, indicating that a cocoon still forms due to the high power of each pulse.}\n\\label{fig:pulsed_jet}\n\\end{figure}\n\nThe wavelengths of sound waves in our simulations are inconsistent with observations. If we choose a unit system of $r_0$ = 30 kpc, our measured wavelengths of 2 $r_0$ are a factor of 6 larger than the $\\approx$ 10 kpc ripples measured in the Perseus Cluster (Sanders \\& Fabian 2007). The scale of our sound waves is set by the cocoon size and thus the duration of the jet; however, in real systems the wavelengths of sound waves are likely set by the recurrence time between outbursts \\citep{Million2010}. We explore the effect of this recurrence time by ``pulsing'' jets.\n\nJets are pulsed for a time $t$ = $t_J$ with an interval of $t_J$ between each outburst until they have injected the same amount of energy as the fiducial case. We use the same parameters as the fiducial run so that kinetic luminosities are identical across the pulsed jets. For one run, we explore the effect of doubling the length of the active phase, and thus doubling the energy injected (the ``long duration'' jet). Our results are summarized in Figure~\\ref{fig:pulsed_jet}. \n\nIn general, pulsing decreases the efficiency of sound wave production by $\\approx$ 5\\% $E_{\\mathrm{Jet}}$ compared to our fiducial run. Each pulse is powerful enough to produce a cocoon; however, without continual driving from a jet, instabilities are less significant. Pulsing allows the cocoon to cool between active phases, increasing the internal Mach number of shocks from subsequent outbursts. More energy is dissipated in the hot spot. 
Pulsed waves pile up into a single large-scale wave at large distances.\n\nThe long duration jet underscores two points: 1) the efficiency of driving sound waves is set by the kinetic luminosity of the jet alone, and 2) the cocoon size and the dominant wavelength are set by the jet duration. While changing the jet duration had a minor effect on efficiency over a large range of radii, the long duration jet does not show the drop-off at $r$ = 7 $r_0$; longer wavelength sound waves are better resolved by our logarithmic grid. \n\n\\section{Discussion} \\label{discussion}\n\nWe have studied a simple toy model---a supersonic jet in an atmosphere. Real systems include a number of complications: radiative cooling, magnetic fields, and relativistic effects may all be significant for AGN jets. In this section, we scale our problem to real systems and discuss how the inclusion of physics beyond ideal hydrodynamics may affect our results. We close with a discussion of the other problem outlined in Section~\\ref{intro}: sound wave dissipation.\n\n\\subsection{Scaling to Real Systems} \\label{real_systems}\n\nWe define the density $\\rho_0$ as $\\mu_{\\mathrm{ICM}} m_H n_{\\mathrm{ICM}}$, where $n_{\\mathrm{ICM}}$ is set to 0.01 cm$^{-3}$, the mean particle mass $\\mu_{\\mathrm{ICM}}$ is 0.6, and $m_H$ is the proton mass. The sound speed of the cluster is set to that of Perseus, $c_s$ = 1000 km\/s \\citep{Fabian2017}. Already an issue arises with this choice: all velocities in our scan are greater than 10\\% of the speed of light. Relativistic effects apply, and the highest velocity jet, $\\log_{10}{(v_J\/c_s)}$ = 2.5, is superluminal in this unit system. The parameter scan is an exploration of jet physics rather than an effort to reproduce real sources.\n\nWe choose an atmosphere scale $r_0$ of 30 kpc and a measurement radius of 150 kpc. 
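A quick numerical sketch of this velocity scaling (using only the Perseus sound speed quoted above and the parameter-scan Mach numbers; the speed of light is the only added constant):

```python
# Sanity check of the unit scaling: c_s = 1000 km/s (Perseus value),
# jet velocities taken from the parameter-scan Mach numbers.
c = 2.998e5                      # speed of light [km/s]
c_s = 1.0e3                      # ICM sound speed [km/s]

for log_mach in (2.0, 2.5):
    v_jet = c_s * 10**log_mach   # jet velocity in physical units [km/s]
    print(f"log10(v_J/c_s) = {log_mach}: v_J/c = {v_jet / c:.2f}")

# The fiducial v_J = 100 c_s jets move at roughly a third of the speed of
# light, and the log10(v_J/c_s) = 2.5 jet is formally superluminal.
```

This confirms that even the fiducial jets are relativistic once the sound speed is fixed to the Perseus value.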
Bubbles in our fiducial simulations are approximately $r_0$ in diameter while Perseus shows cavities $\\sim$ 15 kpc across. Our simulations overestimate the size of bubbles. Our fiducial jet has a power of 4.7$\\times$10$^{44}$ erg s$^{-1}$, within the range of jet powers inferred for NGC 1275 in Perseus. \n\n\\begin{figure}\n\\centerline{\n\\hspace{0.3cm}\n\\psfig{figure=f11.pdf,height=0.5\\textwidth} \n}\n\\caption{Acoustic efficiency ($E_{\\mathrm{Acu}}\/E_{\\mathrm{Jet}}$) for realistic galaxy cluster\/ jet parameters. Shaded boxes indicate the range of possible jet powers for M87 in the Virgo Cluster (orange; \\cite{Allen2006}), NGC 1275 in the Perseus Cluster (purple; \\cite{Graham2008}), and the central galaxy in the Phoenix Cluster (blue; \\cite{McDonald2013}). The fiducial jet simulation is indicated by the pink star. The acoustic efficiency jumps significantly at 10$^{44}$ erg s$^{-1}$ from 15\\% to $\\gtrsim$ 25\\%. At this moderately high power, AGN jets are energetic enough to form a large-scale cocoon of shocked plasma. Turbulence driven by instabilities in this cocoon produces powerful sound waves which reinforce the initial bow shock from the jet-ICM interaction.}\n\\label{fig:observable_efficiency}\n\\end{figure}\n\nFigure~\\ref{fig:observable_efficiency} shows how acoustic efficiency varies with jet power across our parameter scan. The relation shows a number of trends consistent with the conditions discussed in Section~\\ref{parameter_scan}. A jump in efficiency occurs around a jet power of 10$^{44}$ erg s$^{-1}$ from $E_{\\mathrm{Acu}}$ $\\sim$ 15\\% to $\\gtrsim$ 25\\% $E_{\\mathrm{Jet}}$. The narrowest jets do not exhibit this jump, with efficiencies never breaking 20\\%, while wide angle jets cluster around a line spanning 10$^{45}$ - 10$^{46}$ erg s$^{-1}$ and efficiencies of 25 - 31\\%. Below this critical power, the efficiency increases logarithmically with jet power.\n\nThe jump in efficiency occurs as a result of cocoon formation. 
Above 10$^{44}$ erg s$^{-1}$ in our scaling, jet power becomes sufficient to produce a large-scale cocoon with vigorous Kelvin-Helmholtz instabilities. These instabilities drive powerful sound waves by providing a means of re-routing directed jet energy into turbulence which produces isotropic weak shocks and sound waves. Without this cocoon, the jet-ICM interaction drives insignificant turbulence; the jets behave like a weak spherical explosion and are constrained by the \\citetalias{Tang2017} limit.\n\nGiven that the efficiencies in our parameter scan never rise above 31\\%, the energy partition process may be governed simply by equipartition among the three channels: kinetic, thermal, and gravitational energy. We note that while equipartition appears to be a universal feature of strong turbulence, the equipartition theorem strictly applies only to energy terms quadratic in the degrees of freedom and to systems in thermal equilibrium. The cocoon is certainly not in thermal equilibrium as it drives sound waves which leave the system, and the gravitational energy is not quadratic in the degrees of freedom. The apparent limit on the acoustic efficiency may point to the limited range of our parameter scan or the properties of strong turbulence rather than true equipartition.\n\n\\subsection{Breaking Azimuthal Symmetry} \\label{3D}\n\nJets naturally break polar symmetry, but breaking azimuthal symmetry requires a 3D simulation. In real systems, precession between the AGN jet and accretion disk breaks this symmetry by reorienting the jet direction over time. In this paper, we restricted ourselves to axisymmetric jets, resulting in aspherical bubbles with unrealistically large diameters.\n\nPrevious works implemented precessing jets to produce the spherical cavities associated with X-ray images of clusters \\citep{Falceta2010, Yang2016, Cielo2018, Martizzi2019}. 
This work measures the contribution of sound waves which would dissipate due to the transport properties of the ICM (see Section~\\ref{dissipation}). Any measurement of this contribution to the feedback energy budget requires proper resolution of the sound wave structure throughout the entirety of the cool core, a significant limitation in 3D.\n\nWhile we ran tests of precessing jets in 3D, resource limitations required us to use a resolution of $N_{\\theta}$ = 256, a full factor of 4 less than the parameter scan runs and a factor of 8 less than the high resolution fiducial case. At this resolution, approximately spherical bubbles are able to form from rapidly precessing AGN jets, but sound waves become poorly resolved even at small radii, $r$ $<$ 2 $r_0$. Here, attenuation of sound waves by the logarithmic grid becomes significant and our sound wave efficiencies rapidly drop below the \\citetalias{Tang2017} limit.\n\nThe low resolution of a 3D simulation implies that the cocoon formation process may be improperly captured---the annular shocks, Kelvin-Helmholtz instabilities, and reinforcement of the bow shock are inhibited by the inability of the simulation to resolve these small-scale processes. Thus, this work remains a first step toward understanding the production of sound waves by AGN jets. Future work may ameliorate the resolution issues encountered in our efforts using adaptive mesh refinement; however, we caution that any proper treatment of the problem must prove that the dominant mode of sound waves can be fully resolved out to large measurement radii.\n\nAxisymmetric turbulence is subject to an inverse cascade of kinetic energy, i.e. turbulent energy can be transferred to larger scales \\citep{Kraichnan1967, Kraichnan1971, Batchelor1969}. This purely 2D effect may be increasing the acoustic efficiency measured in our high resolution axisymmetric simulations. 
If a simulation were able to resolve the jet physics properly, we expect competing processes to modify the acoustic efficiency in 3D: 1) Kelvin-Helmholtz instabilities will be more vigorous as the jet channel is directed into backflowing plasma by precession, 2) exclusion of the inverse turbulent cascade may inhibit efficient conversion of jet energy to sound waves, and 3) non-axisymmetric acoustic modes become accessible, raising the overall sound wave efficiency. If equipartition governs sound wave generation by cocoon turbulence, the increase in acoustic efficiency may be negligible.\n\n\\subsection{Non-Ideal Physics} \\label{non-ideal}\n\nIdeal hydrodynamics is unable to capture the richness of jet physics including radiation, magnetic fields, and relativistic effects. A detailed discussion of how each of these ingredients influences the overall efficiency of sound wave production is beyond the scope of this paper. \n\nRadiation physics may not modify our results since cocoon plasma is mildly relativistic and thus radiatively inefficient. Heat is trapped locally in real systems as is the case in our simulations. Magnetic fields may provide some level of suppression to the Kelvin-Helmholtz instabilities which drive sound waves; however this suppression is likely weak given the high kinetic energy density of the jet (\\citetalias{BBR1984}). The field bifurcates a simple sound wave into fast and slow magnetosonic modes, providing an extra degree of freedom to compressive waves while possibly adjusting the nonlinear energy partition process. Finally, a relativistic plasma would have a softer equation of state, providing less rigidity at the bubble-ICM interface which generates sound waves. The appendix of \\citetalias{Reynolds2002} discusses how the problem set-up, reproduced in this work, compensates for the realities of a non-relativistic simulation. We encourage careful isolation of each physical process to garner understanding. 
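For reference, the bifurcation mentioned above follows from standard ideal MHD (a textbook result quoted here for context, not derived in this work): for propagation at angle $\\theta$ to the field, the fast and slow magnetosonic speeds are\n\\begin{equation}\n\\label{eq:magnetosonic}\nv_{f,s}^{2}=\\frac{1}{2}\\left[\\left(c_{s}^{2}+v_{A}^{2}\\right) \\pm \\sqrt{\\left(c_{s}^{2}+v_{A}^{2}\\right)^{2}-4 c_{s}^{2} v_{A}^{2} \\cos^{2}{\\theta}}\\right],\n\\end{equation}\nwhere $v_{A}$ is the Alfv\\'{e}n speed. In the high-$\\beta$ ICM limit ($v_{A} \\ll c_{s}$), the fast branch reduces to the ordinary sound wave while the slow branch propagates at $\\approx v_{A} \\cos{\\theta}$, so leading-order magnetic corrections to the acoustic branch scale as $v_{A}^{2}\/c_{s}^{2} \\sim 1\/\\beta$.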
\n\n\\subsection{Dissipation in the ICM} \\label{dissipation}\n\nOur model adopts an ideal hydrodynamic framework and thus has no explicit means of dissipating sound waves. In non-ideal hydrodynamics, sound waves dissipate through energy diffusion in real space via viscosity and thermal conduction. The large mean free path $\\lambda_{\\mathrm{mfp}}$ of the ICM ($\\lambda_{\\mathrm{mfp}} \\sim$ kpc) implies a high kinematic viscosity, $\\nu \\sim v_{\\mathrm{th,i}} \\lambda_{\\mathrm{mfp}}$, where $v_{\\mathrm{th,i}}$ is the ion thermal velocity. Similarly, the high electron temperature of the ICM implies that thermal conduction is remarkably efficient, providing 87\\% of the energy dissipation for a sound wave. Left unchecked, viscosity and thermal conduction would dissipate sound waves within a wavelength of their launch radius, overheating the cluster center and destroying the integrity of the cool core \\citep{Fabian2005}. \n\nFor an unmagnetized plasma, the situation is not much more promising. \\cite{Zweibel2018} studied sound wave dissipation in an ion-electron plasma using both two-fluid and collisionless treatments. They found similar results to \\cite{Fabian2005} in the collisional (fluid) limit and a factor of $\\sim$ 2 decrease in transport coefficients when collisionless Landau damping is considered. \n\nMagnetic fields may provide a path forward. In a weakly collisional, magnetized plasma \\citep{Braginskii1965}, magnetic fields modify the collisional transport by effectively restricting the mean free path perpendicular to the field to scales comparable to the ion gyroradius. With $\\sim \\mu$G fields now observed in a variety of nearby clusters, this implies a suppression of nearly 13 orders of magnitude in the transport occurring across field lines. 
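A rough order-of-magnitude sketch of this cross-field suppression, with assumed fiducial values (the $\\sim$ 5 keV ion temperature, 1 $\\mu$G field, and 1 kpc mean free path are our own illustrative choices; the precise number of orders of magnitude depends on them):

```python
import math

# Illustrative estimate of the cross-field transport suppression,
# r_g / lambda_mfp. All fiducial values below are assumptions.
m_p  = 1.673e-27            # proton mass [kg]
e    = 1.602e-19            # elementary charge [C]
kT_i = 5.0e3 * e            # ion temperature, assumed 5 keV [J]
B    = 1.0e-10              # magnetic field, assumed 1 uG [T]
kpc  = 3.086e19             # kiloparsec [m]

v_th  = math.sqrt(2.0 * kT_i / m_p)     # ion thermal speed [m/s]
r_g   = m_p * v_th / (e * B)            # ion gyroradius [m]
ratio = r_g / kpc                       # cross-field mfp suppression

print(f"v_th ~ {v_th:.2e} m/s, r_g ~ {r_g:.2e} m")
print(f"r_g / lambda_mfp ~ {ratio:.1e}")
```

For these choices the perpendicular mean free path is suppressed by some 11--12 orders of magnitude, of the same enormous order as the suppression quoted above; the exact figure shifts with the assumed temperature, field strength, and mean free path.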
Given that trans-Alfv\\'{e}nic turbulence appears to be the norm in the few clusters for which both magnetic field strengths and turbulent velocities have been observationally constrained \\citep{Carilli2002, Bonafede2010}, the likelihood of a tangled magnetic-field geometry, and thus an overall reduction in transport efficiency \\citep{Narayan2001}, deserves serious consideration. \n\nFurthermore, magnetized plasmas such as the ICM where the thermal pressure dominates over the magnetic pressure (the ``high-$\\beta$'' regime) are likely susceptible to a wealth of rapidly growing, Larmor-scale instabilities. Whistler wave, firehose, and mirror instabilities drive Larmor-scale distortions in the magnetic field which have been shown to enhance the effective collisionality of the plasma and thus affect the transport properties \\citep{RobergClark2016, RobergClark2018, Komarov2016, Komarov2018, Kunz2011, Kunz2014}. This enhanced collisionality may interrupt the collisionless damping of sound waves, enabling them to propagate to larger distances (Kunz et al. 2019, in prep.).\n\n\\subsection{Sound Wave Heating in AGN Feedback} \\label{wave_heating}\n\nAGN feedback in clusters has been investigated extensively using global hydrodynamic models. Early investigations with simple feedback prescriptions struggled to prevent catastrophic cooling \\citep{Vernaleo2006}; however, a number of works establish feedback loops which can sustain cool core temperature profiles over cosmological timescales \\citep{Gaspari2012, Li2015, Prasad2015}, in broad qualitative agreement with observations. Because these works include no dissipation physics, irreversible heating can only occur via shocks, turbulent dissipation, and mixing. 
Indeed, these works find that mixing and shocks are the dominant modes of heating, with large-scale motions distributing energy throughout the core.\n\nGlobal simulations must necessarily cope with resolution constraints, coarse-graining over sub-kiloparsec scale processes (accretion, plasma instabilities, star formation, etc.) through ``sub-grid'' prescriptions motivated by microphysics. The roles of these ``sub-grid'' phenomena have yet to be elucidated.\n\nIn the sound wave heating model supported by this work, acoustic energy originates from a cocoon of turbulent shocked plasma which drives small-scale waves---waves which are likely under-resolved in more complex global models and thus lost to grid dissipation. Powerful sound waves driven by the jets propagate rapidly throughout the core, spreading their energy. Transport properties within the ICM dissipate acoustic energy gradually, providing constant uniform heating of the entire core with each AGN outburst.\n\nWithin this paradigm, jet interactions with the ICM are rapid yet gentle. Supersonic inflation of the bubbles clears low-density cavities while only driving weak bow shocks, in accord with observations. Strong shock heating is unnecessary in this model due to the efficiency of sound wave production. Without substantial shock heating in the jet cones, the temperature gradients which drive convection are absent. Continuous outbursts (``bubbling'') from the jet maintain steady heating of the core. Cavities of relativistic particles formed by the outbursts may rise slowly through the core, depositing their energy via turbulence, mixing, or cosmic ray streaming. 
These mechanisms in combination provide significant heating over long time-scales, holding off catastrophic cooling.\n\n\\section{Summary and Conclusions} \\label{conclusion}\n\nWe argue that sound waves may comprise a significant fraction of the energy budget in AGN feedback.\n\\begin{itemize}\n\t\\item Our fiducial simulations convert $\\gtrsim$ 25\\% of the jet energy into long wavelength, powerful sound waves, exceeding the limit imposed by spherical symmetry by more than a factor of 2 (\\citetalias{Tang2017}).\n\t\\item A parameter scan of 125 combinations of jet opening angles, velocities, and densities indicates that high velocity, wide-angle jets are most efficient at producing sound waves, provided they are not so dense that they deposit their energy beyond the cluster core.\n\t\\item The origin of efficient sound wave production is the cocoon of shocked plasma generated by the jet-ICM interaction. Powerful Kelvin-Helmholtz instabilities drive supersonic turbulence in the cocoon, producing weak shocks and sound waves which reinforce the initial bow shock.\n\t\\item Pulsed jets may produce weaker sound waves since they lack the continuous driving of instabilities and suffer enhanced dissipation at the hot spot.\n\t\\item Breaking azimuthal symmetry may increase the efficiency of sound wave production, but significant computational resources are required to properly resolve waves throughout the cool core.\n\\end{itemize} \n\nOur work shows that energetically, sound wave heating remains a viable mechanism for AGN feedback. However, the challenge of disentangling g-modes from sound waves as the energy transport mechanism must ultimately be solved by observations. Deep Chandra observations may provide critical measurements of the temperature structure in the Perseus Cluster ripples which can motivate theory. 
The onus then falls on theorists to understand the complexity of plasma phenomena which influence g-modes, sound waves, and turbulence, and so to finally uncover the deep connections between microphysics and large-scale evolution captured in the mystery of cluster AGN feedback. \n\nWe thank Andy Fabian for helpful discussions on observations of AGN-driven sound waves and Matt Kunz for valuable input on transport physics. We are also grateful for the advice of Debora Sijacki, Sylvain Veilleux, and Anatoly Spitkovsky as well as an anonymous referee whose comments improved the manuscript. CJB acknowledges support from the Winston Churchill Foundation of the USA. CJB is grateful to the University of Maryland Department of Astronomy, which provided significant time on the Deepthought2 supercomputer. Simulations in this paper were performed largely on the CSD3 computing cluster at the University of Cambridge.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\nMany scientific and engineering problems involve tracking the interface between different materials.\nMultiple volume tracking\/capturing methods, \nsuch as the volume-of-fluid (VOF) method \\cite{hirt_volume_1981}, \nthe level set method \\cite{osher_fronts_1988}, \nand the front tracking method \\cite{unverdi_front-tracking_1992},\nhave been introduced to describe the motion of the interface explicitly or implicitly.\nAmong those methods, the volume-of-fluid method with piecewise line interface construction (VOF-PLIC) is one \nof the most widely used methods in tracking the interface within the Eulerian framework.\n\nConventional VOF-PLIC reconstructs the normal vector of the reconstructed interface\nby using a stencil that contains information from the neighbouring grids,\nfor example, Parker and Youngs' algorithm \\cite{parker_two_1992},\nthe mixed Youngs-centered algorithm (MYC) \\cite{aulisa_interface_2007},\nand the efficient least squares volume-of-fluid interface reconstruction 
algorithm (ELVIRA) \\cite{pilliod_second-order_2004}.\nAlthough some of the VOF-PLIC reconstruction algorithms are second-order accurate,\nwhen there is not enough information from the neighbouring grids, \nfor example, for very small scale droplets,\nthe VOF-PLIC algorithm may not reconstruct the interface accurately.\n\nThe moment of fluid (MOF) method \\cite{dyadechko_moment--fluid_2005,dyadechko_reconstruction_2008}\nintroduces the centroid as an additional constraint to determine the normal vector of the reconstruction plane.\nSince no data from adjacent cells is used in the reconstruction,\nMOF reconstruction resolves the interface with a smaller minimum scale than the VOF-PLIC algorithm.\nIn MOF reconstruction,\nevaluating the objective function and its partial derivative is the most expensive part during iteration.\nThe original MOF algorithm by \\citet{dyadechko_moment--fluid_2005} has to call\na very complex polyhedra intersection algorithm five times in every iteration.\nAlthough a later study by \\citet{chen_improved_2016} reduces the number of calls to the geometric algorithm\nto one per iteration, the computational cost is still heavy.\n\\citet{lemoine_moment--fluid_2017} made the first attempt to derive an analytic form that describes \nthe objective function as the minimum distance from the reference centroid to a closed, continuous curve\non a 2D Cartesian grid.\nThis is a fully 2D MOF algorithm, as the solution to the objective function can be obtained by \ncomputing the roots of cubic or quartic polynomials instead of by iteration.\nUnfortunately, this approach cannot be extended to 3D \\cite{milcent_moment--fluid_2020}.\n\\citet{milcent_moment--fluid_2020} derived an analytic form of the partial derivative of the objective function in a 3D rectangular hexahedron;\nby using the analytic form,\nthe computational cost is significantly reduced.\nThe algorithm could be more than 200 times faster than the conventional MOF reconstruction 
\\cite{dyadechko_moment--fluid_2005}.\n\nIn this study, we further accelerate the MOF algorithm based on the analytic gradient of \\citet{milcent_moment--fluid_2020}.\nWe use the Gauss-Newton algorithm instead of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm used in \\citet{milcent_moment--fluid_2020}.\nAlthough the Gauss-Newton algorithm has been used in other MOF studies \\cite{jemison_coupled_2013,asuri_mukundan_3d_2020},\nno detailed comparison between the two algorithms has been carried out.\nWe show that the Gauss-Newton algorithm significantly reduces the number of gradient calculations.\nWe also propose an improved form of the initial guess,\nwhich lies closer to the global minimum.\nThe improved initial guess helps to reduce the number of iteration steps and improves the robustness\nof the Gauss-Newton algorithm.\n\nThe run-time ratio and robustness of the method could be implementation-dependent.\nOur implementation of the code and test cases are available on our Github repository (https:\/\/github.com\/zhoutengye\/NNMOF).\nAll the numerical tests are done on a workstation with Intel(R) Xeon(R) Platinum 8270 processors\nwith Intel compiler 2020.\n\n\\section{Moment of Fluid reconstruction}\n\n\\subsection{Description}\nAs an extension of the VOF-PLIC method, \nthe MOF method reconstructs the interface in a 3D rectangular hexahedron cell\nwith a plane\n\\begin{equation}\n\\label{eq:mofplane}\n\\left\\{\\mathbf{x} \\in \\mathbb{R}^{3} \\mid \\mathbf{n} \\cdot\\left(\\mathbf{x}-\\mathbf{x_0}\\right)+\\alpha=0\\right\\},\n\\end{equation}\nwhere $\\mathbf{n}$ is the normal vector, $\\mathbf{x}$ is a point on the plane,\n$\\mathbf{x_0}$ is the origin of the Cartesian coordinate system,\neither the center of the cell or the lower corner of the cell, \ndepending on the computational algorithm.\n$\\alpha$ is the parameter that represents the distance of the plane from $\\mathbf{x_0}$.\nThe volume of the cutting polyhedron by the reconstruction plane 
satisfies\n\\begin{equation}\n\\label{eq:mofvolumeconstraint}\n\\left|F_{\\mathrm{ref}}(\\mathbf{n}, \\alpha)-F_{A}(\\mathbf{n}, \\alpha)\\right|=0.\n\\end{equation}\nand\n\\begin{equation}\n\\label{eq:minimizecentroid}\nE_{\\mathrm{MOF}}=\\left\\|\\mathbf{x}_{\\mathrm{ref}}-\\mathbf{x}_{A}(\\mathbf{n}, \\alpha)\\right\\|_{2}\n\\end{equation}\nIn addition to the constraint on volume fraction,\nthe MOF reconstruction also minimizes the error of the centroid with\n\\begin{equation}\n\\label{eq:mofminimizeangle}\nE_{\\mathrm{MOF}}\\left(\\Phi^{*}, \\Theta^{*}\\right)=\\left\\|\\mathbf{f}\\left(\\Phi^{*}, \\Theta^{*}\\right)\\right\\|_{2}=\n\\min _{(\\Phi, \\Theta):\\rm{Eq.} \\eqref{eq:mofvolumeconstraint} \\text { holds }}\\|\\mathbf{f}(\\Phi, \\Theta)\\|_{2}\n\\end{equation}\nEq. \\eqref{eq:mofminimizeangle} minimizes the distance in 3D with two parameters by converting the\nnormal vector in Eq. \\eqref{eq:mofplane} to the polar angle and azimuthal angle in a spherical coordinate system.\nEq. \\eqref{eq:mofminimizeangle} is a non-linear least-squares problem for $\\Phi$ and $\\Theta$;\nit is solved with an optimization algorithm via iteration.\n\n\\subsection{Optimization of the centroid}\nWe use the Gauss-Newton algorithm to minimize Eq. \\eqref{eq:mofminimizeangle}. \nStarting with an initial guess of $\\Phi_{0}, \\Theta_{0}$, the solution procedure is:\n\n1. Find the centroid $\\mathbf{x}_k$ that corresponds with the angle $(\\Phi_{k}, \\Theta_{k})$.\n\n2. Determine the Jacobian matrix $\\mathbf{J_k}$ using the analytic solution of \\citet{milcent_moment--fluid_2020}.\n\n3. Determine the shift angle\n\\begin{equation}\n\\label{eq:shiftangle}\n(\\Delta \\Phi_k, \\Delta \\Theta_k) = - \\mathbf{H_k}^{-1} \\mathbf{J_{k}^{T} f_k}\n\\end{equation}\n\nwhere $\\mathbf{H_k} = \\mathbf{J_k^{T}{J_k}}$ is the Hessian matrix. \nIn this problem, the dimension of the Hessian matrix is $2 \\times 2$.\n\n4. 
Update angle $(\\Phi_{k+1},\\Theta_{k+1}) = (\\Phi_{k} + \\Delta \\Phi_k, \\Theta_{k} + \\Delta \\Theta_k) $.\n\nThe iteration stops when the convergence conditions are fulfilled. \nMultiple convergence criteria can be adopted, for example, \ncentroid error, \nerror of the gradient of the objective function,\nminimum advance angle, \nand maximum iteration step.\n\nIn this problem, \neven though the evaluation of the gradient of the objective function has been significantly accelerated by the analytic gradient of \\citet{milcent_moment--fluid_2020} compared with \nthe conventional numerical gradient approach,\ncalculating the objective function $\\mathbf{f}$ and the gradient of the objective function $(\\frac{\\partial \\mathbf{f}}{\\partial \\Phi},\\frac{\\partial \\mathbf{f}}{\\partial \\Theta})$ still takes most of the computational time during iteration.\nThe number of calls to the gradient algorithm determines the total computational cost of the iteration.\nIn the original MOF method \\cite{dyadechko_moment--fluid_2005}, the non-linear optimization Eq. 
\\eqref{eq:mofminimizeangle} is solved with the\nBroyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, which is also used by \\citet{chen_improved_2016,milcent_moment--fluid_2020}.\nIn the BFGS algorithm, the advance angle $(\\Delta \\Phi_{k}, \\Delta \\Theta_{k})$ needs to be determined by a line search algorithm.\nIn the numerical tests of \\citet{milcent_moment--fluid_2020}, \nevery iteration needs 8-10 line search steps,\nwhich means the total number of calls to the gradient algorithm is much larger than the number of iteration steps.\n\nThe main advantage of the BFGS algorithm over the Gauss-Newton algorithm is that the BFGS algorithm approximates the inverse of the\nHessian matrix,\nwhich avoids calculating the inverse of the Hessian matrix directly.\nHowever, in this problem, \nthe Hessian matrix is only $2 \\times 2$,\nmaking the cost of the matrix inversion negligible.\nIn the Gauss-Newton algorithm, \nthe shift angle is directly determined by Eq. \\eqref{eq:shiftangle},\nso that the number of calls to the gradient algorithm equals the number of iteration steps.\nIf both algorithms converge in the same number of iteration steps,\nthe Gauss-Newton algorithm therefore requires far fewer gradient evaluations than the BFGS algorithm.\n\nOther non-linear optimization algorithms could potentially be used to minimize Eq. \\eqref{eq:mofminimizeangle}. 
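A minimal sketch of the Gauss-Newton update of Eq. \\eqref{eq:shiftangle} may help fix ideas; the residual\/Jacobian routine below is a placeholder for the analytic centroid and gradient evaluation, and the toy residual is purely illustrative (not our MOF objective function):

```python
import numpy as np

def gauss_newton(residual_jac, phi, theta, tol=1e-8, max_iter=100):
    """Minimize ||f(phi, theta)||_2 with one gradient call per iteration."""
    for k in range(max_iter):
        f, J = residual_jac(phi, theta)      # residual (m,) and Jacobian (m, 2)
        H = J.T @ J                          # 2x2 Gauss-Newton Hessian
        step = -np.linalg.solve(H, J.T @ f)  # shift angle; no line search needed
        phi, theta = phi + step[0], theta + step[1]
        if np.linalg.norm(step) < tol:       # minimum-advance-angle criterion
            return phi, theta, k + 1
    return phi, theta, max_iter

# Toy quadratic residual with exact minimum at (phi, theta) = (0.5, 1.0):
toy = lambda p, t: (np.array([3.0 * (p - 0.5), 2.0 * (t - 1.0)]),
                    np.array([[3.0, 0.0], [0.0, 2.0]]))
phi, theta, n_iter = gauss_newton(toy, 0.0, 0.0)
print(phi, theta, n_iter)
```

Note that the step follows directly from solving the $2 \\times 2$ normal equations, so each iteration costs exactly one residual\/Jacobian evaluation.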
\nFor example, the Levenberg-Marquardt algorithm \\cite{madsen_methods_2004} is known as an improved Gauss-Newton algorithm using a trust-region approach.\nAlthough the Levenberg-Marquardt algorithm is more robust than the Gauss-Newton algorithm,\nfinding the value of the trust region involves trial computation of the objective function and its gradient, \nwhich could significantly increase the computational cost.\nFor efficiency, \nwe use the Gauss-Newton algorithm rather than the Levenberg-Marquardt algorithm.\nTo ensure the robustness of the Gauss-Newton algorithm,\nwe provide an improved initial guess in the next subsection.\n\n\\subsection{Initial guess of the normal vector}\nThe choice of initial guess is important because\nthere may exist multiple local minima in the objective function.\n\\citet{dyadechko_moment--fluid_2005} suggested the form\n\\begin{equation}\n\\label{eq:initialguess_1}\n\\mathbf{n_0^1} = \\mathbf{x_c}(\\Omega) - \\mathbf{x_{ref}}\n\\end{equation}\nas a safe initial guess.\n\\citet{dyadechko_moment--fluid_2005} also claimed that \nthe line-search algorithm guarantees that this initial guess finally reaches the global minimum.\nIn the Gauss-Newton iteration, \nthe step is automatically determined;\nthere is no trial-step selection.\nThe Gauss-Newton algorithm is therefore more likely to be sensitive to the initial guess than \nBFGS with the line search algorithm used in the studies of \\citet{dyadechko_moment--fluid_2005, milcent_moment--fluid_2020}.\n\n\\ExecuteMetaData[figures.tex]{fig:flood}\n\nWe propose a new form of the initial guess in this section. \nTo better demonstrate the philosophy of our proposed initial guess, \nwe simplify the 3D problem to 2D by setting the polar angle $\\Phi=\\pi\/2$.\nFig. \\ref{fig:flood} shows the locus of the centroids of the cutting polygon by a line interface in a unit cell.\nThe solid lines in Fig. 
\\ref{fig:flood} (b) correspond with the evolution of the centroid with a fixed azimuthal angle $\\Phi$ (See Fig. \\ref{fig:flood}(a)),\nand the dashed lines in Fig. \\ref{fig:flood}(b) correspond with the evolution of the centroid with a fixed volume fraction $V$ (See Fig. \\ref{fig:flood}(c)).\nWhen the volume fraction $V>1\/2$ (with red color in Fig. \\ref{fig:flood}),\nthe centroid can be determined by finding the centroid of its symmetric cutting polygon.\nWe only discuss the case when $V>1\/2$ in this section.\nFor any reference centroid close to a vertex of the cell,\nthe corresponding centroid $\\mathbf{x_0}$ of an initial guess $\\Phi_0$ could be very close to the reference centroid $\\mathbf{x_c}$,\nyet have a large error in the azimuthal angle $\\Phi$.\n\nWe show the error of the initial guess \nEq. \\eqref{eq:initialguess_1}\nin Fig. \\ref{fig:initialguess} (a). \nEq. \\eqref{eq:initialguess_1} gives a good initial guess in most areas except for the \nregions near the faces of the cell.\nThose regions correspond with very small volume fractions. \nWe propose another candidate initial guess by treating the reference centroid \nas the centroid of a right triangle reconstruction (or a trirectangular tetrahedron in 3D).\n\\begin{equation}\n\\label{eq:initialguess_2}\n\\mathbf{n_0^2} = \\mathbf{\\frac{1}{\\tilde x_v(\\Omega) - \\mathbf{x_{ref}}}},\n\\end{equation}\nwhere $\\mathbf{\\tilde{x}_{v}(\\Omega)}$ is the vertex of the quadrant (or octant in 3D) in which $\\mathbf{{x}_{ref}}$ is located.\nThe error map of the azimuthal angle is plotted in Fig. 
\\ref{fig:initialguess}(b).\nThe right triangle approximates the small volume correctly, especially when the centroid is \nnear one of the vertices of the grid cell.\nWe evaluate the objective function with the two candidate initial guesses and\npick the one with the smaller centroid error:\n\\begin{equation}\n\\label{eq:initialguess_3}\n\\mathbf{n}_{0}=\\underset{\\left\\{\\mathbf{n}_{0}^{1}, \\mathbf{n}_{0}^{2}\\right\\}}{\\arg \\min }\\left\\{E_{\\mathrm{MOF}}\\left(\\mathbf{n}_{0}^{1}\\right), E_{\\mathrm{MOF}}\\left(\\mathbf{n}_{0}^{2}\\right)\\right\\}.\n\\end{equation}\nThe error of the azimuthal angle $\\Delta \\Phi_{e}$ of Eq. \\eqref{eq:initialguess_3} is plotted in Fig. \\ref{fig:initialguess}(c).\nIn the 2D case, the maximum error of the polar angle from Eq. \\eqref{eq:initialguess_3} is approximately $\\pi\/20$,\nwhile the maximum error of the polar angle from Eq. \\eqref{eq:initialguess_1} is about $\\frac{\\pi}{4}$.\nWe also tested the initial guess in 3D.\nThe error of the initial guess $\\Delta \\Theta+\\Delta \\Phi$ from Eq. \\eqref{eq:initialguess_1} is about $\\frac{\\pi}{2}$.\nBy using our improved initial guess, \nthe error reduces to about $\\frac{\\pi}{5}$.\n\n\\ExecuteMetaData[figures.tex]{fig:initialguess}\n\n\\section{Numerical tests}\n\\subsection{Reconstruction test}\n\n\\ExecuteMetaData[figures.tex]{fig:distribution}\n\nIn this section, we test the accuracy and robustness of our MOF reconstruction with the Gauss-Newton algorithm and the improved initial guess.\nTwo criteria are evaluated: the CPU time and the robustness. \nThree data sets are generated by finding the exact centroid of \na polyhedron cut from a unit cube by a plane.\nWe use data sets with different distributions to show the performance of our algorithm, especially the robustness for extreme cases (see Fig. 
\\ref{fig:distribution}):\n(1) the exponential case, with a normal distribution;\n(2) the uniform case, with a uniform distribution;\n(3) the extreme case, with a shifted normal distribution that contains more values near 0 or 1.\nIn this test, \nthe iteration tolerance is $10^{-8}$ \nand the maximum number of iterations is 100.\n\n\\ExecuteMetaData[tables.tex]{tab:error}\n\n\\ExecuteMetaData[tables.tex]{tab:time}\n\nAs shown in Table \\ref{tab:Error},\nthe error of the Gauss-Newton algorithm with the original initial guess increases \nwhen more extreme data appear in the test data set,\nwhile BFGS shows better robustness than the Gauss-Newton algorithm.\nWith the improved initial guess, \nboth algorithms show very good robustness in all test cases.\nIn the BFGS algorithm,\neach iteration needs a line-search algorithm to determine the shift angle,\nwhich has to call the gradient routine multiple times.\nIn the Gauss-Newton iteration,\nthe shift angle is determined automatically in each iteration,\nso the gradient routine only has to be called once.\nIn Table \\ref{tab:time}, \nit is observed that the averaged number of iterations of the Gauss-Newton algorithm is smaller than \nthat of the BFGS algorithm.\nWhen taking the line search into account,\nthe number of gradient calls in the BFGS algorithm is about 5 times larger than that \nin the Gauss-Newton algorithm.\nThe Gauss-Newton algorithm is about 3 times faster than the BFGS algorithm with analytic reconstruction.\nIt should be noted that we also compared our algorithm with the conventional MOF reconstruction \\citep{dyadechko_moment--fluid_2005}; \nour algorithm is more than 1000 times faster than the conventional MOF reconstruction.\n\n\\subsection{Advection test}\nIn the previous test,\nthe optimized centroid is consistent with the reference centroid.\nHowever, \nin practice, the optimized centroid may not be consistent with the reference centroid.\nIn order to test the accuracy and robustness of the proposed algorithm with 
non-linear reconstruction,\nwe test our algorithm with the 3D Zalesak's problem \\citep{enright_hybrid_2002}.\nFor the advection of the volume fraction and the centroid,\nwe use a directional splitting Lagrangian Explicit scheme similar to the VOF-PLIC advection in \\citet{aulisa_interface_2007},\nand update the centroid by calculating the evolution of the corresponding Lagrangian centroid. \n\nThe differences between the methods are very small (with an $L_1$ error of $O(10^{-7})$),\nwhich shows the robustness of our algorithm. \nThe averaged number of iterations and the computational cost are listed in Table \\ref{tab:Zalesak}.\nWith our improved initial guess, \nthe averaged number of iterations decreases for both the Gauss-Newton algorithm and the BFGS algorithm.\nThe Gauss-Newton algorithm with the improved initial guess achieves a speedup of about 3\nover the algorithm of \\citet{milcent_moment--fluid_2020}.\n\n\\ExecuteMetaData[tables.tex]{tab:zalesak}\n\n\\section{Conclusions}\nIn this study, we show that using the Gauss-Newton algorithm instead of the BFGS algorithm significantly accelerates the iteration in MOF reconstruction.\nWe also proposed an improved initial guess that makes the Gauss-Newton iteration more robust.\nOur improved initial guess along with the Gauss-Newton algorithm is about 4 times faster \nthan the BFGS algorithm of \\citet{milcent_moment--fluid_2020} in the reconstruction test\nand about 2 times faster in the advection test.\n\n\\section{Acknowledgments}\nThe support provided by the National Science Foundation of China (Grant Nos. 
51979245, 51679212) and \nthe China Scholarship Council (CSC) during a visit of Zhouteng Ye to Florida State University is acknowledged.\n\n\\bibliographystyle{model1-num-names}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe theory of \\emph{Non-Bayesian Social Learning} \\cite{JMST2012} has gained increasing attention over the past few years as a scalable approach that models distributed inference by a group of agents interacting over a social network. Individually, each agent in the network may not be able to infer the true state of the world. Also, agents may only observe a small fraction of the total information, leading to conflicting beliefs. Additionally, the agent's measurement process or sensing modalities may lead to ambiguous decisions, further hindering the inference problem. Thus, non-Bayesian social learning theory provides a framework that allows for heterogeneous data aggregation, enabling every agent in the network to form a consensus belief on the true state of the world. \n\nIn this framework, each agent repeatedly forms and communicates its beliefs about an unknown state of the world with its neighbors using a social learning rule and the likelihood of a new observation conditioned on predefined statistical models. \nThe social learning rule assumes \\textit{bounded rationality}, i.e., the beliefs of the agent's neighbors are sufficient statistics, also known as \\textit{imperfect recall}~\\cite{MTJ18}, which considerably simplifies computing the joint beliefs. Calculating the joint beliefs does not require knowledge of the network structure, inter-dependencies, or historical beliefs of every agent in the network as in \\textit{Bayesian social learning} theory~\\cite{GK2003, ADLO2011, KT2013, RJM2017}. Furthermore, imperfect recall has been shown to guarantee that the agents' beliefs converge to the global Bayesian result almost surely~\\cite{JMST2012}. 
\n\nOne of the major assumptions in the current literature is that the statistical models of each hypothesis are known precisely. This assumption requires that the agents collect a sufficiently large set of labeled training data to accurately model the parameters of the statistical models. However, in some situations (e.g., when data is too expensive or impossible to collect, or the measurement process is imprecise), the agents may only receive labeled data for a subset of states, or an insufficient amount of training data, which leads to \\textit{uncertain} model parameters.\n\n\nIn this work, we present a new non-Bayesian social learning method that takes into account uncertainties in the statistical models (i.e., hypotheses or likelihood functions). Classically, inferences are made by normalizing the statistical models over the set of hypotheses. In the uncertain case, the amount of prior evidence for each hypothesis may vary, causing the uncertain models to change significantly, making them incommensurable. We propose a generalized model that reflects the amount of prior evidence collected. We build our results on the concept of uncertain likelihood ratios for decision making under uncertainty~\\cite{J2016,R1984}. This allows us to evaluate the consistency of the prior evidence with the observation sequence to judge each hypothesis on its own merit. We study the convergence properties of the proposed method for two standard social aggregating rules, \\emph{log-linear}~\\cite{MTJ18} and \\emph{DeGroot}~\\cite{JMST2012}. We show that when the agents have a finite amount of prior evidence, the agents' beliefs asymptotically converge to a finite value between zero and infinity, which represents the consistency of the hypothesis with respect to the ground truth. Furthermore, we show that we can exactly quantify the point of convergence for update rules based on log-linear aggregation. 
Finally, we show that as the amount of prior evidence grows unboundedly, the beliefs of every hypothesis inconsistent with the ground truth converge to zero. This indicates that learning is possible with uncertain models and is consistent with classical non-Bayesian social learning theory. \n\nThe remainder of this paper is organized as follows. First, in Section \\ref{sec:LR} we present a review of the current literature in non-Bayesian social learning theory and uncertainty modeling. Then, we describe the problem and main results in Section \\ref{sec:PF}. Next, we derive the uncertain statistical models in Section \\ref{sec:NULF}. In Section \\ref{sec:DNBLWUL}, we implement the uncertain models into the log-linear update rule and formally prove the main result. Then in Section \\ref{sec:DG_Update}, we study the properties of the DeGroot-style update rule with the uncertain likelihood ratio. Finally, we provide a numerical analysis in Section \\ref{sec:SIM} to empirically validate our results and conclude the paper in Section \\ref{sec:Conclusion}.\n\n\\textbf{Notation:} Bold symbols represent a vector\/matrix, while a non-bold symbol represents its element. We use the indexes $i$ and $j$ to represent agents, $t$ to denote the time step, and $k$ to index the category. We use $[\\mathbf{A}]_{ij}$ to represent the entry of matrix $\\mathbf{A}$'s $i$th row and $j$th column. We denote $X\\overset{P}{\\to}Y$ to represent that the sequence $X$ converges in probability to $Y$. Furthermore, we abbreviate the terminology almost surely by a.s. and independent and identically distributed by i.i.d.\n\n\\section{Literature Review} \\label{sec:LR}\n\n\n\n\\subsection{Non-Bayesian Social Learning}\n\nMany of the learning algorithms developed in the literature have been derived using distributed optimization strategies for a group of agents, which typically utilize gradient-descent methods \\cite{YYZS2018}. 
These approaches construct their decentralized algorithm using a consensus strategy \\cite{NO2009, KM2011, KMR2012} or a diffusion strategy \\cite{LS2007, CS2012, CS2015p1, CS2015p2} to ensure that the agents learn the true state. At the same time, non-Bayesian social learning methods \\cite{JMST2012} were developed to perform distributed inference of a true state using a DeGroot-style \\cite{D1974} (arithmetic average) learning rule, where it has been shown in \\cite{SJ2013} that the Bayesian learning approach is linked to the distributed optimization framework. Since then, this learning rule has been studied in strongly-connected and weakly-connected graphs, which characterized the beliefs' rate of convergence \\cite{MJRT2013} and the effects of influential agents on the resulting beliefs \\cite{SYS2017}, respectively. Furthermore, this rule has been identified as a boundary condition that ensures learning \\cite{MTJ18}. \n\nThe DeGroot-style learning rule was then extended by a stream of papers that studied a geometric average learning rule known as the log-linear rule \\cite{RT2010, RMJ2014, SRJ2015, NOU2015, LJS2018}. These works found that the agents will converge to the ``Bayesian Peer Influence'' heuristic \\cite{LR2018} in finite time for fixed graphs \\cite{MTJ18, SRJ2015}, time-varying undirected graphs \\cite{NOU2017}, and time-varying directed graphs \\cite{NOU2015, NOU2016}. Much of the focus has been on developing learning rules that improve the convergence rate of the beliefs \\cite{NOU2015, NOU2016}. This has led to the development of the log-linear learning rule with one-step memory \\cite{RMJ2014, NOU2017}, observation reuse \\cite{BT2018}, and the accumulation of all observations \\cite{SV2018}. However, the common assumption in the literature is that the likelihood functions are known precisely. Thus, this paper studies the log-linear and DeGroot-style learning rules with uncertain models. 
\n\n\\subsection{Uncertainty Models}\nModeling the uncertainty in statistical models has been approached from many different philosophies, including possibility theory \\cite{DP2001,K2005,DP2012}, probability intervals \\cite{W1996, W1997, B2005}, and belief theory \\cite{S1976, SK1994}. These approaches extend traditional probability calculus to incorporate uncertainty into the model parameters. This was then extended to the theory of subjective logic~\\cite{J2016}, which constructs a subjective belief of the model that can be mapped into a second-order probability distribution. \n\nSecond-order probability distributions \\cite{GS1982, C1996} are typically modeled as the conjugate prior of the first-order distribution, which does not complicate the overall analysis and allows for a reduction in uncertainty as more information becomes available. In particular, an example of a second-order distribution is the Dirichlet distribution, whose shape is governed by the amount of prior evidence collected. This has led to the development of the imprecise Dirichlet process \\cite{W1996, DP2001, B2005}, which allows the likelihood parameters to be modeled within upper and lower probability bounds. \n\nFrom a Bayesian point of view, this approach was also studied by constructing the likelihood based on the posterior predictive distribution \\cite{R1984, M1994}. This led to many approaches on how to correctly construct the prior distribution to be non-informative and allow the posterior distributions to be data-dominated \\cite{TGM2009} (see \\cite{KW1996} for a detailed review). However, these studies did not consider the problem of developing a prior based on the amount of prior information available. In this work, we adopt the Bayesian point of view, which computes the likelihood based on the posterior predictive distribution, while borrowing concepts from subjective logic to model the prior Dirichlet distribution. 
\n\n\\section{Problem Formulation, Algorithms and Results} \\label{sec:PF}\n\n\\subsection{Signals, Hypotheses, and Uncertain Models} \\label{sec:pf_ahu}\n\nConsider a network of $m$ agents interacting over a social network, who are trying to collectively infer and agree on the \\textit{unknown} state of the world $\\theta^* \\in \\boldsymbol{\\Theta}$, where $\\boldsymbol{\\Theta}=\\{\\theta_1,...,\\theta_S\\}$ is the set of possible states of the world. The agents gain information about the state $\\theta^*$ via a sequence of realizations of an i.i.d. random variable conditioned on the state of the world being~$\\theta^*$. Thus, given such observations, the agents seek to identify a hypothesis (i.e., a distribution for the random variable generating the observations), that best explains the observations and therefore the state of the world.\n\nEach agent $i$ seeks to infer the underlying state of the world $\\theta^*$ by sequentially collecting independent private signals $\\{\\omega_{it}\\}_{t \\geq { 1}}$, with $\\omega_{it} \\in \\boldsymbol{\\Omega} = \\{1,\\dots,K\\}$ and $K\\ge 2$ possible mutually exclusive outcomes, where the probability of observing an outcome $k\\in\\boldsymbol{\\Omega}$ is $\\pi_{k{i \\theta^*}}$. Moreover, an agent keeps track of these realizations via a histogram $\\mathbf{n}_{it}=\\{n_{i1t},...,n_{iKt}\\}$, s.t. $\\sum_{k=1}^K n_{ikt} = t$ and $n_{ikt}$ is the number of occurrences of category $k$ up to time $t$. \n\nThe vector $\\mathbf{n}_{it}$ is a realization of $t$ draws from a multinomial distribution with parameters $\\boldsymbol{\\pi}_{i\\theta^*}$. We call this distribution $P_{i\\theta^*}$. However, our main assumption is that agents do not have a precise statistical model for the possible states of the world, i.e., the values of $ \\{\\boldsymbol{\\pi}_{i\\theta}\\}_{\\forall \\theta \\in \\Theta}$ are partially unknown by the agents. 
Only limited information is available for each possible state of the world and decisions are made over \\textit{uncertain likelihood models}. We will assume that agents construct these uncertain likelihood models from available prior partial information acquired via private signals for each possible state of the world. For a hypothesis $\\theta$, an agent $i$ has available $R_{i\\theta}$ independent trials. This provides the agent with a set of counts $\\mathbf{r}_{i\\theta}=\\{r_{i1\\theta},...,r_{iK\\theta}\\}$, denoted as the \\emph{prior evidence} of hypothesis $\\theta$, where $r_{ik\\theta}\\in [0, \\infty)$ is the number of occurrences of outcome $k\\in\\boldsymbol{\\Omega}$ and $\\sum_{k=1}^K r_{ik\\theta } = R_{i\\theta}$. Thus, the vector of counts $\\mathbf{r}_{i\\theta}$ is a realization of a multinomial random variable with parameters $R_{i\\theta}$ and $\\boldsymbol{\\pi}_{i\\theta}$ for $i \\in \\{1,\\dots,m\\}$ and $\\theta \\in \\boldsymbol{\\Theta}$. Furthermore, when $R_{i\\theta}$ is finite (not sufficiently large), the vector $\\boldsymbol{\\pi}_{i\\theta}$ is \\emph{uncertain}, and an agent cannot compute the probability distribution precisely.\n\nTo clarify the model above, consider that an agent $i$ is handed a set of $K$-sided dice labeled $1,...,S$. Each die $s$ represents a hypothesis $\\theta_s\\in\\boldsymbol{\\Theta}$ and the parameters $\\boldsymbol{\\pi}_{i\\theta_s}$ represent the set of probabilities of the die landing on each face. The agent only has access to each die for a small amount of time, during which it rolls the die and collects the counts of each face to construct the sets $\\mathbf{r}_{i\\theta}$ $\\forall \\theta \\in \\boldsymbol{\\Theta}$. Then, all of the dice are collected and a new unlabeled die is presented to the agent. The goal of the agent is to identify which of the $S$ hypotheses best matches the distribution observed by rolling the new die. 
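To make the dice analogy concrete, the following sketch (plain Python; the category probabilities, the evidence budget $R=50$, and the observation horizon $t=200$ are illustrative choices of ours, not values from the paper) generates the prior-evidence histograms $\mathbf{r}_{\theta}$ by rolling each labeled die, and the observation histogram $\mathbf{n}_{t}$ by rolling the unlabeled die:

```python
import random

def roll_counts(pi, num_rolls, rng):
    """Histogram of i.i.d. draws from a categorical distribution pi."""
    counts = [0] * len(pi)
    for _ in range(num_rolls):
        u, acc = rng.random(), 0.0
        for k, p in enumerate(pi):
            acc += p
            if u <= acc:
                counts[k] += 1
                break
        else:  # guard against floating-point round-off in the cumulative sum
            counts[-1] += 1
    return counts

rng = random.Random(7)
# Hypothetical 3-sided "dice": one distribution per hypothesis theta_s.
pi = {"theta_1": [0.6, 0.3, 0.1], "theta_2": [0.2, 0.5, 0.3]}
r = {th: roll_counts(p, 50, rng) for th, p in pi.items()}  # prior evidence, R = 50
n = roll_counts(pi["theta_1"], 200, rng)                   # unlabeled die is theta_1
```

By construction $\sum_k r_{k\theta} = R_\theta$ and $\sum_k n_{kt} = t$, matching the counting model above.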
This is the main object of study of this paper: the design of a distributed algorithm that allows a group of agents to construct consistent beliefs about a set of hypotheses based on uncertain likelihood models.\n\n\\subsection{Social Learning with Uncertain Models}\n\nGiven the prior evidence for the set of hypotheses, the sequence of private observations and the interactions with the other agents in the network, an agent iteratively constructs beliefs over the hypotheses set $\\boldsymbol{\\Theta}$. We will denote the belief of an agent $i$ about a hypothesis $\\theta$ at a time $t$ as $\\mu_{it}(\\theta)$. Moreover, the belief of agent $i$ about hypothesis $\\theta$ at time $t+1$ will be a function of the tuple $\\{\\mathbf{r}_{i\\theta},\\mathbf{n}_{it}, \\{\\mu_{jt}(\\theta)\\}_{j \\in \\mathbf{M}^i} \\}$, where $\\mathbf{M}^i$ is the set of agents (or neighbors) that can send information to agent $i$.\n\nWe propose the following belief update rule, based on uncertain likelihood models,\n\\begin{align}\\label{eq:main_algo}\n\\mu_{it+1}(\\theta) = \\ell_{i\\theta}(\\mathbf{n}_{it}, \\omega_{it+1}|\\mathbf{r}_{i\\theta})\\prod_{j\\in \\mathbf{M}^i}\\mu_{jt}(\\theta) ^{[\\mathbf{A}]_{ij}},\n\\end{align}\nwhere\n\\begin{align}\\label{eq:el;_def}\n\\ell_{i\\theta}(\\mathbf{n}_{it},k|\\mathbf{r}_{i\\theta}) &= \\frac{(r_{ik\\theta} + n_{ikt}+1)(t+K-1)}{(R_{i\\theta}+t+K-1)(n_{ikt} +1)},\n\\end{align}\n$\\mu_{i0}(\\theta)=1$ $\\forall i\\in\\{1,...,m\\}$, and $[\\mathbf{A}]_{ij}$ is the weight agent $i$ assigns to the belief shared by agent $j$.\n\nEquation~(\\ref{eq:main_algo}) is an aggregation step (a weighted geometric mean), and a normalized uncertain likelihood non-Bayesian update, where Equation (\\ref{eq:el;_def}) is the uncertain likelihood ratio update based on the observed signal at time $t$. 
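A minimal sketch of the update rule, Eqs. (1)-(2), in plain Python (the weight matrix, counts, and variable names `mu`, `r_all`, `n_all` are our own illustrative choices):

```python
def ell(r, n, k):
    """Uncertain likelihood ratio update of Eq. (2): r is the prior-evidence
    histogram for a hypothesis, n the observation histogram at time t,
    k the newly observed category (0-indexed)."""
    K, R, t = len(r), sum(r), sum(n)
    return (r[k] + n[k] + 1) * (t + K - 1) / ((R + t + K - 1) * (n[k] + 1))

def step(mu, A, r_all, n_all, obs):
    """One belief update of Eq. (1) for all agents and a single hypothesis.
    mu[j] is agent j's current belief, obs[j] its new private signal."""
    m = len(mu)
    new_mu = []
    for i in range(m):
        agg = 1.0  # weighted geometric mean over neighbors (A[i][j] > 0)
        for j in range(m):
            if A[i][j] > 0:
                agg *= mu[j] ** A[i][j]
        new_mu.append(ell(r_all[i], n_all[i], obs[i]) * agg)
    for j in range(m):  # record the new observations after the update
        n_all[j][obs[j]] += 1
    return new_mu
```

Note that with vacuous prior evidence ($\mathbf{r}_{i\theta}=\mathbf{0}$, so $R_{i\theta}=0$) the ratio in Eq. (2) reduces to exactly one, so the beliefs stay at their initial value of one.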
This proposed belief update rule will be motivated in Section \\ref{sec:DNBLWUL}.\n\nNote that the generated beliefs are not probability distributions since they are not normalized over the set of hypotheses $\\boldsymbol{\\Theta}$ as in traditional non-Bayesian social learning. Rather, the generated beliefs with uncertain likelihoods represent the consistency of the hypothesis with the ground truth given the accumulated prior evidence. A detailed description of the proposed inference rule will be presented in Section \\ref{sec:NULF}.\n\n\\subsection{Assumptions and Definitions}\n\nThe agents' social interactions are modeled as an exchange of beliefs over a weighted undirected graph $\\mathcal{G}=(\\mathbf{M},\\mathbf{E})$, which consists of the set of agents $\\mathbf{M}=\\{1,...,m\\}$ and a set of edges $\\mathbf{E}$. An edge is defined as the connection between agent $i$ and $j$ and is denoted by the ordered pair $(i,j)\\in \\mathbf{E}$. The weights along each edge form an adjacency matrix, $\\mathbf{A}$, which represents the amount of influence that agent $i$ has on agent $j$ (and vice versa) such that $[\\mathbf{A}]_{ij}>0$ if $(i,j)\\in \\mathbf{E}$ and $[\\mathbf{A}]_{ij}=0$ if $(i,j)\\notin \\mathbf{E}$. Furthermore, the set of neighbors of agent $i$ is defined as $\\mathbf{M}^{i}=\\{j\\in \\mathbf{M}|(i,j)\\in \\mathbf{E}\\}$ and we assume that the agents within $\\mathbf{M}^i$ report their beliefs truthfully.\n\n\\begin{assumption}\\label{assum:graph}\n\tThe graph $\\mathcal{G}$ is undirected and connected. Moreover, the corresponding adjacency matrix $\\mathbf{A}$ is doubly stochastic and aperiodic. Note that $\\mathbf{A}$ is irreducible due to connectivity. \n\\end{assumption}\n\t\nAssumption \\ref{assum:graph} is common in the consensus literature and allows the agents' interactions to be represented by a Markov Chain. 
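One standard way to satisfy Assumption 1 on a connected undirected graph is the Lazy Metropolis weighting; a minimal sketch (the path graph used as the example is our own illustrative choice):

```python
def lazy_metropolis(adj):
    """Doubly stochastic, aperiodic weight matrix from an undirected graph
    given as an adjacency list: adj[i] = set of neighbors of agent i."""
    m = len(adj)
    deg = [len(adj[i]) for i in range(m)]
    A = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in adj[i]:
            A[i][j] = 1.0 / (2.0 * max(deg[i], deg[j]))
        A[i][i] = 1.0 - sum(A[i])  # lazy self-loop absorbs the remaining mass
    return A

# Path graph 0-1-2 as an example.
A = lazy_metropolis([{1}, {0, 2}, {1}])
```

Because the off-diagonal weights are symmetric and each self-loop weight is at least $1/2$, the resulting matrix is doubly stochastic and aperiodic, as Assumption 1 requires.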
This guarantees convergence of the graph to a fully connected network and defines bounds on the second largest eigenvalue based on the number of agents \\cite{NOU2017}. Note that it is not always possible to derive a directed graph with a doubly stochastic adjacency matrix (as provided in \\cite{GC2010}), especially in a distributed manner. However, if the graph is undirected, then the distributed agents can construct a Lazy Metropolis matrix to form a doubly stochastic matrix. Furthermore, time-varying directed graphs can form doubly stochastic matrices using the push-sum algorithm \\cite{NO2016}. \n\n\\begin{assumption} \\label{assum:uncertain_distributioin}\n Each agent $i$ at time $t=0$ has its counters for the observations of its private signals set to $n_{ik0}=0$ for all $i\\in \\mathbf{M}$ and $k\\in\\boldsymbol{\\Omega}$. This enables the definition of the prior uncertain probability distribution $\\widetilde{\\mathcal{P}}_i(\\mathbf{0}) = \\{\\widetilde{P}_{i\\theta}(\\mathbf{n}_{i0}=\\mathbf{0}|\\mathbf{r}_{i\\theta})\\}_{\\forall \\theta \\in \\boldsymbol{\\Theta}}$ at time $t=0$, which is derived from the marginal of a second-order distribution of the probabilities $\\boldsymbol{\\pi}_{i\\theta}$ given the prior evidence $\\mathbf{r}_{i\\theta}$ (to be derived in Section~\\ref{sec:NULF}).\n\\end{assumption}\n\n\\begin{definition} \\label{assum:dog_distribution}\n When agent $i$ collects an infinite (or a sufficiently large) amount of prior evidence for hypothesis $\\theta$, the probabilities $\\boldsymbol{\\pi}_{i\\theta}$ are known precisely and we say that the agent has an epistemically \\emph{certain} statistical model for the hypothesis $\\theta$, i.e., $\\widetilde{\\mathcal{P}}_i(\\mathbf{0}) = \\{P_{i\\theta}(\\mathbf{n}_{i0}=\\mathbf{0}|\\boldsymbol{\\pi}_{i\\theta})\\}_{\\forall \\theta \\in \\boldsymbol{\\Theta}}$.\n\\end{definition}\n\nThe precise definitions of the uncertain and certain likelihood models for a multinomial distribution will 
be formally introduced in Section \\ref{sec:NULF}. Note that the usage of \\emph{certain} statistical models is the same as \\emph{dogmatic} opinions in subjective logic \\cite{J2016}. \n\nWe assume that the agents have calibrated their measurement models to allow them to distinctly identify the categories observed. However, it may be too expensive for the agents to conduct a sufficient number of trials to identify the probabilities $\\boldsymbol{\\pi}_{i\\theta}$ of each hypothesis $\\theta$ precisely.\n\nAdditionally, we allow the amount of prior evidence collected for each hypothesis to vary between hypotheses and agents, i.e., $R_{i\\theta}\\ne R_{i\\hat{\\theta}}$ for any $\\hat{\\theta}\\ne \\theta$ and $R_{i\\theta}\\ne R_{j\\theta}$ for any $i\\ne j$. This means that the distributions within $\\widetilde{\\mathcal{P}}_{i}$ are incommensurable, causing the traditional approach of normalizing $\\widetilde{P}_{i\\theta}$ over the set of $\\widetilde{\\mathcal{P}}_i$ to produce errors as an unintended consequence. Thus, we propose to normalize each distribution by a common \\emph{vacuous} probability model that statistically models the agent's ignorance of hypothesis $\\theta$, i.e., $\\widetilde{P}_{i\\theta}(\\mathbf{0}|\\mathbf{r}_{i\\theta}=\\mathbf{0})$. A thorough discussion of this concept is presented in Section \\ref{sec:NULF}.\n\nFurthermore, we assume that the agent may face an identification problem due to (i) a varying amount of prior evidence and (ii) non-informative observations. The first condition is an effect of the proposed uncertain models, while the second condition is caused when multiple hypotheses $\\hat{\\boldsymbol{\\Theta}}$ have the same probability distribution as the ground truth state of the world, s.t. $\\hat{\\boldsymbol{\\Theta}}=\\{\\theta\\in\\boldsymbol{\\Theta}|\\boldsymbol{\\pi}_{i\\theta} = \\boldsymbol{\\pi}_{i\\theta^*}\\}$. 
However, for every hypothesis $\\hat{\\theta}\\in \\hat{\\boldsymbol{\\Theta}}$, we assume that there exists another agent $j$ that has informative observations for $\\hat{\\theta}$, s.t. $\\boldsymbol{\\pi}_{j\\hat{\\theta}} \\ne \\boldsymbol{\\pi}_{j\\theta^*}$. Thus, the agents must collaborate to unequivocally identify the true state of the world.\n\nFinally, we make the following assumption on the agents' initial beliefs for each hypothesis.\n\\begin{assumption} \\label{assum:inital_beliefs}\n The agents' initial beliefs are $\\mu_{i0}(\\theta)=1$ $\\forall i\\in\\{1,...,m\\}$ and $\\forall \\theta \\in \\boldsymbol{\\Theta}$.\n\\end{assumption}\nAssumption \\ref{assum:inital_beliefs} allows the agents to express vacuous initial beliefs for each hypothesis based on the model of complete ignorance achieved by normalizing the uncertain probability distribution by the vacuous condition. This is also required to ensure that the beliefs evolve with time.\n\nNext, we provide a definition of the posterior probability distribution of hypothesis $\\theta$ for a centralized network.\n\\begin{definition}\n The \\emph{centralized uncertain likelihood} \n is the determination of the probability of the observations from all agents conditioned on the historical evidence for each hypothesis:\n \\begin{equation}\n \\widetilde{P}_\\theta(\\mathbf{n}_{1t},\\mathbf{n}_{2t},...,\\mathbf{n}_{mt}|\\mathbf{r}_{1\\theta},\\mathbf{r}_{2\\theta},...,\\mathbf{r}_{m\\theta}) = \\prod_{i=1}^m \\widetilde{P}_\\theta(\\mathbf{n}_{it}|\\mathbf{r}_{i\\theta}).\n \\end{equation}\n\\end{definition}\n\nNote that the decomposition of the centralized uncertain likelihood as the product of uncertain probabilities is only possible because the private signals as observations or evidence from training are statistically independent of each other and agents do not share their evidence $\\mathbf{r}_{i\\theta}$. 
As shown later, the centralized uncertain likelihood and uncertain probabilities are sensitive to the amount of evidence, and it is more meaningful to normalize this value by the probability of the observations conditioned on no (or vacuous) historical evidence to form the centralized uncertain likelihood ratio:\n \\begin{eqnarray}\n \\prod_{i=1}^m \\Lambda_{i\\theta}(t) = \\prod_{i=1}^m \\frac{\\widetilde{P}_{i\\theta}(\\mathbf{n}_{it}|\\mathbf{r}_{i\\theta})}{\\widetilde{P}_{i\\theta}(\\mathbf{n}_{it}|\\mathbf{0})}.\n \\end{eqnarray}\n\n\nThe centralized uncertain likelihood ratio is achieved in a centralized network where a central node observes all of the information. This distribution acts as the benchmark that the distributed agents should strive to achieve.\n\n\\subsection{Main Result}\n\nWe now present the main result of the paper. This result shows that the beliefs updated using the dynamics in Equation (\\ref{eq:main_algo}) converge to a value with a one-to-one correspondence to the centralized uncertain likelihood ratio. The theorem is proven in Section \\ref{sec:DNBLWUL}.\n\n\\begin{theorem} \\label{thm:ULR_Con}\n\tLet Assumptions~\\ref{assum:graph},~\\ref{assum:uncertain_distributioin}, and~\\ref{assum:inital_beliefs} hold. 
Then, the beliefs generated by the update rule (\\ref{eq:main_algo}) have the following property\n\t\\begin{eqnarray} \\label{eq:LL_Limit}\n\t \\lim_{t\\to\\infty}\\mu_{it}(\\theta) = \\left( \\prod_{j=1}^m \\widetilde{\\Lambda}_{j\\theta} \\right)^\\frac{1}{m} , \\ \\text{a.s.}\n\t\\end{eqnarray}\n\twhere\n\t\\begin{eqnarray} \n\t\\widetilde{\\Lambda}_{j\\theta} = \\lim_{t \\to \\infty} \\Lambda_{j\\theta} (t) = \\frac{B(\\mathbf{1})}{B(\\mathbf{r}_{j\\theta}+\\mathbf{1})}\\prod_{k=1}^K (\\pi_{jk\\theta^*})^{r_{jk\\theta}}, \\ \\text{a.s.}\n\t\\end{eqnarray}\n\\end{theorem}\n\nTheorem~\\ref{thm:ULR_Con} states that the beliefs generated by the update rule \\eqref{eq:main_algo} converge almost surely to the $m$th root of the product of the asymptotic uncertain likelihood ratios $\\widetilde{\\Lambda}_{j\\theta}$ derived in Section \\ref{sec:NULF}. Thus, with an abuse of notation, we will refer to the point of convergence of the beliefs $\\mu_{it}(\\theta)$ in the remainder of the paper as the centralized uncertain likelihood ratio. \n\nNote that the centralized uncertain likelihood ratio ranges over $[0,\\infty)$ depending on the amount of prior evidence collected by the agents. When the agents have collected a finite amount of prior evidence, the probabilities $\\boldsymbol{\\pi}_{i\\theta}$ $\\forall i=1,..,m$ are uncertain, which results in the beliefs, $\\mu_{it}(\\theta)$ $\\forall i=1,...,m$, converging to a finite value within $(0,\\infty)$. In contrast, if the agents have collected an infinite amount of evidence, the probabilities $\\boldsymbol{\\pi}_{i\\theta}$ are certain (known precisely) and the beliefs will converge to $0$ or diverge to $\\infty$. This result will be presented in Section \\ref{sec:DNBLWUL}.\n\nThe current literature identifies the hypothesis that minimizes the Kullback-Leibler (KL) divergence between the certain likelihood and the ground truth distribution. 
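The limit $\widetilde{\Lambda}_{j\theta}$ in Theorem 1 is straightforward to evaluate numerically; a sketch using only the standard library (the ground-truth probabilities and evidence counts are illustrative values of ours):

```python
import math

def log_kbeta(alpha):
    """Log of the K-dimensional Beta function B(alpha)."""
    return sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))

def asymptotic_ratio(r, pi_star):
    """tilde-Lambda of Theorem 1: B(1)/B(r+1) * prod_k pi_star[k]**r[k]."""
    K = len(r)
    log_val = log_kbeta([1.0] * K) - log_kbeta([rk + 1.0 for rk in r])
    log_val += sum(rk * math.log(pk) for rk, pk in zip(r, pi_star) if rk > 0)
    return math.exp(log_val)

pi_star = [0.5, 0.5]                               # illustrative ground truth
consistent = asymptotic_ratio([5, 5], pi_star)     # evidence matching pi_star
inconsistent = asymptotic_ratio([10, 0], pi_star)  # mismatched evidence
```

In this example, evidence consistent with the ground truth drives the ratio above one, inconsistent evidence drives it toward zero, and vacuous evidence leaves it at exactly one, matching the finite- and infinite-evidence behavior discussed above.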
This allows only the beliefs of one hypothesis to converge to $1$, while the remaining beliefs converge to $0$, which allows for learning. Our result differs from the current literature in that our uncertain beliefs converge to a finite value and multiple hypotheses may be accepted. However, when the agents are certain, only the hypothesis with a distribution that exactly matches the ground truth will be accepted, while any divergence between the distributions will cause the hypothesis to be rejected. This result follows the current literature under the closed-world assumption that one of the predefined hypotheses is the ground truth. \n\nNext, we will present the derivation of the uncertain likelihood ratio and its properties, as well as define a test to evaluate the consistency of each hypothesis with the private signals.\n\n\\section{Uncertain Models Derivation} \\label{sec:NULF}\n\nIn this section, we derive the uncertain models as a way to incorporate the uncertainty about the statistical models for a set of hypotheses. For simplicity of exposition, throughout this section we will ignore the network and assume the centralized scenario, i.e., there is only one agent. Thus, we will drop the $i$ in our notation. 
Later in Section \\ref{sec:DNBLWUL} we will extend our results to the distributed network setup.\n\n\\subsection{Uncertain Likelihood Function via the Posterior Predictive Distribution}\n\n\\begin{figure*}[t] \n\\centering \n\t\\subfigure[]{\n\t\t\\includegraphics[width=.31\\textwidth]{45_short_2.pdf}\\label{fig:norm_Lambda1} \n\t} \\vspace{-3pt}\n\t\\subfigure[]{\n\t\t\\includegraphics[width=.31\\textwidth]{65_short_2.pdf}\\label{fig:norm_Lambda2} \n\t} \\vspace{-3pt}\n\t\\subfigure[]{\n\t\t\\includegraphics[width=.31\\textwidth]{85_short_2.pdf}\\label{fig:norm_Lambda3} \n} \\vspace{-3pt}\n\t\\caption{The normalized posterior predictive distribution (\\ref{eq:exp_l}) using the update rule (\\ref{eq:surrogate_likelihood}) versus the amount of evidence for hypothesis $\\theta_2$ when $\\mu_0(\\theta_1)=\\mu_0(\\theta_2)=1$ and the evidence for hypothesis $\\theta_1$ is: (a)~$R_{\\theta_1} = 45$, (b)~$R_{\\theta_1} = 65$, and (c)~$R_{\\theta_1} = 85$.} \\vspace{-15pt} \\label{fig:norm_Lambda}\n\\end{figure*}\n\nWe model the uncertainty in the parameters of the multinomial distribution as a second-order probability density function. Similar approaches to modeling uncertainty have been presented in \\cite{J2001, GS1982} and \\cite{C1996}. As stated in Section~\\ref{sec:pf_ahu}, an agent is assumed to construct its statistical model of hypothesis $\\theta$ based on the prior evidence $\\mathbf{r}_{\\theta}$. 
Particularly, we are interested in a modified likelihood function that captures the uncertainty about the parameters $\\boldsymbol{\\pi}_\\theta$ for each hypothesis based on finite samples.\n\nBefore the prior evidence $\\mathbf{r}_{\\theta}$ is presented, the agent is assumed to have a uniform prior belief about $\\{\\pi_{k\\theta}\\}_{k=1}^K$, thus $\\{\\pi_{k\\theta}\\}_{k=1}^K$ could be any point in the $K$-dimensional simplex,\n\\begin{equation*}\n\\mathcal{S}_K = \\left \\{\\boldsymbol{\\pi} \\bigg | \\sum_{k=1}^K \\pi_k = 1 \\hspace{.05in} \\mbox{ and $\\pi_k \\ge 0$ for $k=1,\\ldots,K$} \\right \\},\n\\end{equation*}\nwith equal probability. However, once $\\mathbf{r}_{\\theta}$ is available, the agent updates its beliefs and constructs a posterior belief about $\\{\\pi_{k\\theta}\\}_{k=1}^K$. Particularly, if we assume the prior belief follows the uniform distribution over $\\mathcal{S}_K$, and we observe $\\mathbf{r}_\\theta$ drawn from the multinomial distribution for hypothesis $\\theta$, then the posterior belief is\n\\begin{eqnarray} \\label{eq:dirichlet}\nf (\\boldsymbol{\\pi}_{\\theta}|\\mathbf{r}_{\\theta}) = \\frac{\\prod_{k=1}^K \\pi_{k\\theta }^{r_{k \\theta }}}{B(\\mathbf{r}_{\\theta}+\\mathbf{1})} \\hspace{.05in}\\mbox{s.t. $\\boldsymbol{\\pi} \\in \\mathcal{S}_K$,}\n\\end{eqnarray}\nwhere $B(\\alpha_1,...,\\alpha_K)={\\prod_{k=1}^K \\Gamma(\\alpha_k)}\/{\\Gamma(\\sum_{k=1}^K \\alpha_k)}$ is the $K$-dimensional Beta function~\\cite{W1996}. The Dirichlet distribution is the conjugate prior of the multinomial distribution, which provides an algebraic convenience, and allows us to model the uncertainty of each parameter in the set $\\boldsymbol{\\pi}_{\\theta}$ as a second-order probability density function. 
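As a quick numerical illustration (the counts below are hypothetical, not taken from the paper), the Dirichlet posterior in (\\ref{eq:dirichlet}) and its $K$-dimensional Beta normalizer can be evaluated with standard-library log-gamma routines; the sketch also shows the posterior concentrating around $\\boldsymbol{\\pi}_{\\theta}$ as the evidence grows:

```python
import math

def log_beta(alphas):
    # K-dimensional Beta function: B(a) = prod_k Gamma(a_k) / Gamma(sum_k a_k)
    return sum(math.lgamma(a) for a in alphas) - math.lgamma(sum(alphas))

def dirichlet_posterior_density(pi, r):
    # f(pi | r) = prod_k pi_k^{r_k} / B(r + 1), the posterior belief above
    log_num = sum(rk * math.log(pk) for rk, pk in zip(r, pi))
    return math.exp(log_num - log_beta([rk + 1 for rk in r]))

# With zero prior evidence the posterior is uniform on the simplex:
# for K = 3 its density is Gamma(3) = 2 at every interior point.
flat = dirichlet_posterior_density([0.2, 0.3, 0.5], [0, 0, 0])

# Hypothetical evidence proportional to pi = (0.6, 0.3, 0.1):
# the density evaluated at pi grows as the total evidence R increases.
f10  = dirichlet_posterior_density([0.6, 0.3, 0.1], [6, 3, 1])
f100 = dirichlet_posterior_density([0.6, 0.3, 0.1], [60, 30, 10])
```

Here `log_beta` and `dirichlet_posterior_density` are illustrative helper names, not notation from the paper.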
Clearly, as the number of observations in $\\mathbf{r}_{\\theta}$ increases, the posterior belief concentrates around $\\boldsymbol{\\pi}_{\\theta}$.\n\nIn the social learning process, an agent has collected $t$ signals $\\boldsymbol{\\omega}_{1:t}$ and has constructed its histogram $\\mathbf{n}_{t}$. If the probabilities $\\boldsymbol{\\pi}_\\theta$ were known precisely, the agent would compute $P_\\theta(\\mathbf{n}_{t}|\\boldsymbol{\\pi}_\\theta)$ as its likelihood function for the signal $\\mathbf{n}_{t}$ given hypothesis $\\theta$. However, in the uncertain condition, we must incorporate the finite knowledge about $\\theta$ as $\\widetilde{P}_\\theta(\\mathbf{n}_{t}|\\mathbf{r}_\\theta)$.\n\nWe propose the use of the posterior predictive distribution as the likelihood in lieu of the imprecisely known likelihood $P_\\theta$. The posterior predictive distribution accounts for the uncertainty on $\\boldsymbol{\\pi}_{\\theta}$, and it is calculated by marginalizing the distribution of $\\mathbf{n}_{t}$ over the possible distributions of $\\boldsymbol{\\pi}_{\\theta}$ given $\\mathbf{r}_{\\theta}$, i.e.,\n\t\\begin{align} \\label{eq:exp_l}\n\t\\widetilde{P}_\\theta(\\mathbf{n}_{t}|\\mathbf{r}_{\\theta}) & = \\int_{\\mathcal{S}_K} P_\\theta(\\mathbf{n}_{t}|\\boldsymbol{\\pi}_\\theta) f(\\boldsymbol{\\pi}_{\\theta}| \\mathbf{r}_{\\theta}) d\\boldsymbol{\\pi}_\\theta, \\nonumber \\\\\n& = \\int_{\\mathcal{S}_K} \\prod_{k=1}^K \\pi_{k\\theta}^{n_{kt}} f(\\boldsymbol{\\pi}_{\\theta}| \\mathbf{r}_{\\theta}) d\\boldsymbol{\\pi}_\\theta, \\nonumber \\\\\n\t& = \\frac{B(\\mathbf{r}_{\\theta}+\\mathbf{n}_{t}+\\mathbf{1})}{B(\\mathbf{r}_{\\theta}+\\mathbf{1})}.\n\t\\end{align}\nThe uncertain likelihood function $\\widetilde{P}_\\theta$ represents the probability of the number of counts $\\mathbf{n}_{t}$ of each category realized by the measurement sequence $\\boldsymbol{\\omega}_{1:t}$ conditioned on the prior evidence $\\mathbf{r}_{\\theta}$ for hypothesis 
$\\theta$.\n\n\\vspace{-6pt}\n\n\\subsection{The Effects of Normalization with Uncertain Hypotheses}\n\n\n\nTypically in Bayesian inference, a normalization step is used to ensure that the values lie in $[0,1]$. Next, we will show that an update rule generated by using the posterior predictive distribution as the uncertain likelihood function, i.e.,\n\\begin{align}\\label{eq:surrogate_likelihood}\n\\mu_t(\\theta) & = \\frac{\\widetilde{P}_\\theta(\\mathbf{n}_{t}|\\mathbf{r}_{\\theta})\\mu_{0}(\\theta)}{\\sum_{\\nu \\in \\boldsymbol{\\Theta}}\\widetilde{P}_\\nu(\\mathbf{n}_{t}|\\mathbf{r}_{\\nu})\\mu_{0}(\\nu)},\n\\end{align}\nis not robust to having dissimilar amounts of evidence for the different hypotheses. Thus, the following proposition holds.\n\n\\begin{proposition} \\label{prop:normal}\nConsider the update rule (\\ref{eq:surrogate_likelihood}), with $\\mu_0(\\theta)>0$ $\\forall \\theta \\in \\boldsymbol{\\Theta}=\\{\\theta^*, \\bar{\\theta}\\}$. Then, there exists a finite $R_{\\theta^*}$ and $R_{\\bar{\\theta}}$ such that $Prob(\\lim_{t\\rightarrow \\infty}\\mu_t(\\bar{\\theta})>\\lim_{t\\rightarrow \\infty}\\mu_t(\\theta^*))>0$.\n\\end{proposition}\n\nProposition \\ref{prop:normal} states that due to the finite amount of evidence collected by the agent, the ground truth hypothesis will be rejected with a probability greater than $0$. This occurs due to the following properties. First, if an insufficient amount of prior evidence is collected for hypothesis $\\theta=\\theta^*$, there is a probability greater than $0$ that the generated histogram $\\mathbf{r}_\\theta$ does not match the ground truth parameters $\\boldsymbol{\\pi}_{\\theta^*}$. Additionally, there is a probability greater than $0$ that the histogram generated for a hypothesis $\\hat{\\theta}\\ne \\theta^*$ could match the ground truth parameters. Thus, the hypothesis $\\hat{\\theta}$ would appear to be a better fit and be selected over the ground truth $\\theta^*$. 
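\n\nTo make the effect in Proposition \\ref{prop:normal} concrete, the following sketch (with hypothetical evidence counts, not taken from the figures) evaluates the normalized update rule (\\ref{eq:surrogate_likelihood}) for two hypotheses whose prior evidence matches the empirical frequencies equally well but differs in size; the hypothesis with more prior evidence obtains the larger belief:

```python
import math

def log_beta(alphas):
    # K-dimensional Beta function via log-gamma
    return sum(math.lgamma(a) for a in alphas) - math.lgamma(sum(alphas))

def log_post_pred(n, r):
    # log posterior predictive: log[ B(r + n + 1) / B(r + 1) ]
    return (log_beta([ri + ni + 1 for ri, ni in zip(r, n)])
            - log_beta([ri + 1 for ri in r]))

n  = [600, 400]   # observed counts after t = 1000 signals, frequencies (0.6, 0.4)
r1 = [27, 18]     # R = 45 pieces of evidence, same frequencies
r2 = [600, 400]   # R = 1000 pieces of evidence, same frequencies

lp1, lp2 = log_post_pred(n, r1), log_post_pred(n, r2)
# normalize with uniform priors mu_0, using log-sum-exp for stability
m = max(lp1, lp2)
z = m + math.log(math.exp(lp1 - m) + math.exp(lp2 - m))
mu1, mu2 = math.exp(lp1 - z), math.exp(lp2 - z)
```

Even though both evidence vectors are perfectly proportional to the observed frequencies, `mu2` exceeds `mu1`, so normalizing across hypotheses favors the better-resourced one.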
\n\nThe second issue relates to the amount of prior evidence collected. Consider that the prior evidence for each hypothesis is consistent with their respective probability distribution, i.e., $\\mathbf{r}_{\\theta}=R_\\theta \\boldsymbol{\\pi}_{\\theta}$. However, consider that the amount of prior evidence collected for the ground truth hypothesis, say $\\theta_1$, is smaller than that of some hypothesis $\\theta_2$. Then, there is a chance that the belief of $\\theta_2$ under the update rule (\\ref{eq:surrogate_likelihood}) will be greater than that of $\\theta_1$, as illustrated in Figure~\\ref{fig:norm_Lambda}. \n\nAs seen in Figure~\\ref{fig:norm_Lambda1}, when $R_{\\theta_1}=45$ and $R_{\\theta_2}\\in[100,1250]$, we have $\\lim_{t \\to \\infty}\\mu_t(\\theta_2)>\\lim_{t \\to \\infty}\\mu_t(\\theta_1)$, so the ground truth will be rejected. However, as the amount of prior evidence increases to $R_{\\theta_1}=65$ in Figure~\\ref{fig:norm_Lambda2} and $R_{\\theta_1}=85$ in Figure~\\ref{fig:norm_Lambda3}, the range of $R_{\\theta_2}$ that allows $\\theta_1$ to be rejected decreases. Thus, there are scenarios that allow the probability of rejecting the ground truth to be greater than $0$ when using the update rule (\\ref{eq:surrogate_likelihood}). Therefore, we cannot normalize over the set of hypotheses. \n\nWe propose that the agents compare the posterior predictive distribution $\\widetilde{P}_\\theta$ to the model of complete ignorance, i.e., the \\emph{vacuous} probability model. The vacuous probability model assumes that the agent has collected zero prior evidence for each hypothesis and strictly evaluates \\eqref{eq:exp_l} with parameters $\\mathbf{r}_{\\theta}= \\mathbf{0}$. This model considers that each probability $\\pi_{k\\theta}$ is uniformly distributed in the simplex and represents complete uncertainty. 
Note that it follows from~\\eqref{eq:exp_l} that\n\\begin{equation}\n\\widetilde{P}_\\theta(\\mathbf{n}_{t}|\\mathbf{r}_{\\theta} =\\mathbf{0})= \\frac{B(\\mathbf{n}_t+\\mathbf{1})}{B(\\mathbf{1})}.\n\\end{equation}\n\nThus, we define the \\emph{Uncertain Likelihood Ratio} as follows.\n\n\\begin{definition}[Uncertain Likelihood Ratio] \\label{def:ULR}\n\tThe uncertain likelihood ratio is the posterior predictive distribution normalized by the vacuous probability model, i.e., $\\mathbf{r}_{\\theta}=\\mathbf{0}$, as follows:\n\t\t\\begin{align} \\label{eq:L}\n\t\t\\Lambda_{\\theta}(t) = \\frac{\\widetilde{P}_\\theta(\\mathbf{n}_{t}|\\mathbf{r}_{\\theta})}{\\widetilde{P}_\\theta(\\mathbf{n}_{t}|\\mathbf{0} \n)}= \\frac{B(\\mathbf{r}_{\\theta} + \\mathbf{n}_{t}+\\mathbf{1})B(\\mathbf{1})}{B(\\mathbf{r}_{\\theta}+\\mathbf{1})B(\\mathbf{n}_{t}+\\mathbf{1})}.\n\t\t\\end{align}\n\\end{definition}\n\nSince the agent has different amounts of prior evidence for each hypothesis, the uncertain likelihood ratio cannot be evaluated over the set of all hypotheses as in \\eqref{eq:surrogate_likelihood}. Thus, we propose that the agent evaluates each hypothesis individually utilizing the \\emph{Uncertain Likelihood Ratio Test}.\n\n\\begin{definition}[Uncertain Likelihood Ratio Test] \\label{def:ULRT}\n\tThe uncertain likelihood ratio test is a likelihood ratio test that utilizes the uncertain likelihood ratio to evaluate the consistency of the prior evidence of hypothesis $\\theta$ with the ground truth $\\theta^*$. This test results in the following conclusions: \n\t\\begin{enumerate}\n\t\t\\item If $\\Lambda_{\\theta}(t)$ converges to a value above one, there is evidence to accept that $\\theta$ is consistent with the ground truth $\\theta^*$. Higher values indicate more evidence to accept $\\theta$ as being equivalent to the ground truth.\n\t\t\\item If $\\Lambda_{\\theta}(t)$ converges to a value below one, there is evidence to reject that $\\theta$ is the ground truth $\\theta^*$. 
Lower values indicate more evidence to reject $\\theta$ as $\\theta^*$. \n\t\t\\item If $\\Lambda_{\\theta}(t)$ converges to a value near one, there is not enough evidence to accept or reject $\\theta$ as $\\theta^*$. \n\t\\end{enumerate}\n\\end{definition}\nAs a practical matter, one can define a threshold $\\upsilon>1$ so that the hypothesis is deemed accepted, rejected or unsure if $\\Lambda_\\theta(t) \\geq \\upsilon$, $\\Lambda_\\theta(t) < 1\/\\upsilon$ and $1\/\\upsilon \\leq \\Lambda_\\theta(t) < \\upsilon$, respectively.\\footnote{This choice of thresholds induces a set of symmetric thresholds $\\pm \\log(\\upsilon)$ for $\\log\\left(\\Lambda_\\theta(t)\\right) \\in (-\\infty,\\infty)$.} The exact choice of thresholds is application dependent to balance the number of false positives and false negatives. Furthermore, the choice of threshold may be chosen based on the amount of prior evidence the agent has for hypothesis $\\theta$. The construction of this threshold and its effects on the overall inference is out of the scope of this paper and thus left for future work. \n\n\nThe uncertain likelihood ratio test incorporates a third conclusion into the traditional likelihood ratio test which is a direct result of the agents uncertainty in the hypothesis. The current literature assumes a closed world and that the agent must select the hypothesis that best matches the observed data. However, when uncertainty is incorporated, the agents should judge each hypothesis on its own merits, i.e., how well it matches the observations relative to the historical evidence about that hypothesis. For some hypotheses, there may not be enough evidence to accept or reject it. Furthermore, there may be evidence to accept multiple hypotheses, but the wrong hypothesis exhibits a larger uncertain likelihood ratio as evident in Figure~\\ref{fig:norm_Lambda}. 
Therefore, the inference problem is reformulated to accept the following set of hypotheses:\n\\begin{eqnarray}\n\\hat{\\boldsymbol{\\Theta}} = \\{\\theta\\in\\boldsymbol{\\Theta}|\\Lambda_{\\theta}(t) \\ge \\upsilon \n\\}.\n\\end{eqnarray}\n\n\\subsection{Asymptotic Behavior of the Centralized Uncertain Likelihood Ratio}\n\n\nThe inference drawn from the uncertain likelihood ratio test depends on the amount of prior evidence collected by the agent. This subsection studies the asymptotic properties of the uncertain likelihood ratio as $t\\rightarrow \\infty$. Particularly, we will assume a centralized scenario where there is only one agent, and we will observe the asymptotic behavior of its beliefs.\n\n\\begin{lemma} \\label{lem:ULR_lim}\n\tThe uncertain likelihood ratio in \\eqref{eq:L} of hypothesis $\\theta$ has the following property\n\t\\begin{eqnarray} \\label{eq:ULR}\n\t\\widetilde{\\Lambda}_{\\theta} = \\lim_{t \\to \\infty} \\Lambda_{\\theta}(t) = \\frac{B(\\mathbf{1})}{B(\\mathbf{r}_{\\theta}+\\mathbf{1})}\\prod_{k=1}^K \\pi_{k\\theta^*}^{r_{k\\theta}}, \\ \\ \\text{a.s.},\n\t\\end{eqnarray}\n\twhere $\\mathbf{r}_{\\theta}$ is the prior evidence about hypothesis $\\theta$ and $\\boldsymbol{\\pi}_{\\theta^*}$ are the ground truth probabilities. 
\n\\end{lemma}\n\n\n\n\\begin{proof}\n\tFirst, the uncertain likelihood ratio can be expressed as\n\t\\begin{eqnarray*}\n\t\\Lambda_{\\theta}(t) = \\frac{B(\\mathbf{1})\\Gamma(t+K)\\prod_{k=1}^K \\Gamma(r_{k\\theta }+{n_{kt}}+1)}{B(\\mathbf{r}_{\\theta}+1) \\Gamma(R_{\\theta} + t + K) \\prod_{k=1}^K \\Gamma({n_{kt}}+1)}.\n\t\\end{eqnarray*}\n\t\n\tFor a large $t$, we can approximate the ratio of gamma functions using Stirling's series \\cite{Laforgia12}, where\n\t\\begin{align*}\n\t\\frac{\\Gamma(x + \\alpha)}{\\Gamma(x + \\beta)} = x^{\\alpha -\\beta} \\left(1 +\\frac{(\\alpha -\\beta)(\\alpha + \\beta-1)}{2x} + O(x^{-2})\\right).\n\t\\end{align*}\n\t\n\tThus,\n\t\\begin{align*}\n\t \\frac{\\Gamma(r_{k\\theta} + n_{kt} +1)}{\\Gamma(n_{kt} +1)} & = n_{kt}^{r_{k\\theta }} \\left(1 +\\frac{r_{k\\theta }(r_{k\\theta}+1)}{2n_{kt}} + O(n_{kt}^{-2})\\right)\n\t\\end{align*}\n\t$\\forall k \\in \\boldsymbol{\\Omega}$, and\n\t\\begin{align*}\n\t\\frac{\\Gamma(t+K)}{\\Gamma(t+K+R_\\theta)} & = t^{-R_\\theta} \\left(1 +\\frac{-R_\\theta(2K+R_\\theta-1)}{2t} + O(t^{-2})\\right) .\n\t\\end{align*}\n\t\n\tThen, the limit of the uncertain likelihood ratio as $t\\rightarrow \\infty$ becomes\n\t\\begin{multline*}\n\t\\lim_{t\\rightarrow \\infty} \\Lambda_{\\theta}(t) = \\lim_{t\\rightarrow \\infty} t^{-R_\\theta} \\left(1 +\\frac{-R_\\theta(2K+R_\\theta-1)}{2t} + O(t^{-2})\\right) \\\\ \\cdot \\prod_{k=1}^K n_{kt}^{r_{k\\theta}} \\left(1 +\\frac{r_{k\\theta}(r_{k\\theta}+1)}{2n_{kt}} + O(n_{kt}^{-2})\\right)\\cdot \\frac{B(\\mathbf{1})}{B(\\mathbf{r}_{\\theta}+1)}. 
\nonumber\n\t\\end{multline*}\n\t\n Note that\n\t\\begin{align*}\n\t\\lim_{t\\rightarrow \\infty}\\left(1 +\\frac{r_{k\\theta}(r_{k\\theta}+1)}{2n_{kt}} + O(n_{kt}^{-2})\\right) = 1,\n\t\\end{align*}\n\tand\n\t\\begin{align*}\n\t\\lim_{t\\rightarrow \\infty} \\left(1 +\\frac{-R_\\theta(2K+R_\\theta-1)}{2t} + O(t^{-2})\\right) =1.\n\t\\end{align*}\n\tFurthermore, since $\\sum_{k=1}^K r_{k\\theta}=R_\\theta$, we have $t^{-R_\\theta}\\prod_{k=1}^K n_{kt}^{r_{k\\theta}} = \\prod_{k=1}^K \\left(\\frac{n_{kt}}{t}\\right)^{r_{k\\theta}}$, and $\\frac{n_{kt}}{t}\\to \\pi_{k\\theta^*}$ a.s. by the strong law of large numbers. Then,\n\t\t\\begin{align*}\n\t\\lim_{t\\rightarrow \\infty} \\Lambda_{\\theta}(t) & = \\frac{B(\\mathbf{1})}{B(\\mathbf{r}_{\\theta}+\\mathbf{1})}\\prod_{k=1}^K \\pi_{k\\theta^*}^{r_{k\\theta}},\n\t\\end{align*}\n\twith probability $1$.\n\\end{proof}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.95\\columnwidth]{Lambda_Tilde_short.pdf}\n\t\\caption{Asymptotic uncertain likelihood ratio vs. the amount of prior evidence $R_{\\theta}$ for various hypotheses $\\theta$ that differ from the ground truth $\\theta^*$ with varying degrees of divergence. These curves assume that $\\mathbf{r}_{\\theta} = R_\\theta \\boldsymbol{\\pi}_{\\theta}$.} \\vspace{-12pt}\n\t\\label{fig:L_tilde}\n\\end{figure}\n\nThe effect of the prior evidence on $\\widetilde{\\Lambda}_\\theta$ can be seen in Figure~\\ref{fig:L_tilde}, where the asymptotic uncertain likelihood ratio vs. the amount of prior evidence $R_{\\theta}$ is presented for various hypotheses. In this example, we consider $K=2$ and that the ground truth probabilities are $\\boldsymbol{\\pi}_{\\theta^*}=\\{0.6, 0.4\\}$. Each curve in Figure \\ref{fig:L_tilde} represents ideal conditions where the prior evidence is $\\mathbf{r}_{\\theta}=\\{\\pi, (1-\\pi)\\}R_{\\theta}$, for $\\pi\\in\\{0.1,0.2,...,0.9\\}$. This result shows that for a finite amount of prior evidence, $\\widetilde{\\Lambda}_\\theta$ converges to a finite value in $(0,\\infty)$. Additionally, this shows that for small amounts of prior evidence, there are some hypotheses that produce an asymptotic uncertain likelihood ratio that is greater than $1$. 
However, as the amount of prior evidence increases, the asymptotic uncertain likelihood ratio of any hypothesis with $\\frac{\\mathbf{r}_{\\theta}}{R_{\\theta}}\\ne \\boldsymbol{\\pi}_{\\theta^*}$ eventually decreases to $0$.\n\nThis result shows the effect of drawing conclusions using uncertain models. If the agent does not have enough prior evidence about a hypothesis, the asymptotic uncertain likelihood ratio will converge to a value around $1$, which falls into the third conclusion of the uncertain likelihood ratio test. As the amount of prior evidence increases, the asymptotic uncertain likelihood ratio for hypotheses with a small KL divergence, i.e., $D_{KL}(\\frac{\\mathbf{r}_{\\theta}}{R_{\\theta}}||\\boldsymbol{\\pi}_{\\theta^*})\\approx 0$, will converge to a value bigger than $1$, which results in the agent accepting the hypotheses. However, as the KL divergence and the amount of prior evidence increase, the asymptotic uncertain likelihood ratio converges to a value less than $1$, and the hypothesis is therefore rejected according to the uncertain likelihood ratio test.\n\nFurthermore, Figure \\ref{fig:L_tilde} provides an understanding of the asymptotic uncertain likelihood ratio as the amount of evidence increases to infinity, i.e., the agent becomes certain. 
This result is analytically characterized in the following corollary.\n\n\\begin{corollary} \\label{lem:dog_ULR}\n For an infinite amount of prior evidence, i.e., $R_{\\theta}\\rightarrow \\infty$, the asymptotic uncertain likelihood ratio, $\\widetilde{\\Lambda}_{\\theta}$, diverges to\n\t\\begin{eqnarray}\n\t\\lim_{R_{\\theta}\\rightarrow \\infty} \\widetilde{\\Lambda}_{\\theta} = \\infty \\ \\ \\text{if} \\ \\boldsymbol{\\pi}_{\\theta}=\\boldsymbol{\\pi}_{\\theta^*} \\ \\text{a.s.}, \\ \\text{and} \\end{eqnarray}\n\tconverges to \n\t\\begin{eqnarray}\n\t\\lim_{R_{\\theta}\\rightarrow \\infty} \\widetilde{\\Lambda}_{\\theta} = 0 \\ \\ \\text{if} \\ \\boldsymbol{\\pi}_{\\theta}\\ne\\boldsymbol{\\pi}_{\\theta^*} \\ \\text{a.s.}\n\\label{eq:dog_ULR}\n\t\\end{eqnarray}\n\\end{corollary}\n\n\\begin{proof}\nFirst, by (\\ref{eq:ULR}), the uncertain likelihood ratio converges to a function of a Dirichlet distribution evaluated at the ground truth probabilities $\\boldsymbol{\\pi}_{\\theta^*}$, i.e.,\n$\\widetilde{\\Lambda}_\\theta = B(\\mathbf{1}) f(\\boldsymbol{\\pi}_{\\theta^*}|\\mathbf{r}_\\theta)$.\nThe expected value and variance of $f(\\boldsymbol{\\pi}_{\\theta}|\\mathbf{r}_\\theta)$ are $E[\\pi_{k\\theta}]=\\frac{r_{k\\theta}+1}{R_\\theta +K}$ and $Var[\\pi_{k\\theta}] = \\frac{(r_{k\\theta} + 1)(R_\\theta - r_{k\\theta} + K - 1)}{(R_\\theta + K)^2(R_\\theta +K+1)}$. Then, as $R_\\theta \\to \\infty$, $E[\\pi_{k\\theta}]\\to \\pi_{k\\theta}$ and $Var[\\pi_{k\\theta}]\\to 0$ a.s. due to the strong law of large numbers. Therefore, $f(\\boldsymbol{\\pi}_{\\theta^*}|\\mathbf{r}_\\theta)\\to \\prod_{k=1}^K \\delta(\\pi_{k\\theta^*}-\\pi_{k\\theta})$ a.s., where $\\delta(\\cdot)$ is the Dirac delta function. 
This causes the asymptotic uncertain likelihood ratio to diverge to $\\infty$ if $\\boldsymbol{\\pi}_{\\theta^*}=\\boldsymbol{\\pi}_\\theta$ and converge to $0$ if $\\boldsymbol{\\pi}_{\\theta^*}\\ne\\boldsymbol{\\pi}_\\theta$.\n\\end{proof}\n\nCorollary \\ref{lem:dog_ULR} shows the relationship of the uncertain likelihood ratio with the assumption typically presented in the non-Bayesian social learning literature. When the amount of prior evidence tends to infinity, the set of hypotheses with $\\boldsymbol{\\pi}_{\\theta}=\\boldsymbol{\\pi}_{\\theta^*}$ will be accepted, while the remaining hypotheses will be rejected since $\\widetilde{\\Lambda}_{\\theta}=0$. This becomes the classical result, except that our definition of the uncertain likelihood ratio ranges over $[0,\\infty)$ rather than $[0,1]$. Therefore, our uncertain model generalizes the certain likelihood assumption by forming an analytical expression of the likelihood as a function of the prior evidence.\n\nOverall, one can view the amount of prior evidence $R_\\theta$ as the precision of the agent's knowledge about $\\boldsymbol{\\pi}_\\theta$. Larger $R_\\theta$ provides the opportunity for a larger uncertain likelihood ratio as long as $\\boldsymbol{\\pi}_\\theta = \\boldsymbol{\\pi}_{\\theta^*}$. However, larger $R_\\theta$ also means that the uncertain likelihood ratio is more likely to drop below one as the divergence between $\\boldsymbol{\\pi}_\\theta$ and $\\boldsymbol{\\pi}_{\\theta^*}$ increases. As $R_\\theta \\to \\infty$, any small divergence is enough for the uncertain likelihood ratio to go to zero. One could view the idea that traditional social learning actually selects the hypothesis that has the smallest KL divergence with the observations (e.g., see \\cite{NOU2015}) as an admission that the underlying models $\\boldsymbol{\\pi}_\\theta$ are not precise enough to match the ground truth exactly. 
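This trade-off can be checked numerically. The sketch below (with hypothetical evidence vectors) evaluates the closed form (\\ref{eq:ULR}) for $K=2$ and $\\boldsymbol{\\pi}_{\\theta^*}=(0.6,0.4)$: evidence proportional to $\\boldsymbol{\\pi}_{\\theta^*}$ pushes $\\widetilde{\\Lambda}_\\theta$ above one and grows with $R_\\theta$, while mismatched evidence pushes it toward zero:

```python
import math

def log_beta(alphas):
    # K-dimensional Beta function via log-gamma
    return sum(math.lgamma(a) for a in alphas) - math.lgamma(sum(alphas))

def asymptotic_ulr(r, pi_star):
    # closed form: [B(1) / B(r + 1)] * prod_k (pi*_k)^{r_k}
    log_val = (log_beta([1.0] * len(r)) - log_beta([rk + 1 for rk in r])
               + sum(rk * math.log(pk) for rk, pk in zip(r, pi_star)))
    return math.exp(log_val)

pi_star = [0.6, 0.4]
match_10    = asymptotic_ulr([6, 4], pi_star)     # r = R * pi*, R = 10
match_50    = asymptotic_ulr([30, 20], pi_star)   # r = R * pi*, R = 50
mismatch_10 = asymptotic_ulr([2, 8], pi_star)     # r = R * (0.2, 0.8), R = 10
mismatch_50 = asymptotic_ulr([10, 40], pi_star)   # r = R * (0.2, 0.8), R = 50
```

The matching ratios exceed one and increase with $R_\theta$, while the mismatched ratios sit below one and shrink as $R_\theta$ grows, mirroring the curves in Figure \\ref{fig:L_tilde}.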
The uncertain likelihood ratio developed in this section provides a formal method to evaluate the hypotheses based upon that lack of precision.\n\n\\section{Distributed Non-Bayesian Learning with Uncertain Models} \\label{sec:DNBLWUL}\n\nThus far, we have derived the uncertain likelihood ratio for an agent $i$ that has received a set of measurements $\\boldsymbol{\\omega}_{i1:t}$ up to time $t\\ge 1$. However, in non-Bayesian social learning theory, the agent's belief $\\mu_{it}(\\theta)$ is updated using the likelihood of the measurement $\\omega_{it+1}$ given that hypothesis $\\theta$ is the ground truth, not the uncertain likelihood ratio over the entire sequence of measurements. Therefore, in order to incorporate the uncertain likelihood ratio into non-Bayesian social learning, we must derive the uncertain likelihood ratio update function.\n\\begin{lemma} \\label{lem:ULR_Up}\n\tGiven that agent $i$ receives the measurement $\\omega_{it}=k$ at time $t$, then the uncertain likelihood ratio update function $\\ell_{i\\theta}(\\mathbf{n}_{it-1},k|\\mathbf{r}_{i\\theta})$ at time $t$ is defined as {\n\t\t\\begin{eqnarray} \\label{eq:ell_update}\n\t\t\\ell_{i\\theta}(\\mathbf{n}_{it-1},k|\\mathbf{r}_{i\\theta}) &=& \\frac{(r_{ik\\theta} + n_{ikt-1}+1)}{(R_{i\\theta}+t+K-1)} \\frac{(K+t-1)}{(n_{ikt-1} +1)} \\nonumber \\\\ &= & \\frac{\\hat{\\pi}_{r_{ik\\theta}}}{\\hat{\\pi}_{0}},\n\t\t\\end{eqnarray}\n\t\twhere $\\hat{\\pi}_{r_{ik\\theta}}=\\frac{r_{ik\\theta} + n_{ikt-1}+1}{R_{i\\theta}+t+K-1}$ and $\\hat{\\pi}_{0}=\\frac{n_{ikt-1} +1}{K+t-1}$ are estimates of the private signal probabilities with and without the prior evidence incorporated, respectively.\n\t\tThis allows the uncertain likelihood ratio to be expressed in the following recursive form\n\t\t\\begin{eqnarray} \\label{eq:URL_Prod_Up}\n\t\t\\Lambda_{i\\theta}(t) =\\ell_{i\\theta}(\\mathbf{n}_{it-1},k|\\mathbf{r}_{i\\theta})\n\\Lambda_{i\\theta}(t-1). 
\n\t\t\\end{eqnarray} }\n\\end{lemma}\n\n\\begin{proof}\nThe uncertain likelihood ratio update is derived by expressing $\\Lambda_{i\\theta}(t)$ as a telescoping product,\n\t\\begin{eqnarray} \\label{eq:URL_tell}\n\t\\Lambda_{i\\theta}(t) = \\prod_{\\tau=1}^t \\ell_{i\\theta}(\\mathbf{n}_{i\\tau-1},\\omega_{i\\tau}|\\mathbf{r}_{i\\theta}) = \\prod_{\\tau=1}^t \\frac{\\Lambda_{i\\theta}(\\tau)}{\\Lambda_{i\\theta}(\\tau-1)}, \\nonumber \n\t\\end{eqnarray}\n\tsince $\\Lambda_{i\\theta}(0)=1$. Therefore,\n\t\\begin{eqnarray} \\label{eq:ell}\n\t& & \\ell_{i\\theta}(\\mathbf{n}_{i\\tau-1},k|\\mathbf{r}_{i\\theta}) = \\frac{\\Lambda_{i\\theta}(\\tau)}{\\Lambda_{i\\theta}(\\tau-1)} \\nonumber \\\\ &=& \\frac{B(\\mathbf{r}_{i\\theta}+\\mathbf{n}_{i\\tau}+\\mathbf{1}) B(\\mathbf{n}_{i\\tau-1}+\\mathbf{1})}{B(\\mathbf{n}_{i\\tau}+\\mathbf{1})B(\\mathbf{r}_{i\\theta}+\\mathbf{n}_{i\\tau-1}+\\mathbf{1})} \\nonumber \\\\\n\t&=& \\frac{\\Gamma(R_{i\\theta}+K+\\sum_{k=1}^K n_{ik\\tau-1})\\Gamma(K+\\sum_{k=1}^K n_{ik\\tau})}{\\Gamma(R_{i\\theta}+K+\\sum_{k=1}^K n_{ik\\tau})\\Gamma(K+\\sum_{k=1}^K n_{ik\\tau-1})} \\nonumber \\\\\n\t& & \\cdot \\prod_{k=1}^K \\frac{\\Gamma(r_{ik\\theta} + n_{ik\\tau}+1)\\Gamma(n_{ik\\tau-1}+1)}{\\Gamma(r_{ik\\theta} + n_{ik\\tau-1}+1)\\Gamma(n_{ik\\tau}+1)}.\n\t\\end{eqnarray}\n\tThen, if $\\omega_{i\\tau}=k$ is received, $n_{ik\\tau}=n_{ik\\tau-1}+1$ and $n_{i\\bar{k}\\tau}=n_{i\\bar{k}\\tau-1}$ for all $\\bar{k} \\in \\boldsymbol{\\Omega}\\setminus \\{k\\}$. Recall that $\\sum_{k=1}^K n_{ikt} = t$.\nTherefore, because $\\Gamma(x+1)=x\\Gamma(x)$, (\\ref{eq:ell}) simplifies to\n\t\\begin{eqnarray} \\label{eq:ell1}\n\t\\ell_{i\\theta}(\\mathbf{n}_{i\\tau-1},k|\\mathbf{r}_{i\\theta}) &=& \\frac{(r_{ik\\theta} + n_{ik\\tau-1}+1)(K+\\tau-1)}{(R_{i\\theta}+\\tau+K-1)(n_{ik\\tau-1} +1)}. 
\nonumber\n\t\\end{eqnarray}\n\\end{proof}\n\nThe likelihood of the measurement $\\omega_{it+1}$ given that hypothesis $\\theta$ is the ground truth provides the following intuition. The numerator $\\hat{\\pi}_{r_{ik\\theta}}$ represents the estimate of $\\pi_{ik\\theta}$ given the prior evidence $r_{ik\\theta}$ and the accumulated counts $n_{ikt-1}$, while the denominator $\\hat{\\pi}_0$ represents the estimate of $\\pi_{ik\\theta^*}$ given $0$ prior evidence and the accumulated counts $n_{ikt-1}$. The estimate $\\hat{\\pi}_0 \\to \\pi_{ik\\theta^*}$ as $t\\rightarrow \\infty$ a.s. due to the strong law of large numbers, whereas the estimate $\\hat{\\pi}_{r_{ik\\theta}}$ will converge based on the amount of prior evidence. If the prior evidence is finite, $\\hat{\\pi}_{r_{ik\\theta}}\\to \\pi_{ik\\theta^*}$ as $t\\rightarrow \\infty$ a.s., while as $R_{i\\theta}\\to\\infty$, $\\hat{\\pi}_{r_{ik\\theta}}\\to \\pi_{ik\\theta}$ $\\forall t\\ge 0$ a.s. due to the strong law of large numbers. These properties are captured in the following lemmas.\n\n\\begin{lemma} \\label{lem:update_lim}\n The likelihood of the measurement $\\omega_{it+1}=k$ given that hypothesis $\\theta^*$ is the ground truth has the following properties:\n \\begin{enumerate}\n \\item For finite evidence $R_{i\\theta}$, $\\lim_{t\\rightarrow \\infty} \\ell_{i\\theta}(\\mathbf{n}_{it},k|\\mathbf{r}_{i\\theta}) = 1$, $\\forall k\\in\\boldsymbol{\\Omega}$ a.s., and\n \\item For infinite evidence (i.e., $R_{i\\theta} \\to \\infty$), $\\lim_{t\\rightarrow \\infty} \\ell_{i\\theta}(\\mathbf{n}_{it},k|\\mathbf{r}_{i\\theta}) = \\frac{\\pi_{ik\\theta}}{\\pi_{ik\\theta^*}}$, $\\forall k\\in\\boldsymbol{\\Omega}$ a.s.\n \\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nFirst, since each private signal $\\omega_{i\\tau} \\in \\boldsymbol{\\omega}_{i1:t}$ is i.i.d. 
and drawn from the $K$-state multinomial distribution with probabilities $\\boldsymbol{\\pi}_{i\\theta^*}$, the strong law of large numbers leads to $\\frac{n_{ikt}}{t} \\to \\pi_{ik\\theta^*}$ for all $k\\in \\Omega$ a.s. Then, since $\\hat{\\pi}_0=\\frac{\\frac{n_{ikt}}{t}t+1}{t+K-1}$ is continuous at $\\pi_{ik\\theta^*}$, $\\hat{\\pi}_0(\\frac{n_{ikt}}{t})\\to \\pi_{ik\\theta^*}$ with probability $1$ as $t\\to \\infty$. Similarly, when the prior evidence is finite, since $\\hat{\\pi}_{r_{ik\\theta}}=\\frac{r_{ik\\theta}+\\frac{n_{ikt}}{t}t+1}{R_{i\\theta}+t+K-1}$ is continuous at $\\pi_{ik\\theta^*}$, $\\hat{\\pi}_{r_{ik\\theta}}(\\frac{n_{ikt}}{t})\\to \\pi_{ik\\theta^*}$ with probability $1$ as $t\\to\\infty$. Thus, when the prior evidence is finite and $\\pi_{ik\\theta^*}>0$, condition 1 holds. Furthermore, if $\\pi_{ik\\theta^*}=0$, the private signal $\\omega_{it}=k$ will never be received when $\\theta=\\theta^*$. Thus\n $\\hat{\\pi}_{r_{ik\\theta}} = \\hat{\\pi}_0$ as they both go to zero as $t \\to \\infty$ and\n condition 1 still holds.\n\n When the amount of prior evidence for hypothesis $\\theta$ tends to infinity and is drawn from the distribution $\\boldsymbol{\\pi}_{i\\theta}$, the strong law of large numbers leads to $\\frac{r_{ik\\theta}}{R_{i\\theta}}\\to \\pi_{ik\\theta}$ with probability $1$. Then, since $\\hat{\\pi}_{r_{ik\\theta}}=\\frac{\\frac{r_{ik\\theta}}{R_{i\\theta}}R_{i\\theta} +\\frac{n_{ikt}}{t}t+1}{R_{i\\theta}+t+K-1}$ is continuous at $\\frac{r_{ik\\theta}}{R_{i\\theta}}=\\pi_{ik\\theta}$, $\\hat{\\pi}_{r_{ik\\theta}}(\\frac{r_{ik\\theta}}{R_{i\\theta}})\\to \\pi_{ik\\theta}$ with probability 1 as $R_{i\\theta}\\to \\infty$. Then, as $t\\to\\infty$, $\\hat{\\pi}_0(\\frac{n_{ikt}}{t})\\to\\pi_{ik\\theta^*}$ with probability $1$ as stated above. Therefore condition 2 holds. 
When $\\pi_{ik\\theta^*}=0$, the likelihood ratio goes to infinity, but the private signal $\\omega_{it}=k$ will never be received as $\\theta^*$ is the ground truth. \n\\end{proof}\n\nThis immediately results in the following corollary. \n\n\\begin{corollary}\\label{cor:ell_dog}\nWhen the agent is certain, i.e., $R_{i\\theta}\\to\\infty$, and $\\mathbf{r}_{i\\theta}$ is drawn from the distribution $\\boldsymbol{\\pi}_{i\\theta} = \\boldsymbol{\\pi}_{i\\theta^*}$, then the likelihood update of the measurement $\\omega_{it+1}=k$ converges to \n\\begin{eqnarray}\n \\lim_{t\\to\\infty, R_{i\\theta}\\to\\infty} \\ell_{i\\theta}(\\mathbf{n}_{it},\\omega_{it+1}|\\mathbf{r}_{i\\theta}) = 1, \\ \\text{a.s.}\n\\end{eqnarray}\n\\end{corollary}\n\nThe above lemma and corollary show that modeling with uncertainty results in a likelihood function that varies with time. Furthermore, Lemma \\ref{lem:update_lim} condition 2 and Corollary \\ref{cor:ell_dog} show that in the certain case, the numerator of the likelihood function is a constant and is modeled in the same manner as in traditional non-Bayesian social learning theory. Thus, the proposed uncertain likelihood ratio translates to a likelihood function that models uncertain and certain conditions based on the amount of prior evidence. \n\nTherefore, at time $t\\ge1$, agent $i$ will combine its neighbors' beliefs of hypothesis $\\theta$ at time $t$ and update its belief of $\\theta$ using the likelihood update (\\ref{eq:ell_update}) of the private signal at time $t+1$ according to (\\ref{eq:main_algo}). Then, the agent can interpret hypothesis $\\theta$ using the uncertain likelihood ratio test, except $\\Lambda_{i\\theta}(t)$ is now replaced with the agent's belief $\\mu_{it}(\\theta)$.\n\n\\subsection{Asymptotic Behavior on Arbitrary Graphs}\n\nNext, we present the proof of the main result in Theorem~\\ref{thm:ULR_Con}. First, we begin by providing three auxiliary lemmas. 
The first lemma provides a result about the convergence of a product of doubly stochastic matrices provided in \\cite{NOU2017}.\n\n\\begin{lemma}[Lemma 5 in~\\cite{NOU2017}] \\label{lem:doubly}\nFor a stationary doubly stochastic matrix, we have for all $t>0$\n\\begin{eqnarray}\n\\left \\| \\mathbf{A}^t - \\frac{1}{m} \\mathbf{11}' \\right\\| \\le \\sqrt{2}m \\lambda^t\n\\end{eqnarray}\n\twhere $\\|\\cdot\\|$ is the spectral norm, $\\lambda= 1-\\frac{\\eta}{4m^2}$, and $\\eta$ is a positive constant s.t. if $A_{ij}>0$, then $A_{ij}\\ge \\eta$.\n\\end{lemma}\n\nThe above lemma shows that every element of a repeated product of doubly stochastic matrices will converge to $1\/m$. Next, we bound the likelihood update to show that $\\ell_{i\\theta}(\\mathbf{n}_{it-1},\\omega_{it}|\\mathbf{r}_{i\\theta})$ is bounded.\n\\begin{lemma} \\label{lem:el_b}\n\tFor an uncertain likelihood, i.e., $R_{i\\theta}<\\infty$, the likelihood update is bounded as follows.\n\t\\begin{eqnarray} \\label{eq:el_b}\n\t\\frac{1}{R_{i\\theta}+K} \\le \\ell_{i\\theta}(\\mathbf{n}_{it-1},\\omega_{it}|\\mathbf{r}_{i\\theta}) \\le \\max_{k\\in\\Omega}(r_{ik\\theta})+1.\n\t\\end{eqnarray}\n\\end{lemma}\n\n\\begin{proof}\n\tConsider that the agent $i$ has received $n_{ikt-1}\\in[0, t-1]$ private signals for attribute $k$ and $n_{i\\bar{k}t-1}=t-1-n_{ikt-1}$ for the other signals up to time $t-1$, where $\\bar{k} \\in \\boldsymbol{\\Omega}\\setminus \\{k\\}$. Then, if agent $i$ receives $\\omega_{it}=k$ at time $t$, the log of the likelihood update is\n\t\\begin{eqnarray}\\label{eq:lit_n1}\n\t\\log(\\ell_{i\\theta}(\\mathbf{n}_{it-1},k|\\mathbf{r}_{i\\theta})) = \\log\\left((r_{ik\\theta}+n_{ikt-1}+1)(t+K-1)\\right) \\nonumber \\\\\n\t- \\log\\left((R_{i\\theta}+t+K-1)(n_{ikt-1}+1)\\right). 
\\nonumber\n\t\\end{eqnarray}\n\tThe partial derivative of the update with respect to $n_{ikt-1}$ is\n\\begin{eqnarray}\n\\frac{\\partial \\log(\\ell_{i\\theta}(\\mathbf{n}_{it-1},k|\\mathbf{r}_{i\\theta}))}{\\partial n_{ikt-1}} = \\frac{1}{(r_{ik\\theta}+n_{ikt-1}+1)} - \\frac{1}{(n_{ikt-1}+1)} \\nonumber \\\\\n=\\frac{-r_{ik\\theta}}{(r_{ik\\theta}+n_{ikt-1}+1)(n_{ikt-1}+1)} < 0. \\nonumber\n\\end{eqnarray}\n\t\n\tTherefore, since the function $\\log(\\ell_{i\\theta}(\\mathbf{n}_{it-1},k|\\mathbf{r}_{i\\theta}))$ is monotonically decreasing with respect to $n_{ikt-1}$, the maximum and minimum occur at $n_{ikt-1}=0$ and $n_{ikt-1}=t-1$, respectively. To maximize the update, setting $n_{ikt-1}=0$ leads to\n\\begin{eqnarray}\n\t\\log(\\ell_{i\\theta}(\\mathbf{n}_{it-1},k|\\mathbf{r}_{i\\theta})) = \\log\\left((r_{ik\\theta}+1)(t+K-1)\\right) \\nonumber \\\\ \n\t- \\log\\left((R_{i\\theta}+t+K-1)\\right) \\nonumber\n\\end{eqnarray}\nso that the derivative of the log-update with respect to $t$ is\n\\begin{equation*}\n\\frac{d \\log(\\ell_{i\\theta}(\\mathbf{n}_{it-1},k|\\mathbf{r}_{i\\theta}))}{d t} = \\frac{R_{i\\theta}}{(R_{i\\theta}+t+K-1)(t+K-1)} > 0.\n\\end{equation*}\nSo the update is maximized by letting $t \\to \\infty$ so that $\\log(\\ell_{i\\theta}(\\mathbf{n}_{it-1},k|\\mathbf{r}_{i\\theta})) \\le \\log(r_{ik\\theta}+1) \\le \\log\\left(\\max_{k\\in\\Omega}(r_{ik\\theta})+1\\right)$.\nNow to minimize the update, setting $n_{ikt-1}=t-1$ leads to\n\\begin{eqnarray}\n\t\\log(\\ell_{i\\theta}(\\mathbf{n}_{it-1},k|\\mathbf{r}_{i\\theta})) = \\log\\left((r_{ik\\theta}+t)(t+K-1)\\right) \\nonumber \\\\ \n\t- \\log\\left((R_{i\\theta}+t+K-1)t\\right). \\nonumber\n\\end{eqnarray}\nNow $\\log(t+K-1)-\\log(t) \\ge 0$ and $\\log(r_{ik\\theta}+t)-\\log(R_{i\\theta}+t+K-1)$ is minimized over $t \\ge 1$ at $t=1$ so that $\\log(r_{ik\\theta}+t)-\\log(R_{i\\theta}+t+K-1) \\ge \\log(r_{ik\\theta}+1)-\\log(R_{i\\theta}+K) \\ge -\\log(R_{i\\theta}+K). 
Thus, $\\log(\\ell_{i\\theta}(\\mathbf{n}_{it-1},k|\\mathbf{r}_{i\\theta})) \\ge -\\log(R_{i\\theta}+K)$ for all $k \\in \\Omega$.\n\\end{proof}\nFinally, we recall Lemma 3.1 from \\cite{ram10}, which provides a convergence property of scalar sequences. \n\\begin{lemma}[Lemma $3.1$ in \\cite{ram10}]\\label{lemm:ram}\n\t\tLet $\\{\\gamma_k \\}$ be a scalar sequence. If $\\lim_{k \\to \\infty} \\gamma_k = \\gamma$ and $0\\leq \\beta \\leq 1$, then $\\lim_{k\\to \\infty} \\sum_{l=0}^{k} \\beta^{k{-}l} \\gamma_l = \\frac{\\gamma}{1{-}\\beta}$.\n\\end{lemma}\n\n\\begin{proof}[Proof of Theorem \\ref{thm:ULR_Con}]\nWith the above lemmas stated, we can now prove Theorem \\ref{thm:ULR_Con}. First, we prove that the beliefs converge to the $m$th root of the product of uncertain likelihood ratios, i.e.,\n\\begin{eqnarray} \\label{eq:con_diff}\n\\lim_{t\\rightarrow \\infty} \\ \\ \\left\\| \\log(\\boldsymbol{\\mu}_{t}(\\theta)) - \\log\\left( (\\prod_{i=1}^m \\Lambda_{i\\theta}(t))^{\\frac{1}{m}}\\right) \\mathbf{1} \\right\\| = 0,\n\\end{eqnarray}\nwhere for vectors $\\|\\cdot\\|$ is the standard 2-norm. 
Thus, since\n\\begin{equation}\n\\begin{split}\n&\\log(\\boldsymbol{\\mu}_{t}(\\theta)) = \\sum_{\\tau = 1}^{t} \\mathbf{A}^{t-\\tau} \\log\\left(\\boldsymbol{\\ell}_\\theta(\\boldsymbol{\\omega}_\\tau)\\right), \\ \\text{and} \\label{eq:log_belief} \\\\\n&\\log\\left((\\prod_{i=1}^m \\Lambda_{i\\theta}(t))^\\frac{1}{m} \\right)\\mathbf{1} = \\frac{1}{m}\\mathbf{11'} \\sum_{\\tau=1}^t \\log(\\boldsymbol{\\ell}_\\theta(\\boldsymbol{\\omega}_\\tau)),\n\\end{split}\n\\end{equation}\nwhere with a slight abuse of notation, $\\boldsymbol{\\ell}_\\theta(\\boldsymbol{\\omega}_{\\tau}) = [\\ell_{1\\theta}(\\mathbf{n}_{1\\tau-1},\\omega_{1\\tau}|\\mathbf{r}_{1\\theta}),...,\\ell_{m\\theta}(\\mathbf{n}_{m\\tau-1},\\omega_{m\\tau}|\\mathbf{r}_{m\\theta})]'$, (\\ref{eq:con_diff}) can be rewritten as\n\n\\begin{eqnarray} \\label{eq:norm_LL}\n\\left\\|\\log(\\boldsymbol{\\mu}_{t}(\\theta))-\\log\\left((\\prod_{i=1}^m \\Lambda_{i\\theta}(t))^\\frac{1}{m} \\right)\\mathbf{1} \\right\\| \\le \\nonumber \\\\ \\sum_{\\tau=1}^{t} \\left\\|\\mathbf{A}^{t-\\tau}-\\frac{1}{m}\\mathbf{11}' \\right\\|\\left\\|\\log\\left(\\boldsymbol{\\ell}_\\theta(\\boldsymbol{\\omega}_\\tau)\\right)\\right\\| \\le \\nonumber \\\\\n\\sqrt{2}m\\left(\\sum_{\\tau=0}^{t} \\lambda^{t-\\tau} \\left\\|\\log\\left(\\boldsymbol{\\ell}_\\theta(\\boldsymbol{\\omega}_\\tau)\\right)\\right\\| - \\lambda^t \\left\\|\\log\\left(\\boldsymbol{\\ell}_\\theta(\\boldsymbol{\\omega}_0)\\right)\\right\\| \\right),\n\\end{eqnarray}\nwhere (\\ref{eq:norm_LL}) follows from Lemma~\\ref{lem:doubly}. Furthermore, since $\\lim_{\\tau\\to\\infty} \\left\\|\\log\\left(\\boldsymbol{\\ell}_\\theta(\\boldsymbol{\\omega}_\\tau)\\right)\\right\\|=0$ a.s. from Lemma~\\ref{lem:update_lim}, then \n\n\\begin{eqnarray}\n\\lim_{t\\to\\infty} \\sum_{\\tau=0}^{t} \\lambda^{t-\\tau} \\left\\|\\log\\left(\\boldsymbol{\\ell}_\\theta(\\boldsymbol{\\omega}_\\tau)\\right)\\right\\| = 0 \\nonumber\n\\end{eqnarray}\na.s. from Lemma~\\ref{lemm:ram}. 
Finally, since $\\lambda<1$ and $\\left\\|\\log\\left(\\boldsymbol{\\ell}_\\theta(\\boldsymbol{\\omega}_0)\\right)\\right\\|$ is bounded according to Lemma~\\ref{lem:el_b},\n\\begin{eqnarray}\n \\lim_{t\\to\\infty} \\lambda^t \\left\\|\\log\\left(\\boldsymbol{\\ell}_\\theta(\\boldsymbol{\\omega}_0)\\right)\\right\\| = 0 \\ \\ \\ \\text{a.s.} \\nonumber\n\\end{eqnarray}\nThen, by the continuity of the exponential function, this implies that $\\lim_{t\\to\\infty} \\boldsymbol{\\mu}_t(\\theta)\/ \\left( \\prod_{j=1}^m \\Lambda_{j\\theta}(t)\\right)^{1\/m} = 1$ a.s. and the desired result is achieved. \n\\end{proof}\n\n\n\n\\subsection{Learning with Certain Likelihoods} \\label{sec:Con_Dog}\n\nNext, we present the results for when the agents are certain, i.e., $R_{i\\theta}\\to\\infty$. First, we will consider the scenario when hypothesis $\\theta$ is the ground truth for all agents, i.e., $\\boldsymbol{\\pi}_{i\\theta}=\\boldsymbol{\\pi}_{i\\theta^*}$ $\\forall i\\in \\mathbf{M}$. Then, we will present the condition when hypothesis $\\theta$ is not the ground truth for at least one agent $i$, i.e., $\\boldsymbol{\\pi}_{i\\theta}\\ne\\boldsymbol{\\pi}_{i\\theta^*}$.\n\n\\begin{corollary} \\label{lem:LL_Dog_Inf}\nLet Assumptions \\ref{assum:graph} and \\ref{assum:inital_beliefs} hold and $\\boldsymbol{\\pi}_{i\\theta}=\\boldsymbol{\\pi}_{i\\theta^*}$ $\\forall i\\in \\mathbf{M}$. Then, the beliefs generated using the update rule (\\ref{eq:main_algo}) with infinite evidence diverge to the following.\n\\begin{eqnarray}\n\\lim_{t\\rightarrow \\infty} \\mu_{it}(\\theta) = \\infty, \\ \\text{a.s.}\n\\end{eqnarray}\n\\end{corollary}\n\\begin{proof}\nBy Corollary~\\ref{cor:ell_dog}, $\\lim_{t\\to\\infty} \\ell_{i\\theta}(\\mathbf{n}_{it-1},\\omega_{it}|\\mathbf{r}_{i\\theta}) = 1$ a.s. As a result, the proof of Theorem~\\ref{thm:ULR_Con} still applies and $\\mu_{it}(\\theta) = (\\prod_{i=1}^m \\Lambda_{i\\theta}(t))^\\frac{1}{m}$ as $t \\to \\infty$ with probability 1. 
Now by Lemma~\\ref{lem:dog_ULR}, $\\Lambda_{i\\theta}(t) = \\infty$ for each $i$ as $t \\to \\infty$ and $R_{i\\theta}\\to\\infty$. Thus, the geometric mean also diverges to $\\infty$ a.s.\n\\end{proof}\n\n\\begin{lemma} \\label{lem:LL_dog}\n Let Assumptions \\ref{assum:graph} and \\ref{assum:inital_beliefs} hold, and suppose at least one agent $i\\in\\mathbf{M}$ has a set of probabilities s.t. $\\boldsymbol{\\pi}_{i\\theta}\\ne\\boldsymbol{\\pi}_{i\\theta^*}$. Then, the beliefs generated by the update rule (\\ref{eq:main_algo}) allow for learning, i.e., they converge in probability to \n \\begin{eqnarray}\n \\mu_{it}(\\theta) \\overset{P}{\\to} 0. \n \\end{eqnarray}\n\\end{lemma}\n\nBefore proving Lemma~\\ref{lem:LL_dog}, we must first present the following lemma, which provides an upper bound on the certain likelihood update.\n\n\\begin{lemma} \\label{lem:ell_dog_bound}\nFor a finite time $t$, the certain likelihood update is bounded above by\n\\begin{eqnarray} \\label{eq:ell_dog_bound}\n\\ell_{i\\theta}(\\mathbf{n}_{it-1},\\omega_{it}|\\mathbf{r}_{i\\theta}) \\le (t+K-1) < \\infty.\n\\end{eqnarray}\n\\end{lemma}\n\\begin{proof}\nFirst, by inspection of (\\ref{eq:ell_update}) for the certain condition, where $\\ell_{i\\theta}(\\mathbf{n}_{it-1},\\omega_{it}|\\mathbf{r}_{i\\theta}) = \\pi_{ik\\theta} \\frac{t+K-1}{n_{ikt-1}+1}$, it is clear that the maximum occurs when an attribute $k\\in\\Omega$ has not been received up to time $t-1$. In other words, the term $\\frac{t+K-1}{n_{ikt-1}+1}$ is maximized when $n_{ikt-1}=0$, resulting in the likelihood update being bounded by $(t+K-1)$ because $\\pi_{ik\\theta}\\le 1$. 
Hence, the update is finite for any finite value of $t$.\n\\end{proof}\n\nNow that the likelihood update is shown to be bounded by a finite value for finite $t$, we can prove Lemma~\\ref{lem:LL_dog}.\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:LL_dog}]\nStarting with (\\ref{eq:log_belief}), the log-beliefs $\\log(\\boldsymbol{\\mu}_{t}(\\theta))$ can be written as\n\\begin{eqnarray} \\label{eq:dog-belief}\n\\log(\\boldsymbol{\\mu}_{t}(\\theta)) &=&\n\\sum_{\\tau = 1}^{T} \\mathbf{A}^{t-\\tau} \\log\\left(\\boldsymbol{\\ell}_\\theta(\\boldsymbol{\\omega}_\\tau)\\right) \\nonumber \\\\\n& &+ \\sum_{\\tau = T+1}^{t} \\left (\\mathbf{A}^{t-\\tau}-\\frac{1}{m}\\mathbf{11}'\\right) \\log\\left(\\boldsymbol{\\ell}_\\theta(\\boldsymbol{\\omega}_\\tau)\\right) \\nonumber \\\\ & & + \\frac{1}{m}\\sum_{\\tau = T+1}^{t}\\mathbf{11}'\\log\\left(\\boldsymbol{\\ell}_\\theta(\\boldsymbol{\\omega}_\\tau)\\right).\n\\end{eqnarray}\nNow because $\\mathbf{A}$ is doubly stochastic, $\\|\\mathbf{A}\\|=1$ and the norm of the first term on the right hand side of (\\ref{eq:dog-belief}) is bounded by\n\\begin{eqnarray*}\n\\left\\|\\sum_{\\tau = 1}^{T} \\mathbf{A}^{t-\\tau} \\log\\left(\\boldsymbol{\\ell}_\\theta(\\boldsymbol{\\omega}_\\tau)\\right) \\right\\| &\\leq& \\sum_{\\tau = 1}^{T} \\left \\| \\log\\left(\\boldsymbol{\\ell}_\\theta(\\boldsymbol{\\omega}_\\tau)\\right) \\right\\|\\\\\n& \\leq &\\sum_{\\tau = 1}^{T} \\sqrt{m}(\\tau+K-1) \\nonumber \\\\ &=& \\sqrt{m}T\\left(\\frac{1}{2}(T+1)+(K-1)\\right),\n\\end{eqnarray*}\nwhere the second line follows from the upper bound on the update given in Lemma~\\ref{lem:ell_dog_bound}. 
As long as $T$ is finite, this first term is finite.\n\n\nBy Lemma~\\ref{lem:update_lim}, $\\log\\left(\\ell_{i\\theta}(\\mathbf{n}_{i\\tau-1},\\omega_{\\tau}|\\mathbf{r}_{i\\theta})\\right) \\to \\log\\left(\\frac{\\pi_{ik\\theta}}{\\pi_{ik\\theta^*}}\\right)$ a.s., and so for any $\\epsilon>0$ and $\\delta>0$ there exists a finite value $T$ such that $|\\log\\left(\\ell_{i\\theta}(\\mathbf{n}_{i\\tau-1},\\omega_{\\tau}|\\mathbf{r}_{i\\theta})\\right)-\\log\\left(\\frac{\\pi_{ik\\theta}}{\\pi_{ik\\theta^*}}\\right)|<\\epsilon$ with probability greater than $1-\\delta$. Thus, the second term on the right hand side of (\\ref{eq:dog-belief}) with probability greater than $1-\\delta$ is bounded by\n\\begin{eqnarray*}\n& &\\left\\| \\sum_{\\tau = T+1}^{t} \\left (\\mathbf{A}^{t-\\tau}-\\frac{1}{m}\\mathbf{11}'\\right) \\log\\left(\\boldsymbol{\\ell}_\\theta(\\boldsymbol{\\omega}_\\tau)\\right) \\right\\| \\nonumber \\\\ &\\leq& \\sum_{\\tau = T+1}^{t} \\left\\|\\mathbf{A}^{t-\\tau}-\\frac{1}{m}\\mathbf{11}' \\right\\| \\left\\| \\log\\left(\\boldsymbol{\\ell}_\\theta(\\boldsymbol{\\omega}_\\tau)\\right)\\right\\|\\\\\n&\\leq &\\sqrt{2}m \\left( \\sum_{\\tau=T+1}^t \\lambda^{t-\\tau} \\right) (L+\\epsilon) \\leq \\frac{\\sqrt{2}m}{(1-\\lambda)} (L+\\epsilon),\n\\end{eqnarray*}\nwhere $L = \\max_{i,k\\in\\Omega_i^*} \\left|\\log\\left(\\frac{\\pi_{ik\\theta}}{\\pi_{ik\\theta^*}}\\right) \\right|$ is the largest converged value that is realizable, i.e., $\\Omega_i^*$ is the set of all $k$ values such that $\\pi_{ik\\theta^*} >0$. 
Because $L$ is finite, the second term in (\\ref{eq:dog-belief}) is also finite.\n\nFinally, each element for the third term on the right hand side of (\\ref{eq:dog-belief}) can be reexpressed as\n\\begin{eqnarray}\n& & \\frac{1}{m} \\sum_{\\tau = T+1}^t \\sum_{i=1}^m \\log\\left(\\ell_{i\\theta}(\\mathbf{n}_{i\\tau-1},\\omega_{i\\tau}|\\mathbf{r}_{i\\theta})\\right) \\nonumber \\\\ &=& \\frac{1}{m} \\sum_{\\tau=T+1}^t \\sum_{i=1}^m \\left( \\log \\left(\\frac{\\pi_{i\\omega_{i\\tau}\\theta}}{\\pi_{i\\omega_{i\\tau}\\theta^*}}\\right) + e_{i\\tau}\\right), \\nonumber\\\\\n&\\leq& (t-T) \\left( \\frac{1}{(t-T)}\\left( \\sum_{\\tau=T+1}^t \\frac{1}{m} \\sum_{i=1}^m \\log \\left(\\frac{\\pi_{i\\omega_{i\\tau}\\theta}}{\\pi_{i\\omega_{i\\tau}\\theta^*}}\\right)\\right) + \\epsilon \\right), \\nonumber \\\\\n\\label{eq:fullyconnectedlogupdatebound}\n\\end{eqnarray}\nwhere $e_{i\\tau} = \\log\\left(\\ell_{i\\theta}(\\mathbf{n}_{i\\tau-1},\\omega_{i\\tau}|\\mathbf{r}_{i\\theta})\\right)-\\log \\left(\\frac{\\pi_{i\\omega_{i\\tau}\\theta}}{\\pi_{i\\omega_{i\\tau}\\theta^*}}\\right)$ is the error and $|e_{i\\tau}| \\le \\epsilon$, which leads to the second line. 
Due to the strong law of large numbers, the bound for the third term converges with probability one to\n\\begin{equation}\n(t-T) \\left(-\\frac{1}{m} \\sum_{i=1}^m D_{KL}(\\boldsymbol{\\pi}_{i\\theta^*}||\\boldsymbol{\\pi}_{i\\theta})+\\epsilon\\right).\n\\end{equation}\nIn other words, for $t$ sufficiently large, with probability $1-\\delta$,\n\\begin{eqnarray}\n\\log(\\mu_{it}(\\theta)) &\\leq& \\sqrt{m}T\\left(\\frac{1}{2}(T+1)+(K-1)\\right)+\\frac{\\sqrt{2}m}{(1-\\lambda)}(L+\\epsilon)\\nonumber \\\\ & & +(t-T)\\left(-\\frac{1}{m}\\sum_{i=1}^m D_{KL}(\\boldsymbol{\\pi}_{i\\theta^*}||\\boldsymbol{\\pi}_{i\\theta}) + 2\\epsilon\\right). \\nonumber \n\\end{eqnarray}\nSince $\\frac{1}{m} \\sum_{i=1}^m D_{KL}(\\boldsymbol{\\pi}_{i\\theta^*}||\\boldsymbol{\\pi}_{i\\theta})>0$ as $\\boldsymbol{\\pi}_{\\theta^*}\\ne \\boldsymbol{\\pi}_\\theta$, and $\\epsilon$ can be made smaller by making $T$ larger, the upper bound diverges to $-\\infty$ as $t$ increases. Thus, the log-belief diverges to $-\\infty$ as $t \\to \\infty$. Because the exponential is continuous, the beliefs converge in probability to zero. \n\\end{proof}\n\nCorollary~\\ref{lem:LL_Dog_Inf} and Lemma~\\ref{lem:LL_dog} show that in order for the agents to learn the ground truth precisely, all of the agents must have certain probability distributions that match the ground truth exactly. If instead a single agent disagrees, the beliefs converge to $0$. Therefore, this result is consistent with the traditional non-Bayesian social learning literature, except that the belief in the hypothesis that matches the ground truth diverges to infinity instead of converging to $1$. 
Thus, the design of the uncertain likelihood ratio still preserves the consensus result while allowing the agents to consider uncertain scenarios.\n\nAfter expanding the beta functions and applying Stirling's approximation, it can be shown that the certain likelihood ratio for large $t$ behaves as \n\\begin{eqnarray} \\label{eq:th_lambda}\n\\Lambda_{i\\theta}(t) = C t^\\alpha e^{-tD_{KL}(\\boldsymbol{\\pi}_{i\\theta^*}||\\boldsymbol{\\pi}_{i\\theta})},\n\\end{eqnarray}\nwhere $C$ and $\\alpha$ are constants. Note that the centralized uncertain likelihood ratio is the product of the individual uncertain likelihood ratios. Without any divergence between $\\boldsymbol{\\pi}_{i\\theta}$ and $\\boldsymbol{\\pi}_{i\\theta^*}$ for all agents, the uncertain likelihood ratio goes to infinity sub-exponentially as $t^\\alpha$. It only takes a divergence between $\\boldsymbol{\\pi}_{i\\theta}$ and $\\boldsymbol{\\pi}_{i\\theta^*}$ at a single agent to drive the centralized uncertain likelihood ratio to zero, as the decay to zero is exponential. Essentially, a hypothesis $\\theta$ that is consistent with the observations can never be declared as the absolute ground truth, since any new certain agent whose model for that hypothesis is inconsistent with their observations would drive the uncertain likelihood ratio to zero. Rather, one can only state that the hypothesis is consistent with the ground truth, as no counterexample has been observed. On the other hand, once a counterexample is found by any agent, one can state unequivocally that the hypothesis is not the ground truth. 
No finite number of agents such that $\\boldsymbol{\\pi}_{i\\theta} = \\boldsymbol{\\pi}_{i\\theta^*}$ can drive the belief to be non-zero.\n\nFor the more general uncertain case, the updates $\\ell_{i\\theta}(\\omega_{it})$ as given by (\\ref{eq:ell_update}) begin as ratios of the expected value of $\\boldsymbol{\\pi}_{i\\theta}$ based upon the prior evidence $\\mathbf{r}_{i\\theta}$ over that based upon the observations $\\mathbf{n}_{it}$. As time evolves, the numerator of the ratio transitions from an estimate of $\\boldsymbol{\\pi}_{i\\theta}$ to one of $\\boldsymbol{\\pi}_{i\\theta^*}$. On the other hand, the denominator tends to an estimate of $\\boldsymbol{\\pi}_{i\\theta^*}$. The larger the amount of prior evidence $R_{i\\theta}$, the longer it takes for the transition to occur. Before the transition, the uncertain likelihood ratio behaves like the certain case. After the transition, the updates converge to one, which causes the uncertain likelihood ratio to level out. If $\\theta \\ne \\theta^*$, whether the uncertain likelihood ratio converges to a value larger or smaller than one depends on whether the divergence between the $\\pi$'s is able to overwhelm the $t^\\alpha$ growth before the updates become close to one. This in turn depends on the amount of prior evidence. Less prior evidence means that $\\theta$ may not be distinguished from $\\theta^*$ given the precision of the evidence. The simulations in Section~\\ref{sec:SIM} will demonstrate these properties.\n\n\n\\section{The Effects of DeGroot Aggregation for Uncertain Models} \\label{sec:DG_Update}\nNext, we will consider a DeGroot-style update rule and examine its effect on the beliefs under uncertain likelihood models. The DeGroot-style update rule consists of taking the weighted arithmetic average of the agents' prior beliefs instead of the geometric average. 
Thus, the DeGroot-style update rule with uncertain likelihood models is defined as\n\\begin{eqnarray} \\label{eq:DG_algo}\n \\mu_{it+1}(\\theta) = \\ell_{i\\theta}(\\mathbf{n}_{it}, \\omega_{it+1}|\\mathbf{r}_{i\\theta})\\sum_{j\\in\\mathbf{M}^i} [\\mathbf{A}]_{ij} \\mu_{jt}(\\theta).\n\\end{eqnarray}\n\nFirst, let us consider the asymptotic properties of the beliefs generated using the update rule (\\ref{eq:DG_algo}) with a finite amount of prior evidence.\n\\begin{lemma} \\label{thm:DG_finite}\nLet Assumptions \\ref{assum:graph}, \\ref{assum:uncertain_distributioin}, and \\ref{assum:inital_beliefs} hold. Then, the beliefs generated using the update rule (\\ref{eq:DG_algo}) have the following property with probability 1:\n\\begin{eqnarray} \\label{eq:CES_LL}\n\\lim_{t\\to \\infty} \\mu_{it}(\\theta)\\ge \\left( \\prod_{i=1}^m \\tilde{\\Lambda}_{i\\theta} \\right)^{\\frac{1}{m}}.\n\\end{eqnarray}\n\\end{lemma}\n\n\\begin{proof}\n To prove this, we will first compare the beliefs generated from the update rule (\\ref{eq:DG_algo}), denoted $\\boldsymbol{\\mu}_t^{[DG]}(\\theta)$, with the beliefs generated from the update rule (\\ref{eq:main_algo}), denoted $\\boldsymbol{\\mu}_t^{[LL]}(\\theta)$. Then, by induction, we have the following. At $t=0$, the agents' beliefs are initialized to the same value, $\\boldsymbol{\\mu}_0^{[DG]}(\\theta)=\\boldsymbol{\\mu}_0^{[LL]}(\\theta)=\\mathbf{1}$, and $\\boldsymbol{\\mu}_0^{[DG]}(\\theta)\\ge\\boldsymbol{\\mu}_0^{[LL]}(\\theta)$ is trivially true. 
Given that $\\boldsymbol{\\mu}_{t-1}^{[DG]}(\\theta)\\ge\\boldsymbol{\\mu}_{t-1}^{[LL]}(\\theta)$ is true for time $t-1$, the logs of the beliefs from the DeGroot and LL rules at time $t$ respectively become\n \\begin{eqnarray}\n \\log(\\mu_{it}^{[DG]}(\\theta)) &=& \\log(\\ell_{i\\theta}(\\mathbf{n}_{it}, \\omega_{it+1}|\\mathbf{r}_{i\\theta}) ) \\nonumber \\\\ & & + \\log\\left(\\sum_{j=1}^m [\\mathbf{A}]_{ij} \\mu_{jt-1}^{[DG]}(\\theta)\\right), \\nonumber\\\\\n \\log(\\mu_{it}^{[LL]}(\\theta)) &=& \\log(\\ell_{i\\theta}(\\mathbf{n}_{it}, \\omega_{it+1}|\\mathbf{r}_{i\\theta}) ) \\nonumber \\\\ & &+ \\sum_{j=1}^m [\\mathbf{A}]_{ij} \\log\\left(\\mu_{jt-1}^{[LL]}(\\theta)\\right). \\nonumber\n \\end{eqnarray}\n Using Jensen's inequality, $\\log(\\sum_{j=1}^m [\\mathbf{A}]_{ij}\\mu_{jt-1}^{[DG]}(\\theta)) \\ge \\sum_{j=1}^m [\\mathbf{A}]_{ij} \\log(\\mu_{jt-1}^{[DG]}(\\theta))$ since the logarithm is a concave function. Since $\\sum_{j=1}^m [\\mathbf{A}]_{ij}\\log(\\mu_{jt-1}^{[DG]}(\\theta)) \\ge \\sum_{j=1}^m [\\mathbf{A}]_{ij} \\log(\\mu_{jt-1}^{[LL]}(\\theta))$, it follows that $\\boldsymbol{\\mu}_{t}^{[DG]}(\\theta)\\ge\\boldsymbol{\\mu}_{t}^{[LL]}(\\theta)$. By induction, $\\boldsymbol{\\mu}_{t}^{[DG]}(\\theta)\\ge\\boldsymbol{\\mu}_{t}^{[LL]}(\\theta)$ is true $\\forall t\\ge 0$, and asymptotically we can say that\n \\begin{eqnarray}\n \\lim_{t\\rightarrow \\infty} \\mu_{it}^{[DG]}(\\theta) \\ge \\lim_{t\\rightarrow \\infty} \\mu_{it}^{[LL]}(\\theta) = \\left( \\prod_{i=1}^m \\tilde{\\Lambda}_{i\\theta} \\right)^{\\frac{1}{m}} \\nonumber\n \\end{eqnarray}\n with probability 1.\n\\end{proof}\n\nLemma \\ref{thm:DG_finite} shows that the beliefs generated from the DeGroot-style update rule will always be greater than or equal to the $m$th root of the centralized uncertain likelihood ratio. This means that the interpretations of the beliefs under the update rule (\\ref{eq:main_algo}) and the DeGroot rule (\\ref{eq:DG_algo}) are not the same. 
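As a numerical sanity check of the lemma above, the following sketch simulates both aggregation rules on a small network and verifies that the DeGroot-style beliefs dominate the log-linear beliefs at every agent. It is illustrative only and not part of the formal development: the network size, signal alphabet, ground-truth distribution, and prior evidence counts are all assumed values, and the uncertain likelihood update is computed from the closed form used earlier in this section.

```python
import numpy as np

rng = np.random.default_rng(0)

m, K = 3, 2                       # agents and signal attributes (assumed sizes)
A = np.full((m, m), 1.0 / m)      # doubly stochastic weights (fully connected)
pi_true = np.array([0.7, 0.3])    # ground-truth signal distribution (assumed)
r = np.tile([7.0, 3.0], (m, 1))   # prior evidence counts r_{ik} (assumed)
R = r.sum(axis=1)                 # total prior evidence R_i per agent

mu_dg = np.ones(m)                # DeGroot-style beliefs, initialized to 1
mu_ll = np.ones(m)                # log-linear beliefs, initialized to 1
n = np.zeros((m, K))              # private-signal counts n_{ik,t-1}

for t in range(1, 201):
    w = rng.choice(K, size=m, p=pi_true)          # private signals omega_{it}
    nk = n[np.arange(m), w]
    # uncertain update: (r_k + n_k + 1)(t + K - 1) / ((R + t + K - 1)(n_k + 1))
    ell = (r[np.arange(m), w] + nk + 1) * (t + K - 1) \
        / ((R + t + K - 1) * (nk + 1))
    mu_dg = ell * (A @ mu_dg)                     # weighted arithmetic aggregation
    mu_ll = ell * np.exp(A @ np.log(mu_ll))       # weighted geometric aggregation
    n[np.arange(m), w] += 1

# arithmetic mean >= geometric mean, so DeGroot beliefs dominate on every path
assert np.all(mu_dg >= mu_ll - 1e-12)
```

Since both belief vectors start at one and the weighted arithmetic average of positive beliefs is never smaller than the weighted geometric average, the inequality propagates by induction exactly as in the proof, so the final assertion holds on every sample path.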
Nevertheless, the simulations in Section~\\ref{sec:SIM} demonstrate that the DeGroot rule reaches a consensus but is non-commutative, in the sense that the order in which the private signals are received affects where the beliefs converge. Thus, a further understanding of the beliefs' point of convergence is necessary to identify thresholds that allow for the use of the uncertain likelihood ratio test. This will be studied in future work.\n\nThe certain likelihood conditions presented next indicate that the DeGroot rule still enables learning. Additionally, we derive the beliefs' asymptotic convergence rate for a fully connected network and show that learning with the update rule (\\ref{eq:DG_algo}) is slower than learning with (\\ref{eq:main_algo}). First, noting the result of the uncertain DeGroot-style update rule, we can conclude the following corollary.\n\n\\begin{corollary}\\label{cor:DG_dog}\n Let Assumptions \\ref{assum:graph} and \\ref{assum:inital_beliefs} hold and $\\boldsymbol{\\pi}_{i\\theta}=\\boldsymbol{\\pi}_{i\\theta^*}$ $\\forall i\\in \\mathbf{M}$. Then, the beliefs generated using the update rule (\\ref{eq:DG_algo}) and infinite evidence diverge to the following.\n\\begin{eqnarray}\n\\lim_{t\\rightarrow \\infty} \\mu_{it}(\\theta)= \\infty, \\ \\text{a.s.}\n\\end{eqnarray}\n\\end{corollary}\n\\begin{proof}\n This can be directly seen from Lemma \\ref{thm:DG_finite} and Corollary \\ref{lem:LL_Dog_Inf}.\n\\end{proof}\n\nNext, we will derive the point of convergence when at least one agent $i$ has a certain set of probabilities s.t. $\\boldsymbol{\\pi}_{i\\theta}\\ne \\boldsymbol{\\pi}_{i\\theta^*}$. First, we provide the following lemma that describes the properties of the beliefs updated using the DeGroot-style learning rule for a fully connected network.\n\\begin{lemma} \\label{lem:DG_rate}\nLet Assumption \\ref{assum:inital_beliefs} hold, the network graph be fully connected, i.e., $\\mathbf{A}=\\frac{1}{m}\\mathbf{11}'$, and there exists a $\\theta$ s.t. 
$\\boldsymbol{\\pi}_{i\\theta}\\ne \\boldsymbol{\\pi}_{i\\theta^*}$ for at least one agent~$i$. Then, the beliefs generated by the update rule (\\ref{eq:DG_algo}) with infinite evidence asymptotically converge to zero at a geometric rate determined by the Centralized Average (CA) divergence, i.e., for all $i \\in \\mathbf{M}$\n\\begin{eqnarray}\n\\lim_{t \\to \\infty} \\frac{1}{t} \\log\\left(\\mu_{it}(\\theta)\\right) = -D_{CA}(\\boldsymbol{\\Pi}_{\\theta^*}||\\boldsymbol{\\Pi}_{\\theta}),\n\t\\end{eqnarray}\n\twhere \n\t\\begin{eqnarray}\n\tD_{CA}(\\boldsymbol{\\Pi}_{\\theta^*}||\\boldsymbol{\\Pi}_{\\theta}) = -\\sum_{k_1=1}^K \\cdots \\sum_{k_m=1}^K \\pi_{1k_1 \\theta^*}\\cdots \\pi_{mk_m \\theta^*} \\nonumber \\\\ \\cdot \\log\\left(\\frac{1}{m}\\left( \\frac{\\pi_{1k_1\\theta}}{\\pi_{1k_1\\theta^*}}+\\cdots + \\frac{\\pi_{mk_m\\theta}}{\\pi_{mk_m\\theta^*}}\\right)\\right) \n\t\\end{eqnarray}\n\tand $\\boldsymbol{\\Pi}_{\\theta}=\\{\\boldsymbol{\\pi}_{i\\theta}\\}_{\\forall i\\in \\mathbf{M}}$ is the set of probabilities of all agents.\n\\end{lemma}\n\n\\begin{proof}\n First, from Lemma \\ref{lem:update_lim} condition 2, the likelihood updates converge to the ratio of the probabilities for $\\theta$ and $\\theta^*$. Since the logarithm and average operations are continuous, we know that for any $\\epsilon>0$ and $\\delta>0$, there exists a finite $T$ s.t. for $t>T$ the log average likelihood update is bounded as\n \\begin{equation*}\n \\left | \\log\\left(\\frac{1}{m} \\sum_{i=1}^m \\ell_{i\\theta}(\\mathbf{n}_{it-1},\\omega_{it}|\\mathbf{r}_{i\\theta})\\right) - \\log\\left( \\frac{1}{m} \\sum_{i=1}^m \\frac{\\pi_{i\\omega_{it}\\theta}}{\\pi_{i\\omega_{it}\\theta^*}} \\right) \\right | \\leq \\epsilon\n \\end{equation*}\n with probability at least $1-\\delta$. 
Also, we know that $\\boldsymbol{\\mu}_T(\\theta)$ is bounded since $\\ell_{i\\theta}(\\mathbf{n}_{it-1},\\omega_{it}|\\mathbf{r}_{i\\theta})$ is bounded by Lemma \\ref{lem:ell_dog_bound} and converging to within $\\ell_{i\\theta}(\\mathbf{n}_{it-1},k|\\mathbf{r}_{i\\theta})<\\frac{\\pi_{ik\\theta}}{\\pi_{ik\\theta^*}}+\\epsilon$ with probability at least $1-\\delta$. Now, the beliefs generated using the update rule (\\ref{eq:DG_algo}) at times $t$ and $T$ are related as \n \\begin{eqnarray}\\label{eq:dg_bel_con}\n\\boldsymbol{\\mu}_{t}(\\theta) &=& \\mathbf{L}_\\theta(\\boldsymbol{\\omega}_{t}) \\mathbf{A} \\mathbf{L}_\\theta(\\boldsymbol{\\omega}_{t-1}) \\cdots \\mathbf{A}\\mathbf{L}_\\theta(\\boldsymbol{\\omega}_{T+1}) \\mathbf{A} \\boldsymbol{\\mu}_T(\\theta) \\nonumber \\\\\n&=& \\mathbf{L}_\\theta(\\boldsymbol{\\omega}_{t}) \\frac{1}{m}\\mathbf{11'} \\mathbf{L}_\\theta(\\boldsymbol{\\omega}_{t-1}) \\cdots \\frac{1}{m}\\mathbf{11'}\\mathbf{L}_\\theta(\\boldsymbol{\\omega}_{T+1}) \\nonumber \\\\ & &\\cdot \\left( \\frac{1}{m} \\sum_{i=1}^m \\mu_{iT}(\\theta)\\right) \\mathbf{1} \\nonumber \\\\\n&= & \\prod_{\\tau = T+1}^t \\left( \\frac{1}{m} \\sum_{i=1}^m \\ell_{i\\theta}(\\mathbf{n}_{i\\tau-1},\\omega_{i\\tau}|\\mathbf{r}_{i\\theta}) \\right) \\left( \\frac{1}{m} \\sum_{i=1}^m \\mu_{iT}(\\theta)\\right) \\mathbf{1} \\nonumber \\\\\n\\end{eqnarray}\nwhere $\\mathbf{L}_\\theta(\\boldsymbol{\\omega}_\\tau)= diag(\\ell_{1\\theta}(\\mathbf{n}_{1\\tau-1},\\omega_{1\\tau}|\\mathbf{r}_{1\\theta}),...,$ $\\ell_{m\\theta}(\\mathbf{n}_{m\\tau-1},\\omega_{m\\tau}|\\mathbf{r}_{m\\theta}))$. 
We then take the logarithm of both sides of the above equation and use the knowledge that the log-updates are bounded in probability to determine that the bounds with probability at least $1-\\delta$ for the log-beliefs are\n\\begin{equation*}\nG(t;T)-(t-T)\\epsilon \\leq \\log\\left(\\boldsymbol{\\mu}_t(\\theta)\\right) \\leq G(t;T)+(t-T)\\epsilon,\n\\end{equation*}\nwhere\n\\begin{equation*}\nG(t;T) = \\log\\left( \\frac{1}{m} \\sum_{i=1}^m \\mu_{iT}(\\theta)\\right) + \\sum_{\\tau=T+1}^t \\log\\left(\\frac{1}{m} \\sum_{i=1}^m \\frac{\\pi_{i\\omega_{i\\tau}\\theta}}{\\pi_{i\\omega_{i\\tau}\\theta^*}}\\right).\n\\end{equation*}\nNote that the first term of $G(t;T)$ is finite and constant with respect to $t$. Using the law of large numbers, the\nasymptotic convergence rate is bounded with probability at least $1-\\delta$ as\n\\begin{equation*}\nD_{CA}(\\boldsymbol{\\Pi}_{\\theta^*}||\\boldsymbol{\\Pi}_{\\theta})-\\epsilon \\leq -\\lim_{t\\rightarrow \\infty} \\frac{1}{t} \\log\\left(\\boldsymbol{\\mu}_{t}(\\theta)\\right) \\leq D_{CA}(\\boldsymbol{\\Pi}_{\\theta^*}||\\boldsymbol{\\Pi}_{\\theta})+\\epsilon.\n\\end{equation*}\nNote that $\\epsilon$ can be made arbitrarily small by setting $T$ larger. Thus, the convergence rate converges in probability to \n\\begin{equation*}\n\\lim_{t\\rightarrow \\infty} \\frac{1}{t} \\log\\left(\\boldsymbol{\\mu}_{t}(\\theta)\\right) = -D_{CA}(\\boldsymbol{\\Pi}_{\\theta^*}||\\boldsymbol{\\Pi}_{\\theta}).\n\\end{equation*}\n\\end{proof}\n\nThis shows that even for the DeGroot-style rule, any divergence between $\\boldsymbol{\\pi}_{i\\theta}$ and $\\boldsymbol{\\pi}_{i\\theta^*}$ causes the beliefs to decrease at a rate larger than the sub-exponential growth rate. 
This is stated formally in the following corollary.\n\n\\begin{corollary} \\label{cor:DG_dog2}\n Let Assumption \\ref{assum:inital_beliefs} hold, the network graph be fully connected, i.e., $\\mathbf{A}=\\frac{1}{m}\\mathbf{11'}$, and at least one agent $i$ have a set of probabilities s.t. $\\boldsymbol{\\pi}_{i\\theta}\\ne \\boldsymbol{\\pi}_{i\\theta^*}$. Then, the beliefs generated by the update rule (\\ref{eq:DG_algo}) with infinite evidence allow for learning, i.e., they converge in probability to \n \\begin{eqnarray}\n \\lim_{t\\to\\infty} \\mu_{it}(\\theta) = 0.\n \\end{eqnarray}\n\\end{corollary}\n\nNow, let us compare this result to a network updating their beliefs using the log-linear rule (\\ref{eq:main_algo}) in the following lemma.\n\n\\begin{lemma} \\label{thm:comp_rate}\nAssuming a network with a doubly stochastic aperiodic matrix $\\mathbf{A}$ and a certain set of probabilities such that there exists a $\\theta$ s.t. $\\boldsymbol{\\pi}_{i\\theta}\\ne \\boldsymbol{\\pi}_{i\\theta^*}$ for at least one agent $i$, the log-linear beliefs (\\ref{eq:main_algo}) converge in probability to zero at a geometric rate determined by the average Kullback-Leibler divergence, i.e.,\n\\begin{equation}\n\\lim_{t \\to \\infty} \\frac{1}{t} \\log\\left(\\boldsymbol{\\mu}_t(\\theta)\\right) = -\\frac{1}{m} \\sum_{i=1}^m D_{KL}(\\boldsymbol{\\pi}_{i\\theta^*}||\\boldsymbol{\\pi}_{i\\theta})\n\\end{equation}\nfor $i\\in \\mathbf{M}$. 
Furthermore, this convergence rate is faster than that of the DeGroot rule (\\ref{eq:DG_algo}) for a fully connected graph where $\\mathbf{A}=\\frac{1}{m}\\mathbf{11}'$, i.e.,\n\\begin{eqnarray} \\label{eq:cov_rates}\n\t\\frac{1}{m}\\sum_{i=1}^m D_{KL}(\\boldsymbol{\\pi}_{i\\theta^*}||\\boldsymbol{\\pi}_{i\\theta})\\ge D_{CA}(\\boldsymbol{\\Pi}_{\\theta^*}||\\boldsymbol{\\Pi}_{\\theta})\\ge 0.\n\t\\end{eqnarray}\n\\end{lemma}\n\n\\begin{proof}\n The proof of Lemma~\\ref{lem:LL_dog} provides the starting point to prove the first part of this lemma.\nThe log-belief at time $t$ is expressed by (\\ref{eq:dog-belief}). For any $\\epsilon>0$ and $\\delta>0$ there exists a value of $T$ such that the first two terms on the right side of (\\ref{eq:dog-belief}) are constant with respect to $t$ and finite with probability at least $1-\\delta$. The upper bound for the third term is given by (\\ref{eq:fullyconnectedlogupdatebound}). By the same argument used to obtain this upper bound, it is clear that the lower bound can be given by replacing $\\epsilon$ with $-\\epsilon$, and thus with probability at least $1-\\delta$,\n\\begin{eqnarray}\n\\left| \\log\\left(\\mu_{it}(\\theta)\\right) - C - \\frac{1}{m} \\sum_{i=1}^m \\sum_{\\tau=T+1}^t\\log \\left(\\frac{\\pi_{i\\omega_{i\\tau}\\theta}}{\\pi_{i\\omega_{i\\tau}\\theta^*}}\\right) \\right | \\leq (t-T)\\epsilon\n\\end{eqnarray}\nfor any $i \\in \\mathbf{M}$, where $C$ represents the finite constant incorporating the first two terms in (\\ref{eq:dog-belief}).\nAs $t \\to \\infty$, the law of large numbers leads to the bounds for the convergence rate as\n\\begin{eqnarray}\n\\frac{1}{m} \\sum_{i=1}^m D_{KL}(\\boldsymbol{\\pi}_{i\\theta^*}||\\boldsymbol{\\pi}_{i\\theta})-\\epsilon \\leq -\\lim_{t\\to \\infty} \\frac{1}{t} \\log\\left(\\mu_{it}(\\theta) \\right) \\nonumber \\\\ \\leq \\frac{1}{m} \\sum_{i=1}^m D_{KL}(\\boldsymbol{\\pi}_{i\\theta^*}||\\boldsymbol{\\pi}_{i\\theta})+\\epsilon.\n\\end{eqnarray}\nNote that $\\epsilon$ can be made 
arbitrarily small by increasing the value of $T$ in (\ref{eq:dog-belief}); this proves the first part of the lemma.\n\nNext, we will prove (\ref{eq:cov_rates}). First, we prove that the CA divergence is non-negative using Jensen's inequality as follows:\n\begin{eqnarray}\nD_{CA}(\boldsymbol{\Pi}_{\theta^*}||\boldsymbol{\Pi}_{\theta})&=&-E^{\theta^*}\left[ \log \left( \frac{1}{m} \sum_{i=1}^m \frac{\boldsymbol{\pi}_{i\theta}}{\boldsymbol{\pi}_{i\theta^*}}\right)\right], \nonumber \\\n& \ge & -\log\left(\frac{1}{m}\sum_{i=1}^m E^{\theta^*} \left[\frac{\boldsymbol{\pi}_{i\theta}}{\boldsymbol{\pi}_{i\theta^*}}\right]\right), \nonumber \\\n& = & -\log(1) = 0,\n\end{eqnarray}\nwith equality only when $\boldsymbol{\pi}_{i\theta}=\boldsymbol{\pi}_{i\theta^*}$, $\forall i\in \mathbf{M}$. Then, we prove that the CA divergence is upper bounded by the average Kullback-Leibler divergence using Jensen's inequality, i.e.,\n\begin{eqnarray}\n\frac{1}{m}\sum_{i=1}^m D_{KL}(\boldsymbol{\pi}_{i\theta^*}||\boldsymbol{\pi}_{i\theta}) & = & -E^{\theta^*}\left[\frac{1}{m}\sum_{i=1}^m \log \left( \frac{\boldsymbol{\pi}_{i\theta}}{\boldsymbol{\pi}_{i\theta^*}} \right) \right] \nonumber \\\n& \ge & -E^{\theta^*}\left[\log \left(\frac{1}{m}\sum_{i=1}^m \frac{\boldsymbol{\pi}_{i\theta}}{\boldsymbol{\pi}_{i\theta^*}} \right) \right] \nonumber \\\n&=& D_{CA}(\boldsymbol{\Pi}_{\theta^*}||\boldsymbol{\Pi}_{\theta}),\n\end{eqnarray}\nwith equality only when $\boldsymbol{\pi}_{i\theta}=\boldsymbol{\pi}_{i\theta^*}$, $\forall i\in \mathbf{M}$.\n\end{proof}\n\nThese results indicate that the DeGroot-style update rule learns that a hypothesis is not the ground truth at a slower rate than the log-linear update rule (\ref{eq:main_algo}). 
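For small networks, the ordering in (\ref{eq:cov_rates}) can also be checked numerically by evaluating both divergences exactly over the joint signal space. The following Python sketch does this for a hypothetical network of $m=3$ agents with binary signals; the distributions are randomly generated for illustration and are not taken from the paper.

```python
import itertools
import math
import random

def avg_kl(pis_true, pis_alt):
    # (1/m) * sum_i D_KL(pi_{i,theta^*} || pi_{i,theta})
    m = len(pis_true)
    return sum(
        sum(p * math.log(p / q) for p, q in zip(pt, pa))
        for pt, pa in zip(pis_true, pis_alt)
    ) / m

def d_ca(pis_true, pis_alt):
    # D_CA = -E^{theta^*}[ log( (1/m) * sum_i pi_{i,theta}(w_i)/pi_{i,theta^*}(w_i) ) ],
    # evaluated exactly over the joint signal space (signals independent across agents)
    m, K = len(pis_true), len(pis_true[0])
    total = 0.0
    for omega in itertools.product(range(K), repeat=m):
        prob = math.prod(pis_true[i][omega[i]] for i in range(m))
        avg_ratio = sum(pis_alt[i][omega[i]] / pis_true[i][omega[i]]
                        for i in range(m)) / m
        total += prob * math.log(avg_ratio)
    return -total

# randomly generated binary-signal models for m = 3 agents (illustrative only)
random.seed(0)
m = 3
pis_true, pis_alt = [], []
for _ in range(m):
    a = random.uniform(0.2, 0.8)
    b = random.uniform(0.2, 0.8)
    pis_true.append([a, 1.0 - a])
    pis_alt.append([b, 1.0 - b])

akl = avg_kl(pis_true, pis_alt)
dca = d_ca(pis_true, pis_alt)
# ordering of the lemma: average KL >= D_CA >= 0
assert akl >= dca - 1e-12 and dca >= -1e-12
```

Note that `d_ca` sums over all $K^m$ joint signal realizations, which is feasible only for small $m$ and $K$.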
Additionally, we found through empirical evaluation that the DeGroot beliefs for uncertain likelihood models reach a consensus and converge to a finite value, as the simulations in Section~\ref{sec:SIM} indicate. This is because the uncertain likelihood ratio update functions $\ell_{i\theta}$ converge to one. For the certain likelihood condition, the DeGroot rule allows for learning for a fully connected network, as shown in Corollaries~\ref{cor:DG_dog} and \ref{cor:DG_dog2}. In fact, the DeGroot-style rule achieves this for any network satisfying Assumption~\ref{assum:graph}, as shown next. \n\n\begin{theorem} \label{thm:DG_dogmatic}\n Let Assumptions \ref{assum:graph} and \ref{assum:inital_beliefs} hold. Then, the beliefs generated by the update rule (\ref{eq:DG_algo}) with infinite evidence converge in probability to:\n \begin{equation}\n \lim_{t\rightarrow \infty} \mu_{it}(\theta) = 0 \ \text{if } \exists j\in\mathbf{M} \ \text{s.t.} \ \boldsymbol{\pi}_{j\theta}\ne\boldsymbol{\pi}_{j\theta^*}.\n \end{equation}\n\end{theorem}\n\n\begin{proof}\n The beliefs at time $t$ can be expressed in matrix-vector form as\n\begin{eqnarray*}\n\boldsymbol{\mu}_t(\theta) &=& \mathbf{L}_\theta(\boldsymbol{\omega}_{t})\mathbf{A}\cdots \mathbf{L}_\theta(\boldsymbol{\omega}_{2})\mathbf{A}\mathbf{L}_\theta(\boldsymbol{\omega}_{1})\mathbf{A}\boldsymbol{\mu}_0(\theta)\\\n& =& \prod_{\tau=T+1}^t \left( \mathbf{L}_\theta(\boldsymbol{\omega}_{\tau})\mathbf{A} \right) \boldsymbol{\mu}_T(\theta),\n\end{eqnarray*}\nwhere $\mathbf{L}_\theta(\boldsymbol{\omega}_\tau)= diag(\ell_{1\theta}(\mathbf{n}_{1\tau-1},\omega_{1\tau}|\mathbf{r}_{1\theta}),...,$ $\ell_{m\theta}(\mathbf{n}_{m\tau-1},\omega_{m\tau}|\mathbf{r}_{m\theta}))$, the initial belief $\boldsymbol{\mu}_0(\theta) = \mathbf{1}$ and\n\begin{equation*}\n\boldsymbol{\mu}_T(\theta) = \prod_{\tau=1}^T \left( 
\mathbf{L}_\theta(\boldsymbol{\omega}_{\tau})\mathbf{A} \right) \mathbf{1}.\n\end{equation*}\nFor any finite value of $T$, $\boldsymbol{\mu}_T(\theta)$ is finite, since the norms $\|\mathbf{L}_\theta(\boldsymbol{\omega}_\tau)\|\leq (T+K-1)$ for $1 \le \tau \le T$ via Lemma~\ref{lem:ell_dog_bound} and $\|\mathbf{A}\|=1$. From Lemma~\ref{lem:update_lim} condition 2, it is known that for any $\epsilon>0$ and $\delta>0$ there exists a finite $T$ such that with probability at least $1-\delta$, $\ell_{i\theta}(\mathbf{n}_{i\tau},\omega_{i\tau}|\mathbf{r}_{i\theta}) \leq \frac{\pi_{i\omega_{i\tau}\theta}}{\pi_{i\omega_{i\tau}\theta^*}} + \epsilon$ and $E^{\theta^*}\left[ \ell_{i\theta}(\mathbf{n}_{i\tau},\omega_{i\tau}|\mathbf{r}_{i\theta}) \right] \leq 1+\epsilon$. Let $E^{\theta^*}_{\chi_\nu}[\cdot]$ represent the expectation over the private signals for specific segments in time so that $\chi_\nu = \{ \omega_{i\tau} | i \in \mathbf{M}, \tau = T+Z_1+\nu Z_2$ for $Z_1 = 1,\ldots,\nu-1$ and $Z_2 = 0,1,\ldots$ $\}$. 
Now, because all the elements of the $\mathbf{A}$ and $\mathbf{L}_\theta$ matrices are non-negative, for $Z>0$ with probability at least $1-\delta$\n\begin{eqnarray*}\n& & E^{\theta^*}_{\chi_\nu}[\boldsymbol{\mu}_{T+\nu Z}(\theta)] \nonumber \\ & = & \mathbf{L}_\theta(\boldsymbol{\omega}_{T+\nu Z})\mathbf{A} \prod_{\tau=T+\nu(Z-1)+1}^{T+\nu Z-1}\left( E^{\theta^*}\left[\mathbf{L}_\theta(\boldsymbol{\omega}_{\tau})\right]\mathbf{A}\right) \nonumber \\ & & \cdot \mathbf{L}_\theta(\boldsymbol{\omega}_{T+\nu(Z-1)})\mathbf{A} \prod_{\tau=T+\nu(Z-2)+1}^{T+\nu (Z-1)-1}\left( E^{\theta^*}\left[\mathbf{L}_\theta(\boldsymbol{\omega}_{\tau})\right]\mathbf{A}\right)\\\n&& \cdot \mathbf{L}_\theta(\boldsymbol{\omega}_{T+\nu(Z-2)})\mathbf{A} \cdots \mathbf{L}_\theta(\boldsymbol{\omega}_{T+\nu})\mathbf{A} \nonumber \\ & & \cdot \prod_{\tau=T+1}^{T+\nu-1}\left( E^{\theta^*}\left[\mathbf{L}_\theta(\boldsymbol{\omega}_{\tau})\right]\mathbf{A}\right) \boldsymbol{\mu}_T(\theta)\\\n&\leq& (1+\epsilon)^{Z(\nu-1)} \mathbf{L}_\theta(\boldsymbol{\omega}_{T+\nu Z})\mathbf{A}^\nu \mathbf{L}_\theta(\boldsymbol{\omega}_{T+\nu (Z-1)}) \mathbf{A}^\nu \nonumber \\ & & \cdots \mathbf{L}_\theta(\boldsymbol{\omega}_{T+\nu}) \mathbf{A}^\nu \boldsymbol{\mu}_T(\theta).\n\end{eqnarray*}\nBy Lemma~5 in~\cite{NOU2017}, each element of $\mathbf{A}^\nu$ is bounded above by $[\mathbf{A}^\nu]_{ij} \leq \frac{1}{m} + \sqrt{2}m\lambda^\nu$. 
Thus,\n\begin{eqnarray*}\n& & E^{\theta^*}_{\chi_\nu}[\boldsymbol{\mu}_{T+\nu Z}(\theta)] \nonumber \\ & \leq &(1+\epsilon)^{Z(\nu-1)} \left(1+\sqrt{2}m\lambda^\nu \right)^Z \mathbf{L}_\theta(\boldsymbol{\omega}_{T+\nu Z})\nonumber \\ && \frac{1}{m}\mathbf{11}' \mathbf{L}_\theta(\boldsymbol{\omega}_{T+\nu (Z-1)})\frac{1}{m}\mathbf{11}' \cdots \mathbf{L}_\theta(\boldsymbol{\omega}_{T+\nu})\frac{1}{m}\mathbf{11}' \boldsymbol{\mu}_T(\theta)\\\n&=& (1+\epsilon)^{Z(\nu-1)} \left(1+\sqrt{2}m\lambda^\nu \right)^Z \nonumber \\ && \cdot \prod_{z=1}^Z \left(\frac{1}{m} \sum_{i=1}^m \ell_{i\theta}(\omega_{i(T+\nu z)}) \right) \frac{1}{m} \sum_{i=1}^m \mu_{iT}(\theta) \mathbf{L}_\theta(\boldsymbol{\omega}_{T+\nu}) \mathbf{1}.\n\end{eqnarray*}\nSince $\ell_{i\theta}(\mathbf{n}_{it-1},\omega_{it}|\mathbf{r}_{i\theta}) = \frac{\pi_{i\omega_{it}\theta}}{\pi_{i\omega_{it}\theta^*}}$ as $t\to \infty$, $\log\left( \frac{1}{m} \sum_{i=1}^m \ell_{i\theta}(\mathbf{n}_{it-1},\omega_{it}|\mathbf{r}_{i\theta})\right) = \log\left( \frac{1}{m} \sum_{i=1}^m \frac{\pi_{i\omega_{it}\theta}}{\pi_{i\omega_{it}\theta^*}}\right)$ a.s. 
Using the fact that $\log(1+x)\le x$ for $x\ge 0$, it is easy to see that for $T$ sufficiently large, the log expected belief can be bounded with probability at least $1-\delta$ as\n\begin{eqnarray*}\n\log\left( E^{\theta^*}_{\chi_\nu}[\boldsymbol{\mu}_{T+\nu Z}(\theta)] \right) \leq Z \left((\nu-1)\epsilon+\sqrt{2}m\lambda^\nu \right) \nonumber \\ + \sum_{z=1}^Z \log\left( \frac{1}{m} \sum_{i=1}^m \frac{\pi_{i\omega_{i(T+\nu z)}\theta}}{\pi_{i\omega_{i(T+\nu z)}\theta^*}}\right)+C,\n\end{eqnarray*}\nwhere $C = \log\left(\frac{1}{m} \sum_{i=1}^m \mu_{iT}(\theta)\right)+\log\left(\boldsymbol{\ell}_\theta(\mathbf{n}_{iT+\nu -1},\omega_{iT+\nu }|\mathbf{r}_{i\theta}) \right)$ is a finite constant.\nBy the law of large numbers, for sufficiently large $Z$,\n\begin{eqnarray*}\n\log\left( E^{\theta^*}_{\chi_\nu}[\boldsymbol{\mu}_{T+\nu Z}(\theta)] \right) \\ \leq Z \left(\nu\epsilon+\sqrt{2}m\lambda^\nu - D_{CA}(\boldsymbol{\Pi}_{\theta^*}|| \boldsymbol{\Pi}_{\theta}) \right)+C.\n\end{eqnarray*}\nSince the centralized average divergence is positive when $\boldsymbol{\pi}_{i\theta} \ne \boldsymbol{\pi}_{i\theta^*}$ for at least one agent $i$, $\epsilon$ and $\nu$ can be chosen such that $\nu\epsilon+\sqrt{2}m\lambda^\nu < D_{CA}(\boldsymbol{\Pi}_{\theta^*}|| \boldsymbol{\Pi}_{\theta})$ and the bound diverges to $-\infty$ with probability at least $1-\delta$. Thus, $E^{\theta^*}_{\chi_\nu}[\boldsymbol{\mu}_{T+\nu Z}(\theta)] \overset{P}{\to} 0$ as $Z\to\infty$. 
Finally, the beliefs are always bounded below by zero, and so convergence of the expectation to zero also implies that $\boldsymbol{\mu}_t(\theta) \overset{P}{\to} 0$.\n\end{proof}\n\n\begin{figure}[t]\n \centering\n \includegraphics[width=0.5\columnwidth]{graph20.pdf} \vspace{-10pt}\n \caption{Example of the network structure considered in the numerical analysis.} \vspace{-12pt}\n \label{fig:graph}\n\end{figure}\n\nIn summary, DeGroot-style social learning with finite prior evidence does not in general lead to the same beliefs as the centralized uncertain likelihood ratio, unlike the learning rule in~(\ref{eq:main_algo}). Nevertheless, for infinite evidence, learning is still achieved. In the general case, the uncertain update $\ell_{i\theta}(\mathbf{n}_{it-1},\omega_{it}|\mathbf{r}_{i\theta})$ transitions from a certain-like update to a value of one more slowly as more prior evidence $R_{i\theta}$ becomes available; thus, more prior evidence leads to a larger chance that beliefs using the DeGroot-style rule will converge to a value greater than one when $\theta=\theta^*$ and a value less than one when $\theta \ne \theta^*$. The experiments in Section~\ref{sec:SIM} empirically show that the interpretation of the beliefs as an uncertain likelihood ratio via Definition~\ref{def:ULRT} is still meaningful, even though it is less so than for the social aggregation rule given by (\ref{eq:main_algo}).\n\n\n\n\section{Numerical Analysis} \label{sec:SIM}\nNext, we present a simulation study of a group of $m=20$ agents applying the proposed algorithms to empirically validate the results. In this study, we considered that the agents are socially connected according to the undirected random geometric graph shown in Figure~\ref{fig:graph}. The weights of the adjacency matrix were constructed using a lazy Metropolis matrix \cite{NOU2017} to ensure that the weight matrix is doubly stochastic. 
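A lazy Metropolis weight assignment can be sketched as follows. This is one common variant (edge weight $1\/(2\max(d_i,d_j))$, in the spirit of \cite{NOU2017}); the exact construction used for Figure~\ref{fig:graph} may differ. The resulting matrix is symmetric with unit row sums, hence doubly stochastic.

```python
def lazy_metropolis(adj):
    # adj: list of neighbor sets of an undirected graph; one common lazy
    # Metropolis assignment puts weight 1/(2*max(d_i, d_j)) on each edge {i, j}
    # and lets the diagonal absorb the remaining mass of each row
    n = len(adj)
    deg = [len(nb) for nb in adj]
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in adj[i]:
            A[i][j] = 1.0 / (2.0 * max(deg[i], deg[j]))
        A[i][i] = 1.0 - sum(A[i])
    return A

# small illustrative graph (hypothetical; not the 20-node graph of the figure)
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
n = 6
adj = [set() for _ in range(n)]
for i, j in edges:
    adj[i].add(j)
    adj[j].add(i)
A = lazy_metropolis(adj)

# symmetric with unit row sums, hence doubly stochastic
for i in range(n):
    assert abs(sum(A[i]) - 1.0) < 1e-12
    for j in range(n):
        assert abs(A[i][j] - A[j][i]) < 1e-12 and A[i][j] >= 0.0
```

Because each diagonal entry retains at least half of the row mass, the chain is automatically aperiodic.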
\n\nThen, we considered three scenarios based on the amount of prior evidence randomly collected within the following categories: Low, i.e., $R_{i\theta}\in [0, 100]$, High, i.e., $R_{i\theta}\in [1000, 10000]$, and Infinite, i.e., $R_{i\theta}\to \infty$. Within each scenario, each agent randomly selects $R_{i\theta}$ and collects a set of prior evidence for each hypothesis $\theta\in\boldsymbol{\Theta}=\{\theta_1,\theta_2,\theta_3,\theta_4\}$, where the parameters of each hypothesis are shown in Table~\ref{table:theta}. Then, each learning algorithm is simulated for $N=50$ Monte Carlo runs, where the amount of prior evidence, the set of prior evidence, and the measurement sequence are randomly generated during each run. \n\n\begin{table}[t]\n\centering\n\caption{Set of hypotheses $\boldsymbol{\Theta}$}\n\resizebox{\columnwidth}{!}{\begin{tabular}{c|cccc|}\n & $\theta_1$ & $\theta_2$ & $\theta_3$ & $\theta_4$ \\ \hline\n$\boldsymbol{\pi}_{i\theta}$ & $\{0.6, 0.4\}$ & $\{0.55, 0.45\}$ & $\{0.5, 0.5\}$ & $\{0.4, 0.6\}$ \\\n$D_{KL}(\boldsymbol{\pi}_{i\theta}||\boldsymbol{\pi}_{i\theta^*})$ & $0$ & $0.0051$ & $0.0204$ & $0.0811$\n\end{tabular}} \label{table:theta}\n\vspace{-14pt}\n\end{table}\n\n\begin{figure*}[t]\n\t\subfigure[Log-linear $\theta_1$]{\n\t\t\centering\n\t\t\includegraphics[width=0.24\textwidth]{t1_ll.pdf}\n\t\t\label{fig:ll_t1}\n\t}%\n\t\subfigure[Log-linear $\theta_2$]{\n\t\t\centering\n\t\t\includegraphics[width=0.24\textwidth]{t2_ll.pdf}\n\t\t\label{fig:ll_t2}\n\t}%\n\t\subfigure[Log-linear $\theta_3$]{\n\t \centering\n\t\t\includegraphics[width=0.24\textwidth]{t3_ll.pdf}\n\t\t\label{fig:ll_t3}\n\t}%\n\t\subfigure[Log-linear $\theta_4$]{\n\t \centering\n\t\t\includegraphics[width=0.24\textwidth]{t4_ll.pdf}\n\t\t\label{fig:ll_t4}\n\t} \vspace{-6pt}\n\t\n\t\subfigure[DeGroot 
$\theta_1$]{\n\t\t\centering\n\t\t\includegraphics[width=0.24\textwidth]{t1_dg.pdf}\n\t\t\label{fig:dg_t1}\n\t}%\n\t\subfigure[DeGroot $\theta_2$]{\n\t\t\centering\n\t\t\includegraphics[width=0.24\textwidth]{t2_dg.pdf}\n\t\t\label{fig:dg_t2}\n\t}%\n\t\subfigure[DeGroot $\theta_3$]{\n\t \centering\n\t\t\includegraphics[width=0.24\textwidth]{t3_dg.pdf}\n\t\t\label{fig:dg_t3}\n\t}%\n\t\subfigure[DeGroot $\theta_4$]{\n\t \centering\n\t\t\includegraphics[width=0.24\textwidth]{t4_dg.pdf}\n\t\t\label{fig:dg_t4}\n\t} \vspace{-6pt}\n\t\n\t\centering\n\t\subfigure{\n\t \centering\n\t\t\includegraphics[width=0.7\textwidth]{short_leg2.pdf}\n\t} \vspace{-6pt}\n\t\caption{Belief evolution of the Log-linear (\ref{eq:main_algo}) and DeGroot (\ref{eq:DG_algo}) update rules for hypotheses $\theta_1$, $\theta_2$, $\theta_3$, and $\theta_4$. }\label{fig:mu_graphs} \vspace{-15pt}\n\end{figure*}\n\nFirst, we present the agents' beliefs for both learning rules in Figure~\ref{fig:mu_graphs} for a single Monte Carlo run. These figures show that the amount of prior evidence directly affects the point of convergence of both learning rules. As the evidence increases, the point of convergence increases for $\theta_1=\theta^*$ and decreases for $\theta\ne\theta^*$. Additionally, the log-linear beliefs with finite evidence converge to $(\prod_{j=1}^m \widetilde{\Lambda}_{j\theta})^\frac{1}{m}$, while the DeGroot beliefs converge to a larger value, as stated in Theorem~\ref{thm:ULR_Con} and Lemma~\ref{thm:DG_finite}, respectively. This indicates that we could select a threshold that allows for accurate inference with the log-linear rule. However, this is not necessarily the case for the DeGroot model, since the beliefs can converge to a value $>1$, as seen for $\theta_2$. The properties of the DeGroot learning rule require further study in future work. 
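The certain (infinite-evidence) case of the two rules can be illustrated with a minimal Monte Carlo sketch. The unnormalized update forms below are assumptions made for illustration (the exact forms of (\ref{eq:main_algo}) and (\ref{eq:DG_algo}) are in the text); here $\ell_i$ is the certain likelihood ratio, the network is fully connected, and $\theta^*=\theta_1$, $\theta=\theta_4$ from Table~\ref{table:theta}.

```python
import math
import random

random.seed(1)
m, T = 20, 4000
pi_true = [0.6, 0.4]  # theta^* = theta_1
pi_alt = [0.4, 0.6]   # theta_4
A = [[1.0 / m] * m for _ in range(m)]  # fully connected averaging matrix

log_mu_ll = [0.0] * m  # log-linear beliefs, tracked in the log domain
mu_dg = [1.0] * m      # DeGroot-style beliefs

for _ in range(T):
    # certain updates: ell_i = pi_{theta}(w_i) / pi_{theta^*}(w_i), w_i ~ pi_{theta^*}
    ell = []
    for _i in range(m):
        w = 0 if random.random() < pi_true[0] else 1
        ell.append(pi_alt[w] / pi_true[w])
    log_mu_ll = [math.log(ell[i]) + sum(A[i][j] * log_mu_ll[j] for j in range(m))
                 for i in range(m)]
    mu_dg = [ell[i] * sum(A[i][j] * mu_dg[j] for j in range(m)) for i in range(m)]

rate_ll = log_mu_ll[0] / T
rate_dg = math.log(mu_dg[0]) / T
avg_kl = sum(p * math.log(p / q) for p, q in zip(pi_true, pi_alt))  # about 0.0811
# the log-linear rate approaches -avg_kl; the DeGroot rate is much closer to zero
assert abs(rate_ll + avg_kl) < 0.02 and rate_ll < rate_dg
```

Consistent with Lemma~\ref{thm:comp_rate}, the log-linear geometric rate approaches the negative average Kullback-Leibler divergence, while the DeGroot decay is governed by the much smaller $D_{CA}$.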
\n\nFurthermore, these figures show that when the agents are certain, learning occurs as stated in Corollaries~\\ref{lem:LL_Dog_Inf}, \\ref{lem:LL_dog}, and \\ref{cor:DG_dog} and Theorem \\ref{thm:DG_dogmatic}. Additionally, we can see that the certain beliefs generated by the DeGroot rule decrease to $0$ at a slower rate than the log-linear beliefs as indicated in Lemma~\\ref{thm:comp_rate}. \n\n\\begin{table}\n\\centering\n\\caption{Maximum Error Statistics for the uncertain likelihood ratio. }\n\\begin{tabular}{cc|cc|}\n & & \\multicolumn{2}{c|}{$e_{\\Lambda_t}(\\theta)$} \\\\\n \\multicolumn{2}{c|}{Time step} & $T=10^3$ & $T=10^6$ \\\\ \\hline \n \\multirow{4}{*}{\\rotatebox[origin=c]{90}{Low}} & $\\theta_1$ & $2.27^{\\diamond}$ & $0.045^{\\diamond}$ \\\\\n & $\\theta_2$ & $7.39^{\\diamond}$ & $0.060^{\\diamond}$ \\\\\n & $\\theta_3$ & $11.15^{\\diamond}$ & $0.086^{\\diamond}$ \\\\\n & $\\theta_4$ & $100.93^{\\diamond}$ & $0.125^{\\diamond}$ \\\\ \\hline\n \\multirow{4}{*}{\\rotatebox[origin=c]{90}{High}} & $\\theta_1$ & $267.91^{\\diamond}$ & $0.557^{\\diamond}$ \\\\\n & $\\theta_2$ & 23.28 & 0.424 \\\\\n & $\\theta_3$ & 7.64 & 1.2e-5 \\\\\n & $\\theta_4$ & 2.2e-12 & 0.00* \\\\ \\hline\n \\multirow{4}{*}{\\rotatebox[origin=c]{90}{Infinite}} & $\\theta_1$ & n\/a$^{\\diamond}$ & n\/a$^{\\diamond}$ \\\\\n & $\\theta_2$ & 25.38 & 0.00* \\\\\n & $\\theta_3$ & 0.366 & 0.00* \\\\\n & $\\theta_4$ & 1.3e-16 & 0.00* \\\\ \n \\multicolumn{4}{l}{\\scriptsize $^{\\diamond}$Values are normalized by $\\widetilde{\\Lambda}_{i\\theta}$} \\\\\n \\multicolumn{4}{l}{\\scriptsize *Values are less than $10^{-16}$}\n\\end{tabular} \\label{table:error_stats_ulr} \\vspace{-15pt}\n\\end{table}\n\nNext, we studied error statistics to validate the results presented in the previous sections, as seen in Tables~\\ref{table:error_stats_ulr}, \\ref{table:error_stats_ll}, and \\ref{table:error_stats_dg}. 
First, we consider the maximum error between the uncertain likelihood ratio and the asymptotic uncertain likelihood ratio, i.e., $e_{\Lambda_t}(\theta)=\max_{i\in\mathcal{M},mc\in\{1,...,N\}} |\Lambda_{i\theta}(T,mc)-\widetilde{\Lambda}_{i\theta}(mc)|$, to empirically validate Lemma~\ref{lem:ULR_lim}, as seen in Table~\ref{table:error_stats_ulr}. Note that we have normalized the values when the beliefs converge to a value greater than 1, while we do not normalize the values when the beliefs converge to a value close to $0$, to avoid divide-by-zero singularities. \n\nThese results show that as time increases, the error decreases significantly, suggesting that the uncertain likelihood ratio is converging to $\widetilde{\Lambda}_{i\theta}$. Then, as the KL divergence and the amount of evidence increase, the error for hypotheses $\theta\ne\theta^*$ further decreases until the error is $<10^{-16}$, while the error slightly increases for hypothesis $\theta_1$. This is because $\widetilde{\Lambda}_{i\theta_1}$ increases, which requires additional time steps for the uncertain likelihood ratio to reach the convergence point. Furthermore, we cannot compute the error for a certain likelihood ratio of $\theta_1$ since $\widetilde{\Lambda}_{i\theta_1}$ is diverging to infinity. However, the median ratio $\Lambda_{i\theta}(T=10^6)\/\Lambda_{i\theta}(T=10^3) = 31.55$, indicating that the likelihood ratios are diverging to infinity. \n\n\begin{table}\n\centering\n\caption{Maximum Error Statistics for the Log-linear update rule. 
}\n\\begin{tabular}{cc|cc|cc|}\n & & \\multicolumn{2}{c|}{$e_{\\mu_t}^{con}(\\theta)$} & \\multicolumn{2}{c|}{$e_{\\mu_t}^{cen}(\\theta)$} \\\\\n \\multicolumn{2}{c|}{Time step} & $T=10^3$ & $T=10^6$ & $T=10^3$ & $T=10^6$ \\\\ \\hline \n \\multirow{4}{*}{\\rotatebox[origin=c]{90}{Low}} & $\\theta_1$ & $0.072^{\\diamond}$ & $6.1e-5^{\\diamond}$ & $0.144^{\\triangleright}$ & $3.9e-3^{\\triangleright}$ \\\\\n & $\\theta_2$ & $0.086^{\\diamond}$ & $1.2e-4^{\\diamond}$ & $0.267^{\\triangleright}$ & $5.6e-3^{\\triangleright}$ \\\\\n & $\\theta_3$ & $0.132^{\\diamond}$ & $1.1e-4^{\\diamond}$ & $0.403^{\\triangleright}$ & $8.8e-3^{\\triangleright}$ \\\\\n & $\\theta_4$ & $0.236^{\\diamond}$ & $1.6e-4^{\\diamond}$ & $1.001^{\\triangleright}$ & $1.7e-2^{\\triangleright}$ \\\\ \\hline\n \\multirow{4}{*}{\\rotatebox[origin=c]{90}{High}} & $\\theta_1$ & $0.241^{\\diamond}$ & $6.5e-4^{\\diamond}$ & $0.802^{\\triangleright}$ & $0.132^{\\triangleright}$ \\\\\n & $\\theta_2$ & 1.477 & 1.6e-11 & 0.239 & 2.1e-14 \\\\\n & $\\theta_3$ & 6.3e-5 & 0.00* & 7.2e-7 & 0.00* \\\\\n & $\\theta_4$ & 0.00* & 0.00* & 0.00* & 0.00* \\\\ \\hline\n \\multirow{4}{*}{\\rotatebox[origin=c]{90}{Infinite}} & $\\theta_1$ & $0.340^{\\diamond}$ & $9.3e-3^{\\diamond}$ & $n\/a^{\\triangleright}$ & $n\/a^{\\triangleright}$ \\\\\n & $\\theta_2$ & 0.504 & 0 & 0.105 & 0 \\\\\n & $\\theta_3$ & 8.7e-7 & 0 & 9.3e-8 & 0 \\\\\n & $\\theta_4$ & 0.00* & 0 & 0.00* & 0 \\\\ \n \\multicolumn{6}{l}{\\scriptsize{$^{\\diamond}$Values are normalized by $\\bar{\\mu}_T(\\theta)$}} \\\\\n \\multicolumn{6}{l}{\\scriptsize $^{\\triangleright}$Values are normalized by $( \\prod_{i=1}^m \\widetilde{\\Lambda}_{i\\theta} )^{1\/m}$} \\\\\n \\multicolumn{6}{l}{\\scriptsize *Values are less than $10^{-16}$}\n\\end{tabular} \\label{table:error_stats_ll} \\vspace{-15pt}\n\\end{table}\n\nThe second error statistic shows that the agents converge to a consensus belief, i.e., $e_{\\mu_t}^{con}(\\theta)=\\max_{i\\in\\mathcal{M}, 
mc\in\{1,...,N\}} |\mu_{it}(\theta,mc)-\bar{\mu}_T(\theta,mc)|$, where $\bar{\mu}_T(\theta,mc)=\frac{1}{m}\sum_{j=1}^m \mu_{jt}(\theta,mc)$ is the average belief of the agents during the Monte Carlo run $mc$. These results are shown for the log-linear and DeGroot-style learning rules in Tables~\ref{table:error_stats_ll} and \ref{table:error_stats_dg}, respectively. Similar to $e_{\Lambda_t}(\theta)$, we normalized the results where the beliefs converge to a value greater than $1$. These tables show that as the number of time steps increases, the error between the agents decreases significantly, thus suggesting that the agents are forming a consensus belief with both rules. Furthermore, it can be seen that the errors between the log-linear and DeGroot beliefs are similar, which suggests that the learning rules are correlated. \n\n\begin{table}\n\centering\n\caption{Maximum Error Statistics for the DeGroot-style update rule. }\n\begin{tabular}{cc|cc|cc|}\n & & \multicolumn{2}{c|}{$e_{\mu_t}^{con}(\theta)$} & \multicolumn{2}{c|}{$e_{\mu_t}^{cen}(\theta)$} \\\n \multicolumn{2}{c|}{Time step} & $T=10^3$ & $T=10^6$ & $T=10^3$ & $T=10^6$ \\ \hline \n \multirow{4}{*}{\rotatebox[origin=c]{90}{Low}} & $\theta_1$ & $0.072^{\diamond}$ & $6.1e-5^{\diamond}$ & $5.497^{\triangleright}$ & $5.638^{\triangleright}$ \\\n & $\theta_2$ & $0.081^{\diamond}$ & $1.2e-4^{\diamond}$ & $5.492^{\triangleright}$ & $5.850^{\triangleright}$ \\\n & $\theta_3$ & $0.138^{\diamond}$ & $1.1e-4^{\diamond}$ & $27.69$ & $25.73$ \\\n & $\theta_4$ & $0.243^{\diamond}$ & $1.6e-4^{\diamond}$ & $25.73$ & $25.68$ \\ \hline\n \multirow{4}{*}{\rotatebox[origin=c]{90}{High}} & $\theta_1$ & $0.266^{\diamond}$ & $6.5e-4^{\diamond}$ & $19.80^{\triangleright}$ & $111.52^{\triangleright}$ \\\n & $\theta_2$ & $0.751^{\diamond}$ & $4.7e-3^{\diamond}$ & 694.79 & 1.0e4 \\\n & $\theta_3$ & $1.761^{\diamond}$ & $9.4e-3^{\diamond}$ & 2.4e3 & 687.76 
\\\n & $\theta_4$ & 132.43 & 1.8e-5 & 269.21 & 1.5e-3 \\ \hline\n \multirow{4}{*}{\rotatebox[origin=c]{90}{Infinite}} & $\theta_1$ & $0.371^{\diamond}$ & $9.3e-3^{\diamond}$ & $n\/a^{\triangleright}$ & $n\/a^{\triangleright}$ \\\n & $\theta_2$ & 765.33 & 0.00* & 2.0e3 & 0.00* \\\n & $\theta_3$ & 1.75e3 & 0.00* & 3.8e3 & 0.00* \\\n & $\theta_4$ & 36.91 & 0.00* & 74.41 & 0.00* \\ \n \multicolumn{6}{l}{\scriptsize{$^{\diamond}$Values are normalized by $\bar{\mu}_T(\theta)$}} \\\n \multicolumn{6}{l}{\scriptsize $^{\triangleright}$Values are normalized by $( \prod_{i=1}^m \widetilde{\Lambda}_{i\theta} )^{1\/m}$} \\\n \multicolumn{6}{l}{\scriptsize *Values are less than $10^{-16}$}\n\end{tabular} \label{table:error_stats_dg} \vspace{-15pt}\n\end{table}\n\n\nFinally, Tables~\ref{table:error_stats_ll} and \ref{table:error_stats_dg} show the error between the agents' beliefs and the centralized uncertain likelihood ratio, i.e., $e_{\mu_t}^{cen}(\theta) = \max_{i\in\mathcal{M}, mc\in\{1,...,N\}} |\mu_{it}(\theta,mc)-(\prod_{j=1}^m \widetilde{\Lambda}_{j\theta}(mc))^\frac{1}{m} |$, to empirically validate Theorem~\ref{thm:ULR_Con} and Lemma~\ref{thm:DG_finite}. Similar to the previous results, we have normalized the values where the beliefs converge to a value greater than $1$. The results for the log-linear rule indicate that the beliefs converge to the centralized uncertain likelihood ratio, while the DeGroot beliefs converge to a much larger value. When the agents are certain, both learning rules result in beliefs that converge to $0$ for hypotheses $\theta\ne \theta^*$. Although we cannot evaluate this result for $\theta_1$, we can see that the median of the ratio of beliefs $\mu_{i10^6}(\theta_1)\/\mu_{i10^3}(\theta_1)$ is $33.03$ and $880.90$ for the log-linear and DeGroot rules, respectively, indicating that the beliefs are diverging to infinity. 
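The error statistics reported in the tables can be computed from simulation output along the following lines; this is a sketch, and `beliefs_by_run` / `lam_tilde_by_run` are hypothetical containers for the simulated beliefs and asymptotic likelihood ratios.

```python
def consensus_error(beliefs_by_run):
    # beliefs_by_run[mc][i]: agent i's belief at the evaluated time step in run mc;
    # e^{con} = max over runs mc and agents i of |mu_i - (1/m) * sum_j mu_j|
    err = 0.0
    for run in beliefs_by_run:
        avg = sum(run) / len(run)
        err = max(err, max(abs(b - avg) for b in run))
    return err

def centralized_error(beliefs_by_run, lam_tilde_by_run):
    # e^{cen} = max over runs and agents of |mu_i - (prod_j Lambda_j)^(1/m)|
    err = 0.0
    for run, lams in zip(beliefs_by_run, lam_tilde_by_run):
        m = len(lams)
        target = 1.0
        for lam in lams:
            target *= lam ** (1.0 / m)  # geometric mean of the Lambda_j
        err = max(err, max(abs(b - target) for b in run))
    return err

# tiny illustrative check with made-up numbers
runs = [[1.10, 1.12, 1.08], [0.95, 0.97, 0.96]]
assert abs(consensus_error(runs) - 0.02) < 1e-9
```

Normalization by $\bar{\mu}_T(\theta)$ or $(\prod_i \widetilde{\Lambda}_{i\theta})^{1\/m}$, as flagged in the table footnotes, would be applied before taking the maxima.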
\n\n\section{Conclusion} \label{sec:Conclusion}\nThis work presents the properties of uncertain models in non-Bayesian social learning theory, in which a group of agents collaborate to identify the unknown ground truth hypothesis. Uncertainty arises in many situations where an agent cannot acquire enough prior evidence about a hypothesis to develop precise statistical models. To accommodate uncertainty, we derived an approximate statistical model for each hypothesis based on the partial information available to a single agent and studied the convergence properties of a group of agents that compute a belief for each hypothesis using a log-linear update rule. We found that when the agents are uncertain, the group forms a consensus belief, albeit one different from traditional social beliefs. However, when the agents are certain, the beliefs generated using our uncertain models allow for learning and achieve results consistent with the literature.\n\nWe then found that agents can also learn in the certain condition with a DeGroot-style rule, but we cannot quantify the convergence point in the uncertain condition. Furthermore, the beliefs generated using the DeGroot-style rule converge at a much slower rate than those of the log-linear rule. \n\nAs future work, we will study the effects of malicious agents; preliminary results are presented in \cite{HULJ2019}. Building on the analysis of DeGroot-style rules, we will aim to quantify their convergence points as well as those of other aggregation rules. Additionally, we aim to understand how the uncertain likelihood ratio test trades off type I and II errors as a function of prior evidence. 
\n\n\ifCLASSOPTIONcaptionsoff\n \newpage\n\fi\n\n\n\n\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\nIt is well known that the observed lifetime difference between the\n$D^+$ and $D^0$ is ascribed to the destructive interference in\n$D^+$ decays and\/or the constructive $W$-exchange contribution to\n$D^0$ decays (for a review, see e.g., \cite{Bigi1}). By contrast,\nthe $D_s^+$ and $D^0$ lifetimes are theoretically expected to be\nclose to each other. For example, it is estimated in \cite{Bigi2}\nthat\n\begin{eqnarray}\n{\tau(D_s^+)\over\tau(D^0)}=\,1.00-1.07\,.\n\end{eqnarray}\nHowever, the recent Fermilab E791 measurement of the $D_s^+$\nlifetime yields $\tau(D_s^+)=0.518\pm 0.014\pm 0.007$ ps\n\cite{E791}. When combined with the world average $D^0$ lifetime\n\cite{PDG}, this yields the ratio\n\begin{eqnarray}\n{\tau(D_s^+)\over\tau(D^0)}=\,1.25\pm 0.04\, \qquad{\rm (E791)},\n\end{eqnarray}\nwhich is different from unity by $6\sigma$. Meanwhile, the CLEO\nmeasurement of $D_s^+$ and $D^0$ lifetimes indicates\n$\tau(D_s^+)=0.4863\pm 0.015\pm 0.005$ ps \cite{CLEO} and\n\begin{eqnarray}\n{\tau(D_s^+)\over\tau(D^0)}=\,1.19\pm 0.04\, \qquad{\rm (CLEO)},\n\end{eqnarray}\nwhich is $5\sigma$ different from unity. Note that the $D^+_s$\nlifetime measured by Fermilab and CLEO has smaller errors than\nthe world average value \cite{PDG} and that the lifetime ratio\nof $D^+_s$ to $D^0$ is larger than the previous world average\n\cite{PDG}:\n\begin{eqnarray}\n{\tau(D_s^+)\over\tau(D^0)}=\,1.13\pm 0.04\, \qquad{\rm (PDG)}.\n\end{eqnarray}\n\nBased on the operator product expansion (OPE) approach for the\nanalysis of inclusive weak decays of heavy hadrons, it is known\nthat the $1\/m_c^2$ corrections due to the nonperturbative kinetic\nand chromomagnetic terms are small and essentially cancel out in the lifetime ratios. 
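As a quick consistency check on the E791 ratio quoted above, standard quadrature error propagation reproduces the quoted value; the world-average $\tau(D^0)$ used below is an assumed illustrative input, not a number taken from the text.

```python
import math

def ratio_with_error(num, num_err, den, den_err):
    # standard quadrature propagation for r = num / den with independent errors
    r = num / den
    return r, r * math.hypot(num_err / num, den_err / den)

# E791: tau(D_s) = 0.518 +- 0.014 (stat) +- 0.007 (syst) ps, combined in quadrature
tau_ds, tau_ds_err = 0.518, math.hypot(0.014, 0.007)
# world-average tau(D0), assumed here as 0.413 +- 0.003 ps for illustration
tau_d0, tau_d0_err = 0.413, 0.003
r, dr = ratio_with_error(tau_ds, tau_ds_err, tau_d0, tau_d0_err)
# r and dr come out near 1.25 and 0.04, matching the quoted E791 ratio
```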
By contrast, the $1\/m_c^3$ corrections due to\n4-quark operators can be quite significant because of the\nphase-space enhancement by a factor of $16\pi^2$. The nonspectator\neffects of order $1\/m_c^3$ involve the Pauli interference in $D^+$\ndecay, the $W$-exchange in $D^0$ decay, and the $W$-annihilation and\nCabibbo-suppressed Pauli interference in nonleptonic $D_s^+$ decays.\nWhile the semileptonic decay rates of $D^+,D^0$ and $D_s^+$ are\nessentially the same, there is an additional purely leptonic decay\ncontribution to $D_s^+$, namely $D^+_s\to\tau\bar\nu$. The\ndimension-6 four-quark operators which describe the nonspectator\neffects in inclusive decays of heavy hadrons are well known\n\cite{Bigi92,BS93}. However, it is also known that there is a\nserious problem with the evaluation of the destructive Pauli\ninterference $\Gamma^{\rm int}(D^+)$ in $D^+$. A direct\ncalculation indicates that $\Gamma^{\rm int}(D^+)$ overcomes the\n$c$ quark decay rate so that the resulting nonleptonic decay width\nof $D^+$ becomes negative \cite{BS94,Chern}. This certainly does\nnot make sense, and it implies that the $1\/m_c$ expansion is neither\nwell convergent nor sensible, to say the least. In other words,\nhigher dimension terms are in principle also important. It has\nbeen conjectured in \cite{BS94} that higher-dimension corrections\namount to replacing $m_c$ by $m_D$ in the expansion parameter\n$f_D^2m_D\/m_c^3$, so that it becomes $f_D^2\/m_D^2$. As a\nconsequence, the destructive Pauli interference will be reduced by\na factor of $(m_c\/m_D)^3$.\n\nAnother way of alleviating the problem is to realize that the\nusual local four-quark operators are derived in the heavy quark\nlimit so that the effect of spectator light quarks can be\nneglected. Since the charmed quark is not heavy enough, it is very\nimportant, as stressed by Chernyak \cite{Chern}, for calculations\nwith charmed mesons to account for the nonzero momentum of\nspectator quarks. 
It turns out that the Pauli interference in\n$D^+$ decay is suppressed by a factor of $(\\langle p_c\\rangle-\\langle\np_d\\rangle)^2\/\\langle p_c\\rangle^2=(\\langle p_D\\rangle-2\\langle p_d\\rangle)^2\/m_c^2$, where\n$\\langle p_c\\rangle$ and $\\langle p_d\\rangle$ are the momenta of the $c$ and $\\bar\nd$ quarks, respectively, in the $D^+$ meson. Because the charmed\nquark is not heavy, the spectator $\\bar d$ quark carries a sizable\nfraction of the charmed meson momentum. Consequently, the Pauli\neffect in $D^+$ decay is subject to a large suppression and will\nnot overcome the leading $c$ quark decay width. Based on this\nobservation, in the present paper we will follow \\cite{Chern} to\ntake into account the effects of the spectator quark's momentum\nconsistently. In the framework of heavy quark expansion, this\nspectator effect can be regarded as higher order $1\/m_c$\ncorrections.\n\nIn order to understand the $D$-meson lifetime pattern, it is\nimportant to have a reliable estimate of the hadronic matrix\nelements. In the present paper we will employ the QCD sum rule to\nevaluate the unknown hadronic parameters $B_1,B_2,\\varepsilon_1,\\varepsilon_2$, to\nbe introduced below. In Sec.~\\ref{sec:GF}, we will outline the\ngeneral framework for the study of the charmed meson lifetimes.\nThen in Sec.~\\ref{sec:sum rules} we proceed to compute the\nhadronic parameters using the sum rule approach. 
Sec.~\ref{sec:DC}\npresents results and discussions.\n\n\n\section{General Framework}\label{sec:GF}\nThe inclusive nonleptonic and semileptonic decay\nrates of a charmed meson to order $1\/m_c^2$ are given by \cite{Bigi92,BS93}\n\begin{eqnarray}\n\label{nlspec} \Gamma_{\rm NL,spec}(D) &=& {G_F^2m_c^5\over\n192\pi^3}N_c\,V_{\rm CKM}\, {1\over 2m_D} \Bigg\{\n\left(c_1^2+c_2^2+{2c_1c_2\over N_c}\right)-\n \Big[\alpha I_0(x,0,0)\langle D|\bar cc|D\rangle \nonumber \\\n&-& {1\over m_c^2}I_1(x,0,0) \langle D|\bar cg_s\sigma \cdot G c|D\rangle\n\Big] -{4\over m_c^2}\,{2c_1c_2\over N_c}\,I_2(x,0,0) \langle D|\bar\ncg_s\sigma\cdot G c|D\rangle\Bigg\},\n\end{eqnarray}\nwhere $\sigma\!\cdot\! G=\sigma_{\mu\nu}G^{\mu\nu}$,\n$x=(m_s\/m_c)^2$, $N_c$ is the number of colors, the parameter\n$\alpha$ denotes QCD radiative corrections \cite{Bagan}, and\n\begin{eqnarray}\n\label{sl} \Gamma_{\rm SL}(D) &=& {G_F^2m_c^5\over\n192\pi^3}|V_{cs}|^2\,{ \eta(x,x_\ell,0)\over 2m_D} \nonumber \\\n&\times& \Big[ I_0(x,0,0)\langle D|\bar cc|D\rangle-{1\over\nm_c^2}\,I_1(x,0,0) \langle D|\bar cg_s\sigma\cdot G c|D\rangle \Big] \,,\n\end{eqnarray} where $\eta(x,x_\ell,0)$ with $x_\ell=(m_\ell\/m_Q)^2$ is the\nQCD radiative correction to the semileptonic decay rate and its\ngeneral analytic expression is given in \cite{Hokim}. In\nEqs.~(\ref{nlspec}) and (\ref{sl}), $I_{0,1,2}$ are phase-space\nfactors (see e.g. \cite{Cheng} for their explicit expressions),\nand the factor $V_{\rm CKM}$ takes care of the relevant\nCabibbo-Kobayashi-Maskawa (CKM) matrix elements. 
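For reference, the leading phase-space factor with one massive final-state quark and two massless fermions has the standard closed form $I_0(x,0,0)=1-8x+8x^3-x^4-12x^2\ln x$; the quark masses in the short numerical sketch below are illustrative assumptions, not values taken from the text.

```python
import math

def I0(x):
    # standard tree-level phase-space factor for Q -> q + two massless fermions:
    # I0(x, 0, 0) = 1 - 8x + 8x^3 - x^4 - 12 x^2 ln(x), with x = (m_q / m_Q)^2
    return 1.0 - 8.0 * x + 8.0 * x**3 - x**4 - 12.0 * x**2 * math.log(x)

# illustrative quark masses in GeV (assumed values, not from the text)
m_s, m_c = 0.15, 1.65
x = (m_s / m_c) ** 2
suppression = I0(x)  # close to 1 for small x; I0 vanishes at threshold x = 1
```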
In\nEq.~(\\ref{nlspec}) $c_1$ and $c_2$ are the Wilson coefficients in\nthe effective Hamiltonian.\n\n\nThe two-body matrix elements in Eqs.~(\\ref{nlspec}) and (\\ref{sl})\ncan be parameterized as\n\\begin{eqnarray}\n \\frac{\\langle D|\\bar cc|D \\rangle}{2m_D} &=& 1\n - \\frac{K_D}{2m_c^2}+\\frac{G_D}{2 m_c^2} + O(1\/m_c^3) \\,,\n\\nonumber\\\\\n \\frac{\\langle D|\\bar c{1\\over 2}g_s\\sigma\\cdot G c|D\n \\rangle}{2m_D} &=& {G_D} + O(1\/m_c) \\,,\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n K_D &\\equiv& -\\frac{\\langle D|\\bar h^{(c)}_v\\, (iD_\\perp)^2\nh^{(c)}_v |D \\rangle}{2m_D}=-\\lambda_1\\,,\\nonumber\\\\\nG_D &\\equiv&\n \\frac{\\langle D|\\bar h^{(c)}_v\\,{1\\over 2}g_s\\sigma\\cdot G\nh^{(c)}_v |D \\rangle}{2m_D}=3\\lambda_2 \\,.\n\\end{eqnarray}\nThe nonperturbative parameter $\\lambda_2$ is obtained from the\nmass squared difference of the vector and pseudoscalar mesons:\n\\begin{eqnarray}\n(\\lambda_2)_D &=& {1\\over 4}(m^2_{D^*}-m^2_D)=0.138\\,{\\rm GeV}^2,\n\\nonumber \\\\ (\\lambda_2)_{D_s} &=& {1\\over\n4}(m^2_{D_s^*}-m^2_{D_s})=0.147\\,{\\rm GeV}^2.\n\\end{eqnarray}\nAs for the parameter $\\lambda_1$, it is determined from the mass\nrelation \\cite{Bigi2}\n\\begin{eqnarray}\n(\\lambda_1)_{D_s}-(\\lambda_1)_D\\cong {2m_bm_c\\over m_b-m_c}\\left[\n\\overline m_{B_s}-\\overline m_B-(\\overline m_{D_s}-\\overline m_D)\\right],\n\\end{eqnarray}\nwhere $\\overline m_P={1\\over 4}(m_P+3m_{P^*})$ denotes the spin-averaged\nmeson mass. For $m_b=5.05$ GeV and $m_c=1.65$ GeV, we obtain\n$(\\lambda_1)_{D_s}-(\\lambda_1)_D=-0.067\\,{\\rm GeV}^2$.\n\n\n\\begin{figure}[ht]\n\\vspace{1cm}\n \\leftline{\\hspace{1.1cm} \\epsfig{figure=light.eps,width=12cm,height=9cm}}\n\\vspace{0.9cm}\n \\caption{Nonspectator effects: (a) $W$-exchange,\n(b1) $W$-annihilation, (b2) and (c) Pauli interference.\n\\label{fig:nonspec}} \\vspace{0.5cm}\n\\end{figure}\n\n\nTo the order of $1\/m_c^3$, the nonspectator effects due to the\nPauli interference and $W$-exchange (see Fig. 
1) may contribute\nsignificantly to the lifetime ratios due to the two-body\nphase-space enhancement by a factor of $16\\pi^2$ relative to the\nthree-body phase space for heavy quark decay. As stressed in the\nIntroduction, it is crucial to invoke the effect of the light\nquark's momentum in the charmed meson in order to properly\ndescribe the $D$ lifetimes. For this purpose, the four-quark\noperators relevant to inclusive nonleptonic $D$ decays are\n\\cite{Chern}\n\\begin{eqnarray}\n\\label{nsp} {\\cal L}_{\\rm NL,nspec} &=& {2G_F^2\\over \\pi}\\,V_{\\rm\nCKM}\\Bigg\\{ g^{\\mu\\nu}k^2\\eta_1 \\left[ \\left(2c_1c_2+{1\\over\nN_c}(c^2_1+c_2^2)\\right)O_{\\mu\\nu}^d+2(c_1^2+ c_2^2)T_{\\mu\\nu}^d\n\\right] \\nonumber \\\\ &+&{1\\over 3}(k^\\mu k^\\nu \\eta_2-k^2\ng^{\\mu\\nu}\\eta_3)\\Big[ N_c \\Big( c_2+{1\\over N_c}c_1\\Big)^2\nO_{\\mu\\nu}^u+2c_1^2T_{\\mu\\nu}^u \\nonumber\\\\ &+& N_c\\Big( c_1+{1\\over\nN_c}c_2\\Big)^2 O_{\\mu\\nu}^s+2c_2^2T_{\\mu\\nu}^s \\Big]\\Bigg\\},\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\nO_{\\mu\\nu}^q &=& \\bar c_L\\gamma_\\mu q_L\\,\\bar q_L\\gamma_\\nu c_L,\n\\nonumber\n\\\\ T_{\\mu\\nu}^q &=& \\bar c_L\\gamma_\\mu t^a q_L\\,\\bar\nq_L\\gamma_\\nu t^a c_L,\n\\end{eqnarray}\nwith $t^a=\\lambda^a\/2$ and $\\lambda^a$ being the Gell-Mann\nmatrices, and $\\eta_1,~\\eta_2,~\\eta_3$ are phase-space factors,\ndepending on the number of strange quarks inside the loop of Fig.\n1 \\cite{Chern,NS}:\n\\begin{eqnarray}\n(i)&& \\qquad \\eta_1=(1-x)^2,\\qquad \\eta_2=(1-x)^2(1+{x\\over 2}),\n\\qquad \\eta_3=(1-x)^2(1+2x), \\nonumber \\\\ (ii)&& \\qquad \\eta_1=(1-x)^2,\n\\qquad \\eta_2=\\sqrt{1-4x}\\,(1-x), \\qquad\n\\eta_3=\\sqrt{1-4x}\\,(1+2x),\n\\end{eqnarray}\nfor (i) one strange quark and (ii) two strange quarks in the loop,\nrespectively, with $x=(m_s\/m_c)^2$. Of course, $\\eta_i=1$ in the\nabsence of strange loop quarks. 
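The phase-space factors above are straightforward to tabulate. A short sketch of cases (i) and (ii) (the quark-mass values are illustrative assumptions, not taken from the text):

```python
import math

# eta_1,2,3 from the text, with x = (m_s/m_c)^2, for
# (i) one strange quark and (ii) two strange quarks in the loop.
def etas_one_s(x):
    return ((1 - x) ** 2,
            (1 - x) ** 2 * (1 + x / 2),
            (1 - x) ** 2 * (1 + 2 * x))

def etas_two_s(x):
    root = math.sqrt(1 - 4 * x)
    return ((1 - x) ** 2, root * (1 - x), root * (1 + 2 * x))

x = (0.17 / 1.65) ** 2      # illustrative m_s = 0.17 GeV, m_c = 1.65 GeV
print("one s quark :", tuple(round(e, 4) for e in etas_one_s(x)))
print("two s quarks:", tuple(round(e, 4) for e in etas_two_s(x)))
```

In the massless limit $x\to 0$ all three factors reduce to 1, as stated in the text, and for realistic $m_s$ the corrections stay at the few-percent level.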
In Eq.~(\\ref{nsp}) the first term\nproportional to $g^{\\mu\\nu}k^2$ contributes to the Pauli\ninterference, while the rest to the $W$-exchange or\n$W$-annihilation, where $k$ is the total four-momentum of the\nintegrated quark pair \\cite{Chern}. More specifically, $k=p_c+p_q$\nfor the $W$-exchange and $W$-annihilation, and $k=p_c-p_q$ for the\nPauli interference. In the heavy quark limit, $k\\to p_c$ and it is\neasily seen that (\\ref{nsp}) is reduced to the more familiar form\n\\cite{NS}\n\\begin{eqnarray}\n{\\cal L}_{\\rm NL,nspec} &=& {2G_F^2 m_c^2\\over \\pi}\\,V_{\\rm\nCKM}\\Bigg\\{ \\left(2c_1c_2+{1\\over\nN_c}(c^2_1+c_2^2)\\right)\\eta_1O_{V-A}^d+2(c_1^2+c_2^2)\\eta_1\nT_{V-A}^d \\nonumber\n\\\\ &-& {1\\over 3}N_c\\Big( c_2+{1\\over N_c}c_1\\Big)^2( \\eta_2O_{V-A}^u-\\eta_3O_{\nS-P}^u) -{2\\over 3}c_1^2(\\eta_2T_{V-A}^u-\\eta_3T_{S-P}^u) \\nonumber\\\\\n&-& {1\\over 3} N_c\\Big( c_1+{1\\over N_c}c_2\\Big)^2\n(\\eta_2O_{V-A}^s-\\eta_3O_{ S-P}^s) -{2\\over 3}\nc_2^2(\\eta_2T_{V-A}^s-\\eta_3T_{ S-P}^s)\\Bigg\\},\n\\end{eqnarray}\nwhere use has been made of equations of motion, and\n\\begin{eqnarray}\\label{4qops}\n O_{V-A}^q &=& \\bar c_L\\gamma_\\mu q_L\\,\\bar q_L\\gamma^\\mu c_L\n \\,, \\nonumber\\\\\n O_{S-P}^q &=& \\bar c_R\\,q_L\\,\\bar q_L\\,c_R \\,, \\nonumber\\\\\n T_{V-A}^q &=& \\bar c_L\\gamma_\\mu t^a q_L\\,\n\\bar q_L\\gamma^\\mu t^a c_L \\,, \\nonumber\\\\\n T_{S-P}^q &=& \\bar c_R\\,t^a q_L\\,\\bar q_L\\, t^ac_R \\,,\n\\end{eqnarray}\nwith $q_{R,L}=(1\\pm\\gamma_5)q\/2$.\n\nIn analog to the hadronic parameters defined in \\cite{NS} for the\n$B$ meson sector, we can also define four hadronic parameters\n$B_1,B_2,\\varepsilon_1,\\varepsilon_2$ in the charm sector as\n\\begin{eqnarray}\n\\label{parameters1} {1\\over 2m_{_{D_q}}}\\langle D_q|O^q_{V-A}| D_q\\rangle\n&&\\equiv {f^2_{D_q} m_{_{D_q}} \\over 8}B_1\\,, \\nonumber\\\\ {1\\over\n2m_{_{D_q}}}\\langle D_q|T^q_{V-A}| D_q\\rangle &&\\equiv {f^2_{D_q}\nm_{_{D_q}}\\over 
8}\\varepsilon_1\\,,\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\label{parameters2} {k^\\mu k^\\nu\\over 2m^3_{_{D_q}}} \\langle\nD_q|O_{\\mu\\nu}^q| D_q\\rangle &&\\equiv {f^2_{D_q} m_{_{D_q}}\\over\n8}B_2\\,,\\nonumber\\\\ {k^\\mu k ^\\nu\\over 2m^3_{_{D_q}}} \\langle\nD_q|T_{\\mu\\nu}^q| D_q\\rangle &&\\equiv {f^2_{D_q} m_{_{D_q}}\\over\n8}\\varepsilon_2\\,,\n\\end{eqnarray}\nfor the matrix elements of these four-quark operators between $D$\nmeson states. Under the factorization approximation, $B_i=1$ and\n$\\varepsilon_i=0$ \\cite{NS}.\n\nThe destructive Pauli interference in inclusive nonleptonic $D^+$\nand $D_s^+$ decays and the $W$-exchange contribution to $D^0$ and\nthe $W$-annihilation contribution to $D^+_s$ are\n\\begin{eqnarray}\n\\label{bnonspec}\n \\Gamma^{\\rm exc}(D^0) = &-&\\Gamma_0 \\, \\eta_{\\rm\n nspec}\\, (|V_{cs}|^2 |V_{ud}|^2+|V_{cd}|^2 |V_{us}|^2){m_D^2\\over\n m_c^2}(1-x)^2\\nonumber\\\\\n &&\\times\\Bigg\\{ (1+{1 \\over 2}x)\\Big[({1\\over\n N_c}c_1^2+2c_1c_2+N_cc_2^2)B_1+2c_1^2\\varepsilon_1 \\Big]\\nonumber \\\\\n &&-(1+2x)\\Big[({1\\over N_c}c_1^2+2c_1c_2+N_cc_2^2)B_2\n +2c_1^2\\varepsilon_2\\Big] \\Bigg\\} \\nonumber\\\\\n &-&\\Gamma_0 \\, \\eta_{\\rm\n nspec}\\, |V_{cs}|^2 |V_{us}|^2{m_D^2\\over\n m_c^2}\\sqrt{1-4x}\\nonumber\\\\\n &&\\times\\Bigg\\{ (1-x)\\Big[({1\\over\n N_c}c_1^2+2c_1c_2+N_cc_2^2)B_1+2c_1^2\\varepsilon_1 \\Big]\\nonumber \\\\\n &&-(1+2x)\\Big[({1\\over N_c}c_1^2+2c_1c_2+N_cc_2^2)B_2\n +2c_1^2\\varepsilon_2\\Big] \\Bigg\\}\\nonumber\\\\\n &-&\\Gamma_0 \\, \\eta_{\\rm\n nspec}\\, |V_{cd}|^2 |V_{ud}|^2{m_D^2\\over\n m_c^2}\\Bigg\\{ ({1\\over\n N_c}c_1^2+2c_1c_2+N_cc_2^2)(B_1-B_2)+2c_1^2(\\varepsilon_1-\\varepsilon_2)\n \\Bigg\\},\\nonumber \\\\\n \\Gamma^{\\rm int}_-(D^+) &=&\n \\Gamma_0\\,\\eta_{\\rm nspec}|V_{ud}|^2 (|V_{cs}|^2(1-x)^2+|V_{cd}|^2)\n \\,{(\\langle p_c\\rangle-\\langle p_d\\rangle)^2\\over m_c^2}\\nonumber\\\\\n &&\\times \\left\n [(c_1^2+c_2^2)(B_1+6\\varepsilon_1)+6c_1c_2B_1\\right],\\nonumber\\\\\n 
\\Gamma^{\\rm ann}(D^+_s) &=& -\\Gamma_0\\eta_{\\rm\n nspec} |V_{cs}|^2 |V_{ud}|^2\\, \\,{m_{D_s}^2\\over m_c^2}\\Bigg\\{ ({1\\over\n N_c}c_2^2+2c_1c_2+N_cc_1^2)(B_1-B_2)+2c_2^2(\\varepsilon_1-\\varepsilon_2) \\Bigg\\}\n \\nonumber \\\\\n &&-\\Gamma_0 \\, \\eta_{\\rm\n nspec}\\, |V_{cs}|^2|V_{us}|^2{m_{D_s}^2\\over\n m_c^2}(1-x)^2\\Bigg\\{ (1+{1 \\over 2}x)\\Big[({1\\over\n N_c}c_1^2+2c_1c_2+N_cc_2^2)B_1+2c_1^2\\varepsilon_1 \\Big]\\nonumber \\\\\n &&-(1+2x)\\Big[({1\\over N_c}c_1^2+2c_1c_2+N_cc_2^2)B_2\n +2c_1^2\\varepsilon_2\\Big] \\Bigg\\} \\,,\\nonumber\\\\\n \\Gamma^{\\rm int}_-(D^+_s)\n &=& \\Gamma_0\\,\\eta_{\\rm nspec}|V_{us}|^2\n(|V_{cs}|^2(1-x)^2+|V_{cd}|^2)\\,{(\\langle p_c\\rangle-\\langle\n p_s\\rangle)^2\\over m_c^2} \\nonumber\\\\\n &&\\times\\left\n [(c_1^2+c_2^2)(B_1+6\\varepsilon_1)+6c_1c_2B_1\\right],\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\n\\Gamma_0={G_F^2m_c^5\\over 192\\pi^3},~~~\\eta_{\\rm nspec}=16\n\\pi^2{f_{D_q}^2m_{D_q}\\over m_c^3}.\n\\end{eqnarray}\n\nIn Eq. (\\ref{bnonspec}), $\\langle p_c\\rangle$ and $\\langle p_q\\rangle$ ($q=d,s$)\nare the average momenta of the charmed and light quarks,\nrespectively, in the charmed meson. The square of the sum, $(p_c+p_q)^2$, can be\neffectively replaced by $m_{D_q}^2$, the squared mass of the charmed\nmeson $D_q$. This can be nicely illustrated by the example of\n$D_s\\to\\tau\\bar\\nu_\\tau$ decay with the decay rate:\n\\begin{eqnarray}\n \\Gamma(D_s\\to \\tau\\bar \\nu_\\tau)\n \\simeq\n \\frac{G_F^2 m_\\tau^2 f_{D_s}^2 m_{D_s}}{8\\pi}|V_{cs}|^2\n \\left( 1-\\frac{m_\\tau^2}{m_{D_s}^2}\\right)^2 \\,,\n \\end{eqnarray}\nan expression which can be found in standard textbooks. 
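As a numerical cross-check of this textbook expression, a short sketch (all input values are illustrative assumptions made here, e.g. $f_{D_s}\simeq 0.25$ GeV, and are not taken from the text):

```python
import math

# Numerical illustration of the textbook D_s -> tau nu rate above.
# All inputs are illustrative assumptions:
G_F   = 1.166e-5   # Fermi constant, GeV^-2
m_tau = 1.777      # tau mass, GeV
m_Ds  = 1.968      # D_s mass, GeV
f_Ds  = 0.25       # D_s decay constant, GeV (assumed)
Vcs   = 0.974      # CKM element (assumed)

Gamma = (G_F**2 * m_tau**2 * f_Ds**2 * m_Ds / (8 * math.pi)
         * Vcs**2 * (1 - m_tau**2 / m_Ds**2)**2)   # width in GeV
print(f"Gamma(D_s -> tau nu) ~ {Gamma:.2e} GeV")
```

Dividing by a total width $\hbar/\tau_{D_s}\approx 1.3\times 10^{-12}$ GeV (for $\tau_{D_s}\approx 0.5$ ps) gives a branching fraction of a few percent, in the right ballpark for this mode.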
In the OPE\nstudy, the same decay width is represented by\n \\begin{eqnarray}\n \\Gamma(D_s\\to \\tau\\bar\\nu_\\tau)\n &&\\simeq {G_F^2\\over 6\\pi} |V_{cs}|^2\n \\left[ (p_c+p_{\\bar s})^\\mu (p_c+p_{\\bar s})^\\nu\n -g^{\\mu\\nu}(p_c+p_{\\bar s})^2 +{3\\over\n 2}g^{\\mu\\nu}m_\\tau^2\n \\right]\\nonumber\\\\\n &&\\times{\\langle D_s|(\\bar c\\gamma_\\mu (1-\\gamma_5)s) (\\bar s\\gamma_\\nu\n (1-\\gamma_5)c)|D_s\\rangle \\over 2m_{D_s}}\n \\left( 1-\\frac{m_\\tau^2}{(p_c+p_{\\bar s})^2}\\right)^2 \\,.\n \\end{eqnarray}\nComparing the above two expressions, it is clear that\n$(p_c+p_{\\bar s})^2$ is nothing but $m_{D_s}^2$. Consequently,\n$p_c-p_q$ can be approximated as $p_{D_q}-2p_q$, where $p_q$ can\nbe roughly set to the constituent quark mass $\\sim 350$ MeV.\nCompared to the naive OPE predictions, it is evident from Eq.\n(\\ref{bnonspec}) that the decay widths of $W$-exchange and\n$W$-annihilation are enhanced by a factor of $(m_{D_q}\/m_c)^2$,\nwhereas the Pauli interference is substantially suppressed by a\nfactor of $(p_{D_q}-2p_q)^2\/m_c^2\\sim 0.5\\,$.\n\n\n\\section{QCD sum rule calculations of four-quark matrix\nelements}\\label{sec:sum rules} In order to calculate the\nfour-quark matrix elements appearing in the formulas for the $D$\nmeson lifetimes within the QCD sum rule approach, it is convenient\nto adopt the following parametrization:\n\\begin{eqnarray}\n&&\\langle D_q(p^D)|O^q_{\\mu\\nu}|D_q(p^D) \\rangle =(B p^D_\\mu p^D_\\nu +\n\\delta B\\, g_{\\mu\\nu}m_{D_q}^2) {f_{D_q}^2\\over\n 4}\\,,\\nonumber\\\\\n&&\\langle D_q(p^D)|T^q_{\\mu\\nu}|D_q(p^D) \\rangle =(\\varepsilon p^D_\\mu p^D_\\nu +\n\\delta \\varepsilon\\, g_{\\mu\\nu} m_{D_q}^2) {f_{D_q}^2\\over4}\\,,\n\\end{eqnarray}\nwhere the relations between $B, \\delta B, \\varepsilon, \\delta\\varepsilon$ and the\nparameters $B_{1,2}, \\varepsilon_{1,2}$ defined in Eqs.\n(\\ref{parameters1}) and (\\ref{parameters2}) are\n\\begin{eqnarray}\n &&B_1=B+4\\delta B, \\ \\ B_2=B+\\delta 
B\\,,\\nonumber\\\\\n &&\\varepsilon_1=\\varepsilon+4\\delta \\varepsilon, \\ \\ \\varepsilon_2=\\varepsilon+\\delta \\varepsilon\\,.\n\\end{eqnarray}\n\nUnlike the $B$ meson case, it is preferable to begin the study of the\n$D$ meson directly in the full theory, for several reasons: (1) In\nthe QCD sum rule study of the full theory, the working Borel\nwindow of the $D$ meson case is about 2.0 GeV$^2