\\section{MCCFR with baseline-corrected values}\n\\label{sec:pseudocode}\n\nPseudocode for MCCFR with baseline-corrected values is given in Algorithm~\\ref{alg:bmccfr}. Quantities of the form $\\sigma^\\time(h, \\cdot)$ refer to the vector of all quantities $\\sigma^\\time(h, a)$ for $a \\in A(h)$. A version for the predictive baseline, which must calculate extra values, is given in Algorithm~\\ref{alg:predmccfr}. Each of these algorithms has the same worst-case iteration complexity as MCCFR without baselines, namely $\\BigO{d|A_{\\text{max}}|}$ where $d$ is the tree's depth and $|A_{\\text{max}}| = \\max_h |A(h)|$.\n\n\\begin{algorithm}[htb]\n\\caption{MCCFR w\/ baseline}\n\\label{alg:bmccfr}\n\\begin{algorithmic}[1]\n\\Statex\n\\Function{MCCFR}{$h$}\n \\Iif{$h \\in Z$}{\\Return $u(h)$}\n \\State $\\sigma^\\time(h, \\cdot) \\gets \\Call{RegretMatching}{\\Regret{\\time-1}(I(h), \\cdot)}$\n \\State $\\overline{\\strategy}^\\time(h, \\cdot) \\gets \\frac{\\time-1}{\\time} \\overline{\\strategy}^{\\time - 1}(h, \\cdot) + \\frac{1}{\\time} \\sigma^\\time$\n \\State sample action $a \\sim \\sampling{\\time}(h, \\cdot)$\n \\State $\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} \\gets \\Call{MCCFR}{(ha)}$\\label{line:recursion}\n \\State $\\baseutil{h, a'}{\\sigma^\\time}{z^\\time} \\gets b^\\time(h, a') \\qquad \\forall a' \\neq a$\n \\State $\\baseutil{h, a}{\\sigma^\\time}{z^\\time} \\gets b^\\time(h, a) + \\frac{1}{\\sampling{\\time}(h, a)}\\left(\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} - b^\\time(h, a)\\right)$\n \\State $\\baseutil{h}{\\sigma^\\time}{z^\\time} \\gets \\sum_{a'} \\sigma^\\time(h, a') \\baseutil{h, a'}{\\sigma^\\time}{z^\\time}$\n \\If {$P(h) = 1$}\n \\State $\\regret{\\time}(I(h), a) \\gets 
\\frac{\\reach{\\sigma^\\time}_2(h)}{\\reach{\\sampling{\\time}}(h)}\\left(\\baseutil{h, \\cdot}{\\sigma^\\time}{z^\\time} - \\baseutil{h}{\\sigma^\\time}{z^\\time}\\right)$\n \\ElsIf {$P(h) = 2$}\n \\State $\\regret{\\time}(I(h), a) \\gets \\frac{\\reach{\\sigma^\\time}_1(h)}{\\reach{\\sampling{\\time}}(h)}\\left({-\\baseutil{h, \\cdot}{\\sigma^\\time}{z^\\time}} + \\baseutil{h}{\\sigma^\\time}{z^\\time}\\right)$\n \\EndIf\n \\State $\\Regret{\\time}(I(h), \\cdot) \\gets \\Regret{\\time-1}(I(h), \\cdot) + \\regret{\\time}(I(h), \\cdot)$\\label{line:regupdate}\n \\State $b^{\\time+1}(h,a) \\gets \\Call{UpdateBaseline}{b^\\time(h,a), \\baseutil{(ha)}{\\sigma^\\time}{z^\\time}}$\n \\State \\Return $\\baseutil{h}{\\sigma^\\time}{z^\\time}$\\label{line:return}\n\\EndFunction\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[htb]\n\\caption{MCCFR w\/ predictive baseline}\n\\label{alg:predmccfr}\n\\begin{algorithmic}[1]\n\\Statex\n\\Function{MCCFR}{$h$}\n \\Iif{$h \\in Z$}{\\Return $u(h), u(h)$}\n \\State $\\sigma^\\time(h, \\cdot) \\gets \\Call{RegretMatching}{\\Regret{\\time-1}(I(h), \\cdot)}$\n \\State $\\overline{\\strategy}^\\time(h, \\cdot) \\gets \\frac{\\time-1}{\\time} \\overline{\\strategy}^{\\time - 1}(h, \\cdot) + \\frac{1}{\\time} \\sigma^\\time$\n \\State sample action $a \\sim \\sampling{\\time}(h, \\cdot)$\n \\State $\\baseutil{(ha)}{\\sigma^\\time}{z^\\time}, \\baseutil{(ha)}{\\sigma^{\\time+1}}{z^\\time} \\gets \\Call{MCCFR}{(ha)}$\n \\State $\\baseutil{h, a'}{\\sigma^\\time}{z^\\time} \\gets b^\\time(h, a') \\qquad \\forall a' \\neq a$\n \\State $\\baseutil{h, a}{\\sigma^\\time}{z^\\time} \\gets b^\\time(h, a) + \\frac{1}{\\sampling{\\time}(h, a)}\\left(\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} - b^\\time(h, a)\\right)$\n \\State $\\baseutil{h}{\\sigma^\\time}{z^\\time} \\gets \\sum_{a'} \\sigma^\\time(h, a') \\baseutil{h, a'}{\\sigma^\\time}{z^\\time}$\n \\If {$P(h) = 1$}\n \\State $\\regret{\\time}(I(h), a) \\gets 
\\frac{\\reach{\\sigma^\\time}_2(h)}{\\reach{\\sampling{\\time}}(h)}\\left(\\baseutil{h, \\cdot}{\\sigma^\\time}{z^\\time} - \\baseutil{h}{\\sigma^\\time}{z^\\time}\\right)$\n \\ElsIf {$P(h) = 2$}\n \\State $\\regret{\\time}(I(h), a) \\gets \\frac{\\reach{\\sigma^\\time}_1(h)}{\\reach{\\sampling{\\time}}(h)}\\left({-\\baseutil{h, \\cdot}{\\sigma^\\time}{z^\\time}} + \\baseutil{h}{\\sigma^\\time}{z^\\time}\\right)$\n \\EndIf\n \\State $\\Regret{\\time}(I(h), \\cdot) \\gets \\Regret{\\time-1}(I(h), \\cdot) + \\regret{\\time}(I(h), \\cdot)$\n \\State $b^{\\time+1}(h,a) \\gets \\baseutil{(ha)}{\\sigma^{\\time+1}}{z^\\time}$\n \\State $\\sigma^{\\time+1}(h, \\cdot) \\gets \\Call{RegretMatching}{\\Regret{\\time}(I(h), \\cdot)}$\n \\State $\\baseutil{h, a'}{\\sigma^{\\time+1}}{z^\\time} \\gets b^\\time(h, a') \\qquad \\forall a' \\neq a$\n \\State $\\baseutil{h, a}{\\sigma^{\\time+1}}{z^\\time} \\gets b^{\\time+1}(h, a)$\n \\State $\\baseutil{h}{\\sigma^{\\time+1}}{z^\\time} \\gets \\sum_{a'} \\sigma^{\\time+1}(h, a') \\baseutil{h, a'}{\\sigma^{\\time+1}}{z^\\time}$\n \\State \\Return $\\baseutil{h}{\\sigma^\\time}{z^\\time}, \\baseutil{h}{\\sigma^{\\time+1}}{z^\\time}$\n\\EndFunction\n\\end{algorithmic}\n\\end{algorithm}\n\n\\section{Proof of Theorem~\\ref{thm:unbiased}}\n\nThis proof is a simplified version of the proof of Lemma~5 in \\citet{Schmid19}.\n\nWe directly analyze the expectation of the baseline-corrected utility:\n\\begin{align*}\n&\\Exp[z^\\time]{\\baseutil{h, a}{\\sigma^\\time}{z^\\time} \\mid z^\\time \\sqsupseteq h}\\\\\n&= \\Prob{(ha) \\sqsubseteq z^\\time \\mid h \\sqsubseteq z^\\time}\\left(\\frac{1}{\\sampling{\\time}(h, a)}\\left(\\Exp[z^\\time]{\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} \\mid z^\\time \\sqsupseteq (ha)} - b^\\time(h, a)\\right) + b^\\time(h, a)\\right)\\\\\n&\\quad + \\Prob{(ha) \\not\\sqsubseteq z^\\time \\mid h \\sqsubseteq z^\\time}(b^\\time(h, a))\\\\\n&= \\sampling{\\time}(h, a)\\left(\\frac{1}{\\sampling{\\time}(h, 
a)}\\left(\\Exp[z^\\time]{\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} \\mid z^\\time \\sqsupseteq (ha)} - b^\\time(h, a)\\right) + b^\\time(h, a)\\right)\\\\\n&\\quad + (1 - \\sampling{\\time}(h, a))(b^\\time(h, a))\\\\\n&= \\Exp[z^\\time]{\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} \\mid z^\\time \\sqsupseteq (ha)}\n\\end{align*}\n\nWe now proceed by induction on the height of $(ha)$ in the tree. If $(ha)$ has height 0, then $(ha) \\in Z$ and $\\Exp[z^\\time]{\\baseutil{h, a}{\\sigma^\\time}{z^\\time} \\mid z^\\time \\sqsupseteq h} = \\Exp[z^\\time]{\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} \\mid z^\\time \\sqsupseteq (ha)} = u((ha))$ by definition.\n\nFor the inductive step, consider arbitrary $h, a$ such that $(ha)$ has height more than 0. We assume that $\\Exp[z^\\time]{\\baseutil{h', a'}{\\sigma^\\time}{z^\\time} \\mid z^\\time \\sqsupseteq h'} = \\exputil{(h'a')}{\\sigma^\\time}$ for all $h', a'$ such that $(h'a')$ has smaller height than $(ha)$. We then have\n\\begin{align*}\n&\\hspace{-10pt}\\Exp[z^\\time]{\\baseutil{h, a}{\\sigma^\\time}{z^\\time} \\mid z^\\time \\sqsupseteq h}\\\\\n&= \\Exp[z^\\time]{\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} \\mid z^\\time \\sqsupseteq (ha)}\\\\\n&= \\sum_{a' \\in A((ha))} \\sigma^\\time((ha), a') \\Exp[z^\\time]{\\baseutil{(ha), a'}{\\sigma^\\time}{z^\\time} \\mid z^\\time \\sqsupseteq (ha)}\\\\\n&= \\sum_{a' \\in A((ha))} \\sigma^\\time((ha), a') \\exputil{(ha\\act')}{\\sigma^\\time} &&\\text{by inductive hypothesis}\\\\\n&= \\exputil{(ha)}{\\sigma^\\time} &&\\text{by definition}\n\\end{align*}\nWe are able to apply the inductive hypothesis because $(ha\\act')$ is an extension of $(ha)$ and thus must have smaller height. The proof follows by induction. 
\\qed\n\n\\section{Proof of Theorem \\ref{thm:variance}}\n\nThis proof is similar to the proof of Lemma~3 in \\citet{Schmid19}.\n\nWe begin by proving that the assumption of the theorem necessitates that the baseline-corrected values are the true expected values.\n\n\\begin{lem} Assume that we have a baseline that satisfies $b^\\time(h,a) = \\exputil{(ha)}{\\sigma^\\time}$ for all $h \\in H$, $a \\in A(h)$. Then for any $h, a, z^\\time$,\n\\begin{equation}\n\\baseutil{h, a}{\\sigma^\\time}{z^\\time} = \\exputil{(ha)}{\\sigma^\\time}\n\\end{equation}\n\\label{lem:trueval}\n\\end{lem}\n\n\\begin{proof}\nAs for Theorem~\\ref{thm:unbiased}, we prove this by induction on the height of $(ha)$. If $(ha) \\in Z$, then by definition $\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} = u((ha)) = \\exputil{(ha)}{\\sigma^\\time}$. This then means\n\\begin{align*}\n&\\hspace{-10pt}\\baseutil{h, a}{\\sigma^\\time}{z^\\time}\\\\\n&= \\frac{\\mathbbm{1}((ha) \\sqsubseteq z^\\time)}{\\sampling{\\time}(h, a)} \\left(\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} - b^\\time(h, a)\\right) + b^\\time(h, a)\\\\\n&= \\frac{\\mathbbm{1}((ha) \\sqsubseteq z^\\time)}{\\sampling{\\time}(h, a)} \\left(\\exputil{(ha)}{\\sigma^\\time} - b^\\time(h, a)\\right) + b^\\time(h, a)\\\\\n&= \\frac{\\mathbbm{1}((ha) \\sqsubseteq z^\\time)}{\\sampling{\\time}(h, a)} \\left(\\exputil{(ha)}{\\sigma^\\time} - \\exputil{(ha)}{\\sigma^\\time}\\right) + \\exputil{(ha)}{\\sigma^\\time} &&\\text{by lemma assumption}\\\\\n&= \\exputil{(ha)}{\\sigma^\\time}\n\\end{align*}\n\nFor the inductive step, we assume that $\\baseutil{h', a'}{\\sigma^\\time}{z^\\time} = \\exputil{(h'a')}{\\sigma^\\time}$ for any $(h'a')$ with smaller height than $(ha)$. 
Then\n\\begin{align*}\n\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} &= \\sum_{a' \\in A((ha))} \\sigma^\\time((ha), a') \\baseutil{(ha), a'}{\\sigma^\\time}{z^\\time}\\\\\n&= \\sum_{a' \\in A((ha))} \\sigma^\\time((ha), a') \\exputil{(ha\\act')}{\\sigma^\\time} &&\\text{by inductive hypothesis}\\\\\n&= \\exputil{(ha)}{\\sigma^\\time}\n\\end{align*}\nand this in turn gives us\n\\begin{align*}\n&\\hspace{-10pt}\\baseutil{h, a}{\\sigma^\\time}{z^\\time}\\\\\n&= \\frac{\\mathbbm{1}((ha) \\sqsubseteq z^\\time)}{\\sampling{\\time}(h, a)} \\left(\\exputil{(ha)}{\\sigma^\\time} - b^\\time(h, a)\\right) + b^\\time(h, a)\\\\\n&= \\frac{\\mathbbm{1}((ha) \\sqsubseteq z^\\time)}{\\sampling{\\time}(h, a)} \\left(\\exputil{(ha)}{\\sigma^\\time} - \\exputil{(ha)}{\\sigma^\\time}\\right) + \\exputil{(ha)}{\\sigma^\\time} &&\\text{by lemma assumption}\\\\\n&= \\exputil{(ha)}{\\sigma^\\time}\n\\end{align*}\nThe proof of the lemma follows by induction.\n\\end{proof}\n\nTo finish the proof of Theorem~\\ref{thm:variance}, we only have to note that\n\\begin{align*}\n\\Var[z^\\time]{\\baseutil{h, a}{\\sigma^\\time}{z^\\time} | z^\\time \\sqsupseteq h} &= \\Var[z^\\time]{\\exputil{(ha)}{\\sigma^\\time} | z^\\time \\sqsupseteq h} &&\\text{by Lemma~\\ref{lem:trueval}}\\\\\n&= 0\n\\end{align*}\nThe last step follows because the expected utility is not a random variable. \\qed\n\n\\section{Further variance analysis}\n\nTheorem~\\ref{thm:variance} shows that if the baseline function exactly predicts the true expected utility, then the baseline-corrected sampled values will have zero variance. However, it doesn't show what happens to the variance when the baseline function only approximates the expected utility. 
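As a concrete illustration of these two results, the following Python sketch (our own toy construction, not part of any implementation: a single history whose children are all terminal) enumerates the sampling outcomes exactly. It checks that the baseline-corrected estimate is unbiased for every choice of baseline, and that its variance decreases to exactly zero as the baseline approaches the true values.

```python
def corrected_value(u, sigma, q, b, sampled):
    """Baseline-corrected estimate of the value of a history whose
    children are all terminal, given the index of the sampled action."""
    v = list(b)                       # unsampled actions: baseline only
    v[sampled] = b[sampled] + (u[sampled] - b[sampled]) / q[sampled]
    return sum(s * x for s, x in zip(sigma, v))

def mean_and_var(u, sigma, q, b):
    """Exact mean/variance of the estimate over the sampled action."""
    vals = [corrected_value(u, sigma, q, b, a) for a in range(len(u))]
    mean = sum(qa * va for qa, va in zip(q, vals))
    var = sum(qa * (va - mean) ** 2 for qa, va in zip(q, vals))
    return mean, var

u = [1.0, -1.0]        # true utilities of the two (terminal) children
sigma = [0.3, 0.7]     # current strategy at the history
q = [0.5, 0.5]         # sampling distribution
true_value = sum(s * x for s, x in zip(sigma, u))

# Unbiased for any baseline:
for b in ([0.0, 0.0], [1.0, -1.0], [5.0, 2.0]):
    mean, _ = mean_and_var(u, sigma, q, b)
    assert abs(mean - true_value) < 1e-12

# Variance shrinks as the baseline error shrinks, and is exactly
# zero for the perfect baseline:
variances = [mean_and_var(u, sigma, q, [x + e for x in u])[1]
             for e in (2.0, 1.0, 0.0)]
assert variances[0] > variances[1] > variances[2] == 0.0
```

Because the children are terminal, the only randomness is the sampled action, so the expectation and variance can be computed in closed form rather than estimated.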
In this section, we show that the variance is a function of the differences between the baseline estimates and the true expected values.\n\n\\setcounter{thm}{3}\n\\begin{thm}\nFor any baseline function $b^\\time$ and any $h, a$\n\\begin{equation*}\n\\Var[z^\\time]{\\baseutil{h, a}{\\sigma^\\time}{z^\\time} | z^\\time \\sqsupseteq h} \\leq \\sum_{(h'a') \\sqsupseteq (ha)} \\frac{(\\reach{\\sigma^\\time}((ha), (h'a')))^2}{\\reach{\\sampling{\\time}}(h, (h'a'))} \\left(\\exputil{(h'a')}{\\sigma^\\time} - b^\\time(h', a') \\right)^2\n\\end{equation*}\n\\label{thm:fullvar}\n\\end{thm}\n\nTheorem~\\ref{thm:fullvar} shows that as the baseline estimates $b^\\time(h, a)$ approach the expected values $\\exputil{(ha)}{\\sigma^\\time}$, the variance converges to zero. It also establishes a bound on how quickly this happens.\n\nBefore proving Theorem~\\ref{thm:fullvar}, we first examine how the full (trajectory) variance can be decomposed into contributions from individual actions.\n\n\\begin{lem}\nFor any baseline function $b^\\time$ and any $h \\in H$\n\\begin{align*}\n\\Var[z^\\time]{\\baseutil{h}{\\sigma^\\time}{z^\\time} | z^\\time \\sqsupseteq h} = &\\sum_{a \\in A(h)} \\frac{\\left(\\sigma^\\time(h, a)\\right)^2}{\\sampling{\\time}(h, a)} \\Var[z^\\time]{\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} \\middle| z^\\time \\sqsupseteq (ha)}\\\\ &~+ \\Var[a]{\\frac{\\sigma^\\time(h, a)}{\\sampling{\\time}(h, a)} \\left(\\exputil{(ha)}{\\sigma^\\time} - b^\\time(h, a) \\right)}\n\\end{align*}\n\\label{lem:recvar}\n\\end{lem}\n\n\n\\begingroup\n\\allowdisplaybreaks\n\n\\begin{proof}\nWe use the law of total variance, conditioning on which $a$ is sampled at $h$. 
This gives us\n\\begin{align}\n&\\Var[z^\\time]{\\baseutil{h}{\\sigma^\\time}{z^\\time} | z^\\time \\sqsupseteq h}\\nonumber\\\\&\\quad = \\Exp[a]{\\Var[z^\\time]{\\baseutil{h}{\\sigma^\\time}{z^\\time} \\middle| z^\\time \\sqsupseteq (ha)}} + \\Var[a]{\\Exp[z^\\time]{\\baseutil{h}{\\sigma^\\time}{z^\\time} \\middle| z^\\time \\sqsupseteq (ha)}}\\label{eq:ltv}\n\\end{align}\nWe analyze each of these terms separately.\n\nFirst, to analyze the left summand in (\\ref{eq:ltv}), we note that if $(ha) \\sqsubseteq z^\\time$, then by the recursive definition of baseline-corrected values\n\\begin{align*}\n\\baseutil{h}{\\sigma^\\time}{z^\\time} = \\frac{\\sigma^\\time(h, a)}{\\sampling{\\time}(h, a)} \\baseutil{(ha)}{\\sigma^\\time}{z^\\time} - \\frac{\\sigma^\\time(h, a)}{\\sampling{\\time}(h, a)} b^\\time(h, a) + \\sum_{a' \\in A(h)} \\sigma^\\time(h, a') b^\\time(h, a')\n\\end{align*}\nOnly the first term depends on the sampled trajectory $z^\\time$, and thus\n\\begin{align}\n\\Exp[a]{\\Var[z^\\time]{\\baseutil{h}{\\sigma^\\time}{z^\\time} \\middle| z^\\time \\sqsupseteq (ha)}} &= \\Exp[a]{\\Var[z^\\time]{\\frac{\\sigma^\\time(h, a)}{\\sampling{\\time}(h, a)} \\baseutil{(ha)}{\\sigma^\\time}{z^\\time} \\middle| z^\\time \\sqsupseteq (ha)}}\\nonumber\\\\\n&= \\Exp[a]{\\left(\\frac{\\sigma^\\time(h, a)}{\\sampling{\\time}(h, a)}\\right)^2\\Var[z^\\time]{\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} \\middle| z^\\time \\sqsupseteq (ha)}}\\nonumber\\\\\n&=\\sum_{a \\in A(h)} \\frac{\\left(\\sigma^\\time(h, a)\\right)^2}{\\sampling{\\time}(h, a)} \\Var[z^\\time]{\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} \\middle| z^\\time \\sqsupseteq (ha)}\\label{eq:lem2eq2}\n\\end{align}\n\nNext, we analyze the inner expectation of the right summand of (\\ref{eq:ltv})\n\\begin{align*}\n&\\hspace{-30pt}\\Exp[z^\\time]{\\baseutil{h}{\\sigma^\\time}{z^\\time} \\middle| z^\\time \\sqsupseteq (ha)}\\\\\n&= \\sum_{a'} \\sigma^\\time(h, a') b^\\time(h, a') + \\frac{\\sigma^\\time(h, 
a)}{\\sampling{\\time}(h, a)} \\left(\\Exp[z^\\time]{\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} \\middle| z^\\time \\sqsupseteq (ha)} - b^\\time(h, a) \\right)\\\\\n&= \\sum_{a'} \\sigma^\\time(h, a') b^\\time(h, a') + \\frac{\\sigma^\\time(h, a)}{\\sampling{\\time}(h, a)} \\left(\\exputil{(ha)}{\\sigma^\\time} - b^\\time(h, a) \\right)\n\\end{align*}\nThe first term here doesn't depend on the sampled $a$, giving us\n\\begin{equation}\n\\Var[a]{\\Exp[z^\\time]{\\baseutil{h}{\\sigma^\\time}{z^\\time} \\middle| (ha) \\sqsubseteq z^\\time}} = \\Var[a]{\\frac{\\sigma^\\time(h, a)}{\\sampling{\\time}(h, a)} \\left(\\exputil{(ha)}{\\sigma^\\time} - b^\\time(h, a) \\right)}\\label{eq:lem2eq3}\n\\end{equation}\n\nCombining (\\ref{eq:ltv}), (\\ref{eq:lem2eq2}), and (\\ref{eq:lem2eq3}) completes the proof.\n\\end{proof}\n\nLemma~\\ref{lem:recvar} decomposes the variance into a part from the immediately sampled action, and a part from the remainder of the sampled trajectory. We extend this to completely decompose the trajectory variance.\n\n\\begin{lem}\nFor any baseline function $b^\\time$ and any $h \\in H$\n\\begin{equation*}\n\\Var[z^\\time]{\\baseutil{h}{\\sigma^\\time}{z^\\time} | z^\\time \\sqsupseteq h} = \\sum_{h' \\sqsupseteq h} \\frac{(\\reach{\\sigma^\\time}(h, h'))^2}{\\reach{\\sampling{\\time}}(h, h')} \\Var[a']{\\frac{\\sigma^\\time(h', a')}{\\sampling{\\time}(h', a')} \\left(\\exputil{(h'a')}{\\sigma^\\time} - b^\\time(h', a') \\right)}\n\\end{equation*}\n\\label{lem:fullvar}\n\\end{lem}\n\\begin{proof}\nWe proceed by induction on the height of $h$ in the tree. If $h$ has height 0, then $A(h) = \\emptyset$, and $\\Var[z^\\time]{\\baseutil{h}{\\sigma^\\time}{z^\\time} | z^\\time \\sqsupseteq h} = 0$. Otherwise, we begin from Lemma~\\ref{lem:recvar} and apply the inductive hypothesis for $h'$ with height less than that of $h$. 
This gives\n\\begin{align*}\n&\\hspace{-20pt}\\Var[z^\\time]{\\baseutil{h}{\\sigma^\\time}{z^\\time} | z^\\time \\sqsupseteq h}\\\\ \n&= \\sum_{a \\in A(h)} \\frac{\\left(\\sigma^\\time(h, a)\\right)^2}{\\sampling{\\time}(h, a)} \\Var[z^\\time]{\\baseutil{(ha)}{\\sigma^\\time}{z^\\time}} + \\Var[a]{\\frac{\\sigma^\\time(h, a)}{\\sampling{\\time}(h, a)} \\left(\\exputil{(ha)}{\\sigma^\\time} - b^\\time(h, a) \\right)}\\\\\n&= \\sum_{a \\in A(h)} \\frac{\\left(\\sigma^\\time(h, a)\\right)^2}{\\sampling{\\time}(h, a)} \\sum_{h' \\sqsupseteq (ha)} \\frac{(\\reach{\\sigma^\\time}((ha), h'))^2}{\\reach{\\sampling{\\time}}((ha), h')} \\Var[a']{\\frac{\\sigma^\\time(h', a')}{\\sampling{\\time}(h', a')} \\left(\\exputil{(h'a')}{\\sigma^\\time} - b^\\time(h', a') \\right)}\\\\\n&\\qquad + \\Var[a]{\\frac{\\sigma^\\time(h, a)}{\\sampling{\\time}(h, a)} \\left(\\exputil{(ha)}{\\sigma^\\time} - b^\\time(h, a) \\right)}\\\\\n&= \\sum_{a \\in A(h)} \\sum_{h' \\sqsupseteq (ha)} \\frac{(\\reach{\\sigma^\\time}(h, h'))^2}{\\reach{\\sampling{\\time}}(h, h')} \\Var[a']{\\frac{\\sigma^\\time(h', a')}{\\sampling{\\time}(h', a')} \\left(\\exputil{(h'a')}{\\sigma^\\time} - b^\\time(h', a') \\right)}\\\\\n&\\qquad + \\Var[a]{\\frac{\\sigma^\\time(h, a)}{\\sampling{\\time}(h, a)} \\left(\\exputil{(ha)}{\\sigma^\\time} - b^\\time(h, a) \\right)}\\\\\n&= \\sum_{h' \\sqsupseteq h} \\frac{(\\reach{\\sigma^\\time}(h, h'))^2}{\\reach{\\sampling{\\time}}(h, h')} \\Var[a']{\\frac{\\sigma^\\time(h', a')}{\\sampling{\\time}(h', a')} \\left(\\exputil{(h'a')}{\\sigma^\\time} - b^\\time(h', a') \\right)}\n\\end{align*}\nThe lemma follows by induction.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:fullvar}]\nStarting from Lemma~\\ref{lem:fullvar}, we first bound the variance of history values\n\\begin{align}\n&\\hspace{-20pt}\\Var[z^\\time]{\\baseutil{h}{\\sigma^\\time}{z^\\time} | z^\\time \\sqsupseteq h}\\nonumber\\\\\n&= \\sum_{h' \\sqsupseteq h} \\frac{(\\reach{\\sigma^\\time}(h, 
h'))^2}{\\reach{\\sampling{\\time}}(h, h')} \\Var[a']{\\frac{\\sigma^\\time(h', a')}{\\sampling{\\time}(h', a')} \\left(\\exputil{(h'a')}{\\sigma^\\time} - b^\\time(h', a') \\right)}\\nonumber\\\\\n&\\leq \\sum_{h' \\sqsupseteq h} \\frac{(\\reach{\\sigma^\\time}(h, h'))^2}{\\reach{\\sampling{\\time}}(h, h')} \\Exp[a']{\\left(\\frac{\\sigma^\\time(h', a')}{\\sampling{\\time}(h', a')} \\left(\\exputil{(h'a')}{\\sigma^\\time} - b^\\time(h', a') \\right)\\right)^2}\\nonumber\\\\\n&= \\sum_{h' \\sqsupseteq h} \\frac{(\\reach{\\sigma^\\time}(h, h'))^2}{\\reach{\\sampling{\\time}}(h, h')} \\sum_{a' \\in A(h')} \\frac{\\left(\\sigma^\\time(h', a')\\right)^2}{\\sampling{\\time}(h', a')} \\left(\\exputil{(h'a')}{\\sigma^\\time} - b^\\time(h', a') \\right)^2\\nonumber\\\\\n&= \\sum_{\\substack{h' \\sqsupseteq h\\\\ a' \\in A(h')}} \\frac{(\\reach{\\sigma^\\time}(h, (h'a')))^2}{\\reach{\\sampling{\\time}}(h, (h'a'))} \\left(\\exputil{(h'a')}{\\sigma^\\time} - b^\\time(h', a') \\right)^2\\label{eq:thm4eq1}\n\\end{align}\n\nWe then reformulate the variance of the history action value $\\baseutil{h, a}{\\sigma^\\time}{z^\\time}$ in terms of the variance of the succeeding history value $\\baseutil{(ha)}{\\sigma^\\time}{z^\\time}$. 
To do this, we apply the law of total variance conditioning on the random variable $\\mathbbm{1}((ha) \\sqsubseteq z^\\time)$ which indicates whether $a$ is sampled at $h$.\n\\begin{align}\n&\\hspace{-20pt}\\Var[z^\\time]{\\baseutil{h, a}{\\sigma^\\time}{z^\\time} \\middle| z^\\time \\sqsupseteq h}\\nonumber\\\\\n&= \\Var[z^\\time]{\\frac{\\mathbbm{1}((ha) \\sqsubseteq z^\\time)}{\\sampling{\\time}(h, a)} \\left(\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} - b^\\time(h, a)\\right) + b^\\time(h, a) \\middle| z^\\time \\sqsupseteq h}\\nonumber\\\\\n&= \\Var[z^\\time]{\\frac{\\mathbbm{1}((ha) \\sqsubseteq z^\\time)}{\\sampling{\\time}(h, a)} \\left(\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} - b^\\time(h, a)\\right) \\middle| z^\\time \\sqsupseteq h}\\nonumber\\\\\n&= \\Exp{\\Var[z^\\time]{\\frac{\\mathbbm{1}((ha) \\sqsubseteq z^\\time)}{\\sampling{\\time}(h, a)} \\left(\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} - b^\\time(h, a)\\right) \\middle| \\mathbbm{1}((ha) \\sqsubseteq z^\\time)}}\\nonumber\\\\\n&\\qquad + \\Var{\\Exp[z^\\time]{\\frac{\\mathbbm{1}((ha) \\sqsubseteq z^\\time)}{\\sampling{\\time}(h, a)} \\left(\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} - b^\\time(h, a)\\right) \\middle| \\mathbbm{1}((ha) \\sqsubseteq z^\\time)}}\\nonumber\\\\\n&= \\Exp{\\frac{\\mathbbm{1}((ha) \\sqsubseteq z^\\time)}{(\\sampling{\\time}(h, a))^2}\\Var[z^\\time]{\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} \\middle| \\mathbbm{1}((ha) \\sqsubseteq z^\\time)}}\\nonumber\\\\\n&\\qquad + \\Var{\\frac{\\mathbbm{1}((ha) \\sqsubseteq z^\\time)}{\\sampling{\\time}(h, a)} \\left(\\exputil{(ha)}{\\sigma^\\time} - b^\\time(h, a)\\right)}\\nonumber\\\\\n&= \\frac{1}{\\sampling{\\time}(h, a)}\\Var[z^\\time]{\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} \\middle| z^\\time \\sqsupseteq (ha)}\\nonumber\\\\\n&\\qquad + \\frac{1}{(\\sampling{\\time}(h, a))^2}\\left(\\exputil{(ha)}{\\sigma^\\time} - b^\\time(h, a)\\right)^2\\Var{\\mathbbm{1}((ha) \\sqsubseteq z^\\time)}\\nonumber\\\\\n&= 
\\frac{1}{\\sampling{\\time}(h, a)}\\Var[z^\\time]{\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} \\middle| z^\\time \\sqsupseteq (ha)} + \\frac{1 - \\sampling{\\time}(h, a)}{\\sampling{\\time}(h, a)}\\left(\\exputil{(ha)}{\\sigma^\\time} - b^\\time(h, a)\\right)^2\\nonumber\\\\\n&\\leq \\frac{1}{\\sampling{\\time}(h, a)}\\left(\\Var[z^\\time]{\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} \\middle| z^\\time \\sqsupseteq (ha)} + \\left(\\exputil{(ha)}{\\sigma^\\time} - b^\\time(h, a)\\right)^2\\right)\\label{eq:thm4eq2}\n\\end{align}\n\nCombining (\\ref{eq:thm4eq1}) and (\\ref{eq:thm4eq2}), we get\n\\begin{align*}\n&\\hspace{-20pt}\\Var[z^\\time]{\\baseutil{h, a}{\\sigma^\\time}{z^\\time} \\middle| z^\\time \\sqsupseteq h}\\\\\n&\\leq \\frac{1}{\\sampling{\\time}(h, a)}\\biggl(\\sum_{\\substack{h' \\sqsupseteq (ha)\\\\ a' \\in A(h')}} \\frac{(\\reach{\\sigma^\\time}((ha), (h'a')))^2}{\\reach{\\sampling{\\time}}((ha), (h'a'))} \\left(\\exputil{(h'a')}{\\sigma^\\time} - b^\\time(h', a') \\right)^2\\\\\n&\\hspace{2cm} + \\left(\\exputil{(ha)}{\\sigma^\\time} - b^\\time(h, a)\\right)^2\\biggr)\\\\\n&= \\frac{1}{\\sampling{\\time}(h, a)}\\sum_{(h'a') \\sqsupseteq (ha)} \\frac{(\\reach{\\sigma^\\time}((ha), (h'a')))^2}{\\reach{\\sampling{\\time}}((ha), (h'a'))} \\left(\\exputil{(h'a')}{\\sigma^\\time} - b^\\time(h', a') \\right)^2\\\\\n&= \\sum_{(h'a') \\sqsupseteq (ha)} \\frac{(\\reach{\\sigma^\\time}((ha), (h'a')))^2}{\\reach{\\sampling{\\time}}(h, (h'a'))} \\left(\\exputil{(h'a')}{\\sigma^\\time} - b^\\time(h', a') \\right)^2\n\\end{align*}\n\n\\end{proof}\n\n\\endgroup\n\\section{Public trees}\n\\label{sec:pubtrees}\n\nThere are multiple sources of variance when computing the regret at an information set in MCCFR. One form of variance comes from sampling actions (and recursively, trajectories) from the information set, rather than walking the full subtree. A second form of variance comes from sampling only one of the histories in the information set itself. 
Our baseline framework reduces the first kind of variance, but does not take the second form of variance into account.\n\nOne approach to combating this single-history variance could be to extend the use of the baseline: analogous to how we created a control variate from using $b^\\time(h, a)$ to evaluate unsampled actions $a$, we could also create a control variate that uses $b^\\time(h', a)$ to evaluate all unsampled $h' \\in I(h)$. However, this requires evaluating alternate histories along every step of the sampled trajectory, meaning that a single iteration of MCCFR goes from complexity $\\BigO{d|A_{\\text{max}}|}$ to $\\BigO{d|A_{\\text{max}}||I_{\\text{max}}|}$, where $|I_{\\text{max}}| = \\max_I |I|$.\n\nA second approach, and the one we present in this section, is to change the sampling method used. Rather than using a baseline to consider each alternate history in the information set, we directly evaluate all such histories. Intuitively, this can be done by only sampling actions that are publicly observable, and walking all actions that change the game's hidden state. This approach was used by \\citet{Schmid19}, but was never formalized. We formalize the algorithm here, after presenting some additional assumptions and definitions.\n\nWe assume that the EFG is \\emph{timeable} \\citep{Jakobsen16}, which informally means that no player can gain additional information by tracking how much time elapses while they are not acting. Formally, this means that we can assign a value $\\timefunc{h}$ to every $h \\in H$ such that $\\timefunc{h} = \\timefunc{h'}$ for any $h' \\in I_{P(h)}(h)$, and $\\timefunc{h} < \\timefunc{h'}$ for any $h' \\sqsupset h$. Every game played by humans must be timeable, or else the human could distinguish histories in the same information set by tracking elapsed time. If a game is timeable, players always observe the timing when they are acting, so there must be some strategically identical game where they observe the timing even when not acting. 
Thus we will assume that our games satisfy this requirement.\n\nWe now introduce the concept of a \\emph{public state} \\citep{Johanson11}, which groups histories based on information available to all players, or informally, based on whether they are distinguishable to an outside observer. Formally, a public state is a set of histories that is (minimally) closed under the information set relation for all players. Let $\\mathcal{S}$ be the set of public states (which partitions $H$), and $S(h) \\in \\mathcal{S}$ be the public state that $h$ belongs to. By the assumption that all players observe the game's timing, necessarily $\\timefunc{h} = \\timefunc{h'}$ if $S(h) = S(h')$. In turn, this means that if $h \\sqsubset h'$, then $S(h) \\neq S(h')$. We also assume for simplicity that if $S(h) = S(h')$, then $P(h) = P(h')$. If necessary, this can be made true for any timeable game by splitting information sets and adding dummy actions, without strategically changing the game.\n\nWe define $\\Transitions{S}$ to be the set of successor public states to $S$: $S' \\in \\Transitions{S}$ if there is some $h \\in S$, $a \\in A(h)$, and $h' \\in S'$ such that $(ha) = h'$. The successor relation defines the edges of a \\emph{public tree}, where the public states are nodes. It should be noted that more than one action can lead to the same successor public state when some player doesn't observe the action, and that one action can lead to more than one successor public state if some previously private information becomes public.\n\nIn the statement of Theorem~\\ref{thm:zerovar}, we used $\\text{samp}^\\time(h)$ to denote whether $h$ was sampled on iteration $\\time$. With the notation introduced here, we can formalize this by defining $\\text{samp}^\\time(h)$ to occur if and only if $h' \\sqsubseteq z^\\time$ for some $h' \\in S(h)$ and some $z^\\time \\in Z^\\time$. 
For clarity, we thus denote this relation as $S(h) \\sqsubseteq Z^\\time$.\n\n\\subsection{Public Outcome Sampling}\n\nWe now define our MCCFR variant, which we call \\emph{Public Outcome Sampling (POS)}. Instead of walking trajectories through the EFG tree by sampling actions, POS walks trajectories through the public tree by sampling successor public states.\n\nFor public state $S$, let $\\my{\\mathcal{I}}(S) \\subseteq \\my{\\mathcal{I}}$ be the collection of player $i$ information sets contained within $S$. While walking down the tree, POS keeps track of reach probabilities $\\my{\\reach{\\sigma^\\time}}(\\my{I})$ for each $\\my{I} \\in \\my{\\mathcal{I}}(S)$ and each player $i$ at public state $S$. To recurse, it samples some successor $S' \\in \\Transitions{S}$ using a probability distribution $\\sampling{\\time}(S) \\in \\Delta_{\\Transitions{S}}$. It updates the reach probabilities to $\\my{\\reach{\\sigma^\\time}}(\\my{I})$ for each $\\my{I} \\in \\my{\\mathcal{I}}(S')$, using the current strategy $\\sigma^\\time$. Ultimately, the recursion reaches a public state which only contains terminal nodes (as the end of the game is publicly observable). This public state, which defines the sampled trajectory in the public tree, is labeled $Z^\\time$. The terminal histories are evaluated as $u(z)$ for each $z \\in Z^\\time$.\n\nWalking back up the tree, at each recursion step we pass back the utilities $\\baseutil{h'}{\\sigma^\\time}{Z^\\time}$ for each $h' \\in S'$. From these, we apply a baseline and recursively calculate utilities as\n\\begin{align}\n\\baseutil{h, a}{\\sigma^\\time}{Z^\\time} &= \\frac{\\mathbbm{1}(S((ha)) \\sqsubseteq Z^\\time)}{\\sampling{\\time}(S(h), S((ha)))}\\left(\\baseutil{(ha)}{\\sigma^\\time}{Z^\\time} - b^\\time(h, a)\\right) + b^\\time(h, a)\\\\\n\\baseutil{h}{\\sigma^\\time}{Z^\\time} &= \\sum_{a \\in A(h)} \\sigma^\\time(h, a) \\baseutil{h, a}{\\sigma^\\time}{Z^\\time}\n\\end{align}\nfor each $h \\in S$ and $a \\in A(h)$. 
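The update above can be sketched in Python as follows. This is a toy illustration of ours (the names `pos_corrected` and `succ_of` and all numeric values are assumptions, not part of any existing codebase): a public state with two histories, where each action leads deterministically to one of two successor public states, and the estimate for every history is unbiased over the choice of sampled successor.

```python
def pos_corrected(succ_of, child_val, sigma, b, q, sampled):
    """Baseline-corrected values for every history in a public state S.

    succ_of[a]      -- successor public state that action a leads to
    child_val[h][a] -- recursively computed value of (ha); only consulted
                       when its successor public state was sampled
    sampled         -- the sampled successor public state
    """
    out = {}
    for h in sigma:
        v = []
        for a, s in enumerate(succ_of):
            if s == sampled:   # indicator: S((ha)) lies on the sampled path
                v.append(b[h][a] + (child_val[h][a] - b[h][a]) / q[sampled])
            else:              # off-path actions fall back to the baseline
                v.append(b[h][a])
        out[h] = sum(sa * va for sa, va in zip(sigma[h], v))
    return out

# A toy public state with two histories; action 0 leads to successor "L",
# action 1 to successor "R", for both histories.
succ_of = ["L", "R"]
child_val = {"h1": [1.0, 0.0], "h2": [-1.0, 2.0]}  # true child values
sigma = {"h1": [0.5, 0.5], "h2": [0.2, 0.8]}       # current strategies
b = {"h1": [0.0, 0.0], "h2": [0.0, 0.0]}           # a zero baseline
q = {"L": 0.5, "R": 0.5}                           # successor sampling

# Every history in S gets an estimate on every iteration, and each
# estimate is unbiased over the choice of sampled successor:
for h in sigma:
    exact = sum(sa * uv for sa, uv in zip(sigma[h], child_val[h]))
    est = sum(q[s] * pos_corrected(succ_of, child_val, sigma, b, q, s)[h]
              for s in q)
    assert abs(est - exact) < 1e-12
```

Note that, unlike the action-sampling setting, both histories are evaluated on every iteration; only the branch of the public tree is sampled.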
We then use these values to calculate regrets $\\regret{\\time}(I, a)$ for each $I \\in \\my{\\mathcal{I}}(S)$ and update the saved regrets.\n\nAlgorithm~\\ref{alg:posmccfr} gives pseudocode for MCCFR with POS. Algorithm~\\ref{alg:predposmccfr} gives pseudocode for a version using the predictive baseline.\n\n\\begin{algorithm}[htb]\n\\caption{MCCFR w\/ POS and baseline}\n\\label{alg:posmccfr}\n\\begin{algorithmic}[1]\n\\Statex\n\\Function{POS-MCCFR}{$S$}\n \\Iif{$S \\subseteq Z$}{\\Return $\\{u(h) \\mid \\forall h \\in S\\}$}\n \\For {$I \\in \\mathcal{I}_{P(S)}(S)$}\n \\State $\\sigma^\\time(I, \\cdot) \\gets \\Call{RegretMatching}{\\Regret{\\time-1}(I, \\cdot)}$\n \\State $\\overline{\\strategy}^\\time(I, \\cdot) \\gets \\frac{\\time-1}{\\time} \\overline{\\strategy}^{\\time - 1}(I, \\cdot) + \\frac{1}{\\time} \\sigma^\\time$\n \\EndFor\n \\State sample successor $S' \\sim \\sampling{\\time}(S, \\cdot)$\n \\State $\\{\\baseutil{h'}{\\sigma^\\time}{Z^\\time} \\mid \\forall h' \\in S'\\} \\gets \\Call{POS-MCCFR}{S'}$\n \\For {$I \\in \\mathcal{I}_{P(S)}(S)$}\n \\For {$h \\in I$}\n \\For {$a \\in A(h)$}\n \\If {$(ha) \\in S'$}\n \\State $\\baseutil{h, a}{\\sigma^\\time}{Z^\\time} \\gets b^\\time(h, a) + \\frac{1}{\\sampling{\\time}(S, S')}\\left(\\baseutil{(ha)}{\\sigma^\\time}{Z^\\time} - b^\\time(h, a)\\right)$\n \\State $b^{\\time+1}(h,a) \\gets \\Call{UpdateBaseline}{b^\\time(h,a), \\baseutil{(ha)}{\\sigma^\\time}{Z^\\time}}$\n \\Else\n \\State $\\baseutil{h, a}{\\sigma^\\time}{Z^\\time} \\gets b^\\time(h, a)$\n \\EndIf\n \\EndFor\n \\State $\\baseutil{h}{\\sigma^\\time}{Z^\\time} \\gets \\sum_{a'} \\sigma^\\time(h, a') \\baseutil{h, a'}{\\sigma^\\time}{Z^\\time}$\n \\EndFor\n \\If {$P(S) = 1$}\n \\State $\\regret{\\time}(I, a) \\gets \\frac{1}{\\reach{\\sampling{\\time}}(S)}\\sum_{h \\in I}\\reach{\\sigma^\\time}_2(h)\\left(\\baseutil{h, \\cdot}{\\sigma^\\time}{Z^\\time} - \\baseutil{h}{\\sigma^\\time}{Z^\\time}\\right)$\n \\ElsIf {$P(S) = 2$}\n \\State 
$\\regret{\\time}(I, a) \\gets \\frac{1}{\\reach{\\sampling{\\time}}(S)}\\sum_{h \\in I}\\reach{\\sigma^\\time}_1(h)\\left({-\\baseutil{h, \\cdot}{\\sigma^\\time}{Z^\\time}} + \\baseutil{h}{\\sigma^\\time}{Z^\\time}\\right)$\n \\EndIf\n \\State $\\Regret{\\time}(I, \\cdot) \\gets \\Regret{\\time-1}(I, \\cdot) + \\regret{\\time}(I, \\cdot)$\n \\EndFor\n \\State \\Return $\\{\\baseutil{h}{\\sigma^\\time}{Z^\\time} \\mid \\forall h \\in S\\}$\n\\EndFunction\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[htb]\n\\caption{MCCFR w\/ POS and predictive baseline}\n\\label{alg:predposmccfr}\n\\begin{algorithmic}[1]\n\\Statex\n\\Function{POS-MCCFR}{$S$}\n \\Iif{$S \\subseteq Z$}{\\Return $\\{u(h), u(h) \\mid \\forall h \\in S\\}$}\n \\For {$I \\in \\mathcal{I}_{P(S)}(S)$}\n \\State $\\sigma^\\time(I, \\cdot) \\gets \\Call{RegretMatching}{\\Regret{\\time-1}(I, \\cdot)}$\n \\State $\\overline{\\strategy}^\\time(I, \\cdot) \\gets \\frac{\\time-1}{\\time} \\overline{\\strategy}^{\\time - 1}(I, \\cdot) + \\frac{1}{\\time} \\sigma^\\time$\n \\EndFor\n \\State sample successor $S' \\sim \\sampling{\\time}(S, \\cdot)$\n \\State $\\{\\baseutil{h'}{\\sigma^\\time}{Z^\\time}, \\baseutil{h'}{\\sigma^{\\time+1}}{Z^\\time} \\mid \\forall h' \\in S'\\} \\gets \\Call{POS-MCCFR}{S'}$\n \\For {$I \\in \\mathcal{I}_{P(S)}(S)$}\n \\For {$h \\in I$}\n \\For {$a \\in A(h)$}\n \\If {$(ha) \\in S'$}\n \\State $\\baseutil{h, a}{\\sigma^\\time}{Z^\\time} \\gets b^\\time(h, a) + \\frac{1}{\\sampling{\\time}(S, S')}\\left(\\baseutil{(ha)}{\\sigma^\\time}{Z^\\time} - b^\\time(h, a)\\right)$\n \\State $b^{\\time+1}(h,a) \\gets \\baseutil{(ha)}{\\sigma^{\\time+1}}{Z^\\time}$\n \\State $\\baseutil{h, a}{\\sigma^{\\time+1}}{Z^\\time} \\gets b^{\\time+1}(h,a)$\n \\Else\n \\State $\\baseutil{h, a}{\\sigma^\\time}{Z^\\time} \\gets b^\\time(h, a)$\n \\State $\\baseutil{h, a}{\\sigma^{\\time+1}}{Z^\\time} \\gets b^\\time(h, a)$\n \\EndIf\n \\EndFor\n \\State $\\baseutil{h}{\\sigma^\\time}{Z^\\time} 
\\gets \\sum_{a'} \\sigma^\\time(h, a') \\baseutil{h, a'}{\\sigma^\\time}{Z^\\time}$\n \\EndFor\n \\If {$P(S) = 1$}\n \\State $\\regret{\\time}(I, a) \\gets \\frac{1}{\\reach{\\sampling{\\time}}(S)}\\sum_{h \\in I}\\reach{\\sigma^\\time}_2(h)\\left(\\baseutil{h, \\cdot}{\\sigma^\\time}{Z^\\time} - \\baseutil{h}{\\sigma^\\time}{Z^\\time}\\right)$\n \\ElsIf {$P(S) = 2$}\n \\State $\\regret{\\time}(I, a) \\gets \\frac{1}{\\reach{\\sampling{\\time}}(S)}\\sum_{h \\in I}\\reach{\\sigma^\\time}_1(h)\\left({-\\baseutil{h, \\cdot}{\\sigma^\\time}{Z^\\time}} + \\baseutil{h}{\\sigma^\\time}{Z^\\time}\\right)$\n \\EndIf\n \\State $\\Regret{\\time}(I, \\cdot) \\gets \\Regret{\\time-1}(I, \\cdot) + \\regret{\\time}(I, \\cdot)$\n \\State $\\sigma^{\\time+1}(I, \\cdot) \\gets \\Call{RegretMatching}{\\Regret{\\time}(I, \\cdot)}$\n \\For {$h \\in I$}\n \\State $\\baseutil{h}{\\sigma^{\\time+1}}{Z^\\time} \\gets \\sum_{a'} \\sigma^{\\time+1}(h, a') \\baseutil{h, a'}{\\sigma^{\\time+1}}{Z^\\time}$\n \\EndFor\n \\EndFor\n \\State \\Return $\\{\\baseutil{h}{\\sigma^\\time}{Z^\\time}, \\baseutil{h}{\\sigma^{\\time+1}}{Z^\\time} \\mid \\forall h \\in S\\}$\n\\EndFunction\n\\end{algorithmic}\n\\end{algorithm}\n\nUpdating a public state $S$ with this algorithm requires walking through all of the possible histories in the public state, as well as all of the actions possible at each history, giving a complexity $\\BigO{|S||A_{\\text{max}}|}$, or equivalently $\\BigO{|\\my{\\mathcal{I}}(S)||\\opp{\\mathcal{I}}(S)||A_{\\text{max}}|}$. However, the computations for each information set $I \\in \\my{\\mathcal{I}}(S)$ with acting player $i$ can be done completely independently, allowing for easy parallelization (e.g. on a GPU) to achieve complexity $\\BigO{|\\opp{\\mathcal{I}}(S)||A_{\\text{max}}|}$. 
This approach was taken with the non-sampling algorithm used in DeepStack \\citep{Moravcik17}.\n\n\\section{Proof of Theorem~\\ref{thm:zerovar}}\n\\label{sec:zerovar}\n\nWe introduce the following definition that tracks sampled values of terminal histories:\n\\begin{equation}\n\\partutil{\\time}{z} = \\begin{cases} u(z) \\qquad &\\text{if $z \\in Z^\\tau$ for any $\\tau < \\time$}\\\\ 0 &\\text{otherwise} \\end{cases}\n\\end{equation}\n\nWe begin by showing that if our baseline function is a weighted sum of values $\\partutil{\\time}{z}$, then our predictive baseline-corrected value estimates will be as well.\n\n\\begin{lem}\nLet $S((ha)) \\sqsubseteq Z^\\time$, and assume that for all $(h'a') \\sqsupseteq (ha)$, the predictive baseline function satisfies $b^\\time(h', a') = \\sum_{z \\in Z[(h'a')]} \\reach{\\sigma^\\time}((h'a'), z) \\partutil{\\time}{z}$. Then the baseline-corrected utility satisfies\n\\begin{equation}\n\\baseutil{h, a}{\\sigma^{\\time+1}}{Z^\\time} = \\sum_{z \\in Z[(ha)]} \\reach{\\sigma^{\\time+1}}((ha), z) \\partutil{\\time+1}{z}\n\\end{equation}\n\\label{lem:predupdate}\n\\end{lem}\n\n\\begin{proof}\nWe prove this by induction on the height of $(ha)$ in the tree. 
Our base case is that $(ha) = z^\\time$ for some $z^\\time \\in Z^\\time$, in which case\n\\begin{align*}\n&\\hspace{-15pt}\\baseutil{h, a}{\\sigma^{\\time+1}}{Z^\\time}\\\\\n&= \\frac{1}{\\sampling{\\time}(S(h),S((ha)))}(u(z^\\time) - b^{\\time+1}(h, a)) + b^{\\time+1}(h, a)\\\\\n&= \\frac{1}{\\sampling{\\time}(S(h),S((ha)))}(u(z^\\time) - u(z^\\time)) + u(z^\\time) &&\\text{by predictive baseline definition}\\\\\n&= u(z^\\time)\\\\\n&= \\partutil{\\time+1}{(ha)} &&\\text{as $(ha)=z^\\time \\in Z^\\time$}\n\\end{align*}\n\nFor the inductive step, we consider some $(ha)$ and assume the hypothesis holds for all $(h'a')$ with smaller height.\n\\begin{align*}\n&\\baseutil{h, a}{\\sigma^{\\time+1}}{Z^\\time}\\\\\n&= \\frac{1}{\\sampling{\\time}(S(h),S((ha)))}\\left(\\baseutil{(ha)}{\\sigma^{\\time+1}}{Z^\\time} - b^{\\time+1}(h, a)\\right) + b^{\\time+1}(h, a)\\\\\n&= \\baseutil{(ha)}{\\sigma^{\\time+1}}{Z^\\time} &&\\hspace{-19pt}\\text{by predictive baseline definition}\\\\\n&= \\sum_{a' \\in A((ha))} \\sigma^{\\time+1}((ha), a') \\baseutil{(ha), a'}{\\sigma^{\\time+1}}{Z^\\time}\\\\\n&= \\sum_{a' \\in A((ha))} \\sigma^{\\time+1}((ha), a') \\sum_{z \\in Z[(ha\\act')]} \\reach{\\sigma^{\\time+1}}((ha\\act'), z) \\partutil{\\time+1}{z} &&\\text{by inductive hypothesis}\\\\\n&= \\sum_{a' \\in A((ha))} \\sum_{z \\in Z[(ha\\act')]} \\reach{\\sigma^{\\time+1}}((ha), z) \\partutil{\\time+1}{z}\\\\\n&= \\sum_{z \\in Z[(ha)]} \\reach{\\sigma^{\\time+1}}((ha), z) \\partutil{\\time+1}{z}\n\\end{align*}\nIn the last step we use that $Z[(ha)]$ is partitioned into the sets $Z[(ha\\act')]$ by which action $a' \\in A((ha))$ follows $(ha)$. 
The lemma follows by induction.\n\\end{proof}\n\nNext, we show that the predictive baseline update maintains an invariant that the baseline values are a weighted sum of values $\\partutil{\\time}{z}$.\n\n\\begin{lem}\nFor any time step $\\time$, the predictive baseline satisfies\n\\begin{equation}\nb^{\\time}(h, a) = \\sum_{z \\in Z[(ha)]} \\reach{\\sigma^{\\time}}((ha), z) \\partutil{\\time}{z}\n\\end{equation}\n\\label{lem:predinvariant}\n\\end{lem}\n\n\\begin{proof}\nWe proceed by induction on time. For the base case, $\\time = 1$, by definition $b^\\time(h, a) = 0$, and $\\partutil{\\time}{z} = 0$ for all $z \\in Z$.\n\nFor the inductive step, we assume that the lemma holds at time $\\time$, and we show that it then follows for time $\\time + 1$. We break this into two cases, based on whether $(ha)$ is sampled on time $\\time$ or not.\n\nIf $S((ha)) \\not\\sqsubseteq Z^{\\time}$, then $b^{\\time+1}(h, a) = b^{\\time}(h, a)$ by definition of the predictive baseline. Also by definition $\\partutil{\\time+1}{z} = \\partutil{\\time}{z}$ for any $z \\in Z[(ha)]$, because otherwise $z \\in Z^\\time$. Next, we show that $\\reach{\\sigma^{\\time+1}}((ha), z) = \\reach{\\sigma^{\\time}}((ha), z)$ for any $z \\in Z[(ha)]$. Assume for the sake of contradiction that this doesn't hold. Then there must be some $(h'a') \\sqsupseteq (ha)$ such that $\\sigma^{\\time+1}(h',a') \\neq \\sigma^{\\time}(h',a')$. By the definition of the MCCFR algorithm, this only occurs if $\\Regret{\\time+1}(I(h'),a') \\neq \\Regret{\\time}(I(h'),a')$, which in turn only occurs if $S(h') \\sqsubseteq Z^\\time$. But then $S((ha)) \\sqsubseteq S(h') \\sqsubseteq Z^\\time$, which contradicts the premise. 
Putting this all together, we have\n\\begin{align*}\nb^{\\time+1}(h, a) &= b^{\\time}(h, a)\\\\\n&= \\sum_{z \\in Z[(ha)]} \\reach{\\sigma^{\\time}}((ha), z) \\partutil{\\time}{z} &&\\text{by inductive hypothesis}\\\\\n&= \\sum_{z \\in Z[(ha)]} \\reach{\\sigma^{\\time+1}}((ha), z) \\partutil{\\time+1}{z}\n\\end{align*}\nThis completes the inductive step in the case that $(ha)$ was not sampled.\n\nIf $S((ha)) \\sqsubseteq Z^{\\time}$, then the following holds:\n\\begin{align*}\nb^{\\time+1}(h, a) &= \\baseutil{(ha)}{\\sigma^{\\time+1}}{Z^\\time} &&\\text{by predictive baseline definition}\\\\\n&= \\sum_{z \\in Z[(ha)]} \\reach{\\sigma^{\\time+1}}((ha), z) \\partutil{\\time+1}{z} &&\\text{by Lemma~\\ref{lem:predupdate}}\n\\end{align*}\nThis completes the inductive step in the case that $(ha)$ was sampled. Thus the inductive step always holds, and the lemma follows by induction.\n\\end{proof}\n\nTo prove Theorem~\\ref{thm:zerovar}, we note that if $Z[h] \\subseteq \\bigcup_{\\tau < \\time} Z^\\tau$, then by definition $\\partutil{\\time}{z} = u(z)$ for any $z \\in Z[h]$. Thus we have\n\\begin{align*}\nb^\\time(h,a) &= \\sum_{z \\in Z[(ha)]} \\reach{\\sigma^{\\time}}((ha), z) \\partutil{\\time}{z} &&\\text{by Lemma~\\ref{lem:predinvariant}}\\\\\n&= \\sum_{z \\in Z[(ha)]} \\reach{\\sigma^{\\time}}((ha), z) u(z)\\\\\n&= \\exputil{(ha)}{\\sigma^\\time} &&\\text{by definition}\n\\end{align*}\\qed\n\n\\section{Measuring counterfactual value variance}\n\\label{sec:empvar}\n\nIn Figure~2b of the main paper, we reported empirical variance of counterfactual values for POS MCCFR with baselines. To measure these, we run MCCFR for some number of iterations, then freeze the strategy. We then walk every information set-action pair in the game tree, and for each such pair we run a large number of sampled trajectories originating at the pair. These trajectories are walked as if we were running MCCFR with POS, but we do not update the strategy. 
Instead, we only calculate the sampled counterfactual value $\\sum_{h \\in I} \\reach{\\sigma^\\time}_{-P(h)}(h)\\baseutil{(ha)}{\\sigma^\\time}{z^\\time}$ at the initial $I, a$ pair. From these samples, we compute an estimate of the true variance of the counterfactual value. Finally, we average these variance estimates across all information set-action pairs in the game.\n\n\\section{Baselines in Monte Carlo continual resolving}\n\nTypically, online play in games with perfect information is accomplished by performing an independent computation at each new decision point to decide the agent's next action. Traditionally, this approach was intractable in games with imperfect information because there was no way to guarantee that these individual decisions would fit together into a cohesive equilibrium strategy. Instead, the traditional way of playing online was to use a precomputed strategy as a lookup table. Recently, however, techniques have been developed for safe and efficient online computation of strategies in imperfect information games \\citep{Moravcik17, Brown17, Brown18b}. In this section, we discuss how Monte Carlo baselines fit into this work.\n\nA particular example of this new paradigm, \\emph{continual resolving}, was used in the DeepStack agent which defeated poker professionals \\citep{Moravcik17}. Continual resolving contains two key parts. First, a safe resolving method is used to compute each decision independently in an online fashion, while still guaranteeing that the agent plays an approximate equilibrium strategy. Second, value approximation is used to restrict the depth of future actions that are considered when solving each decision. With these ingredients, CFR+ is used to solve a relatively small subgame each time the agent must make an action selection. \\citet{Sustr19} replaced the CFR+ solver with MCCFR, creating \\emph{Monte Carlo continual resolving (MCCR)}. 
It is straightforward to use our baseline framework within MCCR.\n\nWe conducted an experiment examining MCCR with baselines. We performed our experiment in Leduc. We measure the exploitability of strategy profiles that are constructed by independently solving each decision point as in online self-play. For each decision, we solve a subgame of depth three (i.e. looking a maximum of three actions into the future). After three actions, we approximate the value of a history by running 100 iterations of CFR+ on the subtree rooted at the history and using the resulting strategy\\footnote{This strategy, which contains errors because of the low number of iterations, approximates using a neural net for evaluation.} to generate values. At each decision point, we run resolving until we have performed a maximum number of evaluations, either at terminal histories or at depth-limited histories. We use this as an implementation-independent way of comparing algorithms, as evaluations take the vast majority of computation time in continual resolving.\n\nWe compare MCCR with and without baselines. We use CFR+ updates, which we've found to decrease variance when combined with any baseline, and public outcome sampling with transitions sampled from the uniform distribution. Because of the inexact nature of the evaluation function, Theorem~\\ref{thm:zerovar} does not hold in this setting, and we found learned history baselines to slightly outperform predictive baselines in preliminary experiments. We also compare to (deterministic) continual resolving, with both CFR and CFR+ update rules.\n\n\\begin{figure*}[htbp]\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/resolving.pdf}\n\\caption{Exploitability of continual resolving strategies based on the maximum number of evaluations allowed per resolve. 
For MCCFR, results are averaged over 20 runs with 95\\% confidence intervals shown as (indiscernible) error bands.}\n\\label{fig:resolving}\n\\end{figure*}\n\nResults are shown in Figure~\\ref{fig:resolving}. We see that the inclusion of a baseline significantly decreases the exploitability of MCCR strategies. Without a baseline, MCCR is not competitive with deterministic continual resolving. With a baseline, it is able to clearly outperform continual resolving with CFR updates, and slightly outperform continual resolving with CFR+ updates. This is especially notable because there is still plenty of room to improve the technique, such as by tuning the baseline and especially by refining the sampling strategy.\n\\section{Introduction}\n\nMulti-agent strategic interactions are often modeled as \\emph{extensive-form games (EFGs)}, a game tree representation that allows for hidden information, stochastic outcomes, and sequential interactions. Research on solving EFGs has been driven by the experimental domain of poker games, in which the \\emph{Counterfactual Regret Minimization (CFR)} algorithm \\citep{Zinkevich07} has been the basis of several breakthroughs. Approaches incorporating CFR have been used to essentially solve one nontrivial poker game \\citep{Bowling15}, and to beat human professionals in another \\citep{Moravcik17, Brown18}.\n\nCFR is in essence a policy improvement algorithm that iteratively evaluates and improves a strategy for playing an EFG. As part of this process, it must walk the entire game tree on every iteration. However, many games have prohibitively large trees when represented as EFGs. For example, many commonly played poker games have more possible game states than there are atoms in the universe \\citep{Johanson13}. 
In such cases, performing even a single iteration of traditional CFR is impossible.\n\nThe prohibitive cost of CFR iterations is the motivation for \\emph{Monte Carlo Counterfactual Regret Minimization (MCCFR)}, which samples trajectories to walk through the tree to allow for significantly faster iterations \\citep{Lanctot09}. Additionally, while CFR spends equal time updating every game state, the sampling scheme of MCCFR can be altered to target updates to parts of the game that are more critical or more difficult to learn \\citep{Gibson12, Gibson12b}. As a trade-off for these benefits, MCCFR requires more iterations to converge due to the variance of sampled values.\n\nIn the Reinforcement Learning (RL) community, the topic of variance reduction in sampling algorithms has been extensively studied. In particular, baseline functions that estimate state values are typically used within policy gradient methods to decrease the variance of value estimates along sampled trajectories \\citep{Williams92, Greensmith04, Bhatnagar09, Schulman16}. Recent work by \\citet{Schmid19} has adapted these ideas to variance reduction in MCCFR, resulting in the VR-MCCFR algorithm.\n\nIn this work, we generalize and extend the ideas of Schmid et al. We introduce a framework for variance reduction of sampled values in EFGs by use of state-action baseline functions. We show that VR-MCCFR is a specific application of our baseline framework that unnecessarily generalizes across dissimilar states. We introduce alternative baseline functions that take advantage of our access to the full hidden state during training, avoiding this generalization. Empirically, our new baselines result in significantly reduced variance and faster convergence than VR-MCCFR.\n\nSchmid et al. also discuss the idea of an oracle baseline that provably minimizes variance, but is impractical to compute. We introduce a \\emph{predictive baseline} that estimates this oracle value and can be efficiently computed. 
We show that under certain sampling schemes, the predictive baseline exactly tracks the true oracle value, thus provably computing zero-variance sampled values. For the first time, this allows for exact CFR updates to be performed along sampled trajectories.\n\n\\section{Background}\\label{sec:bg}\n\nAn \\emph{extensive-form game (EFG)} \\citep{Osborne94} is a game tree, formally defined by a tuple $\\langle N, H, P, \\chances{\\sigma}, u, \\mathcal{I} \\rangle$. $N$ is a finite set of players. $H$ is a set of \\emph{histories}, where each history is a sequence of \\emph{actions} and corresponds to a vertex of the tree. For $h,h' \\in H$, we write $h \\sqsubseteq h'$ if $h$ is a prefix of $h'$. The set of actions available at $h \\in H$ that lead to a successor history $(ha) \\in H$ is denoted $A(h)$. Histories with no successors are \\emph{terminal histories} $Z \\subseteq H$. $P \\colon H \\setminus Z \\to N \\cup \\{c\\}$ maps each history to the player that chooses the next action, where $c$ is the \\emph{chance} player that acts according to the defined distribution $\\chances{\\sigma}(h) \\in \\Delta_{A(h)}$, where $\\Delta_{A(h)}$ is the set of probability distributions over $A(h)$. The \\emph{utility function} $u \\colon N \\times Z \\to \\mathbb{R}$ assigns a value to each terminal history for each player.\n\nFor each player $i \\in N$, the collection of \\emph{(augmented) information sets} $\\my{\\mathcal{I}} \\in \\mathcal{I}$ is a partition of the histories $H$.\\footnote{Augmented information sets were introduced by \\citet{Burch14}.} Player $i$ does not observe the true history $h$, but only the information set $\\my{I}(h)$. Necessarily, this means that $A(h) = A(h')$ if $I_{P(h)}(h) = I_{P(h)}(h')$, which we then denote $A(I)$.\n\nEach player selects actions according to a \\emph{(behavioral) strategy} that maps each information set $I \\in \\my{\\mathcal{I}}$ where $P(I) = i$ to a distribution over actions, $\\my{\\sigma}(I) \\in \\Delta_{A(I)}$. 
The probability of taking a specific action at a history is $\\sigma_{P(h)}(h, a) = \\sigma_{P(h)}(I(h), a)$. A \\emph{strategy profile}, $\\sigma = \\{\\my{\\sigma} | i \\in N\\}$, specifies a strategy for each player. The \\emph{reach probability} of a history $h$ is $\\reach{\\sigma}(h) = \\prod_{(h'a) \\sqsubseteq h} \\sigma_{P(h')}(h', a)$. This product can be decomposed as $\\reach{\\sigma}(h) = \\my{\\reach{\\my{\\sigma}}}(h) \\opp{\\reach{\\opp{\\sigma}}}(h)$, where the first term contains the actions of player $i$, and the second contains the actions of other players and chance. We also write $\\reach{\\sigma}(h, h')$ for the probability of reaching $h'$ from $h$, defined to be 0 if $h \\not\\sqsubseteq h'$. A strategy profile defines an expected utility for each player as $\\my{u}(\\sigma) = \\my{u}(\\my{\\sigma},\\opp{\\sigma}) = \\sum_{z \\in Z} \\reach{\\sigma}(z) \\my{u}(z)$.\n\nIn this work, we consider two-player zero-sum EFGs, in which $N = \\{1,2\\}$ and $u(z) \\coloneqq \\my{u}(z) = {-\\opp{u}(z)}$. We also assume that the information sets satisfy \\emph{perfect recall}, which requires that players not forget any information that they once observed. Mathematically, this means that two histories in the same information set $I_i$ must have the same sequence of past information sets and actions for player $i$. All games played by humans exhibit perfect recall, and solving games without perfect recall is NP-hard. We write $\\my{I} \\sqsubseteq h$ if there is any history $h' \\in \\my{I}$ such that $h' \\sqsubseteq h$, and we denote that history (unique by perfect recall) by $\\my{I}[h]$.\n\n\\subsection{Solving EFGs}\n\nA common solution concept for EFGs is a \\emph{Nash equilibrium}, in which no player has incentive to deviate from their specified strategy. 
We evaluate strategy profiles by their distance from equilibrium, as measured by \\emph{exploitability}, which is the average expected loss against a worst-case opponent: $\\text{exploit}(\\sigma) = \\nicefrac{1}{2} \\max_{\\sigma' \\in \\Sigma} (u_2(\\sigma_1,\\sigma_2') + u_1(\\sigma_1', \\sigma_2))$.\n\n\\emph{Counterfactual Regret Minimization (CFR)} is an algorithm for learning Nash equilibria in EFGs through iterative self play \\citep{Zinkevich07}. For any $h \\in H$, let $Z[h] = \\{z \\in Z \\mid h \\sqsubseteq z\\}$ be the set of terminal histories reachable from $h$, and define the history's expected utility as $\\exputil{h}{\\sigma} = \\sum_{z \\in Z[h]} \\reach{\\sigma}(h, z) u(z)$. For each information set $I$ and action $a \\in A(I)$, CFR accumulates the \\emph{counterfactual regret} of not choosing that action on previous iterations:\n\\begin{equation}\n\\regret{\\time}(I, a) = \\sum_{h \\in I} \\reach{\\sigma^\\time}_{-P(h)}(h) \\left(\\exputil{(ha)}{\\sigma^\\time} - \\exputil{h}{\\sigma^\\time}\\right)\n\\qquad\n\\Regret{T}(I, a) = \\sum_{\\time=1}^T \\regret{\\time}(I, a)\n\\label{eq:cfr}\n\\end{equation}\nThe next strategy profile is then selected with \\emph{regret matching}, which sets probabilities proportional to the positive regrets: $\\sigma^{T+1}(I, a) \\propto \\max(\\Regret{T}(I,a), 0)$. Defining the average strategy $\\overline{\\strategy}^T$ such that $\\overline{\\strategy}^T(h, a) \\propto \\sum_{\\time=1}^T \\my{\\reach{\\sigma^\\time}}(h) \\my{\\sigma^\\time}(h, a)$, CFR guarantees that $\\text{exploit}(\\overline{\\strategy}^T) \\to 0$ as $T \\to \\infty$, thus converging to a Nash equilibrium.\n\nThe state-of-the-art \\emph{CFR+} variant of CFR greedily zeroes all negative regrets on every iteration, replacing $\\Regret{\\time}$ with an accumulant $Q^\\time$ recursively defined with $Q^0(I, a) = 0, Q^\\time(I, a) = \\max(Q^{\\time-1}(I, a) + \\regret{\\time}(I, a), 0)$ \\citep{Tammelin15}. 
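The regret-matching rule and the CFR+ clipped accumulation described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function names and the toy regret vectors are invented:

```python
import numpy as np

def regret_matching(regrets):
    """Play proportionally to positive accumulated regret; uniform if none is positive."""
    positive = np.maximum(regrets, 0.0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(len(regrets), 1.0 / len(regrets))

def cfr_plus_update(q_prev, instant_regret):
    """CFR+ accumulant: add the instantaneous regret, then clip at zero."""
    return np.maximum(q_prev + instant_regret, 0.0)

# Toy infoset with three actions and two made-up instantaneous regret vectors.
Q = np.zeros(3)
for r in (np.array([1.0, -2.0, 0.5]), np.array([-3.0, 1.0, -0.5])):
    Q = cfr_plus_update(Q, r)
strategy = regret_matching(Q)  # Q is now [0, 1, 0], so all mass goes to action 1
```

The clipping after every step is what distinguishes CFR+ from plain regret accumulation: negative totals are forgotten immediately rather than having to be "paid back" later.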
It also alternates updates for each player, and uses linear averaging, which gives greater weight to more recent strategies.\n\nCFR(+) requires a full walk of the game tree on each iteration, which can be a very costly operation on large games. \\emph{Monte Carlo Counterfactual Regret Minimization (MCCFR)} avoids this cost by only updating along sampled trajectories. For simplicity, we focus on the \\emph{outcome sampling (OS)} variant of MCCFR \\citep{Lanctot09}, though all results in this paper can be trivially extended to other MCCFR variants. On each iteration $\\time$, a sampling strategy $\\sampling{\\time} \\in \\Sigma$ is used to sample a single terminal history $z^\\time \\sim \\reach{\\sampling{\\time}}$. A sampled utility is then calculated recursively for each prefix of $z^\\time$ as\n\\begin{equation}\n\\samputil{h, a}{\\sigma^\\time}{z^\\time} = \\frac{\\mathbbm{1}((ha) \\sqsubseteq z^\\time)}{\\sampling{\\time}(h, a)} \\samputil{(ha)}{\\sigma^\\time}{z^\\time}\n\\qquad\n\\samputil{h}{\\sigma^\\time}{z^\\time} = \\sum_{a \\in A(h)} \\sigma^\\time(h, a) \\samputil{h, a}{\\sigma^\\time}{z^\\time}\n\\end{equation}\nwhere $\\mathbbm{1}$ is the indicator function and $\\samputil{z^\\time}{\\sigma^\\time}{z^\\time} = u(z^\\time)$. For any $h \\sqsubseteq z^\\time$, the sampled value $\\samputil{h, a}{\\sigma^\\time}{z^\\time}$ is an unbiased estimate of the expected utility $\\exputil{(ha)}{\\sigma^\\time}$, whether $a$ is sampled or not. 
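At a single decision point, the importance weighting that makes these sampled values unbiased can be checked directly by simulation. The successor utilities and the uniform sampling distribution below are invented for illustration:

```python
import random

# Toy decision point: known successor utilities and a uniform sampling strategy q(h, a).
utils = {"fold": -1.0, "call": 0.5, "raise": 2.0}
q = {a: 1.0 / 3.0 for a in utils}

random.seed(0)
n = 200_000
est = {a: 0.0 for a in utils}
for _ in range(n):
    sampled = random.choices(list(utils), weights=list(q.values()))[0]
    # Sampled utility: indicator of the sampled action, divided by q(h, a),
    # times the successor value; every unsampled action contributes zero.
    est[sampled] += utils[sampled] / q[sampled]
est = {a: v / n for a, v in est.items()}  # est[a] ~= utils[a] for every action
```

Each action's estimate is zero on most iterations and `u/q` when sampled, so the average converges to `u` for sampled and unsampled actions alike, matching the unbiasedness claim above.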
These sampled values are used to calculate a sample of the counterfactual regret:\n\\begin{equation}\n\\sampreg{\\time}{I, a}{z^\\time} = \\sum_{h \\in I} \\frac{\\reach{\\sigma^\\time}_{-P(h)}(h)}{\\reach{\\sampling{\\time}}(h)} \\left(\\samputil{h, a}{\\sigma^\\time}{z^\\time} - \\samputil{h}{\\sigma^\\time}{z^\\time}\\right)\n\\label{eq:sampledregret}\n\\end{equation}\nThis gives an unbiased sample of the counterfactual regret $\\regret{\\time}(I, a)$ for all $I \\in \\mathcal{I}$, which is then used to perform unbiased CFR updates. As long as the sampling strategies satisfy $\\reach{\\sampling{\\time}}(z) > 0$ for all $z \\in Z$, MCCFR guarantees that $\\text{exploit}(\\overline{\\strategy}^T) \\to 0$ with high probability as $T \\to \\infty$, thus converging to a Nash equilibrium. However, the rate of convergence depends on the variance of $\\sampreg{\\time}{I, a}{z^\\time}$ \\citep{Gibson12}.\n\n\\section{Baseline framework for EFGs}\n\\label{sec:framework}\n\nWe now introduce a method for calculating unbiased estimates of utilities in EFGs that has lower variance than the sampled utilities $\\samputil{h, a}{\\sigma^\\time}{z^\\time}$ defined above. We do this using \\emph{baseline functions}, which estimate the expected utility of actions in the game. We will describe specific examples of such functions in Section~\\ref{sec:baselines}; for now, we assume the existence of some function $b^\\time \\colon H \\times A \\to \\mathbb{R}$ such that $b^\\time(h, a)$ in some way approximates $\\exputil{(ha)}{\\sigma^\\time}$. 
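Why a baseline close to the true value helps can be previewed with a one-step simulation before the formal definition. All numbers here are illustrative:

```python
import random
import statistics

# One sampled step: importance weight w = 1/q(a) if action a is sampled, else 0.
# Plain estimate:        w * v
# Baseline-corrected:    w * (v - b) + b   -- same mean, much smaller spread when b ~ v
v, b, qa = 2.0, 1.8, 0.25  # illustrative successor value, baseline, sampling probability
random.seed(1)
plain, corrected = [], []
for _ in range(100_000):
    w = 1.0 / qa if random.random() < qa else 0.0
    plain.append(w * v)
    corrected.append(w * (v - b) + b)
var_plain = statistics.variance(plain)          # about v**2 * (1 - qa) / qa
var_corrected = statistics.variance(corrected)  # about (v - b)**2 * (1 - qa) / qa
```

Both estimators average to `v`, but the corrected one's variance scales with `(v - b)**2` instead of `v**2`, which is the effect the formal definition below exploits at every step of the trajectory.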
We define a baseline-corrected sampled utility as\n\\begin{align}\n\\baseutil{h, a}{\\sigma^\\time}{z^\\time} &= \\frac{\\mathbbm{1}((ha) \\sqsubseteq z^\\time)}{\\sampling{\\time}(h, a)} \\left(\\baseutil{(ha)}{\\sigma^\\time}{z^\\time} - b^\\time(h, a)\\right) + b^\\time(h, a)\\label{eq:baseline}\\\\\n\\baseutil{h}{\\sigma^\\time}{z^\\time} &= \\sum_{a \\in A(h)} \\sigma^\\time(h, a) \\baseutil{h, a}{\\sigma^\\time}{z^\\time}\n\\end{align}\n\nEquation (\\ref{eq:baseline}) comes from the application of a \\emph{control variate}, in which we lower the variance of a random variable ($X = \\frac{\\mathbbm{1}((ha) \\sqsubseteq z^\\time)}{\\sampling{\\time}(h, a)} \\baseutil{(ha)}{\\sigma^\\time}{z^\\time}$) by subtracting another random variable ($Y = \\frac{\\mathbbm{1}((ha) \\sqsubseteq z^\\time)}{\\sampling{\\time}(h, a)} b^\\time(h, a)$) and adding its known expectation ($\\Exp{Y} = b^\\time(h, a)$), thus keeping the resulting estimate unbiased. If $X$ and $Y$ are correlated, then this estimate will have lower variance than $X$ itself. Because $\\baseutil{(ha)}{\\sigma^\\time}{z^\\time}$ is defined recursively, its computation includes the application of independent control variates at every action taken between $h$ and $z^\\time$.\n\nThese estimates are unbiased and, if the baseline function is chosen well, have low variance:\n\n\\begin{thm}\nFor any $h \\sqsubseteq z^\\time$ and any $a \\in A(h)$, the baseline-corrected utilities satisfy\n\\begin{equation*}\n\\Exp[z^\\time]{\\baseutil{h, a}{\\sigma^\\time}{z^\\time}|z^\\time \\sqsupseteq h} = \\exputil{(ha)}{\\sigma^\\time}\n\\qquad\n\\Exp[z^\\time]{\\baseutil{h}{\\sigma^\\time}{z^\\time}|z^\\time \\sqsupseteq h} = \\exputil{h}{\\sigma^\\time}\n\\end{equation*}\n\\label{thm:unbiased}\n\\end{thm}\n\n\\begin{thm} Assume that we have a baseline that satisfies $b^\\time(h,a) = \\exputil{(ha)}{\\sigma^\\time}$ for all $h \\in H$, $a \\in A(h)$. 
Then for any $h, a, z^\\time$,\n\\begin{equation*}\n\\Var[z^\\time]{\\baseutil{h, a}{\\sigma^\\time}{z^\\time} | z^\\time \\sqsupseteq h} = 0\n\\end{equation*}\n\\label{thm:variance}\n\\end{thm}\n\nAll proofs are given in \\iftoggle{appendix}{the appendix.}{the supplementary materials.} Theorem~\\ref{thm:unbiased} shows that we can use $\\baseutil{h, a}{\\sigma^\\time}{z^\\time}$ in place of $\\samputil{h, a}{\\sigma^\\time}{z^\\time}$ in Equation~(\\ref{eq:sampledregret}) and maintain the convergence guarantees of MCCFR. Theorem~\\ref{thm:variance} shows that an ideal baseline eliminates all variance in the MCCFR update. By choosing our baseline well, we decrease the MCCFR variance and speed up its convergence. Pseudocode for MCCFR with baseline-corrected values is given in \\iftoggle{appendix}{Appendix~\\ref{sec:pseudocode}}{the supplementary materials}.\n\nAlthough we focus on using our baseline-corrected samples in MCCFR, nothing in the value definition is particular to that algorithm. In fact, a lower variance estimate of sampled utilities is useful in any algorithm that performs iterative training using sampled trajectories. Examples of such algorithms include policy gradient methods \\citep{Srinivasan18} and stochastic first-order methods \\citep{Kroer15}.\n\n\\section{Baselines for EFGs}\n\\label{sec:baselines}\n\nIn this section we propose several baseline functions for use during iterative training. Theorem~\\ref{thm:variance} shows that we can minimize variance by choosing a baseline function $b^\\time$ such that $b^\\time(h, a) \\approx \\exputil{(ha)}{\\sigma^\\time}$.\n\n\\paragraph{No baseline.}\n\nWe begin by examining MCCFR under its original definition, where no baseline function is used. We note that when we run baseline-corrected MCCFR with a static choice of $b^\\time(h, a) = 0$ for all $h, a$, the operation of the algorithm is identical to MCCFR. 
Thus, opting to not use a baseline is, in itself, a choice of a very particular baseline.\n\nUsing $b^\\time(h, a) = 0$ might seem like a reasonable choice when we expect the game's payouts to be balanced between the players. However, even when the overall expected utility $u(\\sigma)$ is very close to 0, there will usually be particular histories with high magnitude expected utility $\\exputil{h}{\\sigma}$. For example, in poker games, the expected utility of a history is heavily biased toward the player who has been dealt better cards, even if these biases cancel out when considered across all histories. In fact, often there is no strategy profile at all that satisfies $\\exputil{(ha)}{\\sigma} = 0$, which makes $b^\\time(h, a) = 0$ a poor choice with respect to the ideal criterion $b^\\time(h, a) \\approx \\exputil{(ha)}{\\sigma^\\time}$. An example game where a zero baseline performs very poorly is explored in Section~\\ref{sec:results}.\n\n\\paragraph{Static strategy baseline.}\n\nThe simplest way to ensure that the baseline function does correspond to an actual strategy is to choose a static, known strategy profile $\\sigma^b \\in \\Sigma$ and let $b^\\time(h, a) = \\exputil{(ha)}{\\sigma^b}$ for each time $\\time$. Once the strategy is chosen, the baseline values only need to be computed once and stored. In general this requires a full walk of the game tree, but it is sometimes possible to take advantage of the structure of the game to greatly reduce this cost. For an example, see Section~\\ref{sec:results}.\n\n\\paragraph{Learned history baseline.}\n\nUsing a static strategy for our baseline ensures that it corresponds to some expected utility, but it fails to take advantage of the iterative nature of MCCFR. In particular, when attempting to estimate $\\exputil{(ha)}{\\sigma^\\time}$, we have access to all past samples $\\baseutil{(ha)}{\\sigma^\\tau}{z^\\tau}$ for $\\tau < \\time$. 
Because the strategy changes incrementally, we might expect the expected utility to change slowly, making these past samples reasonable estimates of the utility at time $\\time$ as well.\n\nDefine $\\mathcal{T}^{ha}(\\time) = \\{ \\tau < \\time \\mid (ha) \\sqsubseteq z^\\tau \\}$ to be the set of timesteps on which $(ha)$ was sampled, and denote the $j$th such timestep as $\\tau_j$. We define the \\emph{learned history baseline} as\n\\begin{equation}\nb^\\time(h, a) = \\sum_{j=1}^{|\\mathcal{T}^{ha}(\\time)|} \\weight{j}\\baseutil{(ha)}{\\sigma^{\\tau_j}}{z^{\\tau_j}}\n\\end{equation}\nwhere $(\\weight{j})_{j=1}^{|\\mathcal{T}^{ha}(\\time)|}$ is a sequence of weights satisfying $\\sum_{j=1}^{|\\mathcal{T}^{ha}(\\time)|} \\weight{j} = 1$. Possible weighting choices include simple averaging, where $\\weight{j}= 1\/|\\mathcal{T}^{ha}(\\time)|$, and exponentially-decaying averaging, where $\\weight{j} = \\alpha(1-\\alpha)^{|\\mathcal{T}^{ha}(\\time)| - j}$ for some $\\alpha \\in (0,1]$. In either case, the baseline can be efficiently updated online by tracking the weighted sum and the number of times that $(ha)$ has been sampled.\n\n\\paragraph{Learned infoset baseline.}\n\nThe learned history baseline is very similar to the VR-MCCFR baseline defined by \\citet{Schmid19}. The principal difference is that the VR-MCCFR baseline tracks values for each information set, rather than for each history; we thus refer to it as the \\emph{learned infoset baseline}. This baseline also updates values for each player separately, based on their own information sets. This can be accomplished by tracking separate values for each player throughout the tree walk, or by running MCCFR with alternating updates, where only one player's regrets are updated on each tree walk. 
The VR-MCCFR baseline can be defined in our framework as\n\begin{equation}\nb^\time(h, a) = b^\time(\my{I}(h), a)\qquad\text{where}\qquad\nb^\time(\my{I}, a) = \sum_{j=1}^{|\mathcal{T}^{\my{I}a}(\time)|} \weight{j}\baseutil{(\my{I}[z^{\tau_j}]a)}{\sigma^{\tau_j}}{z^{\tau_j}}\n\end{equation}\nwhere $i$ is the player being updated, $\mathcal{T}^{\my{I}a}(\time)$ is the set of timesteps on which $(h'a)$ was sampled for any $h' \in \my{I}$, and $\tau_j$ is the $j$th such timestep. Following Schmid et al., we consider both simple averaging and exponentially-decaying averaging for selecting the weights $\weight{j}$.\n\n\paragraph{Predictive baseline.}\n\nOur last baseline takes advantage of the recursive nature of the MCCFR update. On each iteration, each history along the sampled trajectory is evaluated and updated in depth-first order. Thus when the update of history $h \sqsubseteq z^\time$ is complete and the value is returned, we have already calculated the next regrets $\Regret{\time+1}(I(h'),\cdot)$ for all $h'$ such that $h \sqsubseteq h' \sqsubseteq z^\time$. These values will be the input to the regret matching procedure on the next iteration, computing $\sigma^{\time+1}(h', \cdot)$ at these histories. Thus we can immediately compute this next strategy, and using the already sampled trajectory, compute an estimate of the strategy's utility as $\baseutil{(ha)}{\sigma^{\time+1}}{z^\time}$. This is an unbiased sample of the expected utility $\exputil{(ha)}{\sigma^{\time+1}}$, which is our target value for the next baseline $b^{\time+1}(h, a)$. We thus use this sample to update the baseline:\n\begin{equation}\nb^{\time+1}(h, a) = \begin{cases}\baseutil{(ha)}{\sigma^{\time+1}}{z^\time}\qquad&\text{if~}(ha) \sqsubseteq z^\time\\\nb^\time(h, a)&\text{otherwise}\end{cases}\n\end{equation}\n\nThe computation for this update can be done efficiently by a simple change to MCCFR. 
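In storage terms, the case-wise update rule above only ever overwrites the entries along the sampled trajectory and leaves every other entry untouched. A minimal hedged sketch with dictionary-backed storage (function and variable names hypothetical):

```python
def update_predictive_baseline(b, trajectory, next_values):
    """Apply b^{t+1}(h,a) = predictive value if (ha) lies on the sampled
    path z^t, and keep b^t(h,a) otherwise.

    b           : dict mapping (h, a) -> stored baseline value
    trajectory  : the (h, a) pairs whose (ha) is a prefix of z^t
    next_values : dict mapping (h, a) -> value of sigma^{t+1} estimated
                  on z^t, computed during the same tree walk
    """
    for ha in trajectory:
        b[ha] = next_values[ha]
    # Entries off the sampled path keep their previous values.
    return b
```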
In MCCFR, we compute $\baseutil{h}{\sigma^{\time}}{z^\time}$ at each step by using $\sigma^\time$ to weight recursively-computed action values. In MCCFR with predictive baseline, after updating the regrets at $h$, we use a second regret matching computation to compute $\sigma^{\time+1}(h, \cdot)$. We use this strategy to weight a second set of recursively-computed action values to compute $\baseutil{h}{\sigma^{\time+1}}{z^\time}$. When we walk back up the tree, we return both of the values $\baseutil{h}{\sigma^{\time}}{z^\time}$ and $\baseutil{h}{\sigma^{\time+1}}{z^\time}$, allowing this recursion to continue. The predictive value $\baseutil{h}{\sigma^{\time+1}}{z^\time}$ is only used for updating the baseline function. These changes do not modify the asymptotic time complexity of MCCFR. Pseudocode is given in \iftoggle{appendix}{Appendix~\ref{sec:pseudocode}}{the supplementary materials}.\n\n\todo{Talk about combining baselines?}\n\n\section{Experimental comparison}\n\label{sec:results}\n\nWe run our experiments on a commodity desktop machine in Leduc hold'em \citep{Southey05}, a small poker game commonly used as a benchmark in games research\footnote{An open source implementation of CFR+ and Leduc hold'em is available from the University of Alberta \citep{CPRGcode}.}. We compare the effect of the various baselines on the MCCFR convergence rate. Our experiments use the regret zeroing and linear averaging of CFR+, as these improve convergence when combined with any baseline. For the static strategy baseline, we use the ``always call'' strategy, which matches the opponent's bets and makes no bets of its own. Expected utility under this strategy is determined by the current size of the pot, which is measurable at run time, and the winning chance of each player's cards. Before training, we measure and store these odds for all possible sets of cards; this table is significantly smaller than the full game. 
For both of the learned baselines, we use simple averaging as it was found to work best in preliminary experiments.\n\nWe run experiments with two sampling strategies. The first is uniform sampling, in which $\sampling{\time}(h, a) = 1\/{|A(h)|}$. The second is opponent on-policy sampling, which depends on the player $i$ being updated: we sample uniformly ($\sampling{\time}(h, a) = 1\/{|A(h)|}$) at histories $h$ where $P(h) = i$, and sample on-policy ($\sampling{\time}(h, a) = \sigma^\time(h, a)$) otherwise. For consistency, we use alternating updates for both schemes.\n\n\begin{figure*}[htb]\n\centering\n\begin{subfigure}[t]{0.34\linewidth}\n\centering\n \includegraphics[height=2.1in]{figures\/baseline_os-crop.pdf}\n \caption{Uniform sampling}\n \label{fig:osexp}\n\end{subfigure}%\n\begin{subfigure}[t]{0.325\linewidth}\n\centering\n \includegraphics[height=2.1in]{figures\/baseline_opos-crop.pdf}\n \caption{Opponent on-policy sampling}\n \label{fig:oposexp}\n\end{subfigure}%\n\begin{subfigure}[t]{0.325\linewidth}\n\centering\n \includegraphics[height=2.1in]{figures\/baseline_shifted_os-crop.pdf}\n \caption{Shifted utilities}\n \label{fig:shift}\n\end{subfigure}%\n\caption{Log-log plots of convergence of MCCFR strategies with various baselines. \textbf{(a)} and \textbf{(b)} Leduc with different MCCFR sampling schemes. \textbf{(c)} Leduc with utilities shifted by 100 and opponent on-policy sampling.}\n\end{figure*} \n\nFigures~\ref{fig:osexp}~and~\ref{fig:oposexp} show the convergence of MCCFR with the various baselines, as measured by exploitability (recall that exploitability converges to zero). All results in this paper are averaged over 20 runs, with 95\% confidence intervals shown as error bands (often too narrow to be visible). With uniform sampling, the learned infoset (VR-MCCFR) baseline improves modestly on using no baseline at all, while the other three baselines achieve a significant improvement on top of that. 
With opponent on-policy sampling, the gap is smaller, but the learned infoset baseline is still noticeably worse than the other options.\n\nMany true expected values in Leduc are very close to zero, making MCCFR without a baseline (i.e. $b^\time(h, a) = 0$) better than it might otherwise be. To demonstrate the necessity of a baseline in some games, we ran MCCFR in a modified Leduc game where player 2 always transfers 100 chips to player 1 after every game. This utility change is independent of the players' actions, so it doesn't strategically change the game. However, it means that 0 is now an extremely inaccurate value estimate for all histories. Figure~\ref{fig:shift} shows convergence in Leduc with shifted utilities. The always call baseline is omitted, as the results would be identical to those in Figure~\ref{fig:oposexp}. Here we see that using any baseline at all provides a significant advantage over not using a baseline, due to the ability to adapt to the shifted utilities. We also see that the learned infoset baseline performs relatively well early on in this setting, because it generalizes across histories.\n\n\section{Public Outcome Sampling}\n\nAlthough the results in Section~\ref{sec:results} show large gains in convergence speed when using baselines with MCCFR, the magnitudes are not as large as those shown with the VR-MCCFR baseline by \citet{Schmid19}. This is because their experiments use a ``vectorized'' form of MCCFR, which avoids the sampling of individual histories within information sets. Instead, they track a vector of values on each iteration, one for each possible true history given the player's observed information set. Schmid et al. do not formally define their algorithm. We refer to it as \emph{Public Outcome Sampling (POS)} as the algorithm samples any actions that are publicly visible to both players, while exhaustively considering all possible private states. 
We give a full formal definition of POS in \iftoggle{appendix}{Appendix~\ref{sec:pubtrees}}{the supplementary materials}.\n\n\subsection{Baselines in POS}\n\nIn MCCFR with POS, we still use action baselines $b^\time(h, a)$ with the ideal baseline values being $b^\time(h,a) = \exputil{(ha)}{\sigma^\time}$. Thus the baselines in Section~\ref{sec:baselines} apply to this setting as well.\n\nFor the learned infoset baseline, we have more information available to us than in the OS case. This is because when POS samples some history-action pair $h,a$, it also samples every pair $h',a$ for $h' \in I(h)$. Thus, rather than using one sampled history value to update the baseline, we use a weighted sample of all of the history values. Following Schmid et al., we weight the baseline values\n\begin{equation*}\nb^\time(h, a) = b^\time(\my{I}(h), a)\qquad\text{where}\qquad\nb^\time(\my{I}, a) = \sum_{j=1}^{|\mathcal{T}^{\my{I}a}(\time)|} \weight{j}\frac{\sum_{h' \in \my{I}}\opp{\reach{\sigma^{\tau_j}}}(h')\baseutil{(h'a)}{\sigma^{\tau_j}}{z^{\tau_j}}}{\sum_{h' \in \my{I}}\opp{\reach{\sigma^{\tau_j}}}(h')}.\n\end{equation*}\nThis is the same relative weighting given to each history when calculating the counterfactual regret.\n\n\paragraph{Zero-variance baseline.}\n\nPOS also has implications for the predictive baseline. In fact, we can guarantee that after every outcome of the game has been sampled, the predictive baseline will have learned the true value of the current strategy. 
For time $\time$, let $Z^\time$ be the set of sampled terminal histories (consistent with a public outcome), and let $\text{samp}^\time(h)$ be the event that $h$ is sampled on the way to $Z^\time$.\n\n\begin{thm}\nIf each of the terminal states $Z[h]$ reachable from history $h \in H$ has been sampled at least once under public outcome sampling \textup{($Z[h] \subseteq \bigcup_{\tau < \time} Z^\tau$)}, then the predictive baseline satisfies\n\begin{equation*}\nb^\time(h,a) = \exputil{(ha)}{\sigma^\time} \textup{~~and~~} \Var[Z^\time]{\baseutil{h}{\sigma^\time}{Z^\time}|\mathrm{samp}^\time(h)} = 0\qquad \forall a \in A(h)\n\end{equation*}%\n\label{thm:zerovar}%\n\end{thm}%\nThe key idea behind the proof is that POS ensures that the baseline is updated at a history if and only if the expected value of the history changes. The full proof is in \iftoggle{appendix}{Appendix~\ref{sec:zerovar}}{the supplementary materials}.\n\nIn order for the theorem to hold everywhere in the tree, all outcomes must be sampled, which could take a large number of iterations. An alternative is to guarantee that all outcomes are sampled during the early iterations of MCCFR. For example, one could do a full CFR tree walk on the very first iteration, and then sample on subsequent iterations. Alternatively, we can ensure the theorem always holds with smart initialization of the baseline. When there are no regrets accumulated, MCCFR uses an arbitrary strategy. If we have some strategy with known expected values throughout the tree, we can use this strategy as the default MCCFR strategy and initialize the baseline values to the strategy's expected values. Either option guarantees that all regret updates will use zero-variance values.\n\n\subsection{POS results}\n\nAs in Section~\ref{sec:results}, we run experiments in Leduc and use CFR+ updates. 
For the learned baselines, we use exponentially-decaying averaging with $\alpha=0.5$, which preliminary experiments found to outperform simple averaging when combined with POS. For simplicity and consistency with the experiments of \citet{Schmid19}, we use uniform sampling and simultaneous updates.\n\n\begin{figure*}[pth]\n\centering\n\begin{subfigure}[t]{0.46\linewidth}\n\centering\n \includegraphics[width=\linewidth]{figures\/baseline_su-crop.pdf}\n \caption{Exploitability}\n \label{fig:posexp}\n\end{subfigure}\quad%\n\begin{subfigure}[t]{0.46\linewidth}\n\centering\n \includegraphics[width=\linewidth]{figures\/variance_au-crop.pdf}\n \caption{Counterfactual value variance}\n \label{fig:var}\n\end{subfigure}%\n\caption{Log-log plots of POS MCCFR strategies with various baselines. \textbf{(a)} Convergence as measured by exploitability. \textbf{(b)} Empirical variance of counterfactual values (cfvs).}\n\end{figure*}\n\nFigure~\ref{fig:posexp} compares the baselines' effects on POS MCCFR. We find that using any baseline provides a significant improvement on using no baseline. The always call baseline performs well early but tails off as it doesn't learn during training. Even with POS, where we always see an entire information set at a time, the learned infoset baseline (VR-MCCFR) is significantly outperformed by the learned history and predictive baselines. This is likely because the learned infoset baseline has to learn the relative weighting between histories in an infoset, while the other baselines always use the current strategy to weight the learned values. Finally, we observe that the predictive baseline has a small, but statistically significant, advantage over the learned history baseline in early iterations.\n\nIn addition, we compare the baselines by directly measuring their variance. 
We measure the variance of the counterfactual value $\sum_{h \in I} \reach{\sigma^\time}_{-P(h)}(h)\baseutil{(ha)}{\sigma^\time}{z^\time}$ for each pair $I, a$, and we average across all such pairs. Full details are in \iftoggle{appendix}{Appendix~\ref{sec:empvar}}{the supplementary materials}. Results are shown in Figure~\ref{fig:var}. We see that using no baseline results in high and relatively steady variance of counterfactual values. Using the always call baseline also results in steady variance, as nothing is learned, but approximately an order of magnitude lower than with no baseline. Variance with the other baselines improves over time, as the baseline becomes more accurate. The learned history baseline mirrors the learned infoset baseline, but with more than an order of magnitude reduction in variance. The predictive baseline is best of all, and in fact we see Theorem~\ref{thm:zerovar} in action as the variance drops to zero.\n\n\section{Related Work}\todo{Do we need to talk about VR-MCCFR again here?}\n\nAs discussed in the introduction, the use of baseline functions has a long history in RL. Typically these approaches have used state value baselines, with some recent exceptions \citep{Liu18, Wu18}. \citet{Tucker18} suggest an explanation for this by isolating the variance terms that come from sampling an immediate action and from sampling the rest of a trajectory. Typical RL baselines only reduce the action variance, so the additional benefit from using a state-action baseline is insignificant when compared to the trajectory variance. In our work, we apply a recursive baseline to reduce both the action and trajectory variances, meaning state-action baselines give a noticeable benefit.\n\nIn RL, the doubly-robust estimator \citep{Jiang16} has been used to reduce variance by the recursive application of control variates \citep{Thomas16}. 
Similarly, variance reduction in EFGs via recursive control variates is the basis of the advantage sum estimator \citep{Zinkevich07} and AIVAT \citep{Burch18}. All of these techniques construct control variates by evaluating a static policy or strategy, either on the true game or on a static model. In this sense they are equivalent to our static strategy baseline. However, to the best of our knowledge, these techniques have only been used for evaluation of static strategies, rather than for variance reduction during training. Our work extends the EFG techniques to the training domain; we believe that similar ideas can be used in RL, and this is an interesting avenue of future research.\n\nConcurrently with this work, \citet{Zhou18} also suggested tracking true values of histories in a CFR variant, analogous to our predictive baseline. They use these values for truncating full tree walks, rather than for variance reduction along sampled trajectories. As such, they always initialize their values with a full tree walk, and don't examine gradually learning the values during training.\n\n\section{Conclusion and Future Work}\n\nIn this work we introduced a new framework for variance reduction in EFGs through the application of a baseline value function. We demonstrated that the existing VR-MCCFR baseline can be described in our framework with a specific baseline function, and we introduced other baseline functions that significantly outperform it in practice. In addition, we introduced a predictive baseline and showed that it gives provably optimal performance under a sampling scheme that we formally define.\n\nThere are three sources of variance when performing sampled updates in EFGs. The first is from sampling trajectory values, the second from sampling individual histories within an information set that is being updated, and the third from sampling which information sets will be updated on a given iteration. 
By introducing MCCFR with POS, we provably eliminate the first two sources of variance: the first because we have a zero-variance baseline, and the second because we consider all histories within the information set. For the first time, this allows us to select the MCCFR sampling strategy $\sampling{\time}$ entirely on the basis of minimizing the third source of variance, by choosing the ``best'' information sets to update. Doing this in a principled way is an exciting avenue for future research.\n\nFinally, we close by discussing function approximation. All of the baselines introduced in this paper require an amount of memory that scales with the size of the game tree. In contrast, baseline functions in RL typically use function approximation, requiring a much smaller number of parameters. Additionally, these functions generalize across states, which can allow for learning an accurate baseline function more quickly. The framework that we introduce in this work is completely compatible with function approximation, and combining the two is an area for future research.\n\n\small\n\bibliographystyle{plainnat}\n\n\section{Introduction}\n\nAn integral domain $D$ is Pr\\\"ufer if $D_M$ is a valuation domain for each maximal ideal $M$ of $D$. A Pr\\\"ufer domain $D$ enjoys an abundance of properties (see for example \cite{Gilmer}), among which there is the fact that $D$ is integrally closed. By a celebrated result of Krull, every integrally closed domain with quotient field $K$ can be represented as an intersection of valuation domains of $K$. Conversely, it is of extreme importance to establish when a given family of valuation domains of a given field $K$ intersects in a Pr\\\"ufer domain with quotient field $K$. 
This problem also has connections to real algebraic geometry, since the real holomorphy ring of a formally real function field is well known to be a Pr\\\"ufer domain (see for example \cite[\S 2.1]{FHP}). Different authors have investigated this problem: for example, Gilmer and Roquette gave explicit constructions of Pr\\\"ufer domains as intersections of valuation domains, or, which is the same thing, as the integral closure of some subring (see \cite{GilmPrufer} and \cite{Roq}, respectively). Recently, Olberding gave a geometric criterion on a subset $Z$ of the Zariski-Riemann space of all the valuation domains of a field in order for the holomorphy ring $\bigcap_{V\in Z}V$ to be a Pr\\\"ufer domain; this criterion is given in terms of projective morphisms of $Z$, considered as a locally ringed space, into the projective line (see \cite{Olb1}). In \cite{Olb2} Olberding gave a sufficient condition on a family of rank one valuation domains which satisfies certain assumptions so that the intersection of the elements of the family is a Pr\\\"ufer domain. \n\nIn this paper we focus our attention on the relevant class of polynomial rings called integer-valued polynomials. Classically, given an integral domain $D$ with quotient field $K$ and a subset $S$ of $D$, the ring of integer-valued polynomials over $S$ is defined as:\n$${\rm Int}(S,D)=\{f\in K[X] \mid f(S)\subseteq D\}.$$\nFor $S=D$, we set ${\rm Int}(D,D)={\rm Int}(D)$. We refer to \cite{CaCh} for a detailed treatment of these rings. If $D$ is Noetherian, Chabert and McQuillan independently gave sufficient and necessary conditions on $D$ so that ${\rm Int}(D)$ is Pr\\\"ufer (see \cite[Theorem VI.1.7]{CaCh}). Later on, Loper generalized their result to a general domain $D$ (see \cite{LopClass}). 
The problem of establishing when ${\\rm Int}(S,D)$ is a Pr\\\"ufer domain for a general subset $S$ of $D$ is considerably more difficult, see \\cite{LS} for a recent survey on this problem. Since a necessary condition for ${\\rm Int}(S,D)$ to be Pr\\\"ufer is that $D$ is Pr\\\"ufer (see for example \\cite{LS}), it is reasonable to work locally. Henceforth, we consider $D$ to be equal to a valuation domain $V$. \n\nThe ring ${\\rm Int}(S,V)$ can be represented in the following way as an intersection of a family of valuation domains of the field of rational functions $K(X)$ and the polynomial ring $K[X]$ (which likewise can be represented as an intersection of valuation domains lying over the trivial valuation domain $K$):\n$${\\rm Int}(S,V)=K[X]\\cap \\bigcap_{s\\in S}W_s$$\nwhere, for each $s\\in S$, $W_s$ is the valuation domain of those rational functions which are integer-valued at $s$, i.e.: $W_s=\\{\\varphi\\in K(X) \\mid \\varphi(s)\\in V\\}$. In the language of Roquette \\cite{Roq}, a rational function $\\varphi\\in K(X)$ is holomorphic at $W_s$ (or, equivalently, $\\varphi$ has no pole at $W_s$) if and only if $\\varphi$ is integer-valued at $s$. Clearly, $W_s$ lies over $V$, and, in the case $V$ has rank one, $W_s$ has rank two. The topology on the subspace of the Riemann-Zariski space of $K(X)$ formed by the valuation domains $W_s$, $s\\in S$, has been extensively studied in \\cite{PerTransc}, when $V$ has rank one: in particular, $\\{W_s \\mid s\\in S\\}$ as a subspace of the Zariski-Riemann space of all the valuation domains of $K(X)$ is homeomorphic to $S$, considered as a subset of $V$, endowed with the $V$-adic topology. \n\nFor a general valuation domain $V$, we have the following well-known result (which is now a special case of the aforementioned result of Loper in \\cite{LopClass}):\n\n\\begin{Thm}\\cite[Lemma VI.1.4, Proposition VI.1.5]{CaCh}\\label{IntVPrufer}\nLet $V$ be a valuation domain. 
Then ${\\rm Int}(V)$ is a Pr\\\"ufer domain if and only if $V$ is a DVR with finite residue field.\n\\end{Thm}\n\nThe first result about when ${\\rm Int}(S,V)$ is Pr\\\"ufer dates back to McQuillan: he showed that if $S$ is a finite set then ${\\rm Int}(S,V)$ is Pr\\\"ufer (more generally, he showed that for a finite subset $S$ of an integral domain $D$, ${\\rm Int}(S,D)$ is Pr\\\"ufer if and only if $D$ is Pr\\\"ufer, see \\cite{McQ}). Later on, Cahen, Chabert and Loper turned their attention to infinite subsets $S$ of a valuation domain $V$, and gave the following sufficient condition (here, precompact means that the topological closure of $S$ in the completion of $V$ is compact).\n\n\\begin{Thm}\\cite[Theorem 4.1]{CCL}\\label{ThmCCL}\nLet $V$ be a valuation domain and $S$ a subset of $V$. If $S$ is a precompact subset of $V$ then ${\\rm Int}(S,V)$ is a Pr\\\"ufer domain.\n\\end{Thm}\n\nWhether the precompact condition on $S$ is also a necessary condition or not was a natural question posed in \\cite{CCL}. If $V$ is a rank one discrete valuation domain, then it is sufficient and necessary that $S$ is precompact in order for ${\\rm Int}(S,V)$ to be Pr\\\"ufer (\\cite[Corollary 4.3]{CCL}). Similarly, Park proved recently that if $S$ is an additive subgroup of any valuation domain $V$, then ${\\rm Int}(S,V)$ is a Pr\\\"ufer domain if and only if $S$ is precompact (\\cite[Theorem 2.7]{MHP}). Unfortunately, already for a non-discrete rank one valuation domain $V$ the precompact condition turned out to be not necessary, as Loper and Werner showed by considering subsets $S$ of $V$ whose elements comprise a pseudo-convergent sequence in the sense of Ostrowski (for all the definitions related to this notion see \\S \\ref{Gps} below). It is worth recalling that the first time this notion has been used in the realm of integer-valued polynomials is in two articles of Chabert (see \\cite{ChabPolCloVal,ChabIntValValField}). 
Loper and Werner made a thorough study of the rings of polynomials which are integer-valued over a pseudo-convergent sequence $E=\\{s_n\\}_{n\\in\\mathbb{N}}$ of a rank one valuation domain $V$, obtaining the following characterization of when ${\\rm Int}(E,V)$ is Pr\\\"ufer.\n\n\\begin{Thm}\\cite[Theorem 5.2]{LW}\\label{ThmLW}\nLet $V$ be a rank one valuation domain and $E=\\{s_n\\}_{n\\in\\mathbb{N}}$ a pseudo-convergent sequence in $V$. Then ${\\rm Int}(E,V)$ is a Pr\\\"ufer domain if and only if either $E$ is of transcendental type or the breadth ideal of $E$ is the zero ideal.\n\\end{Thm}\n\nIn particular, if $E$ is a pseudo-convergent sequence with non-zero breadth ideal and of transcendental type, then $E$ is not precompact and ${\\rm Int}(E,V)$ is a Pr\\\"ufer domain (\\cite[Example 5.12]{LW}). \n\nIn this paper, we give a sufficient and necessary condition on a general subset $S$ of a rank one valuation domain $V$ so that ${\\rm Int}(S,V)$ is Pr\\\"ufer, generalizing the above result by Loper and Werner. Throughout the paper, we assume that $V$ is a rank one valuation domain with maximal ideal $M$ and quotient field $K$. We denote by $v$ the associated valuation and by $\\Gamma_v$ the value group. In particular, $\\Gamma_v$ is an ordered subgroup of the reals, so that $\\Gamma_v\\subseteq\\mathbb{R}$. Our approach proceeds as follows. We employ a criterion for an integrally closed domain $D$ to be Pr\\\"ufer (which can be found for example in the book of Zariski and Samuel \\cite{ZS2}): it is sufficient and necessary that, for each valuation overring $W$ of $D$ with center a prime ideal $P$ on $D$, the extension of the residue field of $W$ over the quotient field of $D\/P$ is not transcendental. 
In our setting, a valuation overring $W$ of ${\\rm Int}(S,V)$ which does not satisfy the previous property is a residually transcendental extension of $V$ (i.e.: $W$ lies over $V$ and the residue field of $W$ is a transcendental extension of the residue field of $V$). These valuation domains of the field of rational functions have been completely described by Alexandru and Popescu. Putting together these facts, we show that the lack of the Pr\\\"ufer property for ${\\rm Int}(S,V)$ occurs precisely when $S$ contains a pseudo-monotone sequence in the sense of Chabert which admits a pseudo-limit in the algebraic closure of $K$ (with respect to a suitable extension of $V$). These notions generalize the notions of pseudo-convergent sequence and pseudo-limit in the sense of Ostrowski and Kaplansky, respectively. \n\nHere is a summary of this paper. In \\S \\ref{Gps} we introduce the notion of pseudo-monotone sequence and pseudo-limit given by Chabert. In \\S \\ref{pol clos} we recall a result of Chabert about the fact that the polynomial closure of a subset $S$ of $V$, defined as the largest subset of $V$ over which all the polynomials of ${\\rm Int}(S,V)$ are integer-valued, is a topological closure. In \\S \\ref{Resid transc exten} we recall the aforementioned criterion for an integrally closed domain to be Pr\\\"ufer and an explicit description by Alexandru and Popescu of residually transcendental extensions of a valuation domain, which are crucial for our discussion. Finally, in \\S \\ref{final result}, we give our main result which classifies the subsets $S$ of a rank one valuation domain $V$ for which ${\\rm Int}(S,V)$ is Pr\\\"ufer (see Theorem \\ref{final theorem}). This result is accomplished by describing when an element $\\alpha\\in K$ is a pseudo-limit of a pseudo-monotone sequence contained in $S$: this happens when a closed ball $B(\\alpha,\\gamma)=\\{x\\in K \\mid v(x-\\alpha)\\geq\\gamma\\}$ is contained in the polynomial closure of $S$. 
From this point of view, the assumption of Theorem \ref{ThmCCL} is equivalent to the fact that $S$ does not contain any pseudo-monotone sequence, which is a sufficient but not necessary condition for ${\rm Int}(S,V)$ to be Pr\\\"ufer, as the above example of Loper and Werner shows.\n\n\section{Preliminaries}\n\n\subsection{Pseudo-monotone sequences}\label{Gps}\n\nWe introduce the following notion, which is given by Chabert in \cite{ChabPolCloVal}. It generalizes the classical notion of pseudo-convergent sequence of a valuation domain, introduced by Ostrowski in \cite{Ostr} and exploited by Kaplansky in \cite{Kap} to describe immediate extensions of a valued field.\n\begin{Def}\nLet $E=\{s_n\}_{n\in\mathbb{N}}$ be a sequence in $K$. We say that $E$ is a \emph{pseudo-monotone sequence} (with respect to the valuation $v$) if the sequence $\{v(s_{n+1}-s_n)\}_{n\in\mathbb{N}}$ is monotone, that is, one of the following conditions holds:\n\begin{itemize}\n\item[i)] $v(s_{n+1}-s_n)<v(s_{n+2}-s_{n+1})$, $\forall n\in\mathbb{N}$;\n\item[ii)] $v(s_n-s_m)=\gamma$, $\forall n\not=m\in\mathbb{N}$, for some fixed $\gamma\in\Gamma_v$;\n\item[iii)] $v(s_{n+1}-s_n)>v(s_{n+2}-s_{n+1})$, $\forall n\in\mathbb{N}$.\n\end{itemize}\nMore precisely, we say that $E$ is \emph{pseudo-convergent}, \emph{pseudo-stationary} or \emph{pseudo-divergent} in each of the three different cases, respectively. Case i) is precisely the original definition given by Ostrowski in \cite[\S 11, p. 368]{Ostr}. Let $\alpha\in K$. We say that $\alpha$ is a \emph{pseudo-limit} of $E$ in each of the three different cases above if:\n\begin{itemize}\n\item[i)] $v(\alpha-s_n)<v(\alpha-s_{n+1})$, $\forall n\in\mathbb{N}$, or, equivalently, $v(\alpha-s_n)=v(s_{n+1}-s_n)$, $\forall n\in\mathbb{N}$;\n\item[ii)] $v(\alpha-s_n)=v(s_{n+1}-s_n)$, $\forall n\in\mathbb{N}$;\n\item[iii)] $v(\alpha-s_n)>v(\alpha-s_{n+1})$, $\forall n\in\mathbb{N}$, or, equivalently, $v(\alpha-s_{n+1})=v(s_{n+1}-s_n)$, $\forall n\in\mathbb{N}$.\n\end{itemize}\nWe remark that case i) is the definition of pseudo-limit as given by Kaplansky in \cite{Kap}. 
Given a subset $S$ of $K$ and an element $\alpha$ in $K$, we say that $\alpha$ is a pseudo-limit of $S$ if $\alpha$ is a pseudo-limit of a pseudo-monotone sequence of elements of $S$.\n\nThe following limit in $\mathbb{R}\cup\{\infty\}$ is called the \emph{breadth} of a pseudo-monotone sequence $E$, as given in \cite{ChabPolCloVal}, which generalizes the definition of Ostrowski for pseudo-convergent sequences (\cite[p. 368]{Ostr}):\n$$\delta=\lim_{n\to\infty}v(s_{n+1}-s_n).$$\n\end{Def}\nNote that since $\{v(s_{n+1}-s_n)\}_{n\in\mathbb{N}}$ is either increasing, decreasing or stationary, the above limit is well defined, and $\delta$ may not be in $\Gamma_v$. In the latter case, $V$ is necessarily not discrete. Note that, if $E=\{s_n\}_{n\in\mathbb{N}}\subset V$ and $\alpha$ is a pseudo-limit of $E$, then it is easy to see that the breadth $\delta$ is greater than or equal to $0$ and $\alpha\in V$. We now give some remarks and further definitions for each of the three cases above.\n\n\subsubsection{Pseudo-convergent sequences}\label{pcv}\n\begin{Def}\nLet $E=\{s_n\}_{n\in\mathbb{N}}$ be a pseudo-convergent sequence in $V$. The following ideal of $V$:\n$${\rm Br}(E)=\{b\in V \mid v(b)>v(s_{n+1}-s_n),\forall n\in\mathbb{N}\}$$\nis called the \emph{breadth ideal} of $E$. \n\nWe say that $E$ is of \emph{transcendental type} if $v(f(s_n))$ eventually stabilizes for every $f\in K[X]$. If for some $f\in K[X]$ the sequence $v(f(s_n))$ is eventually strictly increasing then we say that $E$ is of \emph{algebraic type}.\n\end{Def}\n\nClearly, the breadth ideal is the zero ideal if and only if $\delta=+\infty$. If $V$ is a discrete rank one valuation domain (DVR), then the breadth ideal is necessarily equal to the zero ideal. 
In general, this last condition holds exactly when $E$ is a classical Cauchy sequence and then the definition of pseudo-limit boils down to the classical notion of limit (which in this case is unique). Throughout the paper, to avoid confusion, a pseudo-convergent sequence is supposed to have non-zero breadth ideal, and similarly, an element $\alpha\in K$ is a pseudo-limit of a sequence $E$ if $E$ is a pseudo-convergent sequence in this strict sense. Moreover, in this case if $\alpha\in K$ is a pseudo-limit for $E$, then $\{\alpha\}+{\rm Br}(E)$ is the set of all the pseudo-limits for $E$ (\cite[Lemma 3]{Kap}).\n\nThe following easy lemma gives a link between the breadth and the breadth ideal for a pseudo-convergent sequence (the inf is considered in $\mathbb{R}$). \n\begin{Lemma}\label{breadth=inf values breadth ideal}\nLet $E=\{s_n\}_{n\in\mathbb{N}}\subset V$ be a pseudo-convergent sequence with non-zero breadth ideal. Let\n$$\delta'=\inf\{v(b)\mid b\in{\rm Br}(E)\}$$\nThen $\delta'=\delta$, the breadth of $E$. Moreover, $\delta\in\Gamma_v\Leftrightarrow {\rm Br}(E)$ is a principal ideal.\n\end{Lemma}\n\begin{proof}\nSince $v(s_{n+1}-s_n)<v(b)$ for every $b\in{\rm Br}(E)$ and every $n\in\mathbb{N}$, letting $n$ tend to infinity we get $\delta\leq\delta'$. Conversely, an element $b\in V$ belongs to ${\rm Br}(E)$ if and only if $v(b)\geq\delta$; since the breadth ideal is non-zero, $V$ is not discrete, so $\Gamma_v$ is dense in $\mathbb{R}$ and $\delta'=\inf\{\gamma\in\Gamma_v\mid\gamma\geq\delta\}=\delta$. Finally, ${\rm Br}(E)$ is principal if and only if it contains an element of value $\delta$, which holds if and only if $\delta\in\Gamma_v$.\n\end{proof}\n\n\subsubsection{Pseudo-stationary sequences}\n\nLet $E=\{s_n\}_{n\in\mathbb{N}}\subset V$ be a pseudo-stationary sequence with breadth $\gamma$, so that $v(s_n-s_m)=\gamma$ for all $n\not=m$. If $\alpha$ is a pseudo-limit of $E$, then every element of the open ball $\mathring{B}(\alpha,\gamma)=\{x\in K \mid v(x-\alpha)>\gamma\}$ is a pseudo-limit of $E$. However, if $\beta\in K$ is such that $v(\alpha-\beta)=\gamma$, then $v(s_n-\beta)\geq \gamma$ for every $n\in\mathbb{N}$. Since for all $n\not=m$ we have $\gamma=v(s_n-s_m)=v(s_n-\beta+\beta-s_m)$, for at most one $n'\in\mathbb{N}$ we may have the strict inequality $v(s_{n'}-\beta)>\gamma$. Hence, up to removing one element from $E$, any element of $B(\alpha,\gamma)$ is a pseudo-limit of $E$. In this broader sense, any element of $E$ itself is a pseudo-limit of $E$. \n\n\subsubsection{Pseudo-divergent sequences}\label{Pds}\n\nIf $V$ is discrete, then there are no pseudo-divergent sequences contained in $V$. 
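For a concrete numeric check (a hedged illustration, not taken from the paper), the $p$-adic valuation on $\mathbb{Q}$ already exhibits pseudo-convergent and pseudo-divergent sequences; consistently with the remarks above, the pseudo-convergent example lies in the DVR $\mathbb{Z}_{(p)}$ and is therefore of Cauchy type, while the pseudo-divergent example necessarily lies outside the valuation ring:

```python
from fractions import Fraction

def vp(x, p=5):
    """p-adic valuation v(x) of a nonzero rational x."""
    x = Fraction(x)
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num, v = num // p, v + 1
    while den % p == 0:
        den, v = den // p, v - 1
    return v

p = 5
# s_n = 1 + p + ... + p^n lies in Z_(p); v(s_{n+1} - s_n) = n + 1 is
# strictly increasing, so the sequence is pseudo-convergent -- here of
# Cauchy type, with (pseudo-)limit alpha = 1/(1-p).
s = [Fraction(p**(n + 1) - 1, p - 1) for n in range(6)]
inc = [vp(s[n + 1] - s[n]) for n in range(5)]     # [1, 2, 3, 4, 5]
alpha = Fraction(1, 1 - p)
lim = [vp(alpha - s[n]) for n in range(6)]        # strictly increasing as well
# t_n = p^{-n} lies in Q but not in Z_(p); v(t_{n+1} - t_n) = -(n + 1) is
# strictly decreasing, so the sequence is pseudo-divergent.
t = [Fraction(1, p**n) for n in range(6)]
dec = [vp(t[n + 1] - t[n]) for n in range(5)]     # [-1, -2, -3, -4, -5]
```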
On the other hand, if $\\alpha\\in K$ is a pseudo-limit of a pseudo-divergent sequence $E=\\{s_n\\}_{n\\in\\mathbb{N}}\\subset K$ with breadth $\\gamma$, then the set of all the pseudo-limits in $K$ of $E$ is equal to the open ball $\\mathring{B}(\\alpha,\\gamma)=\\{x\\in K \\mid v(x-\\alpha)>\\gamma\\}$ (see \\cite[Remark 4.7]{ChabPolCloVal}). Note also that any element $s_k\\in E$ is a pseudo-limit of a tail of $E$, in the sense that, for all $n>k$, we have $v(s_n-s_k)=v(s_n-s_{n-1})>v(s_{n+1}-s_n)=v(s_{n+1}-s_k)$.\n\n\n\n\\begin{Rem}\\label{V not discrete of V\/M infinite}\nWe have seen that if $V$ admits a pseudo-monotone sequence $E=\\{s_n\\}_{n\\in\\mathbb{N}}$ with breadth $\\gamma\\in\\mathbb{R}$, then $V$ is either non-discrete or the residue field $V\/M$ is infinite. If $E$ is pseudo-stationary, then $V\/M$ is necessarily infinite, and if $E$ is pseudo-divergent or pseudo-convergent with non-zero breadth ideal, then $V$ is necessarily non-discrete. In particular, the only pseudo-monotone sequences in a DVR are the pseudo-stationary sequences.\n\\end{Rem}\n\n\\subsection{Polynomial closure}\\label{pol clos}\n\\begin{Def}\nLet $S$ be a subset of $K$. The \\emph{polynomial closure} of $S$ is the largest subset of $K$ over which the polynomials of ${\\rm Int}(S,V)$ are integer-valued, namely:\n$$\\overline{S}=\\{s\\in K \\mid \\forall f\\in{\\rm Int}(S,V),f(s)\\in V\\}$$\nEquivalently, the polynomial closure of $S$ is the largest subset $\\overline{S}$ of $K$ such that ${\\rm Int}(S,V)={\\rm Int}(\\overline{S},V)$. A subset $S$ of $K$ such that $S=\\overline{S}$ is called \\emph{polynomially closed}.\n\\end{Def}\nThe main result of Chabert in \\cite{ChabPolCloVal} is the following theorem, which will be essential in \\S \\ref{final result} for the proof of our main result.\n\\begin{Thm}\\cite[Theorem 5.3]{ChabPolCloVal}\\label{Thm Chabert}\nLet $V$ be a valuation domain of rank one. 
Then the polynomial closure is a topological closure, that is, there exists a topology on $K$ for which the closed sets are exactly the polynomially closed sets. A basis for the closed sets for this topology is given by the finite unions of closed balls $B(a,\\gamma)=\\{x\\in K \\mid v(x-a)\\geq \\gamma\\}$, for $a\\in K$ and $\\gamma\\in\\Gamma_v$.\n\\end{Thm}\n\n\\begin{Def}\nThe topology on the valued field $K$ which has the polynomially closed subsets as closed sets is called the \\emph{polynomial topology}.\n\\end{Def}\n\nChabert observes that the polynomial topology is in general weaker than the $v$-adic topology. They coincide if $V$ is discrete with finite residue field, but the next example (which we will use in the following) shows that they may differ in general.\n\n\\begin{Ex}\\label{polclosure ball}\nGiven $\\alpha\\in K$ and $\\gamma\\in\\mathbb{R}$, in \\cite[Proposition 3.2]{ChabPolCloVal} it is proved that the polynomial closure of the open ball $\\mathring{B}(\\alpha,\\gamma)=\\{x\\in K\\mid v(x-\\alpha)>\\gamma\\}$ (which is closed in the $v$-adic topology) is equal to:\n$$\\overline{\\mathring{B}(\\alpha,\\gamma)}=\\left\\{\n\\begin{array}{ll}\nB(\\alpha,\\overline{\\gamma}),\\;\\textnormal{ where }\\overline\\gamma=\\inf\\{\\lambda\\in\\Gamma_v \\mid \\lambda>\\gamma\\}, &\\textnormal{ if either }v\\textnormal{ is discrete or }\\gamma\\notin\\Gamma_v\\\\\nB(\\alpha,\\gamma),& \\textnormal{ otherwise}\n\\end{array}\\right.$$\n\\end{Ex}\n\n\\begin{Rem}\\label{open sets polyn topology} In particular, given $\\alpha\\in K$, the subsets of the form:\n$$\\bigcap_{i=1}^r\\{x\\in K \\mid v(x-s_i)<\\gamma_i\\}$$\nwhere $s_i\\in K$ and $\\gamma_i\\in \\Gamma_v$ are such that $v(\\alpha-s_i)<\\gamma_i$, for $i=1,\\ldots,r$, form a fundamental system of open neighborhoods of $\\alpha$ for the polynomial topology.\n\\end{Rem}\n\n\n\n\\section{Residually transcendental extensions}\\label{Resid transc exten}\n\nThe following criterion, which appears for 
example in \\cite[Theorem 10, chapt. VI, \\S 5]{ZS2} or \\cite[Theorem 19.15]{Gilmer}, establishes when an integrally closed domain $D$ is Pr\\\"ufer: $D$ must admit no valuation overring $V$ whose residue field is a transcendental extension of the quotient field of $D\/P$, where $P$ is the center of $V$ on $D$. Recall that a valuation overring of an integral domain $D$ is a valuation domain contained between $D$ and its quotient field $K$, and that the center of a valuation overring $V$ of $D$ is the intersection of the maximal ideal of $V$ with $D$.\n\n\\begin{Thm}\\label{criterionZS}\nLet $D$ be an integrally closed domain and $P$ a prime ideal of $D$. Then $D_P$ is a valuation domain if and only if there is no valuation overring $V$ of $D$ centered in $P$ such that the residue field of $V$ is transcendental over the quotient field of $D\/P$.\n\\end{Thm}\n\nBy means of this theorem, we are going to show that an integrally closed domain of the form ${\\rm Int}(S,V)$, $S\\subseteq V$, is not Pr\\\"ufer exactly when it admits a valuation overring which lies over $V$ and whose residue field extension is transcendental.\n\n\\begin{Def}\nA valuation domain $\\mathcal{W}$ of the field of rational functions $K(X)$ is a \\emph{residually transcendental extension} of $V=\\mathcal{W}\\cap K$ (or simply a residually transcendental extension if $V$ is understood) if the residue field of $\\mathcal{W}$ is a transcendental extension of the residue field of $V$.\n\\end{Def}\n\nThe residually transcendental extensions of $V$ to $K(X)$ have been completely described by Alexandru and Popescu (\\cite{AP}). In order to describe these valuation domains, we need to introduce the following class of valuations on $K(X)$.\n\n\\begin{Def}\\label{Valphagamma}\nLet $\\alpha\\in K$ and let $\\delta$ be an element of a value group $\\Gamma$ which contains $\\Gamma_v$. 
For $f\\in K[X]$ such that $f(X)=a_0+a_1(X-\\alpha)+\\ldots+a_n(X-\\alpha)^n$, we set:\n$$v_{\\alpha,\\delta}(f)=\\inf\\{v(a_i)+i\\delta \\mid i=0,\\ldots,n\\}$$\nThe function $v_{\\alpha,\\delta}$ naturally extends to a valuation on $K(X)$ (\\cite[Chapt. VI, \\S. 10, Lemme 1]{Bourb}). We denote by $V_{\\alpha,\\delta}$ the valuation domain associated to $v_{\\alpha,\\delta}$, i.e.: $V_{\\alpha,\\delta}=\\{\\varphi\\in K(X) \\mid v_{\\alpha,\\delta}(\\varphi)\\geq0\\}$. Clearly, $V_{\\alpha,\\delta}$ lies over $V$. We let also $M_{\\alpha,\\delta}=\\{\\varphi\\in K(X) \\mid v_{\\alpha,\\delta}(\\varphi)>0\\}$ be the maximal ideal of $V_{\\alpha,\\delta}$.\n\\end{Def}\n\\begin{Rem}\\label{descriptionValphagamma}\nNote that, if $\\gamma\\in\\Gamma_v$, $\\gamma\\geq0$ and $d\\in V$ is any element such that $v(d)=\\gamma$, then it is easy to see that:\n\\begin{align*}\nV_{\\alpha,\\gamma}=V\\left[\\frac{X-\\alpha}{d}\\right]_{M\\left[\\frac{X-\\alpha}{d}\\right]},\\;\\;V_{\\alpha,\\gamma}\\cap K[X]=V\\left[\\frac{X-\\alpha}{d}\\right],\\;\\;M_{\\alpha,\\gamma}\\cap K[X]=M\\left[\\frac{X-\\alpha}{d}\\right]\n\\end{align*}\nIn general, if $\\gamma\\in\\mathbb{R}$, then $V_{\\alpha,\\gamma}\\cap K[X]=\\{f(X)=\\sum_{k\\geq 0}a_k(X-\\alpha)^k\\in K[X] \\mid v(a_k)+k\\gamma\\geq0,\\forall k\\}$. As in \\cite[\\S 4]{ChabIntValValField}, in this case we set $V[(X-\\alpha)\/\\gamma]$ to be $V_{\\alpha,\\gamma}\\cap K[X]$.\n\\end{Rem}\n\n\nIn \\cite[p. 580]{APZ1} the authors say that $v_{\\alpha,\\delta}$ is residually transcendental if and only if $\\delta$ has finite order over $\\Gamma_v$. For the sake of the reader we give a self-contained proof here.\n\n\\begin{Lemma}\\label{characterization residually transcendental extensions}\nLet $\\alpha\\in K$ and $\\delta$ an element of a value group $\\Gamma$ which contains $\\Gamma_v$. 
Then $v_{\\alpha,\\delta}$ is residually transcendental if and only if $\\delta$ has finite order over $\\Gamma_v$, i.e., there exists $n\\in\\mathbb{N}$ such that $n\\delta\\in \\Gamma_v$.\n\\end{Lemma}\n\\begin{proof}\nSuppose there exists $n\\geq 1$ such that $n\\delta=\\gamma=v(c)\\in\\Gamma_v$, for some $c\\in K$. Clearly, the $v_{\\alpha,\\delta}$-adic valuation of $f(X)=\\frac{(X-\\alpha)^n}{c}$ is zero. We claim that the image of $f(X)$ in the residue field of $V_{\\alpha,\\delta}$ is transcendental over the residue field of $V$. In fact, suppose there exist $a_{d-1},\\ldots,a_0\\in V$ such that\n$$\\overline{f}^{d}+\\overline{a_{d-1}}\\overline{f}^{d-1}+\\ldots+\\overline{a_{1}}\\overline{f}+\\overline{a_{0}}=0$$\nthat is,\n$$g=f^d+a_{d-1}f^{d-1}+\\ldots+a_1 f+a_0\\in M_{\\alpha,\\delta}$$\nHowever, if we set $a_d=1$, we have:\n$$v_{\\alpha,\\delta}(g)=\\inf\\{v(a_i)-in\\delta+ni\\delta \\mid i=0,\\ldots,d\\}=\\inf\\{v(a_i) \\mid i=0,\\ldots,d\\}=0$$\nwhich is a contradiction.\n\nConversely, suppose that $n\\delta\\notin\\Gamma_v$, for each $n\\geq1$. Let $g\\in V_{\\alpha,\\delta}\\setminus M_{\\alpha,\\delta}$, say $g(X)=\\sum_{i=0}^d a_i (X-\\alpha)^i$; then we have\n$$v_{\\alpha,\\delta}(g)=\\inf\\{v(a_i)+i\\delta \\mid i=0,\\ldots,d\\}=0\\Leftrightarrow v(a_0)=0\\;\\;\\&\\;\\; v(a_i)+i\\delta>0, \\forall i=1,\\ldots,d$$\nbecause of the assumption on $\\delta$. Then $g(X)$ is congruent to $g(\\alpha)=a_0$ modulo $M_{\\alpha,\\delta}$, so that the image of $g(X)$ in the residue field is algebraic (indeed, it lies in the residue field of $V$).\n\\end{proof}\n\n\\begin{Rem}\\label{delta in Gammav} \nSuppose that $\\delta\\in\\Gamma_v$; in particular, $v_{\\alpha,\\delta}$ is residually transcendental. If we consider the following expansion $f(X)=b_0+b_1\\frac{X-\\alpha}{d}+\\ldots+b_n(\\frac{X-\\alpha}{d})^n$, where $d\\in K$ is such that $v(d)=\\delta$, then $v_{\\alpha,\\delta}(f)=\\inf\\{v(b_i)\\mid i=0,\\ldots,n\\}$. In particular, by \\cite[Chapt. VI, \\S10, Prop. 
2]{Bourb}, $v_{\\alpha,\\delta}$ is the unique valuation on $K(X)=K(\\frac{X-\\alpha}{d})$ for which the image of $\\frac{X-\\alpha}{d}$ in the residue field is transcendental over $V\/M$ (note that $\\frac{X-\\alpha}{d}$ has valuation zero).\n\\end{Rem}\n\nLet $\\overline{K}$ be a fixed algebraic closure of $K$\\footnote{$\\overline{K}$ is not to be confused with the polynomial closure of $K$, which is $K$ itself. Since both symbols are now equally customary, we decide to change neither of them.} and $\\Gamma_{\\overline v}=\\Gamma_v\\otimes_{\\mathbb{Z}}\\mathbb{Q}$, the divisible hull of $\\Gamma_v$. The following theorem characterizes the residually transcendental extensions of $V$ to $K(X)$ (see also \\cite[Theorem 3.11]{Kuh} for an alternative and more recent approach). The theorem holds for any valuation domain (i.e., regardless of its dimension). For the sake of the reader we give a sketch of the proof.\n\n\\begin{Thm}\\label{characterization residually transcendental exts}\\cite[Proposition 2 \\& Th\\'eor\\`eme 11]{AP}\nLet $\\mathcal{W}$ be a residually transcendental extension of $V$ to $K(X)$. Then there exist $\\alpha\\in\\overline{K}$, $\\gamma\\in\\Gamma_{\\overline{v}}$ and a valuation domain $\\overline{W}$ of $\\overline{K}$ lying over $V$ such that $\\mathcal{W}=\\overline W_{\\alpha,\\gamma}\\cap K(X)$.\n\\end{Thm}\n\\begin{proof} Let $\\overline{\\mathcal{W}}$ be an extension of $\\mathcal{W}$ to $\\overline{K}(X)$. It is clear that $\\overline{\\mathcal{W}}$ is residually transcendental over $\\overline{\\mathcal{W}}\\cap\\overline{K}$. Thus, without loss of generality we may assume that $K$ is algebraically closed. Now, by \\cite[Proposition 2]{AP}, $\\mathcal{W}$ is the valuation domain associated to a valuation $w$ on $K(X)$ which on a polynomial $f\\in K[X]$ is defined as:\n$$w(f)=\\inf_i\\{v(a_i)\\},\\;\\;\\textnormal{ if } f(X)=\\sum_i a_i (aX-b)^i$$\nfor some $a,b\\in K$, $a\\not=0$. 
Now, if we write $f(X)=\\sum_i b_i(X-\\alpha)^i$ where $b_i=a_i a^i$ and $\\alpha=b\/a$, we get that $v(a_i)=v(b_i)-iv(a)$, so finally $w=v_{\\alpha,\\gamma}$, where $\\gamma=-v(a)$ (see also Remark \\ref{delta in Gammav}).\n\\end{proof}\n\n\\begin{Def}\\label{def Valphagamma}\nGiven $\\alpha\\in\\overline{K}$, $\\gamma\\in\\Gamma_{\\overline{v}}$ and a valuation domain $\\overline{W}$ of $\\overline{K}$ lying over $V$, we denote by $V_{\\alpha,\\gamma}^{\\overline W}$ the valuation domain $\\overline{W}_{\\alpha,\\gamma}\\cap K(X)$. If $\\overline{W}$ is understood we denote $V_{\\alpha,\\gamma}^{\\overline W}$ by $V_{\\alpha,\\gamma}$.\n\\end{Def}\n\n\\begin{Rem}\\label{example}\nLet $(\\alpha,\\gamma)\\in\\overline{K}\\times\\Gamma_{\\overline{v}}$ be fixed. The valuation domain $V_{\\alpha,\\gamma}^{\\overline W}$ depends on the extension $\\overline W$ of $V$ to $\\overline{K}$. For example, let $w,w'$ be the $(2-i)$ and $(2+i)$-adic valuations of $\\mathbb{Q}(i)$, respectively, which extend the $5$-adic valuation on $\\mathbb{Q}$. Then,\n$$\nw_{i,1}(-X+2)=1,\\;\\;w'_{i,1}(-X+2)=0.\n$$\nIn particular, $-X+2$ is a unit in $W'_{i,1}$ and is in the maximal ideal of $W_{i,1}$, so the contractions of these valuation domains to $\\mathbb{Q}(X)$ cannot be the same. \n\nTherefore, whenever we write $V_{\\alpha,\\gamma}$ without any reference to an extension of $V$ to $\\overline{K}$, we are implicitly assuming that such an extension has been fixed in advance. 
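To spell out the computation above: expanding around $\\alpha=i$ we have $-X+2=-(X-i)+(2-i)$, and $w(2-i)=1$ while $w'(2-i)=0$, since $2-i$ and $2+i$ are non-associate primes of $\\mathbb{Z}[i]$; hence\n$$w_{i,1}(-X+2)=\\inf\\{w(2-i),\\,w(-1)+1\\}=\\inf\\{1,1\\}=1,\\qquad w'_{i,1}(-X+2)=\\inf\\{0,1\\}=0.$$ 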
Note that there is no ambiguity in writing $V_{\\alpha,\\gamma}$ whenever $(\\alpha,\\gamma)\\in K\\times\\Gamma_v$.\n\nNote also that, given a valuation domain $V_{\\alpha,\\gamma}^{\\overline W}$, where $(\\alpha,\\gamma)\\in\\overline{K}\\times\\Gamma_{\\overline{v}}$, we may assume that there exists a finite field extension $F$ of $K$ and a valuation domain $W$ of $F$ lying over $V$ such that $\\alpha$ is in $F$ and $\\gamma$ is in $\\Gamma_w$, the value group of $W$, so that $V_{\\alpha,\\gamma}^{\\overline W}=V_{\\alpha,\\gamma}^{ W}$.\n\\end{Rem}\n\n\\begin{Rem}\\label{Valphagamma polynomial linearly ordered}\\label{equality Valphagamma}\nIt is not difficult to prove that the family of rings $V_{\\alpha,\\gamma}\\cap K[X]$, $\\alpha\\in K,\\gamma\\in\\mathbb{R}$, has a natural ordering, namely: \n$$V_{\\alpha_1,\\gamma_1}\\cap K[X]\\subseteq V_{\\alpha_2,\\gamma_2}\\cap K[X]\\Leftrightarrow \\gamma_1\\leq\\gamma_2\\; \\textnormal{ and }v(\\alpha_1-\\alpha_2)\\geq \\gamma_1.$$\nEquivalently, the above containment holds if and only if $B(\\alpha_1,\\gamma_1)\\supseteq B(\\alpha_2,\\gamma_2)$. In particular, \n$$V_{\\alpha_1,\\gamma_1}\\cap K[X]=V_{\\alpha_2,\\gamma_2}\\cap K[X]\\Leftrightarrow\\gamma_1=\\gamma_2\\; \\textnormal{ and }v(\\alpha_1-\\alpha_2)\\geq \\gamma_1,$$\nor, equivalently, $B(\\alpha_1,\\gamma_1)=B(\\alpha_2,\\gamma_2)$. If this last case holds, then $V_{\\alpha_1,\\gamma_1}=V_{\\alpha_2,\\gamma_2}$.\n\nSee also \\cite[Proposition 1.1]{APZ3}, where the same result is given for any valuation $V$ but only for $\\gamma\\in \\Gamma_v\\otimes_{\\mathbb{Z}}\\mathbb{Q}$.\n\\end{Rem}\n\nThe following lemma is based on a well-known result.\n\\begin{Lemma}\\label{ValphagammacapK[X] not Prufer}\nLet $(\\alpha,\\gamma)\\in K\\times\\Gamma_v$. 
Then $V_{\\alpha,\\gamma}\\cap K[X]$ is not a Pr\\\"ufer domain.\n\\end{Lemma}\n\\begin{proof}\nBy Remark \\ref{descriptionValphagamma}, $V_{\\alpha,\\gamma}\\cap K[X]=V[\\frac{X-\\alpha}{d}]$, where $d\\in K$ is such that $v(d)=\\gamma$. It is a well-known result that $V[\\frac{X-\\alpha}{d}]$ is not a Pr\\\"ufer domain.\n\\end{proof}\n\n\\begin{Lemma}\\label{polynomial ring not Prufer}\nLet $\\overline{\\mathcal{W}}$ be a valuation domain of $\\overline{K}(X)$ such that $\\overline{\\mathcal{W}}\\cap\\overline{K}[X]$ is not Pr\\\"ufer. Then $\\overline{\\mathcal{W}}\\cap K[X]$ is not Pr\\\"ufer. \n\\end{Lemma}\n\\begin{proof}\nIf $R=\\overline{\\mathcal{W}}\\cap K[X]$ is Pr\\\"ufer, then its integral closure $\\overline{R}$ in $\\overline{K}(X)$ is Pr\\\"ufer, because $K(X)\\subseteq\\overline{K}(X)$ is an algebraic extension. But it is immediate to see that $\\overline{R}\\subseteq \\overline{\\mathcal{W}}\\cap\\overline{K}[X]$: in fact, $R\\subset K[X]\\Rightarrow \\overline{R}\\subset \\overline{K}[X]$, and $R\\subset \\mathcal{V}=\\overline{\\mathcal{W}}\\cap K(X)$ implies that $\\overline{R}$ is contained in the integral closure of $\\mathcal{V}$ in $\\overline K(X)$, which is contained in $\\overline{\\mathcal{W}}$. In particular, $\\overline{\\mathcal{W}}\\cap\\overline{K}[X]$ would be Pr\\\"ufer, a contradiction.\n\\end{proof}\n\nThe following easy result shows that if $V_{\\alpha,\\gamma}$, $(\\alpha,\\gamma)\\in K\\times\\Gamma_v$, is a valuation overring of ${\\rm Int}(S,V)$, then $\\alpha\\in V$ and $\\gamma\\geq0$.\n\\begin{Lemma}\\label{overrings of V[X]}\nLet $(\\alpha,\\gamma)\\in K\\times\\Gamma_v$. 
We have $V[X]\\subset V_{\\alpha,\\gamma}$ if and only if $\\alpha\\in V$ and $\\gamma\\geq0$.\n\\end{Lemma}\n\\begin{proof}\nThe statement follows immediately from the equality $v_{\\alpha,\\gamma}(X)=\\min\\{\\gamma,v(\\alpha)\\}$.\n\\end{proof}\n\n\n\\begin{Thm}\\label{criterion 1 general case}\nLet $R\\subseteq K[X]$ be an integrally closed domain with quotient field $K(X)$ and such that $D=R\\cap K$ is a Pr\\\"ufer domain with quotient field $K$. Then $R$ is Pr\\\"ufer if and only if there exist no valuation overring $V$ of $D$, extension $\\overline{W}$ of $V$ to $\\overline{K}$ and pair $(\\alpha,\\gamma)\\in \\overline K\\times\\Gamma_{\\overline v}$ such that $R\\subset V_{\\alpha,\\gamma}^{\\overline{W}}$.\n\\end{Thm}\nThe theorem is false without the assumption $R\\subseteq K[X]$. For example, $R=V_{\\alpha,\\gamma}$, $(\\alpha,\\gamma)\\in K\\times\\Gamma_v$, is a Pr\\\"ufer domain.\n\\begin{proof}\nSuppose that, for some valuation overring $V$ of $D$, there exist an extension $\\overline{W}$ of $V$ to $\\overline{K}$ and $(\\alpha,\\gamma)\\in \\overline K\\times\\Gamma_{\\overline v}$ such that $R\\subset V_{\\alpha,\\gamma}^{\\overline{W}}$. Since $R\\subseteq K[X]$, this last condition is equivalent to $R\\subset K[X]\\cap V_{\\alpha,\\gamma}^{\\overline{W}}$. By Lemmas \\ref{ValphagammacapK[X] not Prufer} and \\ref{polynomial ring not Prufer}, $K[X]\\cap V_{\\alpha,\\gamma}^{\\overline{W}}$ is not Pr\\\"ufer, which implies that $R$ is not Pr\\\"ufer, since an overring of a Pr\\\"ufer domain is Pr\\\"ufer.\n\nConversely, if $R$ is not Pr\\\"ufer, then by Theorem \\ref{criterionZS} there exists a valuation overring $\\mathcal{W}$ of $R$ with maximal ideal $M_{\\mathcal{W}}$ such that $R\/(M_{\\mathcal{W}}\\cap R)\\subset \\mathcal{W}\/M_{\\mathcal{W}}$ is a transcendental extension. Note that $\\mathcal{W}\\cap K=V$ is a valuation overring of $D$, and since the latter ring is Pr\\\"ufer, $D_P=V$, where $P$ is the center of $V$ in $D$. 
We claim that $\\mathcal{W}$ is a residually transcendental extension of $V$, so that by Theorem \\ref{characterization residually transcendental exts} we have $\\mathcal{W}=V_{\\alpha,\\gamma}^{\\overline{W}}$, for some extension $\\overline{W}$ of $V$ to $\\overline{K}$ and $(\\alpha,\\gamma)\\in\\overline{K}\\times\\Gamma_{\\overline{v}}$. Indeed, we have the following diagram:\n$$\n\\xymatrix{\n&\\mathcal{W}\/M_{\\mathcal{W}}\\\\\nR\/(M_{\\mathcal{W}}\\cap R)\\ar[ur]&\\\\\n&V\/M_V\\ar[uu]\\\\\nD\/P\\ar[ur]\\ar[uu]&\n}\n$$\nBy Theorem \\ref{criterionZS}, $D\/P\\subset V\/M_V$ is not a transcendental extension (because $D$ is assumed to be Pr\\\"ufer), therefore the extension $V\/M_V\\subset \\mathcal{W}\/M_{\\mathcal{W}}$ is transcendental and thus $\\mathcal{W}$ is a residually transcendental extension of $V$, as claimed.\n\\end{proof}\n\n\n\\section{Pseudo-monotone sequences and polynomial closure}\\label{final result}\n\nFor the next lemma, see also \\cite[Lemma 4.1, Theorem 4.3, Proposition 4.5]{ChabIntValValField}.\n\\begin{Lemma}\\label{polynomial ring and int balls}\nLet $\\alpha\\in K$ and $\\gamma\\in\\mathbb{R}$. We have\n\\begin{equation}\\label{polynomial ring contained in int ring}\nV[(X-\\alpha)\/\\gamma]\\subseteq {\\rm Int}(B(\\alpha,\\gamma),V)\n\\end{equation}\nIn particular, if $S\\subseteq V$ is such that ${\\rm Int}(S,V)\\subset V_{\\alpha,\\gamma}$, then $B(\\alpha,\\gamma)\\subseteq \\overline{S}$. The other containment of (\\ref{polynomial ring contained in int ring}) holds if and only if either $V$ is not discrete or $V\/M$ is infinite. In particular, if one of these last conditions holds, then $B(\\alpha,\\gamma)\\subseteq \\overline{S}$ implies ${\\rm Int}(S,V)\\subset V_{\\alpha,\\gamma}$.\n\\end{Lemma}\n\\begin{proof}\nIt is straightforward to verify the containment (\\ref{polynomial ring contained in int ring}). 
Moreover, if ${\\rm Int}(S,V)\\subset V_{\\alpha,\\gamma}$, we have \n$${\\rm Int}(S,V)={\\rm Int}(\\overline{S},V)\\subseteq V_{\\alpha,\\gamma}\\cap K[X]=V[(X-\\alpha)\/\\gamma]$$\n(the last equality follows by Remark \\ref{descriptionValphagamma}), so the second claim follows. The third claim follows by \\cite[Proposition 4.5]{ChabIntValValField}.\nFinally, the last claim is straightforward.\n\\end{proof}\n\nWe remark that the last statement is false if $V$ is a DVR with finite residue field: in fact, in that case ${\\rm Int}(V)$ is Pr\\\"ufer by Theorem \\ref{IntVPrufer} so by Theorem \\ref{criterion 1 general case} ${\\rm Int}(V)\\not\\subset V_{0,0}$ (note that $V=B(0,0)$).\n\nThe following important result by Chabert shows the connection between pseudo-monotone sequences and polynomial closure.\n\\begin{Prop}\\cite[Prop. 4.8]{ChabPolCloVal}\\label{generalized pseudo sequence and polynomial closure}\nLet $S\\subseteq V$ be a subset. Let $\\{s_n\\}_{n\\in\\mathbb{N}}$ be a pseudo-monotone sequence in $S$ with breadth $\\gamma\\in\\mathbb{R}$ and pseudo-limit $\\alpha\\in V$. Then $B(\\alpha,\\gamma)\\subseteq\\overline{S}$, or, equivalently ${\\rm Int}(S,V)\\subset V_{\\alpha,\\gamma}$. 
\n\\end{Prop}\nNote that, under the assumption of Proposition \\ref{generalized pseudo sequence and polynomial closure}, by Remark \\ref{V not discrete of V\/M infinite} either $V$ is not discrete or its residue field is infinite, so the last equivalence of Proposition \\ref{generalized pseudo sequence and polynomial closure} follows by Lemma \\ref{polynomial ring and int balls}.\n\nThe aim of this section is to show that Proposition \\ref{generalized pseudo sequence and polynomial closure} can be reversed, in the sense that if $B(\\alpha,\\gamma)$ is the largest ball centered at $\\alpha\\in K$ which is contained in the polynomial closure of $S$, then there exists a pseudo-monotone sequence $E$ in $S$ with pseudo-limit $\\alpha$ and breadth $\\gamma$.\n\n\\begin{Rem}\\label{breadth not in Gamma_v}\nLet $E=\\{s_n\\}_{n\\in\\mathbb{N}}\\subset V$ be a pseudo-monotone sequence in $V$ with breadth $\\gamma$ and pseudo-limit $\\alpha\\in V$. Suppose that $\\gamma=v(d)\\in \\Gamma_v$, for some $d\\in V$. By Proposition \\ref{generalized pseudo sequence and polynomial closure} and Remark \\ref{descriptionValphagamma}, we have\n\\begin{equation}\\label{IntEVsubseteqValphagammacapK[X]}\n{\\rm Int}(E,V)\\subseteq V\\left[\\frac{X-\\alpha}{d}\\right]\n\\end{equation}\nIf $E$ is either pseudo-stationary or pseudo-divergent, then $v(s_n-\\alpha)\\geq v(d)$, so the containment in (\\ref{IntEVsubseteqValphagammacapK[X]}) is an equality. If $E$ is pseudo-convergent, then $v(s_n-\\alpha)<\\gamma$ for all $n\\in\\mathbb{N}$, so the containment in (\\ref{IntEVsubseteqValphagammacapK[X]}) is strict. \n\nIf $E$ is pseudo-convergent or pseudo-divergent, then $\\gamma$ may not be in $\\Gamma_v$ and may also not be torsion over $\\Gamma_v$ (in which case $V_{\\alpha,\\gamma}$ is not residually transcendental over $V$, by Lemma \\ref{characterization residually transcendental extensions}). 
However, if $\\gamma\\notin\\Gamma_v$, then there exists $\\gamma'\\in\\Gamma_{v}$, $\\gamma'>\\gamma$, so that ${\\rm Int}(E,V)\\subseteq V_{\\alpha,\\gamma}\\cap K[X]\\subset V_{\\alpha,\\gamma'}\\cap K[X]\\subset V_{\\alpha,\\gamma'}$, and $V_{\\alpha,\\gamma'}$ is residually transcendental over $V$. Therefore, by Theorem \\ref{criterion 1 general case}, ${\\rm Int}(E,V)$ is not a Pr\\\"ufer domain.\n\\end{Rem}\nLet $\\alpha\\in K$ and $\\gamma\\in\\Gamma_v$. For the next lemma, we set\n$$\\partial B(\\alpha,\\gamma)=\\{x\\in K \\mid v(x-\\alpha)=\\gamma\\}$$\n\\begin{Lemma}\\label{pol closure frontier}\nLet $\\alpha\\in V$ and $\\gamma\\in \\Gamma_v$. If either $V$ is not discrete or $V\/M$ is infinite, then\n$$\\overline{\\partial B(\\alpha,\\gamma)}=B(\\alpha,\\gamma)$$\nIf $V$ is a DVR with finite residue field, then $\\partial B(\\alpha,\\gamma)$ is polynomially closed.\n\\end{Lemma}\n\\begin{proof}\nLet $d\\in V$ be such that $v(d)=\\gamma$. Note that under the bijection $x\\mapsto\\frac{x-\\alpha}{d}$ of $K$, the set $\\partial B(\\alpha,\\gamma)=\\{x\\in K \\mid v(x-\\alpha)=\\gamma\\}$ corresponds to $V^*=V\\setminus M$ and similarly $B(\\alpha,\\gamma)$ corresponds to $V$. Therefore, $\\overline{\\partial B(\\alpha,\\gamma)}=B(\\alpha,\\gamma)$ if and only if $\\overline{V^*}=V$. We now prove this last equality under the current hypotheses. \n\nIf $V\/M$ is infinite, then there exist a sequence $E=\\{s_n\\}_{n\\in\\mathbb{N}}\\subset V^*$ and an element $s\\in V^*$ such that $v(s_n-s_m)=v(s_n-s)=0$, $\\forall n\\not=m$. Thus, $E$ is pseudo-stationary with pseudo-limit $s$ and by Proposition \\ref{generalized pseudo sequence and polynomial closure} we may conclude.\n\nIf $V\/M$ is finite, let $V^*=\\bigcup_{i=1,\\ldots,n}(a_i+M)$, where $a_i\\notin M$, $\\forall i=1,\\ldots,n$. 
Since the polynomial closure in this context is a topological closure by Theorem \\ref{Thm Chabert}, we have $\\overline{V^*}=\\bigcup_{i=1,\\ldots,n}(\\overline{a_i+M})=\\bigcup_{i=1,\\ldots,n}(a_i+\\overline{M})$ by \\cite[Proposition IV.1.5, p. 75]{CaCh}. The polynomial closure of $M$ is equal either to $V$, if $V$ is not discrete, or to $M$ itself, if $V$ is discrete (see Example \\ref{polclosure ball}). The proof is complete.\n\\end{proof}\n\nFor a subset $S$ of $V$ such that ${\\rm Int}(S,V)$ is not Pr\\\"ufer, we know by Theorem \\ref{criterion 1 general case} that there exist an extension $\\overline{W}$ of $V$ to $\\overline{K}$ and $(\\alpha,\\gamma)\\in\\overline{K}\\times \\Gamma_{\\overline{v}}$, $\\Gamma_{\\overline{v}}=\\Gamma_v\\otimes_{\\mathbb{Z}}\\mathbb{Q}$, such that ${\\rm Int}(S,V)\\subset V_{\\alpha,\\gamma}^{\\overline{W}}$. The next two propositions show that it is sufficient to consider the case $(\\alpha,\\gamma)\\in V\\times\\Gamma_v$.\n\n\\begin{Prop}\\label{integral extension int rings}\nLet $S$ be a subset of an integrally closed domain $D$ with quotient field $K$. Let $F$ be an algebraic extension of $K$ and $D_F$ the integral closure of $D$ in $F$. Then the integral closure of ${\\rm Int}(S,D)$ in $F(X)$ is the ring ${\\rm Int}(S,D_F)$. \n\\end{Prop}\n\\begin{proof}\nIt is well-known that ${\\rm Int}(S,D_F)\\subset F[X]$ is integrally closed (see \\cite[Proposition IV.4.1]{CaCh}), so it only remains to show that every element of ${\\rm Int}(S,D_F)$ is integral over ${\\rm Int}(S,D)$. Up to enlarging the field $F$, we may assume that $F$ is normal over $K$ (e.g., the algebraic closure of $K$). We are going to show that ${\\rm Int}(S,D)\\subset{\\rm Int}(S,D_F)$ is an integral ring extension under this further assumption. 
Let then $f\\in {\\rm Int}(S,D_F)$; since $f\\in F[X]$ and $F$ is algebraic over $K$, $f$ satisfies a monic equation over the polynomial ring $K[X]$:\n$$f^n+g_{n-1}f^{n-1}+\\ldots+g_1f+g_0=0,\\;\\;g_i\\in K[X],\\;i=0,\\ldots,n-1.$$\nWe claim that $g_i\\in{\\rm Int}(S,D)$, for $i=0,\\ldots,n-1$. Let $\\Phi(T)=T^n+g_{n-1}T^{n-1}+\\ldots+g_0\\in K[X][T]$, which we may assume to be the minimal polynomial of $f$ over $K(X)$; then the roots of $\\Phi(T)$ are exactly the conjugates of $f$ under the action of the Galois group ${\\rm Gal}(F\/K)$, which acts on the coefficients of the polynomial $f$. If $\\sigma\\in{\\rm Gal}(F\/K)$, then $\\sigma(f)\\in F[X]$, and, more precisely, $\\sigma(f)\\in {\\rm Int}(S,D_F)$. In fact, for each $s\\in S\\subset K$, since $\\sigma$ leaves each element of $K$ invariant, we have $\\sigma(f)(s)=\\sigma(f(s))$, which is still an element of $D_F$ (which likewise is left invariant under the action of ${\\rm Gal}(F\/K)$). Now, since each coefficient $g_i(X)$ of $\\Phi(T)$ lies in $K[X]$ and is an elementary symmetric function in the elements $\\sigma(f)$, $\\sigma\\in{\\rm Gal}(F\/K)$, we have that $g_i(s)\\in D_F\\cap K=D$, for each $s\\in S$, thus $g_i\\in{\\rm Int}(S,D)$, as claimed.\n\\end{proof}\n\\begin{Prop}\\label{IntSVIntSW}\nLet $(\\alpha,\\gamma)\\in F\\times\\Gamma_{w}$, where $F$ is a finite field extension of $K$ and $W$ is a valuation domain of $F$ lying over $V$. If $S$ is a subset of $V$ such that ${\\rm Int}(S,V)\\subset W_{\\alpha,\\gamma}\\cap K(X)$, then ${\\rm Int}(S,W)\\subset W_{\\alpha,\\gamma}$.\n\\end{Prop}\nNote that the polynomials in ${\\rm Int}(S,V)$ have coefficients in $K$, the quotient field of $V$, while the polynomials in ${\\rm Int}(S,W)$ have coefficients in $F$, the quotient field of $W$.\n\\begin{proof}\nLet $S$ be a subset of $V$ such that ${\\rm Int}(S,V)\\subset W_{\\alpha,\\gamma}\\cap K(X)=V_{\\alpha,\\gamma}$. The integral closure $V_F$ of $V$ in $F$ is equal to an intersection of finitely many rank 1 valuation domains $W=W_1,\\ldots,W_n$ of $F$ lying over $V$. 
In particular,\n$${\\rm Int}(S,V_F)=\\bigcap_{i=1,\\ldots,n}{\\rm Int}(S,W_i)$$\nLet $M_W$ be the maximal ideal of $W$. If $T=V_F\\setminus (M_W\\cap V_F)$, then $T^{-1}V_F=W$ and since localization commutes with finite intersections we have:\n$$T^{-1}{\\rm Int}(S,V_F)=\\bigcap_{i=1,\\ldots,n}T^{-1}{\\rm Int}(S,W_i)$$\nLet $f\\in K[X]$, say $f(X)=\\frac{g(X)}{d}$, for some $g\\in V_F[X]$ and $d\\in V_F$. Let $\\gamma_i=w_i(d)$, for each $i\\geq 2$. By the approximation theorem for independent valuations (\\cite[Corollaire 1, Chapt. VI, \\S 7]{Bourb}), there exists $t\\in V_F$ such that $w(t)=0$ and $w_i(t)=\\gamma_i$. In particular, $t\\in T$. Then $t\\cdot f\\in {\\rm Int}(S,W_i)$, so that $T^{-1}{\\rm Int}(S,W_i)=K[X]$ for all $i\\geq 2$. Clearly, $T^{-1}{\\rm Int}(S,W)={\\rm Int}(S,W)$. Hence, \n\\begin{equation}\\label{localization}\nT^{-1}{\\rm Int}(S,V_F)={\\rm Int}(S,W)\n\\end{equation}\nSince ${\\rm Int}(S,V_F)$ is the integral closure of ${\\rm Int}(S,V)$ in $F(X)$ by Proposition \\ref{integral extension int rings} and $V_{\\alpha,\\gamma}=W_{\\alpha,\\gamma}\\cap K(X)$ is an overring of ${\\rm Int}(S,V)$, it follows that $W_{\\alpha,\\gamma}$ is an overring of ${\\rm Int}(S,V_F)$. \n\nLet now $f\\in {\\rm Int}(S,W)$. By (\\ref{localization}) there exists $d\\in T$ such that $d\\cdot f\\in {\\rm Int}(S,V_F)\\subset W_{\\alpha,\\gamma}$. Since $d\\in W^*$ is also a unit in $W_{\\alpha,\\gamma}$, it follows that $f\\in W_{\\alpha,\\gamma}$. This shows that ${\\rm Int}(S,W)\\subset W_{\\alpha,\\gamma}$, as wanted. 
Note that, since $S\\subseteq V\\subset W$, by Lemma \\ref{overrings of V[X]}, $\\alpha\\in W$ and $\\gamma\\geq0$.\n\\end{proof}\n\nFor a subset $S$ of $V$, $\\alpha\\in K$ and $\\gamma\\in\\mathbb{R}$, we set \n\\begin{align*}\nS_{\\alpha,<\\gamma}=&\\{s\\in S \\mid v(s-\\alpha)<\\gamma\\}\\\\\nS_{\\alpha,>\\gamma}=&\\{s\\in S \\mid v(s-\\alpha)>\\gamma\\}\\\\\nS_{\\alpha,\\gamma}=&\\{s\\in S \\mid v(s-\\alpha)=\\gamma\\}\n\\end{align*}\nNote that if $S_{\\alpha,\\gamma}$ is not empty then $\\gamma\\in\\Gamma_v$.\n\\begin{Prop}\\label{polynomial closure Salpha=gamma}\nLet $S\\subseteq V$, $\\alpha\\in V$ and $\\gamma\\in\\Gamma_v$. Then $B(\\alpha,\\gamma)\\subseteq \\overline{S_{\\alpha,\\gamma}}$ if and only if there exists a pseudo-monotone sequence $E=\\{s_n\\}_{n\\in\\mathbb{N}}$ in $S_{\\alpha,\\gamma}$ with breadth $\\gamma$ such that $E$ is either pseudo-stationary with pseudo-limit $\\alpha$, or pseudo-divergent with a pseudo-limit in $S_{\\alpha,\\gamma}$. \n\nIn particular, if $V$ is a DVR, then $E$ can only be pseudo-stationary.\n\\end{Prop}\n\\begin{proof}\n\nThe `if' part follows from Proposition \\ref{generalized pseudo sequence and polynomial closure}.\n\nConversely, suppose that $B(\\alpha,\\gamma)\\subseteq \\overline{S_{\\alpha,\\gamma}}$. Note that this assumption necessarily implies that either $V$ is non-discrete or $V\/M$ is infinite. In fact, $\\overline{S_{\\alpha,\\gamma}}\\subseteq\\overline{\\partial B(\\alpha,\\gamma)}$ and $\\partial B(\\alpha,\\gamma)$ is polynomially closed if $V$ is a DVR with finite residue field (Lemma \\ref{pol closure frontier}), so in this case $\\overline{S_{\\alpha,\\gamma}}$ could not contain $B(\\alpha,\\gamma)$. \n\nSuppose that there is no pseudo-divergent sequence $E=\\{s_n\\}_{n\\in\\mathbb{N}}$ in $S_{\\alpha,\\gamma}$ with breadth $\\gamma$ and with pseudo-limit $s\\in S_{\\alpha,\\gamma}$. 
This is equivalent to the following: for each $s\\in S_{\\alpha,\\gamma}$, let $\\gamma_s=\\inf\\{v(s-s')\\mid s'\\in S_{\\alpha,\\gamma},v(s'-s)>\\gamma\\}$. Then $\\gamma_s>\\gamma$, for each $s\\in S_{\\alpha,\\gamma}$ (note that, a priori, for $s\\not=s'\\in S_{\\alpha,\\gamma}$ we have $v(s-s')\\geq\\gamma$). We construct now a pseudo-stationary sequence in $S_{\\alpha,\\gamma}$ with pseudo-limit $\\alpha$ and breadth $\\gamma$. Let $s_1\\in S_{\\alpha,\\gamma}$. Then $\\alpha$ belongs to the set $U_1=\\{x\\in K \\mid v(x-s_1)<\\gamma_{s_1}\\}$, which is open in the polynomial topology by Remark \\ref{open sets polyn topology}, and since $\\alpha\\in\\overline{S_{\\alpha,\\gamma}}$ by assumption, there exists $s_2\\in S_{\\alpha,\\gamma}\\cap \\{x\\in K \\mid v(x-s_1)<\\gamma_{s_1}\\}$. By definition of $\\gamma_{s_1}$, we must have $v(s_1-s_2)=\\gamma$. Now, we consider the open set $U_2=\\{x\\in K \\mid v(x-s_i)<\\gamma_{s_i},i=1,2\\}$. Since $\\alpha\\in U_2$ there exists $s_3\\in U_2\\cap S_{\\alpha,\\gamma}$, so that $v(s_3-s_i)<\\gamma_{s_i}$, for $i=1,2$, which implies that $v(s_3-s_i)=\\gamma$, for $i=1,2$. If we continue in this way, we get a pseudo-stationary sequence $E=\\{s_n\\}_{n\\in\\mathbb{N}}\\subseteq S_{\\alpha,\\gamma}$ with pseudo-limit $\\alpha$ and breadth $\\gamma$, as wanted.\n\nThe last claim follows immediately from \\S \\ref{Pds}.\n\\end{proof}\n\n\\vskip0.5cm\nIn the next two results, we consider an integral domain $R\\subset K[X]$ with quotient field $K(X)$ such that $R\\subset V_{\\alpha,\\gamma'}$, for some $(\\alpha,\\gamma')\\in V\\times\\Gamma_v$ (in particular, $R$ is not Pr\\\"ufer by Theorem \\ref{criterion 1 general case}). If $\\alpha$ is fixed, then by Remark \\ref{Valphagamma polynomial linearly ordered} the set of rings $\\{K[X]\\cap V_{\\alpha,\\gamma'} \\mid \\gamma'\\in\\Gamma_v\\}$ is linearly ordered. 
In the following we consider the infimum in $\\mathbb{R}$ of the set $\\{\\gamma'\\in\\Gamma_v \\mid R\\subset V_{\\alpha,\\gamma'}\\}$. \n\n\n\\begin{Lemma}\\label{limit gamma'}\nLet $R\\subset K[X]$ be an integral domain with quotient field $K(X)$. Suppose $R\\subset V_{\\alpha,\\gamma'}$ for some $\\alpha\\in V$ and $\\gamma'\\in\\Gamma_v$ and let $\\gamma=\\inf\\{\\gamma'\\in \\Gamma_v \\mid R\\subset V_{\\alpha,\\gamma'}\\}\\in\\mathbb{R}$. Then $R\\subset V_{\\alpha,\\gamma}$.\n\\end{Lemma}\nIn particular, $\\gamma$ is a minimum if and only if $\\gamma\\in\\Gamma_v$. Note that, if $V$ is nondiscrete, it may well be that $\\gamma\\in\\mathbb{R}\\setminus\\Gamma_v$. \n\\begin{proof}\nLet $f\\in R$, with $f(X)=a_0+a_1(X-\\alpha)+\\ldots+a_d(X-\\alpha)^d$. Then \n$$f\\in V_{\\alpha,\\gamma'}\\Leftrightarrow \\inf\\{v(a_i)+i\\gamma' \\mid i=0,\\ldots,d\\}\\geq0\\Leftrightarrow a_0\\in V,\\gamma'\\geq-\\frac{v(a_i)}{i},i=1,\\ldots,d$$\nSince $\\gamma$ is the infimum of the $\\gamma'$ with the above property in particular we have \n$$a_0\\in V,\\gamma\\geq-\\frac{v(a_i)}{i},i=1,\\ldots,d$$\nthat is, $v_{\\alpha,\\gamma}(f)=\\inf\\{v(a_i)+i\\gamma\\mid i=0,\\ldots,d\\}\\geq0\\Leftrightarrow f\\in V_{\\alpha,\\gamma}$.\n\\end{proof}\nBy Lemma \\ref{polynomial ring and int balls}, the next theorem shows that if $B(\\alpha,\\gamma)$ is the largest ball centered in $\\alpha$ contained in the polynomial closure $\\overline{S}$ of $S$, then there exists a pseudo-monotone sequence in $S$ with breadth $\\gamma$ and pseudo-limit in $V$, which is equal either to $\\alpha$ or to $\\alpha+t$, where $t\\in V$ has valuation $\\gamma$. This result is the desired converse to Proposition \\ref{generalized pseudo sequence and polynomial closure}.\n\n\\begin{Thm}\\label{existence of generalized pseudo-sequence}\nLet $S\\subseteq V$ be a subset such that ${\\rm Int}(S,V)\\subset V_{\\alpha,\\gamma'}$, for some $\\alpha\\in V$ and $\\gamma'\\in\\Gamma_v$. 
Let $\\gamma=\\inf\\{\\gamma'\\in\\Gamma_v \\mid {\\rm Int}(S,V)\\subset V_{\\alpha,\\gamma'}\\}\\in\\mathbb{R}$. Then there exists a pseudo-monotone sequence $E=\\{s_n\\}_{n\\in\\mathbb{N}}\\subseteq S$ with breadth $\\gamma$ such that one of the following conditions holds:\n\\begin{itemize}\n\\item[1)] $E\\subseteq S_{\\alpha,<\\gamma}$ is pseudo-convergent with pseudo-limit $\\alpha$.\n\\item[2)] $E\\subseteq S_{\\alpha,>\\gamma}$ is pseudo-divergent with pseudo-limit $\\alpha$. \n\\item[3)] $E\\subseteq S_{\\alpha,\\gamma}$ is pseudo-divergent with a pseudo-limit $s\\in S_{\\alpha,\\gamma}$. \n\\item[4)] $E\\subseteq S_{\\alpha,\\gamma}$ is pseudo-stationary with pseudo-limit $\\alpha$.\n\\end{itemize}\nMoreover, condition 1) holds if and only if $\\sup\\{v(s-\\alpha) \\mid s\\in S_{\\alpha,<\\gamma}\\}=\\gamma$, condition 2) holds if and only if $\\inf\\{v(s-\\alpha) \\mid s\\in S_{\\alpha,>\\gamma}\\}=\\gamma$ and conditions 3) or 4) hold if and only if $B(\\alpha,\\gamma)\\subseteq\\overline{S_{\\alpha,\\gamma}}$. In these last two cases, $\\gamma$ is a minimum$\\Leftrightarrow\\gamma\\in\\Gamma_v$.\n\nIn particular, if $V$ is discrete, $\\gamma$ is a minimum and only case 4) holds.\n\\end{Thm}\n\\begin{Rem}\nNote that by Lemma \\ref{limit gamma'} we have ${\\rm Int}(S,V)\\subset V_{\\alpha,\\gamma}$ either in the case $\\gamma\\in\\Gamma_v\\Leftrightarrow \\gamma$ is a minimum or $\\gamma\\in\\mathbb{R}\\setminus\\Gamma_v$. 
Note also that in case 3), where $s\in S_{\alpha,\gamma}$ is a pseudo-limit of a pseudo-divergent sequence $E=\{s_n\}_{n\in\mathbb{N}}\subseteq S_{\alpha,\gamma}$, we have $V_{\alpha,\gamma}=V_{s,\gamma}$, by Remark \ref{equality Valphagamma}.\n\end{Rem}\n\begin{proof}\nSince by Theorem \ref{criterion 1 general case} ${\rm Int}(S,V)$ is not a Pr\"ufer domain, by Theorem \ref{IntVPrufer} either $V$ is not discrete or its residue field is infinite, otherwise ${\rm Int}(V)$ would be Pr\"ufer and in particular its overring ${\rm Int}(S,V)$ would be Pr\"ufer, too.\n\nWe consider the following real numbers: \n\begin{align*}\n\gamma_1=&\sup\{v(s-\alpha) \mid s\in S_{\alpha,<\gamma}\}\\\n\gamma_2=&\inf\{v(s-\alpha) \mid s\in S_{\alpha,>\gamma}\}\n\end{align*}\nClearly, we have $\gamma_1\leq\gamma\leq\gamma_2$. Since $S_{\alpha,>\gamma}\subseteq B(\alpha,\gamma_2)$ and every closed ball is polynomially closed (Theorem \ref{Thm Chabert}), we have:\n\begin{equation}\label{Salpha>gamma}\n\overline{S_{\alpha,>\gamma}}\subseteq B(\alpha,\gamma_2)\n\end{equation}\nIf $\gamma_1=\gamma$, then there exists a pseudo-convergent sequence $\{s_n\}_{n\in\mathbb{N}}\subseteq S_{\alpha,<\gamma}$ with pseudo-limit $\alpha$ and breadth $\gamma$, that is:\n$$v(s_n-\alpha)<v(s_{n+1}-\alpha)\nearrow\gamma$$\nSimilarly, if $\gamma_2=\gamma$, then there exists a pseudo-divergent sequence $\{s_n\}_{n\in\mathbb{N}}\subseteq S_{\alpha,>\gamma}$ with $\alpha$ as pseudo-limit and breadth $\gamma$:\n$$v(s_n-\alpha)>v(s_{n+1}-\alpha)\searrow\gamma$$\nHence, if either $\gamma_1=\gamma$ or $\gamma_2=\gamma$ we are done.\n\nSuppose from now on that\n$$\gamma_1<\gamma<\gamma_2.$$\nWe are going to show that under these conditions $\gamma$ is a minimum (or, equivalently, $\gamma\in\Gamma_v$), by means of the fact that $S_{\alpha,\gamma}$ is non-empty. 
We claim first that $\{v(s-\alpha)\mid s\in S_{\alpha,<\gamma}\}$ is finite; in fact, if that were not true, there would exist a sequence $\{s_n\}_{n\in\mathbb{N}}\subseteq S_{\alpha,<\gamma}$ either pseudo-convergent or pseudo-divergent with breadth $\gamma'<\gamma$, so that ${\rm Int}(S,V)\subset V_{\alpha,\gamma'}$ by Proposition \ref{generalized pseudo sequence and polynomial closure}, contrary to the assumption on $\gamma$. Therefore $\gamma_1$ is a maximum and we may assume that:\n$$\{v(s-\alpha)\mid s\in S_{\alpha,<\gamma}\}=\{\gamma_1,\ldots,\gamma_r\},\;\;\gamma_r<\ldots<\gamma_1<\gamma.$$\nFor each $i=1,\ldots,r$ we set:\n$$S_{\alpha,\gamma_i}=\{s\in S_{\alpha,<\gamma} \mid v(s-\alpha)=\gamma_i\}$$\nso that\n$$S_{\alpha,<\gamma}=\bigcup_{i=1,\ldots,r}S_{\alpha,\gamma_i}.$$\nFor each $i=1,\ldots,r$, there is no pseudo-stationary sequence $\{s_n\}_{n\in\mathbb{N}}\subset S_{\alpha,\gamma_i}$ with breadth $\gamma_i$ and pseudo-limit $\alpha$, otherwise by Proposition \ref{generalized pseudo sequence and polynomial closure} ${\rm Int}(S,V)\subset V_{\alpha,\gamma_i}$, in contradiction with the definition of $\gamma$. Hence, for each $i\in\{1,\ldots,r\}$, there exist finitely many $s_{i,j}\in S_{\alpha,\gamma_i}$, $j\in I_i$, such that the following holds:\n$$\forall s\in S_{\alpha,\gamma_i}, \exists j\in I_i \textnormal{ such that }v(s-s_{i,j})>\gamma_i.$$ \nFor each $j\in I_i$, we set $S_{\alpha,\gamma_i,j}=\{s\in S_{\alpha,\gamma_i} \mid v(s-s_{i,j})>\gamma_i\}$ and $\gamma_{i,j}=\inf\{v(s-s_{i,j}) \mid s\in S_{\alpha,\gamma_i,j}\}$. 
If $\gamma_{i,j}=\gamma_i$, then there exists a pseudo-divergent sequence with pseudo-limit $s_{i,j}$ and breadth $\gamma_i$ so that by Proposition \ref{generalized pseudo sequence and polynomial closure} we would have $B(s_{i,j},\gamma_i)=B(\alpha,\gamma_i)\subseteq\overline{S}$ (recall that $v(\alpha-s_{i,j})=\gamma_i$), which is equivalent to ${\rm Int}(S,V)\subset V_{\alpha,\gamma_i}$ by Lemma \ref{polynomial ring and int balls}, contrary to the assumption on $\gamma$. Thus, $\gamma_{i,j}>\gamma_i$ for all $j\in I_i$ (and for all $i\in\{1,\ldots,r\}$). Finally, we have shown that\n$$S_{\alpha,\gamma_i}\subseteq\bigcup_{j\in I_i}B(s_{i,j},\gamma_{i,j})$$\nso that\n\begin{equation}\label{Salpha<gamma}\n\overline{S_{\alpha,<\gamma}}\subseteq\bigcup_{i=1,\ldots,r}\bigcup_{j\in I_i}B(s_{i,j},\gamma_{i,j})\n\end{equation}\nLet now $\gamma'\in\Gamma_v$ be such that ${\rm Int}(S,V)\subset V_{\alpha,\gamma'}$ and $\gamma\leq\gamma'<\gamma_2$; such a $\gamma'$ exists, since $\gamma<\gamma_2$ and $\gamma$ is the infimum of the set $\{\gamma'\in\Gamma_v \mid {\rm Int}(S,V)\subset V_{\alpha,\gamma'}\}$. In particular, $B(\alpha,\gamma')\subseteq\overline{S}$ by Lemma \ref{polynomial ring and int balls}.\nWe claim that $\partial B(\alpha,\gamma')\subseteq \overline{S_{\alpha,\gamma}}$. In fact, let $\beta\in V$ be such that $v(\beta-\alpha)=\gamma'$. If $\beta\in \overline{S_{\alpha,<\gamma}}$, then by (\ref{Salpha<gamma}) we have $v(\beta-s_{i,j})\geq\gamma_{i,j}>\gamma_i$ for some $i\in\{1,\ldots,r\}$ and $j\in I_i$; on the other hand, since $v(\beta-\alpha)=\gamma'>\gamma_1\geq\gamma_i=v(\alpha-s_{i,j})$, we have $v(\beta-s_{i,j})=\gamma_i$, a contradiction. Similarly, $\beta$ is not in $\overline{S_{\alpha,>\gamma}}$, by (\ref{Salpha>gamma}) and the fact that $\gamma'<\gamma_2$. Therefore $\partial B(\alpha,\gamma')\subseteq \overline{S_{\alpha,\gamma}}$, as claimed. In particular, $S_{\alpha,\gamma}\not=\emptyset$, $\gamma\in\Gamma_v$ and so $\gamma=\min\{\gamma'\in\Gamma_v \mid {\rm Int}(S,V)\subset V_{\alpha,\gamma'}\}$. We may therefore assume that $\gamma'=\gamma$. By Lemma \ref{pol closure frontier} (recall that either $V$ is not discrete or its residue field is infinite) we have $B(\alpha,\gamma)\subseteq \overline{S_{\alpha,\gamma}}$, so that, by Proposition \ref{polynomial closure Salpha=gamma}, there exists a pseudo-monotone sequence $E=\{s_n\}_{n\in\mathbb{N}}\subseteq S_{\alpha,\gamma}$ with breadth $\gamma$ such that either $E$ is pseudo-stationary and has pseudo-limit $\alpha$ or $E$ is pseudo-divergent and has pseudo-limit in $S_{\alpha,\gamma}$. 
The proof is now complete.\n\end{proof}\n\nAs we have already said, if ${\rm Int}(S,V)$ is not Pr\"ufer then there might be residually transcendental valuation overrings which are not of the form $V_{\alpha,\gamma}$ with $(\alpha,\gamma)\in K\times\Gamma_v$. For example, if $E=\{s_n\}_{n\in\mathbb{N}}\subset V$ is a pseudo-convergent sequence of algebraic type without pseudo-limits in $K$, then ${\rm Int}(E,V)$ is not Pr\"ufer by Theorem \ref{ThmLW}; by Theorem \ref{existence of generalized pseudo-sequence} it is not difficult to show that ${\rm Int}(E,V)\not\subset V_{\alpha,\gamma}$ for every $(\alpha,\gamma)\in K\times\Gamma_v$. However, we may reduce our discussion to this case by means of Proposition \ref{IntSVIntSW}, as the following proposition shows.\n\n\begin{Prop}\label{reduction to K}\nLet $S\subseteq V$ be such that ${\rm Int}(S,V)\subset V_{\alpha,\gamma'}=W_{\alpha,\gamma'}\cap K(X)$ for some $(\alpha,\gamma')\in F\times\Gamma_{w}$, where $F$ is a finite extension of $K$ and $W$ is a valuation domain of $F$ lying over $V$. Let $\gamma=\inf\{\gamma'\in\Gamma_w \mid {\rm Int}(S,V)\subset V_{\alpha,\gamma'}=W_{\alpha,\gamma'}\cap K(X)\}$. Then there exists a pseudo-monotone sequence $E=\{s_n\}_{n\in\mathbb{N}}\subseteq S$ with breadth $\gamma$ and with a pseudo-limit which either is equal to $\alpha$ or belongs to $\{x\in W \mid w(x-\alpha)=\gamma\}$.\n\nIf $V$ is a DVR, then $E$ is pseudo-stationary, the breadth $\gamma$ is in $\Gamma_v$ and there exists $\beta\in V$ which is a pseudo-limit of $E$, so that, in particular, $V_{\alpha,\gamma}=V_{\beta,\gamma}$.\n\end{Prop}\n\begin{proof}\nSuppose that ${\rm Int}(S,V)\subset V_{\alpha,\gamma'}=W_{\alpha,\gamma'}\cap K(X)$ as in the assumptions of the proposition. By Proposition \ref{IntSVIntSW}, ${\rm Int}(S,W)\subset W_{\alpha,\gamma'}$. 
Note that, by Lemma \ref{overrings of V[X]}, $\alpha\in W$ and $\gamma'\geq 0$. By Theorem \ref{existence of generalized pseudo-sequence}, there exists a pseudo-monotone sequence $E=\{s_n\}_{n\in\mathbb{N}}\subseteq S$ with breadth $\gamma$ and with a pseudo-limit which either is equal to $\alpha$ or belongs to $\{x\in W \mid w(x-\alpha)=\gamma\}$.\n\nIn the case $V$ is discrete, by Theorem \ref{existence of generalized pseudo-sequence} $E$ is necessarily pseudo-stationary with breadth $\gamma$ and $\alpha$ is a pseudo-limit of $E$. Since the $s_n$'s are elements of $K$, $w(s_n-s_m)=v(s_n-s_m)$, so that $\gamma\in\Gamma_v$. Moreover, by \S \ref{Remark pseudo-stationary}, any element of $E=\{s_n\}_{n\in\mathbb{N}}$ can be considered as a pseudo-limit of $E$, so the last equality follows from Remark \ref{equality Valphagamma}.\n\end{proof}\nFinally, the next theorem characterizes the subsets $S$ of $V$ for which ${\rm Int}(S,V)$ is Pr\"ufer. We give first the following generalization of the definition of pseudo-limit. \n\n\begin{Def}\nLet $E=\{s_n\}_{n\in\mathbb{N}}$ be a pseudo-monotone sequence of $K$ and $\alpha\in\overline{K}$. We say that $\alpha$ is a pseudo-limit of $E$ if there exists a valuation $w$ of $K(\alpha)$ which lies above $v$ such that $\alpha$ is a pseudo-limit of $E$ with respect to $w$ (clearly, $E$ is a pseudo-monotone sequence with respect to $w$). \n\end{Def}\n\nWe recall that by our convention (see \S\ref{pcv}) a pseudo-convergent sequence $E$ has non-zero breadth ideal. With this terminology, the next theorem shows that ${\rm Int}(S,V)$ is Pr\"ufer if and only if $S$ does not admit any pseudo-limit in $\overline{K}$. This theorem generalizes the main result of Loper and Werner (Theorem \ref{ThmLW}).\n\begin{Thm}\label{final theorem}\nLet $S\subseteq V$. 
Then ${\\rm Int}(S,V)$ is a Pr\\\"ufer domain if and only if $S$ does not contain a pseudo-monotone sequence $E=\\{s_n\\}_{n\\in\\mathbb{N}}$ which has a pseudo-limit $\\alpha\\in\\overline{K}$. \n\\end{Thm}\n\\begin{proof}\nSuppose there exists a pseudo-monotone sequence $E=\\{s_n\\}_{n\\in\\mathbb{N}}\\subseteq S$ with breadth $\\gamma\\in \\mathbb{R}$ and pseudo-limit $\\alpha\\in\\overline{K}$. If $E$ is pseudo-stationary, then we know that $\\gamma\\in\\Gamma_v$ and any element $s$ of $E$ is a pseudo-limit of $E$ (\\S \\ref{Remark pseudo-stationary}). Then by Proposition \\ref{generalized pseudo sequence and polynomial closure}, ${\\rm Int}(S,V)\\subset V_{s,\\gamma}=V_{\\alpha,\\gamma}$, so by Theorem \\ref{criterion 1 general case} ${\\rm Int}(S,V)$ is not Pr\\\"ufer. Suppose now that $E$ is either pseudo-convergent or pseudo-divergent, and let $F$ be a finite extension of $K$ which contains $\\alpha$. Let $W$ be a valuation domain of $F$ lying over $V$ (which is necessarily of rank one) for which $\\alpha$ is a pseudo-limit of $E$ (which clearly is a pseudo-monotone sequence with respect to the associated valuation $w$). Clearly, $\\alpha\\in W$. By Proposition \\ref{generalized pseudo sequence and polynomial closure}, it follows that ${\\rm Int}(S,W)\\subset W_{\\alpha,\\gamma}$ and contracting down to $K[X]$ we get ${\\rm Int}(S,V)\\subset V_{\\alpha,\\gamma}=W_{\\alpha,\\gamma}\\cap K(X)$, so, by Theorem \\ref{criterion 1 general case} and Remark \\ref{breadth not in Gamma_v}, ${\\rm Int}(S,V)$ is not Pr\\\"ufer.\n\nConversely, suppose that ${\\rm Int}(S,V)$ is not Pr\\\"ufer. By Theorem \\ref{criterion 1 general case}, there exists $(\\alpha,\\gamma')\\in \\overline{K}\\times\\Gamma_{\\overline{v}}$ and an extension $\\overline{W}$ of $V$ to $\\overline K$ such that ${\\rm Int}(S,V)\\subset V_{\\alpha,\\gamma'}^{\\overline{W}}$. 
As in Remark \ref{example}, let $F$ be a finite extension of $K$ and $W=\overline{W}\cap F$ such that $(\alpha,\gamma')\in F\times\Gamma_w$. Let $\gamma=\inf\{\gamma'\in\Gamma_w \mid {\rm Int}(S,V)\subset V_{\alpha,\gamma'}=W_{\alpha,\gamma'}\cap K(X)\}\in \mathbb{R}$. By Proposition \ref{reduction to K}, there exists a pseudo-monotone sequence $E=\{s_n\}_{n\in\mathbb{N}}\subseteq S$ with breadth $\gamma$ and pseudo-limit which is equal either to $\alpha$ or to $\alpha+t$, where $t\in W$ is such that $w(t)=\gamma$.\n\end{proof}\n\nWe summarize here some known results and new characterizations of when ${\rm Int}(S,V)$ is a Pr\"ufer domain, when $V$ is a DVR. Recall that, as already remarked by Loper and Werner, if $V$ is a non-discrete rank one valuation domain, there are subsets $S$ of $V$ which are not precompact but ${\rm Int}(S,V)$ is Pr\"ufer (see the Introduction).\n\begin{Cor}\nLet $V$ be a DVR and $S\subseteq V$. Then the following conditions are equivalent:\n\begin{itemize}\n\item[i)] ${\rm Int}(S,V)$ is Pr\"ufer.\n\item[ii)] there is no pseudo-stationary sequence contained in $S$.\n\item[iii)] $S$ is precompact.\n\item[iv)] there is no $(\alpha,\gamma)\in V\times\Gamma_v$ such that ${\rm Int}(S,V)\subset V_{\alpha,\gamma}$.\n\end{itemize}\n\end{Cor}\n\begin{proof}\nIt is easy to see that ii) and iii) are equivalent if $V$ is discrete. In fact, $S$ is precompact if and only if $S$ modulo $M^n$ is finite for each $n\geq 1$ (\cite[Proposition 1.2]{CCL}). Now, if the latter condition holds, then there cannot be any pseudo-stationary sequence in $S$ by \S \ref{Remark pseudo-stationary}. Conversely, if $S$ modulo $M^n$ is infinite for some $n\geq 1$, then there exists $\{s_m\}_{m\in\mathbb{N}}\subseteq S$ such that $v(s_m-s_k)<n$ for all $m\not=k$; since these values lie in the finite set $\{0,\ldots,n-1\}$, by Ramsey's theorem we may pass to a subsequence on which $v(s_m-s_k)$ is constant, so that $\{s_m\}_{m\in\mathbb{N}}$ is a pseudo-stationary sequence contained in $S$.\n\nIn order not to spoil the predictions of big bang nucleosynthesis, the brane tension is constrained to satisfy $\tau>$ (1MeV)$^4$ \cite{Cline}. 
However, a different constraint on the brane tension, from current tests of deviations from Newton's law, was obtained in Refs.\cite{test1,test2}, in which it is restricted to $\tau\geq$ (10 TeV)$^4$.\n\n\nIn the following, we will consider that the constant $\Lambda_4=0$, and once the inflation epoch initiates, the quantity $\xi\/a^4$ will rapidly become unimportant, with which the modified Friedmann Eq.(\ref{eq1}) becomes \cite{3}\n\n\begin{equation}\n3H^{2}=\kappa\,\rho\left[1+\frac{\rho}{2\tau}\right]. \label{HC}\n\end{equation}\n\n\n\nIn order to describe the matter, we consider that the energy density $\rho$ corresponds to a standard scalar field $\phi$, where the energy density $\rho(\phi)$ and the pressure $P(\phi)$ are defined as $\rho=\frac{\dot{\phi}^{2}}{2}+V(\phi)$ and $P=\frac{\dot{\phi}^{2}}{2}-V(\phi)$, respectively. Here, the quantity $V(\phi)=V$ denotes the scalar potential. We also consider that the scalar field $\phi$ is a homogeneous scalar field, i.e., $\phi=\phi(t)$, and that this field is confined to the brane \cite{2,3}. In this context, the dynamics of the scalar field can be written as\n\begin{equation}\n\dot{\rho}+3H(\rho +P)=0, \label{key_01}\n\end{equation}\nor equivalently\n\begin{equation}\n\ddot{\phi}+3H\dot{\phi}+V'=0, \label{ecdf}\n\end{equation}\nwhere $V'=\partial V(\phi)\/\partial \phi$. Here the dots mean derivatives with respect to the cosmological time.\n\n\n\nBy assuming the slow roll approximation, in which the energy density $\rho\sim V(\phi)$, Eq.(\ref{HC}) reduces to \cite{2,3}\n\begin{equation}\n3H^{2}\approx\kappa\,V\left[1+\frac{V}{2\tau}\right], \label{HC2}\n\end{equation}\nand Eq.(\ref{ecdf}) can be written as\n\begin{equation}\n3H\dot{\phi}\approx-V'. 
\label{ecdf2}\n\end{equation}\nFollowing Ref.\cite{2} we can introduce the slow roll parameters $\epsilon$ and $\eta$ defined as\n\begin{equation}\n\epsilon=\frac{1}{2\kappa}\left(\frac{V'}{V}\right)^2\,\frac{(1+V\/\tau)}{(1+V\/2\tau)^2},\,\,\,\,\,\mbox{and}\,\,\,\,\,\,\,\eta=\frac{1}{\kappa}\frac{V''}{V(1+V\/2\tau)}.\label{sp}\n\end{equation}\nOn the other hand, introducing the number of $e$-folds $N$ between two different values of the time, $t$ and $t_e$, gives \n\begin{equation}\nN=\int_t^{t_e}\,H\,dt\simeq\kappa\int_{\phi_e}^{\phi}\,\frac{V}{V'}\left(1+\frac{V}{2\tau}\right)\,d\phi,\label{N1}\n\end{equation}\nwhere $t_e$ corresponds to the end of the inflationary stage and here we have considered the slow roll approximation.\n\n\n\nIn the context of the brane world, the power spectrum ${\mathcal{P}_{\mathcal{R}}}$ of the curvature perturbations, assuming the slow-roll approximation, is given by \cite{4}\n\begin{equation}\n {\mathcal{P}_{\mathcal{R}}}\n =\left(\frac{H^2}{\dot{\phi}^2}\right)\,\left(\frac{H}{2\pi}\right)^2\simeq\frac{\kappa^3}\n {12\pi^2}\,\frac{V^3}{V'\,^2}\,\left(1+\frac{V}{2\tau}\right)^3.\label{Pet}\n\end{equation}\nThe scalar spectral index $n_{s}$ is defined as $n_{s}-1=\frac{d\ln \,{\mathcal{P}_{\mathcal{R}}}}{d\ln k}$, and in terms of the slow roll parameters $\epsilon$ and $\eta$ it can be written as \cite{4}\n\begin{equation}\nn_s-1=-6\epsilon+2\eta.\label{ns1}\n\end{equation}\nHere we have used Eqs.(\ref{sp}) and (\ref{Pet}), respectively.\n\n\n\nIt is well known that the tensor perturbations during inflation would produce gravitational waves. In the braneworld the tensor perturbation is more complicated than the standard expression obtained in GR, where the amplitude of the tensor perturbations ${\mathcal{P}}_{g}\propto H^2$. 
Because the braneworld gravitons propagate in the bulk, the amplitude of the tensor perturbation suffers a modification \cite{t}, so that\n\begin{equation}\n{\mathcal{P}}_{g}=8\kappa \,\left( \frac{H}{2\pi }\right)^{2}F^{2}(x)\label{PGB},\n\end{equation}\nwhere the quantity $x=Hm_{p}\sqrt{3\/(4\pi \tau )}$ and the function $F(x)$ is defined as\n\begin{equation}\nF(x)=\left[ \sqrt{1+x^{2}}-x^{2}\sinh ^{-1}(1\/x)\right] ^{-1\/2},\n\label{Fx}\n\end{equation}\nin which the correction given by the function $F(x)$ arises from the normalization of a zero-mode \cite{t}. In particular, in the limit in which the tension $\tau\gg V$, the function $F(x)\rightarrow 1$ and then ${\mathcal{P}}_{g}\propto H^2$.\n\n\nAn important observational quantity is the tensor to scalar ratio $r$, defined as $r=\left(\frac{{\mathcal{P}}_g}{\mathcal{P}_{\mathcal{R}}}\right)$. Thus, combining Eqs.(\ref{Pet}) and (\ref{PGB}), the tensor-scalar ratio $r$ is given by\n\begin{equation}\nr=\left(\n\frac{{\mathcal{P}}_{g}}{\mathcal{P}_{\mathcal{R}}}\right)\n\simeq\frac{8}{\kappa}\left(\frac{V'}{V}\right)^2\,\left(1+\frac{V}{2\tau}\right)^{-3}\,F^2(V).\n\label{Rk1}\n\end{equation}\nHere, we have considered that the quantity $x$ can be rewritten in terms of the effective potential from Eq.(\ref{HC2}).\n\n\n\n\section{Reconstruction on brane}\label{2}\n\nIn this section we consider the methodology in order to reconstruct the background variables, considering the scalar spectral index in terms of the number of $e$-folds in the framework of the brane-world. As a first part, we rewrite the scalar spectral index given by Eq.(\ref{ns1}) as a function of the number of $e$-folds $N$ and its derivatives. 
In this form, given the index $n_s=n_s(N)$, we should find the potential $V=V(N)$ in terms of the number of $e$-folds $N$. Subsequently, utilizing the relation given by Eq.(\ref{N1}), we should obtain the $e$-folds $N$ as a function of the scalar field $\phi$, i.e., $N=N(\phi)$. Finally, considering these relations, we can reconstruct the effective potential $V(\phi)$ in order to satisfy a specific attractor $n_s(N)$.\n\nIn this way, we start by rewriting the standard slow roll parameters $\epsilon$ and $\eta$ in terms of the number of $e$-folds $N$. Thus, the derivative of the scalar potential $V'$ from Eq.(\ref{N1}) can be rewritten as\n\begin{equation}\nV'=\frac{dV}{d\phi}=V_{,\,N}\frac{dN}{d\phi},\,\,\,\,\,\mbox{in which}\,\,\,\,V'\,^2=\kappa V\,\left(1+\frac{V}{2\tau}\right)\,V_{,\,N},\label{dV}\n\end{equation}\nand this suggests that $V_{,\,N}$ is a positive quantity.\n In the following, we will consider that the notation $V_{,\,N}$ corresponds to $dV\/dN$, $V_{,\,NN}$ denotes $d^2V\/dN^2$, etc.\n\nAnalogously, we can rewrite $V''$ as\n$$\nV''=\frac{\kappa}{2V_{,\,N}}\,\left[V_{,\,N}^2\,\left(1+\frac{V}{\tau}\right)+V\left(1+\n\frac{V}{2\tau}\right)V_{,\,NN}\right].\n$$\n\nIn this form, the slow roll parameter $\epsilon$ can be rewritten as\n\begin{equation}\n \epsilon=\frac{1}{2}\,\frac{\left(1+\frac{V}{\tau}\right)}{V\left(1+\frac{V}{2\tau}\right)}\,\,V_{,\,N},\n\end{equation}\nand the parameter $\eta$ as\n\begin{equation}\n\eta=\frac{1}{2V}\,\frac{\left(1+\frac{V}{\tau}\right)}{\left(1+\frac{V}{2\tau}\right)}\,V_{,\,N}+\n\frac{V_{,\,NN}}{2V_{,\,N}}\,\,,\n\end{equation}\nrespectively. 
Here, we have considered that $V'>0$.\n\nAlso, from Eqs.(\ref{N1}) and (\ref{dV}) we can rewrite $dN\/d\phi$ as\n\begin{equation}\n\frac{dN}{d\phi}=\sqrt{\frac{\kappa V}{V_{,\,N}}}\,\sqrt{\left(1+\frac{V}{2\tau}\right)}\,.\label{Nf}\n\end{equation}\n\nIn this way, by using Eq.(\ref{ns1}) we find that the scalar spectral index can be rewritten as\n\begin{equation}\n n_s-1=-2\frac{\left(1+\frac{V}{\tau}\right)}{V\left(1+\frac{V}{2\tau}\right)}\,V_{,\,N}+\n \frac{V_{,\,NN}}{V_{,\,N}},\n\end{equation}\nor equivalently\n\begin{equation}\nn_s-1=-2\left[\ln\left(V\left[1+\frac{V}{2\tau}\right]\right)\right]_{,\,N}+[\ln\,V_{,\,N}]_{,\,N}=\n\left[\ln\left(\frac{V_{,\,N}}{V^2\left(1+\frac{V}{2\tau}\right)^2}\right)\right]_{,\,N}.\label{R1}\n\end{equation}\nWe also note that in the limit in which $\tau\rightarrow \infty$, Eq.(\ref{R1}) reduces to the GR expression $n_s-1=\left(\ln\frac{V_{,\,N}}{V^2}\right)_{,\,N}$, see Ref.\cite{Chiba:2015zpa}.\n\nFrom Eq.(\ref{R1}) we have\n\begin{equation}\n\frac{V_{,\,N}}{V^2(1+V\/2\tau)^2}=e^{\int\,{(n_s-1)}dN}.\label{dVN}\n\end{equation}\nThis equation gives us the effective potential $V(N)$ for a specific attractor $n_s(N)$. 
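The rewriting in Eq.(\ref{R1}) is a purely algebraic identity in $V(N)$ and its derivatives, and it can be spot-checked numerically. The following short sketch is our own check, not part of the original derivation; the test potential $V(N)=N^2+1$ and the value $\tau=0.7$ are arbitrary choices:

```python
import math

# Spot-check of Eq. (R1): for a test potential V(N), the slow-roll form of
# n_s - 1 must equal the single logarithmic derivative
# [ln(V_N/(V^2 (1+V/(2 tau))^2))]_{,N}.
tau = 0.7

def V(N):   return N**2 + 1.0     # arbitrary smooth, positive test potential
def VN(N):  return 2.0*N          # dV/dN
def VNN(N): return 2.0            # d^2V/dN^2

def ns_slow_roll(N):
    # n_s - 1 = -2 (1 + V/tau) V_N / (V (1 + V/(2 tau))) + V_NN/V_N
    return -2*(1 + V(N)/tau)*VN(N)/(V(N)*(1 + V(N)/(2*tau))) + VNN(N)/VN(N)

def ns_log_form(N, h=1e-5):
    # central finite difference of ln(V_N/(V^2 (1+V/(2 tau))^2))
    def g(x):
        return math.log(VN(x)/(V(x)**2*(1 + V(x)/(2*tau))**2))
    return (g(N + h) - g(N - h))/(2*h)

max_err = max(abs(ns_slow_roll(N) - ns_log_form(N)) for N in (0.5, 1.0, 2.0, 5.0))
```

The same comparison can be repeated for any smooth positive $V(N)$; the finite-difference error scales as $h^2$.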
Thus, integrating we have\n\begin{equation}\n \frac{1}{\tau}\,\ln\left(\frac{1+V\/2\tau}{V\/2\tau}\right)-\frac{(1+V\/\tau)}{V(1+V\/2\tau)}=\n \int \left[e^{\int\,{(n_s-1)}dN}\right]\,dN\,.\n\end{equation}\nHowever, this is a transcendental equation for the scalar potential $V$, and it does not permit one to obtain the relation $V=V(N)$ in closed form.\n\nWe also note that by combining Eqs.(\ref{Nf}) and (\ref{dVN}), we obtain that the relation between the number of $e$-folds $N$ and the scalar field $\phi$ can be written as\n\begin{equation}\n\left[\sqrt{V\,\left(1+\frac{V}{2\tau}\right)}\,e^{\int\,{\frac{(n_s-1)}{2}dN}}\right]\,dN=d\phi.\n\end{equation}\n\nOn the other hand, from Eq.(\ref{Rk1}) the tensor-scalar ratio $r$ can be rewritten as\n\begin{equation}\nr(N)\n\simeq\left(\frac{4}{\tau}\right)\,V_{,\,N}\,\left(1+\frac{V}{2\tau}\right)^{-3}\,F^2(V).\n\label{Rk2} \end{equation}\n\nIn the following we will consider the high energy limit in which $\rho\simeq V\gg\tau$, in order to obtain an analytical solution in the reconstruction of the scalar potential in terms of the scalar field $V(\phi)$.\n\n\n\section{High energy: Reconstruction from the attractor $n_s(N)$}\label{3}\n\nIn this section we consider the high energy limit ($V\gg\tau$) in order to reconstruct the scalar potential, considering as an attractor the scalar spectral index in terms of the number of $e$-folds, i.e., $n_s=n_s(N)$. 
In this limit, the derivatives $V'$\nand $V''$ can be rewritten as\n\\begin{equation}\n V'\\,^2=\\frac{\\kappa}{2\\tau}\\,V^2\\,V_{,\\,N},\\,\\,\\,\\,\\mbox{and}\\,\\,\\,\\,V''=\\frac{\\kappa}{2\\tau}\\,V\\,\n \\left[V_{,\\,N}+\\frac{V\\,V_{,\\,NN}}{2V_{,\\,N}}\\right].\n\\end{equation}\nIn this way, the relation between the number $N$ and the scalar\nfield $\\phi$ in this limit becomes\n\\begin{equation}\n \\frac{dN}{d\\phi}=\\frac{\\kappa}{2\\tau}\\,\\left(\\frac{V^2}{V'}\\right)=\n \\left(\\frac{\\kappa}{2\\tau}\\right)^{1\/2}\\,\\frac{V}{\\sqrt{V_{,\\,N}}}.\\label{NF2}\n\\end{equation}\nFrom Eq.(\\ref{ns1}) we find that the scalar spectral index $n_s$ results in \n\\begin{equation}\nn_s-1=\\frac{4\\tau}{\\kappa}\\left[V''-\\frac{3V'\\,^2}{V}\\right]\\,\\frac{1}{V^2}=-4\\frac{V_{,\\,N}}{V}\n+\\frac{V_{,\\,NN}}{V_{,\\,N}},\n\\end{equation}\nor equivalently\n\\begin{equation}\nn_s-1=-4[\\ln\\,V]_{,\\,N}+[\\ln\nV_{,\\,N}]_{,\\,N}=\\left[\\ln\\left(\\frac{V_{,\\,N}}{V^4}\\right)\\right]_{,\\,N}.\\label{nN2}\n\\end{equation}\nWe note that the relation between the scalar potential and the scalar spectral\nindex given by Eq.(\\ref{nN2})\n becomes independent of the brane tension $\\tau$ in the high energy limit.\n\n\n\nFrom Eq.(\\ref{nN2}), the scalar potential in terms of the number of $e$-foldings can be\nwritten as\n\\begin{equation}\nV=V(N)=\\left[-3\\int\\left(e^{\\int(n_s-1)dN}\\right)\\,dN\\right]^{-1\/3},\\label{VN2}\n\\end{equation}\nwhere $\\int\\left(e^{\\int(n_s-1)dN}\\right)\\,dN<0$, in order to make\ncertain that the potential $V(N)>0$.\n\n\n\nNow, by combining Eqs.(\\ref{NF2}) and (\\ref{nN2}), we find that\nthe relation between $N$ and $\\phi$ is given by the general expression\n\\begin{equation}\n\\left[V\\,e^{\\int \\frac{(n_s-1)}{2}dN}\\right]\\,dN=\\left(\\frac{\\kappa}{2\\tau}\\right)^{1\/2}\\,d\\phi,\\label{ff}\n\\end{equation}\nwhere $V$ is given by Eq.(\\ref{VN2}).\n\nIn this form, Eqs.(\\ref{VN2}) and (\\ref{ff}) are the fundamental relations 
in order to build the scalar potential $V(\phi)$ for an attractor point $n_s(N)$, in the framework of the high energy limit in brane world inflation.\n\nOn the other hand, in the high energy limit in which $V\gg \tau$, the function $F^2(x)$ given by Eq.(\ref{Rk1}) becomes $F^2(x)\approx\frac{3}{2}x=\frac{3}{2}\frac{V}{\tau}$. In this form, in the high energy limit the tensor to scalar ratio $r$ becomes\n\begin{equation}\nr\simeq\,48\tau\,\left(\frac{V_{,\,N}}{V^2}\right).\label{r11}\n\end{equation}\nHere we have considered Eq.(\ref{Rk2}).\n\n\n\subsection{An example of $n_s=n_s(N)$.}\label{4}\nIn order to develop the reconstruction of the scalar potential $V(\phi)$ in brane world inflation, we consider the well-known attractor $n_s(N)$ given by\n\begin{equation}\n n_s(N)=n_s=1-\frac{2}{N},\label{nsN}\n\end{equation}\nas an example.\n\nFrom the attractor (\ref{nsN}), considering Eq.(\ref{nN2}) we have $\frac{V_{,\,N}}{V^4}=\alpha\/N^2$, in which $\alpha$ corresponds to a constant of integration (with units of $m_p^{-12}$), and since $V_{,\,N}>0$, the constant of integration $\alpha>0$. In this form, the effective potential as a function of the number of $e$-folds $N$ from Eq.(\ref{VN2}) becomes\n\begin{equation}\nV(N)=3^{-1\/3}\,\left[\frac{\alpha}{N}+\beta\right]^{-1\/3},\label{poth}\n\end{equation}\nwhere $\beta$ denotes a new constant of integration. 
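The potential of Eq.(\ref{poth}) can be checked directly against Eq.(\ref{nN2}). The following sketch is our own verification (the values of $\alpha$ and $\beta$ are arbitrary positive test numbers); it uses finite differences, so that no derivative formula is assumed:

```python
# Check that V(N) = 3^(-1/3) (alpha/N + beta)^(-1/3) of Eq. (poth) satisfies
# V_N/V^4 = alpha/N^2 and reproduces the attractor
# n_s - 1 = -4 V_N/V + V_NN/V_N = -2/N of Eq. (nN2).
alpha, beta = 1.7, 0.3              # arbitrary positive test values

def V(N):
    return (3*(alpha/N + beta))**(-1.0/3.0)

def d1(f, N, h=1e-4):               # central first derivative
    return (f(N + h) - f(N - h))/(2*h)

def d2(f, N, h=1e-3):               # central second derivative
    return (f(N + h) - 2*f(N) + f(N - h))/h**2

checks = []
for N in (1.0, 5.0, 20.0, 60.0):
    VN, VNN = d1(V, N), d2(V, N)
    checks.append(abs(VN/V(N)**4 - alpha/N**2) < 1e-6)          # Eq. (poth) vs alpha/N^2
    checks.append(abs(-4*VN/V(N) + VNN/VN + 2.0/N) < 1e-4)      # attractor n_s - 1 = -2/N
```

Both conditions hold at every tested value of $N$, independently of $\beta$, as expected from the derivation above.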
Here, the new constant of integration $\beta$, with units of $m_p^{-12}$, can be taken to be either $\beta=0$ or $\beta\neq 0$.\n\n\nIn the high energy limit, we find that the power spectrum ${\mathcal{P}_{\mathcal{R}}}$ given by Eq.(\ref{Pet}) can be rewritten as\n\begin{equation}\n{\mathcal{P}_{\mathcal{R}}}\simeq\frac{1}{12\pi^2}\,\left(\frac{\kappa V^2}{2\tau}\right)^3\,\frac{1}{V'\,^2}=\n\frac{\kappa^2}{48\pi^2\tau^2}\,\left(\frac{V^4}{V_{,\,N}}\right)=\frac{\kappa^2}{48\pi^2\tau^2}\,\n\left(\frac{N^2}{\alpha}\right).\label{P3}\n\end{equation}\nNote that this result does not depend on the constant of integration $\beta$. From Eq.(\ref{P3}), it is possible to write the constant of integration $\alpha$ in terms of the number $N$, ${\mathcal{P}_{\mathcal{R}}}$ and the tension $\tau$ as\n\begin{equation}\n\alpha=\frac{\kappa^2}{48\pi^2\tau^2}\,\n\left(\frac{N^2}{{\mathcal{P}_{\mathcal{R}}}}\right).\label{aa}\n\end{equation}\nIn particular, by considering $N=60$ and ${\mathcal{P}_{\mathcal{R}}}=2.2\times 10^{-9}$, we obtain that the constant of integration $\alpha\simeq 3\times 10^9 (\kappa\/\tau)^2$.\n\n\nOn the other hand, from Eq.(\ref{r11}) the tensor to scalar ratio can be rewritten as\n\begin{equation}\nr\simeq\,48\tau\,\left(\frac{V_{,\,N}}{V^2}\right)=48\tau\alpha\,\left(\frac{V^2}{N^2}\right)=48\tau\alpha\,\left(\frac{3^{-2\/3}}{N^2(\alpha\/N+\beta)^{2\/3}}\right).\n\end{equation}\nNote that, considering the attractor $n_s$ given by Eq.(\ref{nsN}), we can find a relation between the tensor to scalar ratio $r$ and the scalar spectral index, i.e., the consistency relation becomes\n\begin{equation}\nr(n_s)\simeq\n\left(\frac{12\alpha\tau}{3^{2\/3}}\right)\,\left[\frac{\alpha(1-n_s)}{2}+\n\beta\right]^{-2\/3}\,(1-n_s)^2.\label{37}\n\end{equation}\n\nIn the following, we will analyze separately the cases in which the constant of integration 
$\\beta$\ntakes the values $\\beta=0$ and $\\beta\\neq 0$, in order to reconstruct the effective\npotential $V(\\phi)$.\n\n\nFor the case $\\beta=0$, we obtain that the relation between the number of\n$e$-foldings $N$ and scalar field $\\phi$ considering Eqs.(\\ref{ff}), (\\ref{nsN}) and (\\ref{poth})\nbecomes\n\\begin{equation}\n N(\\phi)=N=\\frac{1}{3^2\\alpha^{1\/2}}\\,\\left(\\frac{\\kappa}{2\\tau}\\right)^{3\/2}\\,\\,(\\phi-\\phi_0)^3,\\label{N12}\n\\end{equation}\nwhere $\\phi_0$ corresponds to a constant of integration. In this way, in the high energy limit we find that\nthe reconstruction of the effective potential as a function of the scalar field for the case $\\beta=0$\nand assuming the attractor $n_s-1=-2\/N$\n is\ngiven by\n\\begin{equation}\nV(\\phi)=V_0\\,\\,(\\phi-\\phi_0),\\,\\,\\,\\mbox{where}\\,\\,\\,\\,\\,\\,\\,V_0=\\left(\\frac{\\kappa}{18\\tau\\alpha}\\right)^{1\/2}.\\label{P12}\n\\end{equation}\nAlso, we note that for the case $\\beta=0$, the consistency relation $r=r(n_s)$\nhas a dependence $r(n_s)\\propto (1-n_s)^{4\/3}$. In particular, by considering $n_s=0.964$, $N=56$\nand\n${\\mathcal{P}_{\\mathcal{R}}}=2.2\\times\n10^{-9}$, we find an upper bound for the brane tension given by $\\tau<\n10^{-13}m_p^4$,\n from the condition $r<0.07$. For this bound on $\\tau$, we have used\n Eq.(\\ref{37}). Now, from Eq.(\\ref{aa}) and considering $N=60$ and ${\\mathcal{P}_{\\mathcal{R}}}=2.2\\times\n10^{-9}$, together with the upper limit on $\\tau$, we obtain a\nlower limit for the constant $\\alpha$ given by $\\alpha>1.9\\times\n10^{38}m_p^{-12}$.\n\n\\begin{figure}[th]\n{{\\vspace{0.0 cm}\\includegraphics[width=4.5in,angle=0,clip=true]{fig1a.eps}}}\n{{\\vspace{-1.5 cm}{\\includegraphics[width=4.5in,angle=0,clip=true]{fig1b.eps}}}}\n{\\vspace{-1.0 cm}\\caption{ The upper and lower panels show the tensor-to-scalar ratio $r$ as a\nfunction of the scalar spectral index $n_s$, for\nthree different values of the brane tension $\\tau$. 
In both panels we have\nconsidered the two-dimensional marginalized joint 68$\%$ and 95$\%$ C.L. constraints at $k=0.002$ Mpc$^{-1}$ from the Planck 2018 results\n\\cite{Planck2018}. Also,\nin both panels the solid,\ndashed and dotted lines correspond to the values of brane tension\n$\\tau\/m_p^4=10^{-12},10^{-13}$ and $10^{-14}$, respectively. In the upper panel we show\nthe consistency relation for the specific\ncase in which the constant $\\beta=0$ and we have used\n$\\alpha=10^{38}m_p^{-12}$. In the lower panel we show the case in which $\\beta\\neq 0$,\nfor which we have considered\n$\\beta=\\alpha\/60$ and $\\alpha=3\\times 10^{9}(\\kappa\/\\tau)^2$, respectively.\n \\label{fig1}}}\n\\end{figure}\n\n\n\n\nOn the other hand, for the reconstruction in the situation in which the constant of integration $\\beta\\neq 0$, we find that,\nconsidering Eq.(\\ref{ff}), the relation between $dN$ and $d\\phi$ can be written as\n\\begin{equation}\n\\frac{dN}{[\\alpha N^2+\\beta\nN^3]^{1\/3}}=\\frac{dN}{\\beta^{1\/3}[\\alpha_0 N^2+\nN^3]^{1\/3}}=C_1 d\\phi,\\,\\,\\,\\,\\;\\:\\;\\mbox{where}\\,\\,\\,\\,\\,\\;\\;\\;\nC_1=3^{1\/3}\\left(\\frac{\\kappa}{2\\alpha\\tau}\\right)^{1\/2},\\label{NF6}\n\\end{equation}\nand the quantity $\\alpha_0=\\alpha\/\\beta$. In the following, we will consider for simplicity the case in which\nthe constant of integration $\\beta>0$, i.e., $\\alpha_0>0$.\nWe also note that the integration of Eq.(\\ref{NF6}) does not permit one to\nobtain an analytical solution for the number of $e$-folds as a function of the scalar field, i.e.,\n$N=N(\\phi)$. 
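Although $N(\phi)$ has no closed form for nonzero beta, Eq.(NF6) can still be integrated by quadrature. A minimal sketch (the values of alpha, beta, C_1 and the integration range are arbitrary placeholders; for beta = 0 the result is checked against the exact antiderivative, which for alpha = 1 is $3N^{1/3}$):

```python
import math

def phi_of_N(N, alpha=1.0, beta=0.0, C1=1.0, N0=1.0, steps=20000):
    """Trapezoidal quadrature of Eq. (NF6):
    C1*(phi - phi_0) = int_{N0}^{N} dn / (alpha*n^2 + beta*n^3)^(1/3)."""
    h = (N - N0) / steps
    f = lambda n: (alpha * n**2 + beta * n**3) ** (-1.0 / 3.0)
    s = 0.5 * (f(N0) + f(N))
    s += sum(f(N0 + i * h) for i in range(1, steps))
    return s * h / C1

# sanity check: for beta = 0 and alpha = 1 the integral equals 3*(N^(1/3) - N0^(1/3))
approx = phi_of_N(8.0, alpha=1.0, beta=0.0)
exact = 3.0 * (8.0 ** (1 / 3) - 1.0)
print(approx, exact)  # both ~3.0
```

For $\beta\neq 0$ the same routine traces out $\phi(N)$ numerically, which is all that is needed to plot the reconstructed potential.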
In this sense,\nthe solution of Eq.(\\ref{NF6}) can be written implicitly as\n$$\n \\sqrt{3}\\arctan\\left[\\left(1+2\\left[1+\\frac{\\alpha_0}{N}\\right]^{-1\/3}\\right)\/\\sqrt{3}\\right]+\\frac{1}{2}\\ln\n \\left[1+\\left(1+\\frac{\\alpha_0}{N}\\right)^{-2\/3}+\\left(1+\\frac{\\alpha_0}{N}\\right)^{-1\/3}\\right]\n$$\n\\begin{equation}\n -\\ln\\left[1-\\left(1+\\frac{\\alpha_0}{N}\\right)^{-1\/3}\\right]=\\beta^{1\/3}\\,C_1\\,(\\phi-\\phi_0),\n \\label{sol3}\n\\end{equation}\nwhere $\\phi_0$ denotes a constant of integration.\n\nNumerically, we note that in the limit in which\n$\\alpha\/\\beta=\\alpha_0>N$,\nthe first term of Eq.(\\ref{sol3}) dominates, so that (see Fig.\\ref{fig2})\n\\begin{equation}\n \\sqrt{3}\\arctan\\left[\\left(1+\n 2\\left[1+\\frac{\\alpha_0}{N}\\right]^{-1\/3}\\right)\/\\sqrt{3}\\right]\\approx\\beta^{1\/3}\\,C_1\\,(\\phi-\\phi_0).\n\\end{equation}\n\nIn this form, we obtain that the reconstruction in the limit in which $\\alpha_0>N$\nbecomes\n\\begin{equation}\n V(\\phi)\\approx\\frac{1}{2(3\\beta)^{1\/3}}\\left[\\sqrt{3}\\tan(\\beta^{1\/3}C_1(\\phi-\\phi_0)\/\\sqrt{3})-1\\right].\\label{pp6}\n\\end{equation}\nHere the range for the scalar field is given by\n$\\frac{\\sqrt{3}\\,\\pi}{6\\beta^{1\/3}C_1}+\\phi_0\\lesssim \\phi\n\\lesssim \\frac{\\sqrt{3}\\,\\pi}{2\\beta^{1\/3}C_1}+\\phi_0$.\n\n\\begin{figure}[th]\n{{\\hspace{0cm}\\includegraphics[width=3.2in,angle=0,clip=true]{fig3.eps}}}\n{\\vspace{-0.1 cm}\\caption{ Evolution of the three terms on the left-hand side of\nEq.(\\ref{sol3}) versus the dimensionless quantity\n$\\frac{\\alpha}{\\beta\\,N}=\\frac{\\alpha_0}{N}$. Here, the dotted, dashed, and solid lines\ndenote the first, second and third terms of Eq.(\\ref{sol3}), respectively.\n \\label{fig2}}}\n\\end{figure}\n\nIn Fig.\\ref{fig1} we show the ratio $r$ versus the spectral index $n_s$, for three different values\nof the brane tension $\\tau$. 
In both panels we consider the two-dimensional marginalized constraints on\nthe consistency relation $r=r(n_s)$ (at 68$\%$ and 95$\%$ C.L. at $k=0.002$ Mpc$^{-1}$) from the new\nPlanck data \\cite{Planck2018}. In the upper panel we consider the special case\nin which the constant of integration $\\beta=0$, where the consistency relation is given by Eq.(\\ref{37}). Here, we take the value\n$\\alpha=10^{38}m_p^{-12}$. In the lower panel we take into account the case in\nwhich $\\beta\\neq 0$, and for the relation $r=r(n_s)$ we have used Eq.(\\ref{37}). In this case\nwe have considered the specific value of $\\beta$ at $N=60$ (the limiting point $\\alpha_0\/N=1$, or $\\beta=\\alpha\/N$), so that\n$\\beta=\\alpha\/60$ and $\\alpha=3\\times 10^{9}(\\kappa\/\\tau)^2$, respectively.\nAlso, in both panels the solid,\ndashed and dotted lines correspond to the values of brane tension\n$\\tau\/m_p^4=10^{-12},10^{-13}$ and $10^{-14}$, respectively. In particular, for the\ncase $\\beta=0$ we find that the brane tension has an upper limit given by $\\tau<10^{-13}\nm_p^4$, as can be seen in the upper panel of Fig.\\ref{fig1}. For the case in which\n$\\beta\\neq 0$, we find that in the particular case in which $\\beta=\\alpha\/60$, the\nvalue of the brane tension $\\tau<10^{-12}m_p^4$ is well corroborated by the Planck\n2018 results; see the lower panel of Fig.\\ref{fig1}.\nThis suggests that the value of the constant of integration $\\beta$ modifies the\nupper bound on the brane tension. We note that in the case in which the\nconstant $\\beta>\\alpha\/60$ the upper limit on the brane tension increases, and in the\nopposite case ($\\beta<\\alpha\/60$) the upper limit on $\\tau$ decreases.\n\n\nIn Fig.\\ref{fig2} we show the behavior of the three terms on the left-hand side of\nEq.(\\ref{sol3}) versus the dimensionless quantity\n$\\frac{\\alpha}{\\beta\\,N}=\\frac{\\alpha_0}{N}$. 
We note that in the limit in\nwhich $\\alpha_0>N$ the dominant term corresponds to the first\nexpression of\nEq.(\\ref{sol3}), given by the dotted line in Fig.\\ref{fig2}.\n\n\n\n\nIn order to clarify our above results, we can study some specific\nlimits for the ratio $\\alpha\/(\\beta N)=\\alpha_0\/N$, namely\n$\\alpha_0\/N\\ll 1$ and $\\alpha_0\/N\\gg 1$. As a first approximation we\nconsider the case in which $\\alpha_0\/ N \\ll 1$ or $\\alpha_0\\ll N$.\nFor this limit we find from Eq.(\\ref{NF6}) that the relation\n$N=N(\\phi)$ is given by\n\\begin{equation}\n N(\\phi)=\\exp[\\beta^{1\/3}\\,C_1\\,(\\phi-\\phi_0)],\n\\end{equation}\nwhere $\\phi_0$ denotes a constant of integration. Thus, considering\nthe limit $\\alpha\/\\beta=\\alpha_0\\ll N$, we obtain that the\neffective potential $V(\\phi)$ given by Eq.(\\ref{poth}) becomes a\nconstant, equal to $V(\\phi)=(3\\beta)^{-1\/3}$. In fact, this\nresult indicates a de Sitter accelerated expansion, i.e., de Sitter\ninflation, since in the high energy limit and considering the\nslow-roll approximation, we have $H\\propto V=$constant. Note that\nthis constant potential coincides with the potential given by\nEq.(\\ref{pp8}) when $\\beta^{1\/3}\\,C_1\\,[\\phi-\\phi_0]\\gg 1$. 
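Both limiting statements can be checked numerically; a minimal sketch (the test values of alpha_0/N and beta are arbitrary, and the common factor on the right-hand side of Eq.(sol3) is dropped):

```python
import math

def terms(x):
    """The three terms on the left-hand side of Eq. (sol3), with x = alpha_0 / N."""
    u = (1.0 + x) ** (-1.0 / 3.0)
    t1 = math.sqrt(3) * math.atan((1.0 + 2.0 * u) / math.sqrt(3))
    t2 = 0.5 * math.log(1.0 + u**2 + u)
    t3 = -math.log(1.0 - u)
    return t1, t2, t3

t1, t2, t3 = terms(100.0)   # alpha_0 / N >> 1
print(t1 > t2 + t3)         # first (arctan) term dominates -> True

# de Sitter limit: V(N) = 3^(-1/3) * (alpha/N + beta)^(-1/3) -> (3*beta)^(-1/3) as alpha_0/N -> 0
beta = 2.0
V = lambda x: (3.0 * beta * (1.0 + x)) ** (-1.0 / 3.0)   # x = alpha_0 / N
print(abs(V(1e-6) - (3.0 * beta) ** (-1.0 / 3.0)) < 1e-6)  # True
```

The dominance of the arctan term for large $\alpha_0/N$ is exactly the behavior displayed by the dotted line in Fig.2.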
We also observe that\nfor the consistency relation $r=r(n_s)$, we get $r=48\\tau\n\\alpha\/[(3\\beta)^{2\/3}N^2]\\propto (1-n_s)^2$ (see Eq.(\\ref{37})).\n\n\n\n\n\n\nFor the case in which $\\alpha\/(\\beta N )\\gg 1$ or $\\alpha_0\\gg N$, we find from Eq.(\\ref{NF6}) that the relation $N=N(\\phi)$\ncoincides with the case $\\beta=0$, i.e., Eq.(\\ref{N12}), and then\nthe effective potential $V(\\phi)$ changes linearly with the scalar\nfield according to Eq.(\\ref{P12}), in which $V(\\phi)\\propto \\phi$.\nThis effective potential agrees with the potential given by\nEq.(\\ref{pp6}) assuming that the argument\n$\\beta^{1\/3}C_1(\\phi-\\phi_0)\/\\sqrt{3}<1.$\n\n\n\n\\section{High energy: reconstruction from the attractor\n$r(N)$}\\label{6}\n\n\nIn this section we consider the high energy limit, in order to reconstruct the\neffective potential $V(\\phi)$, but from a different point of view. In order to\nreconstruct the scalar potential, we consider as an attractor the tensor to scalar\nratio in terms of the number of $e$-foldings $N$, i.e., $r=r(N)$. In this sense,\nconsidering Eq.(\\ref{r11}) we obtain that the effective potential $V(N)$ can be written as\n\\begin{equation}\n V= V(N)=-48\\tau \\,\\left[\\int\\,r\\,dN\\right]^{-1}.\\label{VN3}\n\\end{equation}\n\nNow from Eq.(\\ref{NF2}) we find that the relation between the number $N$ and the scalar field $\\phi$\nis given by\n\\begin{equation}\n r^{1\/2}\\,\\frac{dN}{d\\phi}=(24\\kappa)^{1\/2}.\\label{Nrb}\n\\end{equation}\nHere, we have considered Eq.(\\ref{r11}).\n\nIn this context, we can obtain the scalar spectral index $n_s$ as a function of\nthe number of $e$-folds $N$, combining the expressions given by Eqs.(\\ref{nN2}) and (\\ref{VN3}) for\na specific attractor $r=r(N)$. 
Thus, the scalar spectral index can be\n rewritten as\n\\begin{equation}\n n_s-1=\\,\\left[\\ln\\left(\\frac{r}{48\\tau\n V^2}\\right)\\right]_{,\\,N}.\\label{ns4}\n\\end{equation}\nHere, the potential $V$ is given by Eq.(\\ref{VN3}).\n\n\\subsection{An example of $r=r(N)$.}\\label{7}\n\nIn order to develop the reconstruction of the scalar potential\n$V(\\phi)$ in the brane world inflation, we consider that the attractor\nfor the tensor to scalar ratio as a function of the number of $e$-\nfolds $r(N)$ is given by\n\\begin{equation}\n r(N)=\\frac{\\alpha_1}{N^2},\\label{rN3}\n\\end{equation}\nwhere $\\alpha_1>0$ corresponds to a constant (dimensionless). For\nthis attractor the cases in which $\\alpha_1=12$, was analyzed in\nRef.\\cite{T}, and the specific value $\\alpha_1=8$, was obtained in\nRef. \\cite{Chiba:2015zpa}.\n\n\nIn\nparticular considering $N=60$ and $r<0.07$, we find that the value\nof the constant $\\alpha_1<252$.\n\n\nBy combining Eqs.(\\ref{VN3}) and (\\ref{rN3}) we obtain that the\nscalar potential in terms of the number of $e$-foldings becomes\n\\begin{equation}\nV(N)=\\frac{1}{\\alpha_2\/N+\\beta_1},\\,\\,\\,\\,\\,\\,\\,\\mbox{where}\\,\\,\\,\\,\\,\\,\\alpha_2=\\frac{\\alpha_1}{48\\tau}.\\label{Pot5}\n\\end{equation}\nHere the quantity $\\beta_1$ corresponds to a constant of integration\nwith units of $m_p^{-4}$.\n\nIn order to obtain the relation between the number $N$ and the\nscalar field $\\phi$, we consider Eq.(\\ref{Nrb}) together with the\nattractor given by Eq.(\\ref{rN3}) obtaining\n\\begin{equation}\nN=\\exp\\left[\\sqrt{\\frac{24\\kappa}{\\alpha_1}}\\,(\\phi-\\phi_0)\\right],\n\\end{equation}\nwhere $\\phi_0$ denotes a new constant of integration. 
Thus, the\nreconstruction of the scalar potential in terms of the scalar\nfield can be written as\n\\begin{equation}\nV(\\phi)=\\left(\\alpha_2\\,\\exp\\left[-\\sqrt{\\frac{24\\kappa}{\\alpha_1}}\n\\,(\\phi-\\phi_0)\\right]+\\beta_1\\right)^{-1}.\\label{pote5}\n\\end{equation}\n\nIn particular, assuming that $\\beta_1>0$ and $\\alpha_2\/\\beta_1\\gg\nN$, the effective potential has the behavior of an exponential\npotential, i.e., $V(\\phi)\\propto\ne^{(\\sqrt{24\\kappa\/\\alpha_1}\\,)\\,\\phi}$ (recall that we have considered $V'>0$). In the inverse case, in\nwhich $N\\gg \\alpha_2\/\\beta_1$, the scalar potential corresponds to\na constant potential, $V(\\phi)=$ constant.\n\n\nIn the context of the cosmological perturbations, we find that in the high energy limit the power spectrum\nbecomes\n\\begin{equation}\n{\\mathcal{P}_{\\mathcal{R}}}\\simeq\\frac{1}{12\\pi^2}\\,\\left(\\frac{\\kappa\nV^2}{2\\tau}\\right)^3\\,\\frac{1}{V'\\,^2}=\n\\frac{\\kappa^2}{48\\pi^2\\tau^2}\\,\\left(\\frac{V^4}{V_{,\\,N}}\\right)=\\frac{\\kappa^2}{48\\pi^2\\tau^2}\\,\\frac{N^2}{\\alpha_2}\\,\\left(\\frac{\\alpha_2}{N}+\\beta_1\\right)^{-2}.\n\\end{equation}\nHere we have used Eqs.(\\ref{Pet}) and (\\ref{Pot5}), respectively.\nThus, we can write the constant $\\beta_1$ in terms of\nthe scalar spectrum ${\\mathcal{P}_{\\mathcal{R}}}$, the number of\n$e$-folds $N$ and the constant $\\alpha_2$ as\n\\begin{equation}\n\\beta_1=\\sqrt{\\frac{1}{3\\,\\alpha_2\\,{\\mathcal{P}_{\\mathcal{R}}}}}\\,\\left(\\frac{\\kappa\\,N}{4\\pi\\,\\tau}\\right)-\\frac{\\alpha_2}{N}\n=\\sqrt{\\frac{\\alpha_2}{3\\,\\,{\\mathcal{P}_{\\mathcal{R}}}}}\\,\\left(\\frac{12\\kappa\\,N}{\\pi\\,\\alpha_1}\\right)-\\frac{\\alpha_2}{N}.\\label{a10}\n\\end{equation}\n\nOn the other hand, from Eq.(\\ref{ns4}) we find that the relation between the scalar\nindex $n_s$ and the number of $e$-foldings is given 
by\n\\begin{equation}\nn_s-1=\\frac{2}{N}\\left[\\frac{\\beta_1}{\\alpha_2\/N+\\beta_1}-2\\right].\\label{nsN6}\n\\end{equation}\n\nNote that in the specific case in which $N\\gg \\alpha_2\/\\beta_1$,\nthe scalar spectral index $n_s$ gives the famous attractor\n$n_s-1=-2\/N$.\n\nNow, from Eq.(\\ref{nsN6}) we can find the constant $\\beta_1$ in terms\nof $n_s$, $N$ and $\\alpha_2$ as\n\\begin{equation}\n\\beta_1=\\frac{[N(n_s-1)+4]}{N[(1-n_s)N-2]}\\,\\alpha_2.\\label{a12}\n\\end{equation}\nNote that for the values $n_s=0.964$ and $N=60$, we have that the\nratio $\\alpha_2\/\\beta_1\\sim 5$. This suggests that the limit $\\alpha_2\/\\beta_1\\gg N$\nis not satisfied for large $N$, and then the exponential potential $V(\\phi)\\propto e^\\phi$\ndoes not work in the brane world. This analysis for the exponential potential\nin the framework of a brane\ncoincides with that obtained in\nRef.\\cite{Tsujikawa:2003zd}. Thus, the reconstruction of the effective potential $V(\\phi)$\nis given by Eq.(\\ref{pote5}) for large $N$, and an appropriate limit corresponds to $N\\gg \\alpha_2\/\\beta_1$,\nwhere the behavior of the scalar potential becomes constant.\n\nIn this form, combining Eqs.(\\ref{a10}) and (\\ref{a12}) we find\nthat the tension $\\tau$ as a function of the observables $n_s$ and\n${\\mathcal{P}_{\\mathcal{R}}}$, together with the number of\n$e$-foldings $N$ and $\\alpha_1$, becomes\n\\begin{equation}\n\\tau=\\left(\\frac{{\\mathcal{P}_{\\mathcal{R}}}\\;\\pi^2\\,\\alpha_1^3}\n{4\\,\\times12^3\\,\\kappa^2\\,N^4}\\right)\\,\\,\\left[\\frac{[N(n_s-1)+4]}{[(1-n_s)N-2]}+1\\right]^2.\n\\end{equation}\nHere we have used that $\\alpha_2=\\alpha_1\/(48\\tau)$.\n\nIn particular, assuming that the spectral index $n_s=0.964$,\nthe spectrum ${\\mathcal{P}_{\\mathcal{R}}}\\simeq 2.2\\times 10^{-9}$ and $N=60$, we\nobtain that the constraint on\nthe brane tension $\\tau$ is given by\n\\begin{equation}\n \\tau\\simeq\\, 6\\times 
10^{-20}\\,\\alpha_1^3\\,\\,m_p^4.\\label{rel}\n\\end{equation}\nNote that Eq.(\\ref{rel}) gives a relation between the brane tension and the\nparameter $\\alpha_1$. Now,\nby assuming that $\\alpha_1<252$ in order to obtain $r<0.07$ at $N=60$, we find\nthat the upper bound for the brane tension becomes\n$$\n\\tau<9.6\\times 10^{-13} \\,m_p^4\\simeq 10^{-12}\\,m_p^4.\n$$\n\n\nOn the other hand, from Eq.(\\ref{nsN6}) we find that the relation\nbetween the scalar index and the tensor to scalar ratio, can be\nwritten as\n\\begin{equation}\nn_s-1=-\\frac{2\\,r^{1\/2}}{\\alpha_1^{1\/2}}\\,\n\\left[\\frac{2\\alpha_2+\\beta_1\\sqrt{\\alpha_1\/r}}{\\alpha_2+\\beta_1\\sqrt{\\alpha_1\/r}}\\right].\\label{rr4}\n\\end{equation}\nHere we have used the attractor given by Eq.(\\ref{rN3}).\n\n\\begin{figure}[th]\n{{\\hspace{0cm}\\includegraphics[width=4.5in,angle=0,clip=true]{fig2.eps}}}\n{\\vspace{-1.0 cm}\\caption{ As before, we show the tensor-to-scalar ratio $r$ as a\nfunction of the scalar spectral index $n_s$ from Planck 2018 results\\cite{Planck2018} for\nthree different values of the brane tension $\\tau$ but assuming the attractor \n$r(N)\\propto N^{-2}$ as the starting point.\nSolid, dotted and\ndashed lines correspond to the values of brane tension\n$\\tau\/m_p^4=10^{-11},10^{-12}$ and $10^{-13}$, respectively.\n \\label{fig3}}}\n\\end{figure}\n\nIn Fig.\\ref{fig3} we show the tensor to scalar ratio versus the scalar spectral\nindex for three different values of the brane tension considering the attractor $r(N)=\\alpha_1\\,\nN^{-2}$. Here we have used Eq.(\\ref{rr4}) and the solid, dotted and\ndashed lines correspond to the values of brane tension\n$\\tau\/m_p^4=10^{-11},10^{-12}$ and $10^{-13}$, respectively. 
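The numerical values quoted above (the ratio alpha_2/beta_1 ~ 5 from Eq.(a12), the coefficient in Eq.(rel), and the resulting upper bound on the brane tension) can be reproduced with a short script; a sketch, under the assumption kappa = 8*pi in units where m_p = 1:

```python
import math

ns, N, P_R = 0.964, 60, 2.2e-9
kappa = 8 * math.pi   # assumption: kappa = 8*pi/m_p^2 with m_p = 1

# ratio alpha_2 / beta_1 from Eq. (a12)
ratio = N * ((1 - ns) * N - 2) / (N * (ns - 1) + 4)
print(f"alpha_2/beta_1 ~ {ratio:.1f}")        # ~5.2

# coefficient of alpha_1^3 in Eq. (rel)
bracket = ((N * (ns - 1) + 4) / ((1 - ns) * N - 2) + 1) ** 2
coeff = P_R * math.pi**2 / (4 * 12**3 * kappa**2 * N**4) * bracket
print(f"tau ~ {coeff:.1e} alpha_1^3 m_p^4")   # ~6e-20

# upper bound from alpha_1 < 252
print(f"tau < {coeff * 252**3:.1e} m_p^4")    # ~9.6e-13
```

Both the coefficient $\simeq 6\times 10^{-20}$ and the bound $\tau\lesssim 10^{-12}m_p^4$ agree with the values in the text.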
From this plot we check that the\nupper limit for the brane tension, given by $\\tau<10^{-12}m_p^4$, is well corroborated\nby the Planck data.\n\n\n\\section{Conclusions \\label{conclu}}\n\nIn this article we have analyzed the reconstruction of the background\nin the context of braneworld inflation.\nConsidering a general formalism of reconstruction, we have obtained an\nexpression for the effective potential under the slow roll\napproximation. In order to obtain analytical solutions in the reconstruction on the brane, we\nhave considered the high energy limit, in which the energy density\n$\\rho\\simeq V\\gg \\tau$. In this analysis for the reconstruction of\nthe background, we have considered the parametrization of the\nscalar spectral index or the tensor to scalar ratio as a function of\nthe number of $e$-foldings $N$.\nIn this general description, starting from the cosmological\nparameter $n_s(N)$ or $r(N)$, we have found integrable solutions\nfor the effective potential depending on the chosen cosmological attractor.\n\nFor the reconstruction from the attractor associated with the scalar spectral index\n$n_s(N)$, we have assumed the famous attractor $n_s=1-2\/N$ as an example.\nFrom this attractor, we have obtained that the consistency relation $r=r(n_s)$\nis given by Eq.(\\ref{37}), and from the power spectrum we have found that the\nintegration constant $\\alpha$ depends on the brane tension; see Eq.(\\ref{aa}).\nOn the other hand,\ndepending on the value of the second constant of integration $\\beta$,\nwe have found different results for the reconstruction of the effective potential $V(\\phi)$.\nIn particular, for the specific case in which the constant $\\beta=0$, we have\nobtained that the reconstruction of the effective potential corresponds to a potential $V(\\phi)\\propto\\,\\phi$. 
Also, assuming\nthe observational constraint on the tensor to scalar ratio $r<0.07$, we have found an upper\nlimit for the brane tension given by $\\tau< 10^{-13}m_p^4$, for which the brane model\nis well supported by the Planck data; see the upper panel of Fig.\\ref{fig1}. In\nthis same context,\nfor the case in\nwhich the constant of integration $\\beta\\neq 0$, we have found a\ntranscendental equation for the number of $e$-folds as a function of the\nscalar field, $N=N(\\phi)$, and the reconstruction cannot be carried out in closed form. However, as a first approximation we have analyzed the\ndominant terms of the transcendental equation in\norder to give an approach to the reconstruction of the effective\npotential; see Fig.\\ref{fig2}. Also, we have considered the extreme limits $\\alpha_0\/N\\ll 1$\nand $\\alpha_0\/N\\gg 1$, in order to find analytical expressions for the\npotential $V(\\phi)$. In this approach, we have obtained that in the limit in\nwhich $\\alpha_0\/N\\gg 1$, the effective potential coincides with the case in\nwhich the constant of integration $\\beta=0$, where the effective potential\nchanges linearly with the scalar field.\n\nOn the other hand, we have explored the possibility of the\nreconstruction in the framework of braneworld inflation,\nconsidering as an attractor the tensor to scalar ratio in terms of\nthe number of $e$-foldings, i.e., $r=r(N)$. Here we have found a\ngeneral relation from which to build the effective potential. As a\nspecific example, we have considered the attractor $r(N)\\propto\nN^{-2}$. Here, we have obtained that the reconstruction of the\neffective potential is given by Eq.(\\ref{pote5}). In particular,\nconsidering the limit in which $\\alpha_2\/\\beta_1\\gg N$, we have\nobtained that the effective potential corresponds to an\nexponential potential, i.e., $V(\\phi)\\propto e^{\\phi}$; however,\nthis limit does not work. In the inverse limit, we have found that\nthe effective\npotential $V(\\phi)=$ constant. 
Also, utilizing observables such as the scalar\nspectral index and the power spectrum, together with the number of $e$-folds, we\nhave found a relation between the brane tension and the\nparameter $\\alpha_1$ associated with the attractor $r(N)$. Thus, by considering\nthat $\\alpha_1<252$, in order to obtain $r<0.07$ at $N=60$, we have found an\nupper bound on the brane tension given by $\\tau<10^{-12} m_p^4$, and this\nconstraint is well corroborated by the Planck data; see\nFig.\\ref{fig3}.\n\n\nWe have also found that in the framework of braneworld inflation,\nthe incorporation of the\nadditional term in Friedmann's equation substantially affects the\nreconstruction of the effective potential $V(\\phi)$, considering\nthe simplest attractors, such as $n_s(N)-1\\propto N^{-1}$ or\n$r(N)\\propto N^{-2}$. In this respect, we have shown that, in order\nto obtain analytical solutions for the reconstruction of\n$V(\\phi)$, the attractor $r(N)$ is an adequate methodology to be\nconsidered.\n\nWe conclude with some comments concerning the way to\ndistinguish the reconstruction in the braneworld and GR\ninflationary models from the methodology used.\nFor the famous attractor $n_s-1=-2\/N$, we have found that\nthe reconstruction from $n_s(N)$ in braneworld inflation does not\nwork in general, unlike in GR. Here, we have shown that only in the specific case\nin which the second integration constant is zero does the reconstruction\nfrom $n_s(N)$ work in closed form. On the other hand, by assuming the reconstruction of\nbraneworld inflation from the attractor $r(N)$, we have been able\nto rebuild our model as it occurs in the framework of GR. This suggests that\nthe version of reconstruction from $r(N)$ is a suitable ansatz to\nbe used for the reconstruction of braneworld inflation.\n\nFinally, in this paper we have not addressed the reconstruction of the braneworld\nmodel as a fluid, considering an ansatz on the effective EoS as a function of the\nnumber of $e$-folds. 
We hope to return to this methodology in the near future.\n\n\n\\begin{acknowledgments}\nThe author thanks Manuel Gonzalez-Espinoza and Nelson Videla for useful discussions.\nThis work was supported\nby Proyecto VRIEA-PUCV N$_{0}$. 039.309\/2018.\n\\end{acknowledgments}\n\n\n\n\n\\section{Introduction}\n\nDeep neural networks have been achieving remarkable success in a wide range of classification tasks in recent years. Accompanying increasingly accurate prediction of the classification probability, it is of equal importance to quantify the uncertainty of the classification probability produced by deep neural networks. Without a careful characterization of such an uncertainty, the prediction of deep neural networks can be questionable, unusable, and in the extreme case incur considerable loss \\cite{wang2016deep}. For example, deep reinforcement learning suffers from a strikingly low reproducibility due to high uncertainty of the predictions \\cite{henderson2017deep}. Uncertainty quantification can be challenging though; for instance, \\cite{guo2017calibration} argued that modern neural network architectures are poor in producing well-calibrated probability in binary classification. Recognizing such challenges, there have been recent proposals to estimate and quantify the uncertainty of output from deep neural networks, and we review those methods in Section \\ref{sec:related}. Despite the progress, however, uncertainty quantification of deep neural networks remains relatively underdeveloped \\cite{kendall2017uncertainties}. \n\nIn this paper, we propose deep Dirichlet mixture networks to produce, in addition to a point estimator of the classification probabilities, an associated credible interval (region) that covers the true probabilities at a desired level. 
We begin with the binary classification problem and employ the Beta mixture model to approximate the probability distribution of the true but \\textit{random} probability. We then extend to the general multi-class classification using the Dirichlet mixture model. Our key idea is to view the classification probability as a random quantity, rather than a deterministic value in $[0,1]$. We seek to estimate the distribution of this random quantity using the Beta or the Dirichlet mixture, which we show is flexible enough to model any continuous distribution on $[0,1]$. We achieve the estimation by adding an extra layer in a typical deep neural network architecture, without having to substantially modify the overall structure of the network. Then, based on the estimated distribution, we produce both a point estimate and a credible interval for the classification probability. This credible interval provides an explicit quantification of the classification variability, and can greatly facilitate our decision making. For instance, a point estimate of a high probability of having a disease may be regarded with little confidence if the corresponding credible interval is wide. By contrast, a point estimate with a narrow credible interval may be seen as a more convincing diagnosis. \n\nThe feasibility of our proposal is built upon a crucial observation that, in many classification applications such as medical diagnosis, there exists more than one class label. For instance, a patient's computed tomography image may be evaluated by two doctors, each giving a binary diagnosis of the existence of cancer. In Section \\ref{sec:realdata}, we illustrate with an example of diagnosis of Alzheimer's disease (AD) using patients' anatomical magnetic resonance imaging. For each patient, there is a binary diagnosis status as AD or healthy control, along with additional cognitive scores that are strongly correlated with and carry crucial information about one's AD status. 
We thus consider the dichotomized version of the cognitive scores, combine them with the diagnosis status, and feed them together into our deep Dirichlet mixture networks to obtain a credible interval of the classification probability. We remark that the existence of multiple labels is common rather than exceptional in a variety of real-world applications.\n\nOur proposal provides a useful addition to the essential yet still growing inferential machinery for deep neural network learning. Our method is simple, fast, effective, and can couple with any existing deep neural network structure. In particular, it adopts a frequentist inference perspective, but produces a Bayesian-style outcome of credible intervals. \n\n\n\n\\subsection{Related Work}\n\\label{sec:related}\n\n\nUncertainty quantification for artificial neural networks has been studied for over two decades. Early examples include the delta method \\cite{hwang1997prediction}, and the Bootstrap methods \\cite{efron1994introduction,heskes1997practical,carney1999confidence}. However, the former requires computing the Hessian matrix and is computationally expensive, whereas the latter hinges on an unbiased prediction. When the prediction is biased, the total variance is underestimated, which in turn results in too narrow a credible interval.\n\nAnother important line of research is Bayesian neural networks \\cite{mackay1992evidence, mackay1992practical}, which treat model parameters as distributions, and thus can produce an explicit uncertainty quantification in addition to a point estimate. The main drawback is the prohibitive computational cost of running MCMC algorithms. There have been some recent proposals aiming to address this issue, most notably \\cite{gal2016dropout, li2017dropout}, which used dropout tricks. Our proposal, however, is a frequentist solution, and thus we have chosen not to numerically compare with those Bayesian approaches. 
\n\nAnother widely used uncertainty quantification method is the mean variance estimation (MVE) approach \\cite{nix1994estimating}. It models the data noise using a normal distribution, and employs a neural network to output the mean and variance. The optimization is done by minimizing the negative log-likelihood function. It has mainly been designed for regression tasks, and is less suitable for classification.\n\nThere are some more recent proposals of uncertainty quantification. One is the lower and upper bound estimation (LUBE) \\cite{khosravi2011lower,quan2014short}. LUBE has been proven successful in numerous applications. However, its loss function is non-differentiable and gradient descent cannot be applied for optimization. The quality-driven prediction interval method (QD) has recently been proposed to improve LUBE \\cite{pearce2018high}. It is a distribution-free method that outputs the prediction's upper and lower bounds. The uncertainty can be estimated by measuring the distance between the two bounds. Unlike LUBE, the objective function of QD can be optimized by gradient descent. But similar to MVE, it is designed for regression tasks. The confidence network is another method to estimate confidence, by adding an output node next to the softmax probabilities \\cite{devries2018confidence}. This method is suitable for classification. Although its original goal was out-of-distribution detection, its confidence score can be used to represent the intrinsic uncertainty. Later in Section \\ref{sec:compare}, we numerically compare our method with MVE, QD, and the confidence network. \n\nWe also clarify that our proposed framework is different from the mixture density network \\cite{bishop1994mixture}. The latter trains a neural network to model the distribution of the outcome using a mixture distribution. By contrast, we aim to learn the distribution of the classification probabilities and to quantify their variations. 
\n\n\n\n\n\n\\section{Dirichlet Mixture Networks}\n\nIn this section, we describe our proposed Dirichlet mixture networks. We begin with the case of binary classification, where the Dirichlet mixture models reduce to the simpler Beta mixture models. Although a simpler case, the binary classification is sufficient to capture all the key ingredients of our general approach and thus loses no generality. At the end of this section, we discuss the extension to the multi-class case.\n\n\n\n\\subsection{Loss Function}\n\nWe begin with a description of the key idea of our proposal. Let $\\{1, 2\\}$ denote the two classes. Given an observational unit $\\x$, e.g., an image, we view the probability $p_{\\x}$ that $\\x$ belongs to class 1 as a random variable, instead of a deterministic value in $[0, 1]$. We then seek to estimate the probability density function $f(p; \\x)$ of $p_{\\x}$. This function encodes the \\textit{intrinsic} uncertainty of the classification problem. A point estimate of the classification probability only focuses on the mean, $\\int_0^1 f(p; \\x) \\dx p$, which is not sufficient for an informed decision making without an explicit quantification of its variability. For example, it can happen that, for two observational units $\\x$ and $\\x'$, their mean probabilities, and thus their point estimates of the classification probability, are the same. However, the densities are far apart from each other, leading to completely different variabilities, and different interpretations of the classification results. Figure \\ref{fig:illlustration} shows an illustration. Our proposal then seeks to estimate the density function $f(p; \\x)$ for each $\\x$. \n\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=8.3cm]{images\/introduction.PNG}\n\\caption{Illustration of the setting. 
Two or more labels are generated with the \\textit{same} probability $p_{\\x}$, which is \\textit{randomly} drawn from a distribution that we wish to estimate.}\n\\label{fig:illlustration}\n\\end{figure}\n\nA difficulty arising from this estimation problem is that $f$ in general can be any density function on $[0, 1]$. To address this, we propose to simplify the problem by restricting to the case where $f$ is a Beta mixture; i.e., \n\\begin{equation}\\label{eq:beta_density_mix}\nf(p; \\x) = \\sum_{k=1}^K w^k \\frac{p^{\\alpha^k_1 -1} (1-p)^{\\alpha^k_2 -1}}{\\B(\\alpha^k_1, \\alpha^k_2)},\n\\end{equation}\nwhere $\\B(\\cdot, \\cdot)$ is the Beta function, and the parameters $w^k, \\bm\\alpha^k = (\\alpha^k_1, \\alpha^k_2)$ are smooth functions of $\\x$, $k = 1, \\ldots, K$. The weights $w^k$ satisfy $w^1 + \\cdots + w^K = 1$. Later we show that this Beta mixture distribution is flexible enough to adequately model almost any distribution $f$ on $[0,1]$. \n\nWith the form of density function \\eqref{eq:beta_density_mix} in place, our goal turns to estimating the positive parameters $\\alpha_1^k, \\alpha_2^k$, and $w^k$. To do so, we derive the loss function that is to be minimized by deep neural networks.\n\nWe employ the negative log-likelihood function from \\eqref{eq:beta_density_mix} as the loss function. For the $j$th observational unit of the training data, $j = 1, \\ldots, n$, let $\\x_j$ denote the input, e.g., the subject's image scan, and $\\y_j = \\left( y_j^{(1)}, \\ldots, y_j^{(m_j)} \\right)$ denote the vector of labels taking values from $\\{1, 2\\}$. Here we assume $m_j \\geq 2$, reflecting that more than one class label is available for each observational unit. Write $\\w = (w^1, \\ldots, w^K)$ and $\\bm \\alpha = (\\bm\\alpha^1, \\ldots, \\bm\\alpha^K)$. 
By integrating out $p$, the likelihood function for the observed pair $(\\x_j, \\y_j)$ is \n\\[\n\\begin{aligned}\n&L_j(\\w, \\bm\\alpha; \\x_j, \\y_j) \\\\\n&= \\int_0^1 p^{\\sum_{l=1}^{m_j} \\bm{1}(y_j^{(l)}= 1)} (1-p)^{\\sum_{l=1}^{m_j} \\bm{1}(y_j^{(l)}=2)} f(p; \\x_j) \\dx p.\n\\end{aligned}\n\\]\nWrite $S_{ij} = \\sum_{l=1}^{m_j} \\bm{1} \\left( y_j^{(l)} = i \\right)$, where $\\bm{1}(\\cdot)$ is the indicator function, $i = 1, 2$, $j = 1, \\ldots, n$; this term counts the number of times $\\x_j$ is labeled $i$. Plugging \\eqref{eq:beta_density_mix} into $L_j$, we get \n\\[\n\\begin{aligned}\n& L_j(\\w, \\bm\\alpha; \\x_j, \\y_j) \\\\\n&= \\int_0^1 p^{S_{1j}} (1-p)^{S_{2j}} \\sum_{k=1}^K \\frac{w^k p^{\\alpha^k_1 -1} (1-p)^{\\alpha^k_2 -1}}{\\B(\\alpha^k_1, \\alpha^k_2)} \\dx p\\\\\n&= \\sum_{k=1}^K \\int_0^1 \\frac{w^k}{\\B(\\alpha^k_1, \\alpha^k_2)} p^{\\alpha^k_1 -1 + S_{1j}} (1-p)^{\\alpha^k_2 -1 + S_{2j}} \\dx p.\n\\end{aligned}\n\\]\nBy a basic property of Beta functions, we further get\n\\[\nL_j(\\w, \\bm\\alpha; \\x_j, \\y_j) = \\sum_{k=1}^K \\frac{w^k\\B(\\alpha^k_1 + S_{1j}, \\alpha^k_2 + S_{2j})}{\\B(\\alpha^k_1, \\alpha^k_2)}.\n\\]\nAggregating all $n$ observational units, we obtain the full negative log-likelihood function, \n\\begin{equation}\\label{eq:full_like}\n\\begin{aligned}\n& -\\ell(\\w, \\bm\\alpha; \\x_1, \\y_1, \\ldots, \\x_n, \\y_n) \\\\\n& = -\\sum_{j=1}^n \\log \\left[\\sum_{k=1}^K \\frac{w^k\\B(\\alpha^k_1 + S_{1j}, \\alpha^k_2 + S_{2j})}{\\B(\\alpha^k_1, \\alpha^k_2)} \\right].\n\\end{aligned}\n\\end{equation}\n\nWe then propose to employ a deep neural network learner to estimate $\\w$ and $\\bm \\alpha$. \n\n\n\n\\subsection{Credible Intervals}\n\\label{sec:credible-interval}\n\nTo train our model, we simply replace the existing loss function of a deep neural network, e.g., the cross-entropy, with the negative log-likelihood function given in \\eqref{eq:full_like}. 
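For concreteness, each term $\log L_j$ of the loss \eqref{eq:full_like} can be evaluated through log-gamma functions for numerical stability. The following is a minimal plain-Python sketch (the function names are our own; the paper's implementation applies the same computation with PyTorch's log-gamma so that gradients flow through it):

```python
import math

def log_beta(a, b):
    # log B(a, b) computed via log-gamma for numerical stability
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def neg_log_likelihood(w, alpha, S1, S2):
    """Negative log-likelihood of a K-component Beta mixture.

    w     : K mixture weights (summing to 1)
    alpha : K pairs (alpha_1^k, alpha_2^k)
    S1, S2: per-unit label counts; S1[j] (S2[j]) is the number of times
            unit j is labeled class 1 (class 2)
    """
    nll = 0.0
    for s1, s2 in zip(S1, S2):
        # log of each summand w^k B(a1 + S1j, a2 + S2j) / B(a1, a2)
        log_terms = [
            math.log(wk) + log_beta(a1 + s1, a2 + s2) - log_beta(a1, a2)
            for wk, (a1, a2) in zip(w, alpha)
        ]
        m = max(log_terms)  # log-sum-exp trick over the K components
        nll -= m + math.log(sum(math.exp(t - m) for t in log_terms))
    return nll
```

As a sanity check, for a single uniform component ($K = 1$, $\bm\alpha = (1, 1)$) and one unit labeled once as each class, $L_j = \B(2, 2)/\B(1, 1) = 1/6$, so the negative log-likelihood equals $\log 6$.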
Therefore, we can take advantage of current deep learning frameworks such as PyTorch for automatic gradient calculation. We then use mini-batch gradient descent to optimize the entire neural network's weights. Once the training is finished, we obtain the estimates of the parameters of the mixture distribution, $\\{\\bm{w}, \\bm{\\alpha}\\}$.\n\nOne implementation detail to notice is that the Beta function has no closed-form derivative. To address this issue, we used the fast log-gamma algorithm, which is available in PyTorch, to approximate the logarithm of the Beta function. Also, we applied the softmax function to the weights of the mixtures to ensure that $w^1 + \\cdots + w^K = 1$, and took the exponential of $\\bm{\\alpha}^1, \\ldots, \\bm{\\alpha}^K$ to ensure that these parameters remain positive as required. \n\nGiven the estimated parameters $\\widehat{\\w}, \\widehat{\\bm \\alpha}$ from the deep mixture networks, we next construct the credible interval for explicit uncertainty quantification. For a new observation $\\x_0$, the estimated distribution of the classification probability $p_{\\x_0}$ takes the form\n\\[\n\\hat f(p; \\x_0) = \\sum_{k=1}^K \\hat w^k(\\x_0) \\frac{p^{\\hat\\alpha^k_1(\\x_0) -1} (1-p)^{\\hat\\alpha^k_2(\\x_0) -1}}{\\B(\\hat\\alpha^k_1(\\x_0), \\hat\\alpha^k_2(\\x_0))},\n\\]\nwhere we write $\\hat{w}^k, \\hat{\\alpha}^k_1, \\hat{\\alpha}^k_2$ in the form of explicit functions of $\\x_0$. The mean of this estimated density, $\\int_0^1 p \\hat f(p; \\x_0) \\dx p$, is an approximately unbiased estimator of $p_{\\x_0}$. Meanwhile, we can construct the two-sided credible interval of $p_{\\x_0}$ with the nominal level $\\alpha \\in (0,1)$ as \n\\[\n\\left[ \\widehat Q_{\\frac{\\alpha}{2}}, \\widehat Q_{1 - \\frac{\\alpha}{2}} \\right],\n\\]\nwhere $\\widehat Q_{\\frac{\\alpha}{2}}$ and $\\widehat Q_{1 - \\frac{\\alpha}{2}}$ are the $\\alpha\/2$ and $1 - \\alpha\/2$ quantiles of the estimated density $\\hat{f}(p; \\x_0)$. 
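Since the quantiles $\widehat Q$ of a Beta mixture have no closed form, they can be obtained by tabulating the mixture CDF on a grid and inverting it numerically. A minimal sketch under these assumptions (the helper names are our own; a production implementation could instead invert a library routine for the regularized incomplete Beta function):

```python
import math

def beta_mixture_pdf(p, w, alpha):
    # density of a K-component Beta mixture at a point p in (0, 1)
    total = 0.0
    for wk, (a1, a2) in zip(w, alpha):
        log_b = math.lgamma(a1) + math.lgamma(a2) - math.lgamma(a1 + a2)
        total += wk * math.exp(
            (a1 - 1) * math.log(p) + (a2 - 1) * math.log(1 - p) - log_b
        )
    return total

def credible_interval(w, alpha, level=0.95, n_grid=20000):
    """Two-sided credible interval [Q_{a/2}, Q_{1-a/2}] by CDF inversion."""
    a = 1.0 - level
    # midpoint grid avoids the endpoints, where the density may blow up
    grid = [(i + 0.5) / n_grid for i in range(n_grid)]
    pdf = [beta_mixture_pdf(p, w, alpha) for p in grid]
    cdf, acc = [], 0.0
    for v in pdf:
        acc += v / n_grid
        cdf.append(acc)
    def quantile(q):
        # normalize by the total mass to absorb discretization error
        for p, c in zip(grid, cdf):
            if c >= q * acc:
                return p
        return grid[-1]
    return quantile(a / 2), quantile(1 - a / 2)
```

For the uniform special case ($K = 1$, $\bm\alpha = (1, 1)$) the routine returns approximately $[0.025, 0.975]$ at the 95\% level, as expected.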
Similarly, we can construct the upper and lower credible intervals as \n\\[\n\\left[0, \\widehat Q_{1 - \\alpha}\\right], \\text{ and } \\left[\\widehat Q_{\\alpha}, 1\\right],\n\\]\nrespectively, where $\\widehat Q_{\\alpha}$ and $\\widehat Q_{1 - \\alpha}$ are the $\\alpha$ and $1 - \\alpha$ quantiles of the estimated density $\\hat{f}(p; \\x_0)$. \n\nNext we justify our choice of a Beta mixture for the distribution of the classification probability by showing that any density function under certain regularity conditions can be approximated well by a Beta mixture. Specifically, denote by $\\mathcal P$ the set of all probability density functions $f$ on $[0, 1]$ with at most countably many discontinuities that satisfy\n\\begin{align}\\nonumber\n\\int_0^1 f(p) \\left|\\log f(p)\\right| \\dx p < \\infty.\n\\end{align}\nIt is shown in \\cite{article} that any $f \\in \\mathcal{P}$ can be approximated arbitrarily well by a sequence of Beta mixtures. That is, for any $f \\in \\mathcal{P}$ and any $\\epsilon > 0$, there exists a Beta mixture distribution $f_{\\B}$ such that\n\\begin{align}\\nonumber\n\\mathrm{D}_{\\mathrm{KL}} \\left(f \\| f_{\\B} \\right) \\leq \\epsilon,\n\\end{align}\nwhere $\\mathrm{D}_{\\mathrm{KL}} (\\cdot \\| \\cdot)$ denotes the Kullback-Leibler divergence. This result establishes the validity of approximating a general distribution function using a Beta mixture. The proof of this result starts by recognizing that $f$ can be accurately approximated by piecewise constant functions on $[0,1]$, owing to its at most countably many discontinuities. Next, each constant piece is a limit of a sequence of Bernstein polynomials, which are infinite Beta mixtures with integer parameters \\cite{verdinelli1998bayesian, petrone2002consistency}. \n\n\n\n\\subsection{Multiple-class Classification}\n\\label{sec:extens-mult-labels}\n\nWe next extend our method to the general case of multi-class classification. 
It follows seamlessly from the prior development except that now the labels $\\y_j = \\left( y_j^{(1)}, \\ldots, y_j^{(m_j)} \\right)$ take values from $\\{1, 2, \\ldots, d\\}$, where $d$ is the total number of classes. Given an observation $\\x$, the multinomial distribution over $\\{1, 2, \\ldots, d\\}$ is represented by $\\p = (p_1, \\ldots, p_d)$, which, as a point in the simplex $\\Delta = \\{(c_1, \\ldots, c_d): c_i \\ge 0, c_1 + \\cdots + c_d = 1\\}$, is assumed to follow a Dirichlet mixture\n\\[\nf(\\p; \\x) = \\sum_{k=1}^K w^k \\frac1{\\B(\\bm\\alpha^k)} \\prod_{i=1}^d p_i^{\\alpha^k_i -1}, \n\\]\nwhere the generalized Beta function takes the form\n\\[\n\\B(\\bm\\alpha) = \\frac{\\prod_{i=1}^d \\Gamma(\\alpha_i)}{\\Gamma(\\alpha_1 + \\cdots + \\alpha_d)}.\n\\]\nThe likelihood of the $j$th observation is \n\\[\nL_j = \\int_{\\Delta} \\left( \\prod_{i=1}^d p_i^{S_{ij}} \\right) \\sum_{k=1}^K w^k \\frac1{\\B (\\bm\\alpha^k)} \\prod_{i=1}^d p_i^{\\alpha^k_i -1} \\dx \\bm p, \n\\]\nwhere $S_{ij} = \\sum_{l=1}^{m_j} \\bm{1}\\left( y_j^{(l)}= i \\right)$. Accordingly, the negative log-likelihood function is\n\\[\n\\footnotesize\n\\begin{aligned}\n&-\\ell(\\w, \\bm\\alpha; \\x_1, \\y_1, \\ldots, \\x_n, \\y_n) \\\\\n&= -\\sum_{j=1}^n \\log \\left[\\sum_{k=1}^K \\frac{w^k\\B\\left( \\alpha^k_1 + S_{1j}, \\ldots, \\alpha^k_d + S_{dj} \\right)}{\\B(\\bm\\alpha^k)} \\right].\n\\end{aligned}\n\\normalsize\n\\]\nThis is the loss function to be minimized in the Dirichlet mixture networks.\n\n\n\n\n\n\\section{Simulations}\n\\label{sec:simulations}\n\n\\subsection{Simulations on Coverage Proportion}\n\nWe first investigate the empirical coverage of the proposed credible interval. We used the MNIST handwritten digits data, and converted the ten outcomes (0-9) first to two classes (0-4 as Class 1, and 5-9 as Class 2), then to three classes (0-2 as Class 1, 3-6 as Class 2, and 7-9 as Class 3). 
In order to create multiple labels for each image, we trained a LeNet-5 \\cite{lecun1998gradient} to output the classification probability $p_i$, then sampled multiple labels for the same input image from a binomial or multinomial distribution with parameter $p_i$. We further divided the simulated data into training and testing sets. We calculated the empirical coverage as the proportion of testing samples for which the corresponding $p_i$ falls in the constructed credible interval. We assessed the coverage performance by examining how close the empirical coverage is to the nominal coverage, for nominal levels ranging from 75\\% to 95\\%. Ideally, the empirical coverage should be the same as the nominal level. \n\n\n\\begin{figure}[h]\n\\centering\n\\subfloat[Two labels]{{\\includegraphics[width=4cm]{images\/sim_2o2o.png}}}\n\\,\n\\subfloat[Three labels]{{\\includegraphics[width=4cm]{images\/sim_2o3o.png}}}\n\\caption{Empirical coverage of the estimated credible interval for a two-class classification task, with the two-label setting shown in (a), and the three-label setting in (b). The blue line represents the empirical coverage of the estimated credible interval. The orange 45-degree line represents the ideal estimation. The closer the two lines, the better the estimation.}\n\\label{sim2}\n\\end{figure}\n\nFigure \\ref{sim2} reports the simulation results for the two-class classification task, where panel (a) is when there are two labels available for each input, and panel (b) is when there are three labels available. The orange 45-degree line represents the ideal coverage. The blue line represents the empirical coverage of the credible interval produced by our method. Our constructed credible interval covers the truth 98.19\\% of the time at the 95\\% nominal level for the two-label scenario, and 98.17\\% for the three-label scenario. 
In general, the empirical coverage is close to or slightly larger than the nominal value, suggesting that the credible interval is reasonably accurate. Moreover, the interval becomes more accurate with more labels for each input. \n\n\\begin{figure}[h]\n\\centering\n\\subfloat[Class 1 with two labels]{{\\includegraphics[width=4cm]{images\/sim_3c2o_p0.png}}}\n\\,\n\\subfloat[Class 2 with two labels]{{\\includegraphics[width=4cm]{images\/sim_3c2o_p1.png}}}\n\\,\n\\subfloat[Class 1 with three labels]{{\\includegraphics[width=4cm]{images\/sim_3c3o_p0.png}}}\n\\,\n\\subfloat[Class 2 with three labels]{{\\includegraphics[width=4cm]{images\/sim_3c3o_p1.png}}}\n\\caption{Empirical coverage of the estimated credible interval for a three-class classification task, with the two-label setting shown in (a) and (b), and the three-label setting in (c) and (d). The blue line represents the empirical coverage of the estimated credible interval. The orange 45-degree line represents the ideal estimation. The closer the two lines, the better the estimation. For each graph, the probability is calculated in the one-vs-all fashion; e.g., (a) represents the credible interval of Class 1 versus Classes 2 and 3 combined.}\n\\label{sim3}\n\\end{figure}\n\nFigure \\ref{sim3} reports the simulation results for the three-class classification task, where panels (a) and (b) are when there are two labels available, and panels (c) and (d) are when there are three labels available. A similar qualitative pattern is observed in Figure \\ref{sim3} as in Figure \\ref{sim2}, indicating that our method works well for the three-class classification problem. \n\n\n\n\\subsection{Comparison with Alternative Methods}\n\\label{sec:compare}\n\nWe next compare our method with three alternative methods that serve as baselines: the confidence network \\cite{devries2018confidence}, the mean variance estimation (MVE) \\cite{nix1994estimating}, and the quality-driven prediction interval method (QD) \\cite{pearce2018high}. 
We chose these methods as baselines because they also target the intrinsic variability and represent recent state-of-the-art solutions to this problem. \n\n\\begin{figure}[h]\n \\centering\n \\subfloat[Plot for $f_1=\\frac{\\psi_1}{\\psi_2} + 1$]{{\\includegraphics[width=4.1cm]{images\/function0.png}}}\n \\subfloat[Plot for $f_2=\\frac{\\psi_2}{\\psi_1} + 1$]{{\\includegraphics[width=4.1cm]{images\/function1.png}}}\n \\,\n \\subfloat[Scatter Plot for 1000 samples]{{\\includegraphics[width=6cm]{images\/simSamples.png}}}\n \\caption{Data is generated from a Bernoulli distribution whose parameter is sampled from a Beta distribution with parameters $(f_1, f_2)$. (a) and (b) show the 3D landscapes. (c) shows 1,000 samples from this distribution with two labels for each data point. Green means all labels are 1. Red means all labels are 2. Yellow means that labels are a mix of 1 and 2.}\n \\label{samples}\n\\end{figure}\n\nTo facilitate graphical presentation of the results, we simulated the input data $\\x$ from two-dimensional Gaussian mixtures. Specifically, we first sampled $\\x$ from a mixture of two Gaussians with means at $(-2, 2)$ and $(2, -2)$, and denote its probability density function as $\\psi_1$. We then sampled $\\x$ from another mixture of two Gaussians with means at $(2, 2)$ and $(-2, -2)$, and denote its probability density function as $\\psi_2$. For each Gaussian component, the variance is set at 0.7. We then sampled the probability $p$ of belonging to Class 1 from a Beta distribution with the parameters $\\psi_1 \/ \\psi_2 + 1$ and $\\psi_2 \/ \\psi_1 + 1$. Finally, we sampled the class labels from a Bernoulli distribution with the probability of success $p$. At each input sample $\\x$, we sampled two class labels. For a fair comparison, we duplicated the data for the baseline methods that only use one class label. Figure \\ref{samples}(c) shows a scatter plot of 1,000 samples, each with two labels. 
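The data-generating scheme just described can be sketched in a few lines of NumPy (the function names, seed, and vectorization details below are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_mix_pdf(x, means, var=0.7):
    # density of an equal-weight mixture of two isotropic 2D Gaussians
    means = np.asarray(means, dtype=float)
    d2 = ((x[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * var)).mean(axis=1) / (2.0 * np.pi * var)

def simulate(n=1000):
    means1 = np.array([(-2.0, 2.0), (2.0, -2.0)])  # mixture with density psi_1
    means2 = np.array([(2.0, 2.0), (-2.0, -2.0)])  # mixture with density psi_2
    # draw each input from one of the two Gaussian mixtures
    src = rng.integers(0, 2, size=n)
    comp = rng.integers(0, 2, size=n)
    centers = np.where(src[:, None] == 0, means1[comp], means2[comp])
    x = centers + rng.normal(scale=np.sqrt(0.7), size=(n, 2))
    psi1 = gauss_mix_pdf(x, means1)
    psi2 = gauss_mix_pdf(x, means2)
    # p ~ Beta(psi1/psi2 + 1, psi2/psi1 + 1), then two labels per input
    p = rng.beta(psi1 / psi2 + 1.0, psi2 / psi1 + 1.0)
    labels = np.where(rng.random((n, 2)) < p[:, None], 1, 2)
    return x, p, labels
```

Inputs near the two separating axes have $\psi_1 \approx \psi_2$, so $p$ is drawn from a Beta distribution close to uniform, which is exactly where the mixed (yellow) labels concentrate.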
The green dots correspond to the samples whose class labels are 1 in both replications, the red dots to those labeled 2 in both replications, and the yellow dots to those samples whose class labels differ between the two replications. Most of the yellow dots are located along the two axes that separate the four quadrants. \n\n\\begin{figure}[h]\n \\centering\n \\subfloat[Ideal]{{\\includegraphics[width=4.5cm]{images\/Ideal.png}}}\n \\hspace{3cm}\n \\subfloat[Our Approach]{{\\includegraphics[width=4cm]{images\/Beta.png}}}\n \\,\n \\subfloat[MVE]{{\\includegraphics[width=4cm]{images\/MVE.png}}}\n \\,\n \\subfloat[QD]{{\\includegraphics[width=4cm]{images\/QD.png}}}\n \\,\n \\subfloat[Confidence network]{{\\includegraphics[width=4cm]{images\/confidence.png}}}\n \\,\n \\caption{Variance contour plots of our approach and the baselines. (a) shows the ideal variance plot. (b) is the result of our approach. (c), (d), (e) are the results of the baselines. Blue means low data-noise, and yellow means high data-noise. Among the results, our approach in (b) looks most similar to the ideal.}\n \\label{baselines}\n\\end{figure}\n\nFigure \\ref{baselines} reports the contour of the estimated variance. Panel (a) is the true variance contour for the simulated data, obtained numerically from the data generation. It shows that the largest variance occurs along the two axes that separate the four quadrants. Panel (b) is the result of our approach. We used ten mixture components here. The predicted mean and variance were calculated using the laws of total expectation and total variance. Our method achieved a 98.4\\% classification accuracy. More importantly, it successfully captured the variability of the classification probability and produced a variance contour that looks similar to (a). Panel (c) is the result of the mean variance estimation \\cite{nix1994estimating}. It also achieved a 98.4\\% classification accuracy, but it failed to correctly characterize the variability. 
This is partly because it models the variability as Gaussian. Panel (d) is the result of the quality-driven prediction interval method \\cite{pearce2018high}. It only obtained an 89.1\\% classification accuracy. As a distribution-free method, it predicted a higher variability in the center, but ignored other highly variable regions. Panel (e) is the result of the confidence network \\cite{devries2018confidence}. It achieved a 98.1\\% classification accuracy and a reasonably good variability estimation. Overall, our method achieved the best performance while maintaining a high classification accuracy.\n\n\\begin{figure}[h]\n \\centering\n \\subfloat[Point (0,0)]{{\\includegraphics[width=4cm]{images\/point0_0.png}}}\n \\,\n \\subfloat[Point (1,1)]{{\\includegraphics[width=4cm]{images\/point1_1.png}}}\n \\caption{Beta mixture density functions output by the neural network. (a) is the result at point (0,0). (b) is the result at point (1,1). Point (0,0) clearly has a higher variance.}\n \\label{PDFplots}\n\\end{figure}\n\nFigure \\ref{PDFplots} shows the density functions of the output distributions at two points. The distribution at point (0,0) indeed has a higher variance. \n\n\n\n\n\n\\section{Real Data Analysis}\n\\label{sec:realdata}\n\n\\subsection{Data Description}\n\nWe illustrate our proposed method on a medical imaging diagnosis application. We remark that, although the example dataset is small in size, with only thousands of image scans, our method is equally applicable to both small and large datasets. \n\nAlzheimer's Disease (AD) is the leading form of dementia in elderly subjects, and is characterized by progressive and irreversible impairment of cognitive and memory functions. With the aging of the worldwide population, it has become an international imperative to understand, diagnose, and treat this disorder. The goal of the analysis is to diagnose patients with AD based on their anatomical magnetic resonance imaging (MRI) scans. 
Being able to provide an explicit uncertainty quantification for this classification task, which is challenging and high-risk, is especially meaningful. The dataset we analyzed was obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI). For each patient, in addition to his or her diagnosis status as AD or normal control, two cognitive scores were also recorded. One is the Mini-Mental State Examination (MMSE) score, which examines orientation to time and place, immediate and delayed recall of three words, attention and calculation, language, and vision-constructional functions. The other is the Global Clinical Dementia Rating (CDR-global) score, which is a combination of assessments of six domains, including memory, orientation, judgment and problem solving, community affairs, home and hobbies, and personal care. Although MMSE and CDR-global are not used directly for diagnosis, their values are strongly correlated with and carry crucial information about one's AD status. Therefore, we dichotomized the cognitive scores and used them as labels in addition to the diagnosis status. \n\nWe used the ADNI 1-Year 1.5T dataset, with 1,660 images in total. We resized all the images to the dimension $96\\times96\\times80$. The diagnosis contains three classes: normal control (NC), mild cognitive impairment (MCI), and Alzheimer's disease (AD). Among them, MCI is a prodromal stage of AD. Since the main motivation is to identify patients with AD, we combined NC and MCI as one class, referred to as NC+MCI, and AD as the other class, and formulated the problem as a binary classification task. We used three types of assessments to obtain the three classification labels: the doctor's diagnostic assessment, the CDR-global score, and the MMSE score. For the CDR-global score, we used 0 for NC, 0.5 for MCI, and 1 for AD. For the MMSE score, we used 28-30 as NC, 24-27 as MCI, and 0-23 as AD. 
Table \\ref{data} summarizes the number of patients in each class with respect to the three different assessments. \n\n\\begin{table}[t]\n\\begin{center}\n\\begin{tabular}{lccc}\n & Diagnosis & CDR-global & MMSE \\\\ \\hline\nNC & 500 & 664 & 785 \\\\\nMCI & 822 & 830 & 570 \\\\\nAD & 338 & 166 & 305 \\\\ \\hline\nTotal & 1660 & 1660 & 1660\\\\\n\\end{tabular}\n\\caption{Number of patients in each class under the three assessments.}\n\\label{data}\n\\end{center}\n\\end{table}\n\n\n\n\\subsection{Classifier and Results}\n\nFigure~\\ref{arch} describes the architecture of our neural-network-based classifier. We used two consecutive 3D convolutional filters followed by max pooling layers. The input is a $96\\times96\\times80$ image. The first convolutional kernel size is $5\\times5\\times5$, and the max pooling kernel is $5\\times5\\times5$. The second convolutional kernel is $3\\times3\\times3$, and the following max pooling kernel is $2\\times2\\times2$. We used a batch size of 16 and a learning rate of $10^{-6}$. We used a Beta mixture with $K=3$ components. \n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=8.2cm]{images\/Diagram.png}\n\\caption{Architecture of the neural network used in the real data experiment.}\n\\label{arch}\n\\end{figure}\n\nWe randomly selected 90\\% of the data for training and the remaining 10\\% for testing. We plotted the credible intervals of all 166 testing subjects against their predicted probability of having AD in Figure \\ref{real}(a). We then separated the testing data into three groups: the subjects whose assessments are unanimously labeled as NC+MCI (green dots), the subjects whose assessments are unanimously labeled as AD (red dots), and the subjects whose assessments are a mix of NC+MCI and AD (blue dots, referred to as MIX). 
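As described earlier, the final layer of the classifier must map raw network outputs to valid mixture parameters: a softmax for the weights and an exponential for the $\bm\alpha$'s. A minimal NumPy sketch of this output head for $K = 3$ (the function name and the layout of the raw output vector are our own illustrative choices):

```python
import numpy as np

def mixture_head(raw, K=3):
    """Map 3K raw network outputs to valid Beta-mixture parameters.

    raw : array of length 3K; the first K entries parameterize the
          weights, the remaining 2K entries the pairs (alpha_1^k, alpha_2^k).
    """
    raw = np.asarray(raw, dtype=float)
    logits, log_alpha = raw[:K], raw[K:]
    # softmax guarantees the weights are positive and sum to 1
    w = np.exp(logits - logits.max())
    w /= w.sum()
    # exponential guarantees the alpha parameters stay positive
    alpha = np.exp(log_alpha).reshape(K, 2)
    return w, alpha
```

Because both transformations are smooth and unconstrained in the raw outputs, the negative log-likelihood can be minimized by ordinary gradient descent without explicit constraints.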
\n\n\\begin{figure}[t!]\n\\centering\n\\subfloat[Credible interval for 166 subjects]{{\\includegraphics[width=4cm]{images\/real_2c3o_scatter.png}}}\n\\,\n\\subfloat[Patients with all NC+MCI label]{{\\includegraphics[width=4cm]{images\/2c3o_CI_NCMCI.png}}}\n\\,\n\\subfloat[Patients with all AD label]{{\\includegraphics[width=4cm]{images\/2c3o_CI_AD.png}}}\n\\,\n\\subfloat[Patients with mixed AD and NC+MCI label]{{\\includegraphics[width=4cm]{images\/2c3o_CI_MIX.png}}}\n\\caption{Credible intervals constructed in the real data experiment.}\n\\label{real}\n\\end{figure}\n\n\n\nWe observe that, for the patients in the NC+MCI category, 95\\% of them were estimated to have a probability of AD smaller than 0.1, with a tight credible interval of width smaller than 0.15. We further randomly selected five patients in the NC+MCI category and plotted their credible intervals in Figure \\ref{real}(b). Each has a probability of AD close to 0, with a tight credible interval. For patients in the AD category, most exhibit the same pattern of having a tight credible interval, with a few potential outliers; we randomly selected five of them and plotted their predicted classification probabilities with the associated credible intervals in Figure~\\ref{real}(c). We see that Subject 4 was classified as AD with only 0.45 probability but has a large credible interval of width 0.3. We took a closer look at this subject, and found that the wide interval may be due to inaccurate labeling. The threshold value we applied to dichotomize the MMSE score was 23, in that a subject with an MMSE score of 23 or below is labeled as AD. Subject 4 happens to fall exactly on this boundary. This explains why the classifier produced a wide credible interval. 
In Figure \\ref{real}(a), we also observe that the classifier is less confident in classifying the patients in the MIX category, in that almost all the blue dots have credible intervals of width above 0.15. We again randomly selected five patients in the MIX category and plotted their predicted classification probabilities with the corresponding credible intervals in Figure~\\ref{real}(d). Compared to Figure~\\ref{real}(b), the credible intervals for patients in the MIX category are much wider than those in the unanimous NC+MCI category. \n\n\n\n\n\n\n\\section{Conclusion}\n\nWe present a new approach, deep Dirichlet mixture networks, to explicitly quantify the uncertainty of the classification probability produced by deep neural networks. Our approach, simple but effective, takes advantage of the availability of multiple class labels for the same input sample, which is common in numerous scientific applications. It provides a useful addition to the inferential machinery for deep-neural-network-based learning.\n\nThere remain several open questions for future investigation. Methodologically, we currently assume that the multiple class labels for each observational sample are of the same quality. In practice, different sources of information may have different levels of accuracy. It is warranted to investigate how to take this into account in our approach. Theoretically, Petrone and Wasserman \\cite{petrone2002consistency} obtained the convergence rate of the Bernstein polynomials. Our Dirichlet mixture distribution should at least have a comparable convergence rate. This rate can guide us theoretically on how many components the mixture needs. We leave these problems for future research. \n\n\n\n\n\n\\newpage\n\n\\section{Dirichlet Mixture Networks}\nIn this section, we introduce the Dirichlet mixture networks, highlighting their flexibility in dealing with a wide range of uncertainty quantification tasks. 
We begin with the case of binary classification, where Dirichlet mixtures reduce to the simpler Beta mixtures. Although perhaps the simplest example, the binary case is sufficient to capture all ingredients of our general approach and, thus, no generality is lost. For completeness, extensions to the multi-class case are considered in Section \\ref{sec:extens-mult-labels}.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=8cm]{images\/introduction.PNG}\n\\caption{Illustration of the setting. Two or more labels are generated with the \\textit{same} probability $p_x$, which is \\textit{randomly} drawn from a distribution that we wish to estimate.}\n\\label{fig:cat}\n\\end{figure}\n\n\n\n\n\nLet $\\{1, 2\\}$ denote the two classes. Given an observational unit $x$ (e.g., the image in Figure~\\ref{fig:cat}), our model assumes that the probability $p_x$ that $x$ belongs to the first class is a \\textit{random variable}, instead of a deterministic value in $[0, 1]$. Denote by $f(p; x)$ defined on $[0, 1]$ the probability density function of $p_x$, which encodes the \\textit{intrinsic} uncertainty of the classification problem. In particular, merely knowing the mean probability $\\int_0^1 p f(p; x) \\dx p$ is not sufficient for informed decision-making. For example, it can happen that, for $x, x'$, their mean probabilities are the same but the densities are far apart from each other. \n\nOur Dirichlet mixture networks aim to estimate the density function $f(p;x)$ for each $x$. We begin by introducing the loss function to be minimized via deep learning.\n\n\\subsection{Loss function}\\label{sec:likelihood}\n\n\nA difficulty arising from this estimation problem is that $f$ in general can be any density function on $[0, 1]$. 
Recognizing this difficulty, we propose to simplify the problem by restricting our focus to the case where $f$ is a Beta mixture:\n\\begin{equation}\\label{eq:beta_density_mix}\nf(p; x) = \\sum_{k=1}^K w^k \\frac{p^{\\alpha^k_1 -1} (1-p)^{\\alpha^k_2 -1}}{\\B(\\alpha^k_1, \\alpha^k_2)},\n\\end{equation}\nwhere $\\B(\\cdot, \\cdot)$ is the Beta function, and the parameters $w^k, \\bm\\alpha^k = (\\alpha^k_1, \\alpha^k_2)$ are (smooth) functions of $x$, that is, $w^k = w^k(x), \\alpha^k_1 = \\alpha^k_1(x)$, and $\\alpha^k_2 = \\alpha^k_2(x)$. The weights $w^k$ satisfy $w^1 + \\cdots + w^K = 1$. With this density form in place, our goal is to estimate the parameters $\\alpha_1^k, \\alpha_2^k > 0$, and $w^k$ for $1 \\le k \\le K$. Before proceeding to the next step, it is worth pointing out that the restriction on the density function class is nearly harmless due to the universal approximation of Beta mixtures. Section~\\ref{sec:universaility} will discuss this in detail.\n\nThe next step is to obtain the negative log-likelihood function, which serves as the loss function used in the mixture networks. Let $x_1, \\ldots, x_n$ be the features of $n$ observational units, each of which is associated with labels $y_j^{(1)}, \\ldots, y_j^{(m_j)}$ taking values from $\\{1, 2\\}$ for $1 \\le j \\le n$. By integrating out $p$, the likelihood of $w^k, \\bm\\alpha^k$ for the observed pair $x_j, \\bm y_j = (y_j^{(1)}, \\ldots, y_j^{(m_j)})$ is \n\\[\n\\begin{aligned}\n&L_j(\\bm w, \\bm\\alpha^1, \\ldots, \\bm\\alpha^K; x_j, \\bm y_j) \\\\\n&= \\int_0^1 p^{\\sum_{l=1}^{m_j} \\bm{1}(y_j^{(l)}= 1)} (1-p)^{\\sum_{l=1}^{m_j} \\bm{1}(y_j^{(l)}=2)} f(p; x_j) \\dx p.\n\\end{aligned}\n\\]\nWrite $S_{ij} = \\sum_{l=1}^{m_j} \\bm{1}(y_j^{(l)} = i)$ for $i = 1, 2$ and $j = 1, \\ldots, n$. Note that $S_{ij}$ is the number of times $x_j$ is labeled $i$. 
Plugging \\eqref{eq:beta_density_mix} into the latest display gives\n\\[\n\\begin{aligned}\n&L_j(\\bm w, \\bm\\alpha^1, \\ldots, \\bm\\alpha^K; x_j, \\bm y_j)\\\\\n&= \\int_0^1 p^{S_{1j}} (1-p)^{S_{2j}} \\sum_{k=1}^K \\frac{w^k p^{\\alpha^k_1 -1} (1-p)^{\\alpha^k_2 -1}}{\\B(\\alpha^k_1, \\alpha^k_2)} \\dx p\\\\\n &= \\sum_{k=1}^K \\int_0^1 \\frac{w^k}{\\B(\\alpha^k_1, \\alpha^k_2)} p^{\\alpha^k_1 -1 + S_{1j}} (1-p)^{\\alpha^k_2 -1 + S_{2j}} \\dx p.\n\\end{aligned}\n\\]\nMaking use of a basic property of Beta functions, we get\n\\[\nL_j(\\bm w, \\bm\\alpha^1, \\ldots, \\bm\\alpha^K; x_j, \\bm y_j) = \\sum_{k=1}^K \\frac{w^k\\B(\\alpha^k_1 + S_{1j}, \\alpha^k_2 + S_{2j})}{\\B(\\alpha^k_1, \\alpha^k_2)}.\n\\]\nAggregating all the $n$ observed units, we finally obtain the full negative log-likelihood given as\n\\begin{equation}\\label{eq:full_like}\n\\begin{aligned}\n&-\\ell(\\bm w, \\bm\\alpha^1, \\ldots, \\bm\\alpha^K; x_1, \\bm y_1, \\ldots, x_n, \\bm y_n) = \\\\\n&-\\sum_{j=1}^n \\log \\left[\\sum_{k=1}^K \\frac{w^k(x_j)\\B(\\alpha^k_1(x_j) + S_{1j}, \\alpha^k_2(x_j) + S_{2j})}{\\B(\\alpha^k_1(x_j), \\alpha^k_2(x_j))} \\right].\n\\end{aligned}\n\\end{equation}\n\nMoving back to our neural network architecture, the negative log-likelihood $-\\ell$ serves as the loss function. In short, our aim is to learn $\\bm w(x), \\bm\\alpha^1(x), \\ldots, \\bm\\alpha^K(x)$ as functions of $x$ by minimizing \\eqref{eq:full_like} using the powerful machinery of deep learning.\n\n\n\n\n\\subsection{Credible Intervals}\n\\label{sec:credible-interval}\n\nIn this section, we discuss how to explicitly quantify the uncertainty for a freshly collected unit, say $x$. Let $\\hat{\\bm w}(x), \\hat{\\bm\\alpha}^1(x), \\ldots, \\hat{\\bm\\alpha}^K(x)$ be the functions learned by the deep mixture networks. 
For the unit $x$, the estimated distribution of the probability $p_x$ takes the form\n\\[\n\\hat f(p; x) = \\sum_{k=1}^K \\hat w^k(x) \\frac{p^{\\hat\\alpha^k_1(x) -1} (1-p)^{\\hat\\alpha^k_2(x) -1}}{\\B(\\hat\\alpha^k_1(x), \\hat\\alpha^k_2(x))}.\n\\]\nIntuitively, if the density $\\hat f(p; x)$ is widely spread over the unit interval $[0, 1]$, the classification problem at $x$ carries substantial uncertainty and, as a consequence, it is crucial to account for this uncertainty when interpreting the predictions of the neural networks.\n\nTo be concrete, in addition to giving a point estimator of the random probability $p_x$, $\\hat f(p; x)$ can be used to construct one-sided and two-sided credible intervals for $p_x$. For the point estimator, the mean of the estimated density, $\\int_0^1 p \\hat f(p; x)\\dx p$, is a natural estimator of $p_x$. Next, to construct credible intervals, let $\\alpha \\in (0, 1)$ be the nominal level. For a two-sided interval, let $\\widehat Q_{\\frac{\\alpha}{2}}$ and $\\widehat Q_{1 - \\frac{\\alpha}{2}}$ be the $\\alpha\/2$ and $1 - \\alpha\/2$ quantiles of the estimated density $\\hat f$, respectively. Using these quantiles, the two-sided credible interval for $p_x$ at level $1 - \\alpha$ takes the form\n\\[\n\\left[ \\widehat Q_{\\frac{\\alpha}{2}}, \\widehat Q_{1 - \\frac{\\alpha}{2}} \\right].\n\\]\nLikewise, the upper and lower credible intervals, both at level $1 - \\alpha$, are given by\n\\[\n\\left[0, \\widehat Q_{1 - \\alpha}\\right], \\text{ and } \\left[\\widehat Q_{\\alpha}, 1\\right],\n\\]\nrespectively.\n\n\\subsection{Theoretical Analysis}\\label{sec:universaility}\n\nThe uncertainty assessment carried out in Sections~\\ref{sec:likelihood} and \\ref{sec:credible-interval} considers the case where the density function is a Beta mixture. In this section, we show that this restriction is mild in the sense that any density function satisfying mild regularity conditions can be approximated well by Beta mixtures. 
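As a computational aside before turning to the theory, the quantiles $\widehat Q$ of Section~\ref{sec:credible-interval} have no closed form for a Beta mixture. In practice one would use a library's regularized incomplete Beta function; the sketch below instead inverts a midpoint-rule approximation of the mixture CDF in plain Python, with hypothetical parameter values standing in for a fitted model.

```python
import math

def log_beta(a, b):
    # log B(a, b) via log-gamma
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def mixture_quantiles(levels, w, alpha1, alpha2, n=100000):
    # Approximate the quantiles of the Beta mixture at the given (sorted
    # increasing) probability levels by accumulating a midpoint-rule CDF.
    h = 1.0 / n
    cdf = 0.0
    out = []
    targets = iter(levels)
    target = next(targets, None)
    for i in range(n):
        p = (i + 0.5) * h
        dens = sum(wk * math.exp((a1 - 1.0) * math.log(p)
                                 + (a2 - 1.0) * math.log(1.0 - p)
                                 - log_beta(a1, a2))
                   for wk, a1, a2 in zip(w, alpha1, alpha2))
        cdf += dens * h
        while target is not None and cdf >= target:
            out.append(p)
            target = next(targets, None)
    while target is not None:  # guard: grid mass may fall just short of 1
        out.append(1.0)
        target = next(targets, None)
    return out

# two-sided 95% credible interval [Q_{0.025}, Q_{0.975}] for a hypothetical fit
lo, hi = mixture_quantiles([0.025, 0.975], [0.4, 0.6], [2.0, 9.0], [6.0, 4.0])
```

One-sided intervals follow the same way, using the single quantile $\widehat Q_{1-\alpha}$ or $\widehat Q_{\alpha}$.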
Explicitly, denote by $\\mathcal P$ the set of all probability density functions $f$ on $[0, 1]$ with at most countably many discontinuities that satisfy\n\\begin{align}\\nonumber\n\\int_0^1 f(p) \\left|\\log f(p)\\right| \\dx p < \\infty.\n\\end{align}\nIn \\cite{article}, it is shown that any $f \\in \\mathcal{P}$ can be approximated arbitrarily closely by a sequence of Beta mixtures. Precisely, we have the following theorem.\n\\begin{theorem}[\\cite{article}]\\label{thm1}\nFor any $f \\in \\mathcal{P}$ and any $\\epsilon > 0$, there exists a Beta mixture distribution $f_{\\B}$ such that\n\\begin{align}\\nonumber\n\\mathrm{D}_{\\mathrm{KL}} \\left(f \\| f_{\\B} \\right) \\leq \\epsilon,\n\\end{align}\nwhere $\\mathrm{D}_{\\mathrm{KL}} (\\cdot \\| \\cdot)$ denotes the Kullback-Leibler divergence.\n\\end{theorem}\n\nThis theorem establishes the validity of approximating general density functions by Beta mixtures. In particular, Pinsker's inequality implies that the density can be approximated arbitrarily closely in the total variation distance. In short, the proof of this result starts by recognizing that $f$ can be accurately approximated by piecewise constant functions on $[0,1]$, since $f$ has at most countably many discontinuities. Next, as shown in \\cite{verdinelli1998bayesian,petrone2002consistency}, each constant piece is a limit of a sequence of Bernstein polynomials, which are finite Beta mixtures with integer parameters.\n\nFor completeness, we remark that it is unclear how to approximate $f(p; x)$ uniformly over $x$ using a sequence of Beta mixtures whose parameters are functions of $x$. It is plausible that the approximation property carries over if the function $f(p; x)$ is smooth in $x$ in a certain sense, for example, $\\int^1_0 |f(p; x) - f(p; x')| \\dx p \\le c \\|x - x'\\|$ for some $c > 0$. 
Provided that this is true, the universal approximation theorem ensures that the function $f$ can be approximated arbitrarily well by Beta mixtures whose parameters take the form of feed-forward networks. We leave this challenging problem for future investigation.\n\n\\subsection{Extension to Multiple Classes}\n\\label{sec:extens-mult-labels}\nTo conclude, we extend our method to the general case of multi-class labels. The new setting follows seamlessly from the prior development, except that now the labels $y_j^{(1)}, \\ldots, y_j^{(m_j)}$ take values in $\\{1, 2, \\ldots, d \\}$, where $d$ is the number of classes. Given a unit $x$, the multinomial distribution over $1, 2, \\ldots, d$ is represented by $(p_1, \\ldots, p_d)$, which, as a point in the simplex $\\Delta = \\{(c_1, \\ldots, c_d): c_i \\ge 0, c_1 + \\cdots + c_d = 1\\}$, is assumed to follow a Dirichlet mixture\n\\[\nf(\\bm p; x) = \\sum_{k=1}^K w^k \\frac1{\\B(\\bm\\alpha^k)} \\prod_{i=1}^d p_i^{\\alpha^k_i -1}.\n\\]\nAbove, the generalized Beta function takes the form\n\\[\n\\B(\\bm\\alpha) = \\frac{\\prod_{i=1}^d \\Gamma(\\alpha_i)}{\\Gamma(\\alpha_1 + \\cdots + \\alpha_d)}.\n\\]\nThe likelihood of the observations $y_j^{(1)}, \\ldots, y_j^{(m_j)}$ is\n\\[\nL_j = \\int_{\\Delta} \\left( \\prod_{i=1}^d p_i^{S_{ij}} \\right) \\sum_{k=1}^K w^k(x_j) \\frac1{\\B (\\bm\\alpha^k(x_j))} \\prod_{i=1}^d p_i^{\\alpha^k_i(x_j) -1} \\dx \\bm p.\n\\]\nRecall that $S_{ij} = \\sum_{l=1}^{m_j} \\bm{1}(y_j^{(l)}= i)$. 
As such, the (full) negative log-likelihood is\n\\[\n\\footnotesize\n\\begin{aligned}\n&-\\ell(\\bm w, \\bm\\alpha^1, \\ldots, \\bm\\alpha^K; x_1, \\bm y_1, \\ldots, x_n, \\bm y_n) \\\\\n&= -\\sum_{j=1}^n \\log \\left[\\sum_{k=1}^K \\frac{w^k(x_j)\\B(\\alpha^k_1(x_j) + S_{1j}, \\ldots, \\alpha^k_d(x_j) + S_{dj})}{\\B(\\bm\\alpha^k(x_j))} \\right].\n\\end{aligned}\n\\normalsize\n\\]\nThis is the loss function to be minimized in the Dirichlet mixture networks.\n
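This multi-class loss has the same log-domain structure as the two-class case, with the generalized Beta function computed from log-gamma values. A minimal sketch in plain Python, with hypothetical inputs standing in for the network outputs:

```python
import math

def log_gen_beta(alpha):
    # log of the generalized Beta function:
    # sum_i log Gamma(alpha_i) - log Gamma(alpha_1 + ... + alpha_d)
    return sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))

def dirichlet_mixture_nll(units):
    # units: list of (w, alphas, counts) per unit x_j, where alphas holds K
    # parameter vectors of length d and counts is (S_1j, ..., S_dj)
    nll = 0.0
    for w, alphas, counts in units:
        # per-component: log w^k + log B(alpha^k + S) - log B(alpha^k)
        terms = [math.log(wk)
                 + log_gen_beta([a + s for a, s in zip(alpha_k, counts)])
                 - log_gen_beta(alpha_k)
                 for wk, alpha_k in zip(w, alphas)]
        m = max(terms)  # log-sum-exp over the K components
        nll -= m + math.log(sum(math.exp(t - m) for t in terms))
    return nll
```

With $d = 2$ this reduces exactly to the two-class Beta-mixture loss, which gives a convenient consistency check.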