diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzhfip" "b/data_all_eng_slimpj/shuffled/split2/finalzzhfip" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzhfip" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction} \n\nIn this note we consider a fourth order invariant related to the classical \nYamabe invariant. \n\n\\subsection{Basic definitions} \n\nLet $n \\geq 5$ and let $(M,g)$ be a compact Riemannian \nmanifold without boundary, and define \n\\begin{equation} \\label{q_curv_defn} \nQ_g = -\\frac{1}{2(n-1)} \\Delta_g R_g - \\frac{2}{(n-2)^2} |\\operatorname{Ric}_g |^2 + \n\\frac{n^3 - 4n^2 + 16 n - 16}{8 (n-1)^2(n-2)^2} R_g^2 ,\n\\end{equation} \nwhere $\\Delta_g$ is the Laplace-Beltrami operator, $R_g$ is the \nscalar curvature, and $\\operatorname{Ric}_g$ is the Ricci curvature. We simplify \nthe expression of $Q_g$ in \\eqref{q_curv_defn} by introducing \nthe Schouten tensor \n\\begin{equation} \\label{schouten_defn} \nA_g = \\frac{1}{n-2} \\left ( \\operatorname{Ric}_g - \\frac{R_g g}{2(n-1)} \\right ) , \n\\qquad J_g = \\operatorname{tr}_g(A_g) = \\frac{R_g}{2(n-1)}, \n\\end{equation} \nso that \n\\begin{equation} \\label{q_curv_defn2} \nQ_g = - \\Delta_g J_g - 2 |A_g|^2 + \\frac{n}{2} J_g^2. \n\\end{equation} \n\nThe $Q$-curvature transforms nicely under a conformal change, \nnamely\n\\begin{equation} \\label{trans_rule1} \n\\widetilde g = u^{\\frac{4}{n-4}} g \\Rightarrow \nQ_{\\widetilde g} = \\frac{2}{n-4} u^{- \\frac{n+4}{n-4}} P_g(u), \n\\end{equation} \nwhere $P_g$ is the Paneitz operator \n\\begin{equation} \\label{paneitz_op_defn} \nP_g(u) = (-\\Delta_g)^2 (u) + \\operatorname{div} ( 4A_g (\\nabla u, \n\\cdot ) - (n-2) J_g \\nabla u) + \\frac{n-4}{2} Q_g u. \n\\end{equation} \nIn general the Paneitz operator transforms according to the \nrule \n\\begin{equation} \\label{trans_rule2}\n\\widetilde g = u^{\\frac{4}{n-4}} g \\Rightarrow P_{\\widetilde g}(v) \n= u^{-\\frac{n+4}{n-4}} P_g(uv).
\n\\end{equation} \nOne recovers \\eqref{trans_rule1} by substituting $v=1$ \ninto \\eqref{trans_rule2}. \n\nS. Paneitz \\cite{Pan1} introduced the operator \\eqref{paneitz_op_defn} \nand explored some of its transformation properties. Afterwards \nT. Branson \\cite{Bran1, Bran2} extended Paneitz' definition and studied \nthe associated $Q$-curvature. The interested reader can find excellent \nsurveys in \\cite{BG, CEOY, HY}. \n\nThe transformation rules \\eqref{trans_rule1} and \\eqref{trans_rule2} \nmotivate us to define the energy \nfunction $\\mathcal{Q} : [g] \\rightarrow \\R$, where $[g] = \\{ \n\\widetilde g = u^{\\frac{4}{n-4}} g : u \\in \\mathcal{C}^\\infty (M) , \nu>0\\}$ is the conformal class of $g$, by \n\\begin{equation} \\label{q_energy} \n\\mathcal{Q} (\\widetilde g) = \\frac{\\int_M Q_{\\widetilde g} \nd\\mu_{\\widetilde g}}{ (\\operatorname{Vol}_{\\widetilde g} (M)\n)^{\\frac{n-4}{n}}} = \\frac{2}{n-4} \\frac{\\int_M u P_g(u) \nd\\mu_g} {\\left ( \\int_M u^{\\frac{2n}{n-4}} d\\mu_g \\right )^{\\frac{n-4}{n}}} .\n\\end{equation}\nWe then define the conformal invariant \n\\begin{eqnarray} \\label{paneitz_inv1} \n\\mathcal{Y}_4^+ ([g],M) & = & \\inf_{\\widetilde g \\in [g]}\n\\mathcal{Q}(\\widetilde g) = \\inf \\left \\{ \\frac{\\int_M \nQ_{\\widetilde g} d\\mu_{\\widetilde g}} {(\\operatorname{Vol}\n_{\\widetilde g}(M))^{\\frac{n-4}{n}}} : \\widetilde g \\in [g] \\right \\} \n\\\\ \\nonumber \n& = & \\inf \\left \\{ \\frac{2}{n-4} \\frac{\\int_M u P_g(u) d\\mu_g}\n{\\left ( \\int_M u^{\\frac{2n}{n-4}} d\\mu_g\\right )^{\\frac{n-4}{n}}} : \nu \\in \\mathcal{C}^\\infty (M) , u>0 \\right \\} ,\n\\end{eqnarray} \nwhich is a fourth-order analog of the famous Yamabe invariant,\nand the differential invariant \n\\begin{equation} \\label{paneitz_inv2} \n\\mathbb{Y}_4^+(M) = \\sup_{[g] \\in \\mathfrak{c}} \n\\mathcal{Y}_4^+ ([g],M) = \\sup_{[g] \\in \\mathfrak{c}}\n\\inf_{\\widetilde g \\in [g]} \\left \\{ \\frac{ \\int_M \nQ_{\\widetilde g} d\\mu_{\\widetilde g}
} { ( \n\\operatorname{Vol}_{\\widetilde g} (M))^{\\frac{n-4}{n}}} \n\\right \\} , \n\\end{equation} \nwhere $\\mathfrak{c}$ is the space of conformal classes on the \nmanifold $M$. \nThe subscript $4$ in both $\\mathcal{Y}_4^+$ and in $\\mathbb{Y}_4^+$ \nrefers to the fact that the underlying differential operator is fourth-order, \nwhile the $+$ refers to the fact that we require that all test functions in the \ninfimum for $\\mathcal{Y}_4^+$ be positive. Naturally one may \nalso define \n$$\\mathcal{Y}_4([g],M) = \\inf \\left \\{ \\frac{2}{n-4} \\frac{\\int_M u \nP_g(u) d\\mu_g} { \\left ( \\int_M |u|^{\\frac{2n}{n-4}} d\\mu_g \\right \n)^{\\frac{n-4}{n}} } : u \\in \\mathcal{C}^\\infty(M), u \\not \\equiv 0 \n\\right \\},$$ \nand clearly $\\mathcal{Y}_4([g],M) \\leq \\mathcal{Y}_4^+([g],M)$. \n\n\\subsection{Scalar curvature and the Yamabe invariant} \n\nMuch of the work devoted to the Paneitz operator $P_g$ \nand its associated $Q$-curvature is motivated by results about the total \nscalar curvature functional and its associated Yamabe invariant. \n\nGiven a compact Riemannian manifold $(M,g)$ without boundary one \ndefines the total scalar curvature functional on the conformal class \n$[g]$ as\n\\begin{equation} \\label{tot_scal_curv1} \n\\mathcal{R} (\\widetilde g) = \\frac{\\int_M R_{\\widetilde g} \nd\\mu_{\\widetilde g}} {(\\operatorname{Vol}_{\\widetilde g} (M) \n)^{\\frac{n-2}{n}} }.
\n\\end{equation} \nOne can simplify this expression using the transformation rule \n\\begin{equation} \\label{trans_rule3} \n\\widetilde g = u^{\\frac{4}{n-2}} g \\Rightarrow R_{\\widetilde g} \n= \\frac{4(n-1)}{n-2} u^{-\\frac{n+2}{n-2}} \\mathcal{L}_g(u),\n\\end{equation} \nwhere $\\mathcal{L}_g$ is the conformal Laplacian \n\\begin{equation} \\label{conf_lap} \n\\mathcal{L}_g = -\\Delta_g + \\frac{n-2}{4(n-1)} R_g ,\n\\end{equation} \nwhich enjoys the transformation rule \n\\begin{equation} \\label{trans_rule4} \n\\widetilde g = u^{\\frac{4}{n-2}} g \\Rightarrow\n\\mathcal{L}_{\\widetilde g} (v) = u^{-\\frac{n+2}{n-2}} \n\\mathcal{L}_g (uv). \n\\end{equation} \nObserve that these transformation rules mean we can \nrewrite $\\mathcal{R}$ as \n\\begin{equation} \\label{tot_scal_curv2} \n\\mathcal{R}(u^{\\frac{4}{n-2}} g ) = \\frac{4(n-1)}{n-2} \n\\frac{\\int_M u \\mathcal{L}_g(u) d\\mu_g}{\\left ( \\int_M\nu^{\\frac{2n}{n-2}} d\\mu_g \\right )^{\\frac{n-2}{n}}}.\n\\end{equation} \n\nThe classical Yamabe invariants are \n\\begin{eqnarray} \\label{yam_inv1} \n\\mathcal{Y}([g],M) & = & \\inf_{\\widetilde g \\in [g]} \\mathcal{R}\n(\\widetilde g) = \\inf \\left \\{ \\frac{\\int_M R_{\\widetilde g} d\\mu_{\\widetilde g}}\n{(\\operatorname{Vol}_{\\widetilde g} (M))^{\\frac{n-2}{n}}} : \\widetilde g \n\\in [g] \\right \\} \\\\ \\nonumber \n& = & \\inf \\left \\{ \\frac{4(n-1)}{n-2} \\frac{\\int_M u \\mathcal{L}_g(u) \nd\\mu_g}{\\left ( \\int_M u^{\\frac{2n}{n-2}} d\\mu_g \\right )^{\\frac{n-2}{n}}}\n: u \\in \\mathcal{C}^\\infty(M) , u>0 \\right \\}\n\\end{eqnarray}\nand \n\\begin{equation} \\label{yam_inv2} \n\\mathbb{Y}(M) = \\sup_{[g] \\in \\mathfrak{c}} \\mathcal{Y}([g], M) \n= \\sup_{[g] \\in \\mathfrak{c}} \\inf_{\\widetilde g \\in [g]} \n\\frac{\\int_M R_{\\widetilde g} d\\mu_{\\widetilde g}}{ (\n\\operatorname{Vol}_{\\widetilde g} (M) )^{\\frac{n-2}{n}}} .
\n\\end{equation} \nIn contrast to the fourth-order case, in this situation the \nmaximum principle implies \n$$\\mathcal{Y}([g],M) = \\inf \\left \\{ \\frac{4(n-1)}{n-2} \\frac{\\int_M u \\mathcal{L}_g(u) \nd\\mu_g}{\\left ( \\int_M |u|^{\\frac{2n}{n-2}} d\\mu_g \\right )^{\\frac{n-2}{n}}}\n: u \\in \\mathcal{C}^\\infty(M) , u \\not \\equiv 0 \\right \\} . \n$$\nIn particular, minimizing the functional $\\mathcal{R}$ over \nall nontrivial functions in $W^{1,2}(M)$ will automatically yield \na positive minimizer. On the other hand, minimizers of $\\mathcal{Y}_4\n([g],M)$, if they exist, might change sign. \n\nYamabe \\cite{Y} first defined these two invariants while investigating \nthe problem of finding a constant scalar curvature metric \nin a given conformal class. Aubin \n\\cite{Aub} proved that $\\mathcal{Y}([g],M) \\leq \\mathcal{Y}\n([g_0],\\Ss^n)$, where $g_0$ is the round metric on the sphere $\\Ss^n$, \nand proved that if $\\mathcal{Y}([g],M) < \\mathcal{Y}([g_0],\\Ss^n)$ there \nexists a smooth, constant scalar curvature metric $\\widetilde g \\in [g]$\nsuch that $\\mathcal{R}(\\widetilde g) = \\mathcal{Y}([g],M)$. \nIn \\cite{Sch} Schoen completed Yamabe's program, proving \nthat $\\mathcal{Y}([g], M) < \\mathcal{Y}([g_0], \\Ss^n)$ for each \nconformal class $[g] \\neq [g_0]$. In particular, $\\mathcal{Y}([g],M) \n< \\mathcal{Y}([g_0], \\Ss^n)$ whenever $M$ is not the sphere. \n\nOn the other hand, the equality $\\mathbb{Y}(M) = \\mathbb{Y}(\\Ss^n)$ \nmay occur even when $M$ is not the sphere. In particular, \nSchoen \\cite{Sch_var} found an explicit sequence of metrics \n$g_k$ on the product $\\Ss^1 \\times \\Ss^{n-1}$ such that \n$\\mathcal{Y}([g_k], \\Ss^1 \\times \\Ss^{n-1}) \\rightarrow \\mathcal{Y}\n([g_0], \\Ss^n)$ as \n$k \\rightarrow \\infty$, and so $\\mathbb{Y}(\\Ss^1 \n\\times \\Ss^{n-1}) = \\mathbb{Y}(\\Ss^n)$.
As the underlying manifolds \nare not diffeomorphic, the equality above cannot be realized by \na smooth metric on $\\Ss^1 \\times \\Ss^{n-1}$.\n\n\\subsection{Previous results and our main theorem} \n\nWe summarize some previous theorems regarding the invariant \n$\\mathcal{Y}_4^+ ([g],M)$. Esposito \nand Robert \\cite{ER} showed that $\\mathcal{Y}_4^+([g],M)$ is \nfinite for each conformal class $[g]$ on $M$. \nTo state the next result we define \n$$\\mathcal{Y}_4^* ([g],M) = \\inf_{\\widetilde g \\in [g], R_{\\widetilde g} > 0}\n\\mathcal{Q} (\\widetilde g).$$\nGursky, Hang and Lin \\cite{GHL} proved that if $n= \\dim(M) \\geq 6$ and \nif $\\mathcal{Y}([g],M) > 0$ \nand $\\mathcal{Y}_4^*([g],M)>0$ then \n$$\\mathcal{Y}_4([g],M) = \\mathcal{Y}_4^+ ([g],M) = \\mathcal{Y}_4^* \n([g],M) .$$\nShortly thereafter Hang and Yang \\cite{HY} proved that if $\\mathcal{Y}([g],M)>0$ \nand $Q_g \\geq 0$ with $Q_g \\not \\equiv 0$ then \n$$\\mathcal{Y}_4([g],M) = \\mathcal{Y}_4^+([g],M) \\leq \\mathcal{Y}_4 \n([g_0], \\Ss^n),$$ \nand that equality in the last inequality implies $[g] = [g_0]$. Moreover, \nunder these hypotheses there exists a smooth, constant $Q$-curvature \nmetric $\\widetilde g \\in [g]$ such that $\\mathcal{Q}(\\widetilde g) = \n\\mathcal{Y}_4^+ ([g],M)$. \n\nOur main result is the following theorem. \n\\begin{thm} \\label{main_thm} \nThere exists a sequence of metrics $g_k$ on the product \n$\\Ss^1 \\times \\Ss^{n-1}$ such that $\\mathcal{Y}_4^+ ([g_k], \n\\Ss^1 \\times \\Ss^{n-1}) \n\\rightarrow \\mathcal{Y}_4^+ ([g_0], \\Ss^n)$, where $g_0$ is the standard \nround metric on $\\Ss^n$. As a consequence $\\mathbb{Y}_4^+ \n(\\Ss^1 \\times \\Ss^{n-1}) = \\mathbb{Y}_4^+ (\\Ss^n)$. \n\\end{thm}\n\n\\begin{rmk} The theorem of Hang and Yang \\cite{HY} \nreferenced above implies \nthe equality $\\mathbb{Y}_4^+ (\\Ss^{n-1} \\times \\Ss^1) = \\mathbb{Y}_4^+\n(\\Ss^n)$ cannot be realized by a smooth metric on $\\Ss^{n-1} \\times \n\\Ss^1$. 
\n\\end{rmk}\n\nWe base our proof of Theorem \\ref{main_thm} on the explicit \nexamples of the Delaunay metrics recently discovered by \nFrank and K\\\"onig \\cite{FK}, following the example of \nR. Schoen \\cite{Sch_var}. \n\n\\section{Proof of our main theorem} \n\nIn this section we present a proof of Theorem \\ref{main_thm}, \nusing the Delaunay metrics of Frank and K\\\"onig as our sequence \nof metrics. We first present some preliminary facts we require \nin our proof, and then carefully describe the Delaunay metrics, \nverifying some of their properties. Finally we complete the \nproof of Theorem \\ref{main_thm}. \n\n\\subsection{Preliminaries} \n\nWe begin with the well-known variational characterization of \nconstant $Q$-curvature metrics. One can find the following \ncomputation in \\cite{Rob}, among other places, but we \ninclude it for the reader's convenience. \n\nIt will be convenient to let $p^\\# = \\frac{2n}{n-4}$, let $\\| \\cdot \\|_p$\ndenote the $L^p$-norm on $(M, d\\mu_g)$, and define the bilinear \nform $\\mathcal{E} (u,v) = \\int_M v P_g(u) d\\mu_g$. Observe that \n\\begin{eqnarray} \\label{paneitz_bilin_form} \n\\mathcal{E} (u,v) & = & \\int_M v P_g(u) d\\mu_g \\\\ \\nonumber \n& = & \\int_M v \\left ( \\Delta_g ^2 u+ 4 \\operatorname{div} (A_g(\\nabla u, \\cdot))\n- (n-2) \\operatorname{div} (J_g \\nabla u ) + \\frac{n-4}{2} Q_g u \\right ) d\\mu_g\n\\\\ \\nonumber \n& = & \\int_M \\Delta_g v \\Delta_g u - 4 A_g(\\nabla u, \\nabla v) + (n-2) J_g \n\\langle \\nabla u, \\nabla v \\rangle + \\frac{n-4}{2} Q_g uv d\\mu_g \n\\\\ \\nonumber \n& = & \\int_M u \\left ( \\Delta_g ^2 v+ 4 \\operatorname{div} (A_g(\\nabla v, \\cdot))\n- (n-2) \\operatorname{div} (J_g \\nabla v ) + \\frac{n-4}{2} Q_g v \\right ) d\\mu_g\n\\\\ \\nonumber \n& = & \\mathcal{E} (v,u) ,\n\\end{eqnarray} \nand so $\\mathcal{E}$ is symmetric. We denote $\\mathcal{E}(u,u) \n= \\mathcal{E}(u)$.
\n\n\\begin{lemma} \nLet $W^{2,2}_+(M)$ denote the subspace of (almost everywhere) \npositive functions in $W^{2,2}(M)$. The functional \n$$ W^{2,2}_+(M) \\ni \nu \\mapsto \\mathcal{Q}(u^{\\frac{4}{n-4}} g) $$\nis differentiable and its total derivative is \n\\begin{equation} \\label{derivative_paneitz_func}\nD\\mathcal{Q}(u) (v) = \\frac{4}{(n-4) \\| u \\|_{p^\\#}^2} \\int_M\nv \\left ( P_g(u) - \\| u \\|_{p^\\#}^{-p^\\#} \\mathcal{E}(u) u^{\\frac{n+4}{n-4}}\n\\right ) d\\mu_g .\n\\end{equation} \n\\end{lemma}\n\n\\begin{proof} \nLet $u \\in W^{2,2}_+(M) \\cap \\mathcal{C}^0(M)$ and choose $v \\in W^{2,2}(M)$ such that \n$0<\\sup |v| < \\frac{1}{2} \\inf u$, which is possible because $M$ is a \ncompact manifold. In particular, $u+tv \\in W^{2,2}_+(M)$ for \n$04$ we have $0 < v_{cyl} < 1$ and \n$$v_{sph} (0) = 1 = \\max (v_{sph} (t)), \\qquad \\dot v_{sph}(t) < 0 \n\\textrm{ for }t>0, \\qquad \\dot v_{sph} (t) >0 \\textrm{ for }t< 0.$$\n\nFrank and K\\\"onig recently classified all positive global solutions \nof the ODE \\eqref{paneitz_ode1}, proving there exists a periodic \nsolution $v_a$ for each $a \\in (v_{cyl},1)$ attaining its maximal \nvalue of $a$ when $t=0$. Moreover, they show any global, \npositive solution of \\eqref{paneitz_pde1} must either have the \nform $v(t) = v_a(t+T)$ or $v(t) = (\\cosh (t+T))^{\\frac{4-n}{2}}$ for \nsome $T \\in \\R$, \nor $v \\equiv \\left ( \\frac{n(n-4)}{n^2-4} \\right )^{\\frac{n-4}{8}}$. We \ncall $g_{v_a} = v_a^{\\frac{4}{n-4}} (dt^2 + d\\theta^2)$ the \n{\\bf Delaunay metric} with Delaunay parameter $a$. \n\nEach solution $v_a$ is periodic with period $T_a$, attains its maximal \nvalue at each integer multiple of $T_a$, attains its minimal value at \neach half-integer multiple of $T_a$, and is symmetric about each \nof its critical points.
Moreover, the period $T_a$ is an increasing \nfunction of $a$ with $\\lim_{a \\nearrow 1} T_a = \\infty$ and \n$\\lim_{a \\searrow v_{cyl}} T_a = T_{cyl}$, where $T_{cyl}$ is \nthe formal period of $v_{cyl}$, given by \n\\begin{equation} \\label{cyl_period} \nT_{cyl} = \\frac{2\\pi}{\\mu}, \\qquad \\mu = \\frac{1}{2} \\sqrt{ \n\\sqrt{n^4 - 64 n + 64} - n(n-4) - 8}. \n\\end{equation} \nThe period $T_{cyl}$ is the fundamental period of the linearization \nof the operator $P_g$, linearized about the cylindrical solution (see \nSection 3.3 of \\cite{R}). One can also show $\\sup v_a(t) \n\\leq 1$ for each Delaunay parameter $a$. We \nlet $\\epsilon (a) = \\min_{t \\in \\R} v_a(t)$. \n\nWe define the energy \n\\begin{equation} \\label{del_energy} \n\\mathcal{H} (v) = - \\dot v \\dddot v + \\frac{1}{2} (\\ddot v)^2 + \\left ( \n\\frac{n(n-4)+8}{4} \\right ) \\dot v^2 - \\frac{n^2(n-4)^2}{32} v^2 + \n\\frac{(n-4)^2(n^2-4)}{32} v^{\\frac{2n}{n-4}}.\n\\end{equation} \nDifferentiating $\\mathcal{H}$ with respect to $t$ we find \n$$ \\frac{d}{dt} \\mathcal{H} = \n-\\dot v \\left ( \\ddddot v - \\left ( \\frac{n(n-4)+8}{2} \\right ) \\ddot v + \n\\frac{n^2(n-4)^2}{16} v - \\frac{n(n-4)(n^2-4)}{16} v^{\\frac{n+4}{n-4}} \n\\right ) ,$$\nand so $\\mathcal{H}(v)$ is constant if $v$ satisfies \\eqref{paneitz_ode1}. \nEvaluating this energy on the cylindrical and spherical \nsolutions we find \n\\begin{equation} \\label{cyl_sph_energy} \n\\mathcal{H}_{cyl} = \\mathcal{H} (v_{cyl}) = - \\frac{n(n-4)^2}{8} \n\\left ( \\frac{n(n-4)}{n^2-4} \\right )^{\\frac{n-4}{4}} < 0, \\qquad \n\\mathcal{H}_{sph} = \\mathcal{H} (v_{sph}) = 0. \n\\end{equation} \n\nRestricting attention to the $(v,\\dot v)$ phase plane we see \nthat the level set $\\{ \\mathcal{H} = 0 \\} \\cap \\{ \\ddot v = 0, \n\\dddot v = 0\\}$ consists entirely of the solution curve of $v_{sph}$ \ntogether with the point $(0,0)$.
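As a quick numerical sanity check of \eqref{del_energy} and \eqref{cyl_sph_energy} (our own illustration, not part of the proof), one can evaluate the energy $\mathcal{H}$ on the cylindrical and spherical solutions for a sample dimension; the choice $n=6$ below is arbitrary.

```python
# Sanity check of the energy identities (cyl_sph_energy) for n = 6.
# Illustrative only; the finite-difference derivatives are approximate.
import math

n = 6

def H(v, v1, v2, v3):
    # H(v) = -v' v''' + (1/2)(v'')^2 + ((n(n-4)+8)/4)(v')^2
    #        - n^2 (n-4)^2/32 v^2 + (n-4)^2 (n^2-4)/32 v^(2n/(n-4))
    return (-v1 * v3 + 0.5 * v2 ** 2
            + (n * (n - 4) + 8) / 4 * v1 ** 2
            - n ** 2 * (n - 4) ** 2 / 32 * v ** 2
            + (n - 4) ** 2 * (n ** 2 - 4) / 32 * v ** (2 * n / (n - 4)))

# Cylindrical solution: the constant v_cyl = (n(n-4)/(n^2-4))^((n-4)/8).
v_cyl = (n * (n - 4) / (n ** 2 - 4)) ** ((n - 4) / 8)
H_cyl = -n * (n - 4) ** 2 / 8 * (n * (n - 4) / (n ** 2 - 4)) ** ((n - 4) / 4)
assert abs(H(v_cyl, 0.0, 0.0, 0.0) - H_cyl) < 1e-12 and H_cyl < 0

# Spherical solution v_sph(t) = (cosh t)^((4-n)/2): H should vanish.
f = lambda t: math.cosh(t) ** ((4 - n) / 2)
h, t = 1e-3, 0.7
v = f(t)
v1 = (f(t + h) - f(t - h)) / (2 * h)
v2 = (f(t + h) - 2 * v + f(t - h)) / h ** 2
v3 = (f(t + 2 * h) - 2 * f(t + h) + 2 * f(t - h) - f(t - 2 * h)) / (2 * h ** 3)
assert abs(H(v, v1, v2, v3)) < 1e-4
print("energy identities verified for n =", n)
```

Both assertions pass: the constant solution reproduces the closed-form value of $\mathcal{H}_{cyl}$, and the spherical solution sits on the zero level set.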
For each \n$$0 < H < -\\mathcal{H}_{cyl} = \\frac{n(n-4)^2}{8} \n\\left ( \\frac{n(n-4)}{n^2-4} \\right )^{\\frac{n-4}{4}}$$ \nthe level set $\\{ \\mathcal{H} = - H\\} \\cap \\{ \\ddot v = 0, \\dddot v = 0\\}$ is \na closed curve associated to the Delaunay solution $v_a$ for \nsome $a \\in (v_{cyl}, 1)$. Combining Theorems 1, 2, and 3 \nof \\cite{vdB} we find that these solution curves do not cross \nand that the energy level completely determines the Delaunay solution. \nIn particular, we see \nthat $\\lim_{a \\nearrow 1} \\epsilon(a) = 0$. We sketch some of these \nsolution curves in the phase plane in Figure \\ref{phase_plane_fig} \nbelow. \n\n\\begin{figure}[h] \n\\centering\n\\begin{tikzpicture}\n\\coordinate (0) at (-2,-2); \n\\draw[->] (0,1) -- (8,1) coordinate[label = {below:$v$}] (xmax);\n\\draw[->] (1,-2) -- (1,4) coordinate[label = {right:$\\dot v$}] (ymax);\n\\draw[thick, blue,->>] (1,1) .. controls (2,4) and (5.5,4) .. (6,1);\n\\draw[thick, blue, ->>] (6,1) .. controls (5.5,-2) and (2,-2).. (1,1);\n\\draw[thick,red,->>] (2,1) .. controls (2.5,3) and (4.5,3) .. (5,1);\n\\draw[thick,red,->>] (5,1) .. controls (4.5,-1) and (2.5,-1) .. (2,1); \n\\draw[thick, green,->>] (2.8,1) .. controls (3.1,2) and (4,2) .. (4.2,1); \n\\draw[thick,green,->>] (4.2,1) .. controls (4,0) and (3.1,0) .. (2.8,1); \n\\node at (3.5,1) {$*$}; \n\\node at (6.8,-0.8) {cylindrical solution}; \n\\draw [->] (6.5,-0.5) -- (3.55,1); \n\\draw [->] (7,4) -- (5.5,2.3); \n\\node at (7.1,4.2) {spherical solution}; \n\\end{tikzpicture} \n\\caption{This figure shows the level curves of $\\mathcal{H}$ \nin the $(v,\\dot v)$ phase-plane.} \\label{phase_plane_fig}\n\\end{figure}\n\n\\subsection{Completion of the proof}\n\nFor each $T>0$ we consider metrics of the form $g_v = \nv^{\\frac{4}{n-4}} (dt^2 + d\\theta^2)$ on $\\Ss^1_T \\times \n\\Ss^{n-1}$, identifying $\\Ss^1_T$ with the \ninterval $[-T\/2,T\/2]$.
Observe that the metric $g_1 = dt^2 + \nd\\theta^2$ has constant positive scalar curvature equal to \n$(n-1)(n-2)$, as well as positive $Q$-curvature $\\frac{n^2(n-4)}{8}$, \nand so we may apply the theorem of Hang and Yang to \nconclude \n$$\\mathcal{Y}_4^+ ([dt^2 + d\\theta^2], \\Ss^1_T \\times \\Ss^{n-1})\n= \\mathcal{Y}_4 ([dt^2+d\\theta^2], \\Ss^1_T \\times \\Ss^{n-1}) < \n\\mathcal{Y}_4^+ ( [g_0], \\Ss^n).$$\nEach critical point of $\\mathcal{Q}$ \nin the conformal class $[dt^2 + d\\theta^2]$ must be a \nconstant $Q$-curvature metric on $\\Ss^1 \\times \\Ss^{n-1}$. \nWe pull this constant $Q$-curvature metric on $\\Ss^1_T \\times \\Ss^{n-1}$ \nback to the universal cover $\\R \\times \\Ss^{n-1}$, obtaining a \nsmooth, positive, $T$-periodic function \n$v : \\R \\times \\Ss^{n-1} \\rightarrow (0, \\infty)$\nsatisfying \\eqref{paneitz_pde1}. As we discussed above, Frank and \nK\\\"onig classified these solutions as either the constant \ncylindrical solution $v_{cyl}$, translates of the spherical \nsolution $v_{sph}$, or translates of a Delaunay solution \n$v_a$ for some $a \\in (v_{cyl}, 1)$. \n\nThe number of constant $Q$-curvature metrics in the \nconformal class $[dt^2 + d\\theta^2]$ on $\\Ss^1_T \\times \n\\Ss^{n-1}$ depends on $T$ in the following way. As in \nour previous discussion, we normalize the value of the \n$Q$-curvature to be $\\frac{n(n^2-4)}{8}$. The cylindrical \nsolution $v_{cyl}$ is the only solution when $0 < \nT \\leq T_{cyl}$, where $T_{cyl}$ is given in \\eqref{cyl_period}. \nFor $T_{cyl} < T \\leq 2T_{cyl}$ we have two constant $Q$-curvature \nmetrics, namely the cylinder and the Delaunay metric with Delaunay \nparameter $a$ such that $T = T_a$. When $2T_{cyl} < T\\leq 3T_{cyl}$ \nwe obtain $3$ constant $Q$-curvature metrics, namely the \ncylindrical solution $v_{cyl}$, the Delaunay solution $v_a$ \nsuch that $T_a = T$, and the Delaunay solution $v_\\alpha$ \nsuch that $T_\\alpha = T\/2$.
Continuing inductively, when \n$(k-1)T_{cyl} < T \\leq k T_{cyl}$ we obtain $k$ distinct \nconstant $Q$-curvature metrics, namely the cylindrical \nsolution $v_{cyl}$ together with the Delaunay solution $v_{a_l}$\nwith $T_{a_l} = T\/l$ for each $l = 1,2,\\dots,k-1$. \n\nFor each $T> T_{cyl}$ the Delaunay solution $v_{a_1}$ such that \n$T_{a_1} = T$ solves the initial value problem $v_{a_1}(0) = a_1$, \n$\\dot v_{a_1}(0) = 0$. By the results in \\cite{vdB} these two initial \nconditions actually uniquely determine a solution \nof \\eqref{paneitz_ode1}. Combining this uniqueness \nwith the fact that $\\lim_{a \\nearrow 1} T_a = \\infty$ we \nconclude $v_a \\rightarrow v_{sph} = (\\cosh t)^{\\frac{4-n}{2}}$ \nas $a \\nearrow 1$. \nMoreover, because each $\\| v_a \\|_\\infty \\leq 1$, this \nconvergence is uniform on compact subsets by the Arzela-Ascoli \ntheorem. \n\nNext we show that $v_{a_1}$ is the only stable critical \npoint of $\\mathcal{Q}$ among $\\{ v_{cyl}, v_{a_1}, v_{a_2}, \n\\dots, v_{a_{k-1}} \\}$. The function $w_{a_l} = \\dot v_{a_l}$ \nsatisfies $L_{a_l} (w_{a_l}) = 0$, where $L_{a_l}$ is the linearization \nof \\eqref{paneitz_pde1} about $v_{a_l}$. Observe that \n$$\\{ t \\in [-T\/2, T\/2] : w_{a_l} > 0\\} = \\bigcup_{j=-\\lfloor l\/2 \n\\rfloor}^{\\lfloor l\/2 \\rfloor} \\left ( j T_{a_l} , \\left ( \\frac{2j+1}{2} \\right ) \nT_{a_l} \\right ),$$\nwhere $\\lfloor l\/2 \\rfloor$ denotes the greatest non-negative integer \nless than or equal to $l\/2$. When $l \\geq 2$ the number of nodal \ndomains combined with Sturm-Liouville theory implies $-L_{a_l}$ \nhas at least $l$ negative eigenvalues, and so $v_{a_l}$ cannot be a \nstable critical point of $\\mathcal{Q}$. Furthermore the function \n$w_0 = \\cos (\\mu t)$, where $\\mu$ is given by \\eqref{cyl_period}, \nsatisfies $L_{cyl} (w_0) =0$, where $L_{cyl}$ is the linearization \nof \\eqref{paneitz_pde1} about $v_{cyl}$.
When $T> 2 T_{cyl}$ the \nfunction $w_0$ has at least $2$ disjoint regions on which it is \npositive, so $v_{cyl}$ cannot be a stable critical point of $\\mathcal{Q}$ \nfor large values of $T$. \n\nWe conclude that $v_a$ minimizes $\\mathcal{Q}$ over \nthe conformal class $[dt^2 + d\\theta^2]$ on $\\Ss^1_{T_a} \n\\times \\Ss^{n-1}$, and so \n\\begin{eqnarray} \\label{del_tot_q_curv}\n\\mathcal{Y}_4^+ ([dt^2 + d\\theta^2], \\Ss^1_{T_a} \n\\times \\Ss^{n-1}) & = & \n\\mathcal{Q}(g_{v_a}) = \\frac{2}{n-4} \\frac{\\int_{\\Ss^1 \n\\times \\Ss^{n-1}} Q_{g_{v_a}} \nd\\mu_{g_{v_a}} }{(\\operatorname{Vol}_{g_{v_a}}(\\Ss^1 \n\\times \\Ss^{n-1}) )^{\\frac{n-4}{n}}} \\\\ \\nonumber \n& = & \\frac{2}{n-4} \\cdot \\frac{n(n^2-4)}{8} \\frac \n{\\operatorname{Vol}_{g_{v_a}}(\\Ss^1 \\times \\Ss^{n-1})}\n{(\\operatorname{Vol}_{g_{v_a}}(\\Ss^1 \n\\times \\Ss^{n-1}) )^{\\frac{n-4}{n}}} \\\\ \\nonumber \n& = & \\frac{n(n^2-4)}{4(n-4)} \\frac{ \\int_{-T_a\/2}^{T_a\/2} \\int_{\\Ss^{n-1}}\nv_a^{\\frac{2n}{n-4}} d\\theta dt} { \\left ( \\int_{-T_a\/2}^{T_a\/2} \n\\int_{\\Ss^{n-1}} v_a^{\\frac{2n}{n-4}} d\\theta dt \\right )^{\\frac{n-4}{n}}} \n\\\\ \\nonumber \n& = & \\frac{n(n^2-4)}{4(n-4)} |\\Ss^{n-1}|^{4\/n}\n\\left ( \\int_{-T_a\/2}^{T_a\/2} v_a^{\\frac{2n}{n-4}} dt \\right )^{4\/n} .\n\\end{eqnarray}\n\nFinally, we let $a \\nearrow 1$ in \\eqref{del_tot_q_curv} to see \n\\begin{eqnarray*} \n\\mathbb{Y}_4^+ (\\Ss^1 \\times \\Ss^{n-1}) & \\geq & \n\\lim_{a \\nearrow 1} \\mathcal{Y}_4^+ ([dt^2 + d\\theta^2], \n\\Ss^1_{T_a} \\times \\Ss^{n-1}) \\\\ \n& = & \\frac{n(n^2-4)}{4(n-4)} |\\Ss^{n-1}|^{4\/n} \n\\left ( \\int_{-\\infty}^\\infty (v_{sph} (t) )^{\\frac{2n}{n-4}} dt \n\\right )^{4\/n} \\\\ \n& = & \\frac{n(n^2-4)}{4(n-4)} |\\Ss^{n-1}|^{4\/n} \\left ( \n\\int_\\R (\\cosh t)^{-n} dt \\right )^{4\/n} \\\\ \n& = & \\frac{n(n^2-4)}{4(n-4)} |\\Ss^{n-1}|^{4\/n} \n\\left ( \\int_0^\\infty \\left ( \\frac{1+r^2}{2} \\right )^{-n} r^{n-1}\ndr \\right )^{4\/n}\\\\ \n& = & \\mathcal{Q} (g_0) =
\\mathcal{Y}_4^+ ([g_0], \n\\Ss^n) = \\mathbb{Y}_4^+(\\Ss^n),\n\\end{eqnarray*} \nwhere $r = e^{-t}$. This completes our proof. \\hfill $\\square$\n\n\n\\begin {thebibliography} {999}\n\n\\bibitem{Aub} T. Aubin. {\\it \\'Equations diff\\'erentielles non lin\\'eaires et \nprobl\\`eme de Yamabe concernant la courbure scalaire.} J. Math. Pures \nAppl. {\\bf 55} (1976), 269--296. \n\n\\bibitem{vdB} J. van den Berg. {\\it The phase-plane picture for a \nclass of fourth-order conservative differential equations.} J. Differential \nEquations {\\bf 161} (2000), 110--153. \n\n\\bibitem {Bran1} T. Branson. {\\it Differential operators canonically associated to a \nconformal structure.} Math. Scand. {\\bf 57} (1985), 293--345. \n\n\\bibitem {Bran2} T. Branson. {\\it Group representations arising from Lorentz \nconformal geometry.} J. Funct. Anal. {\\bf 74} (1987), 199--291.\n\n\\bibitem {BG} T. Branson and A. R. Gover. {\\it Origins, applications and generalisations \nof the $Q$-curvature.} Acta Appl. Math. {\\bf 102} (2008), 131--146. \n\n\\bibitem {CEOY} S.-Y. A. Chang, M. Eastwood, B. \\O rsted, and P. Yang. \n{\\it What is $Q$-curvature?} Acta Appl. Math. {\\bf 102} (2008), 119--125. \n\n\\bibitem {ER} P. Esposito and F. Robert. {\\it Mountain-pass critical points \nfor Paneitz-Branson operators.} Calc. Var. Partial Differential \nEquations {\\bf 15} (2002), 493--517. \n\n\\bibitem {FK} R. Frank and T. K\\\"onig. {\\it Classification of \npositive solutions to a nonlinear biharmonic equation with critical \nexponent.} Anal. PDE {\\bf 12} (2019), 1101--1113.\n\n\\bibitem {GHL} M. Gursky, F. Hang, and Y.-J. Lin. {\\it Riemannian \nmanifolds with positive Yamabe invariant and Paneitz operator.} \nInt. Math. Res. Not. {\\bf 2016} (2016), 1348--1367. \n\n\\bibitem {HY} F. Hang and P. Yang. {\\it Lectures on the fourth order $Q$-curvature\nequation.} Geometric analysis around scalar curvature, Lect. Notes Ser. Inst. Math. \nSci. Natl. Univ. Singap. {\\bf 31} (2016), 1--33.
\n\n\\bibitem {HY2} F. Hang and P. Yang. {\\it $Q$-curvature on a class of manifolds \nwith dimension at least $5$.} Comm. Pure Appl. Math. {\\bf 69} (2016), 1452--1491. \n\n\\bibitem {Pan1} S. Paneitz. {\\it A quartic conformally covariant differential operator \nfor arbitrary pseudo-Riemannian manifolds.} SIGMA Symmetry Integrability Geom. \nMethods Appl. {\\bf 4} (2008), 3 pages (preprint from 1983). \n\n\\bibitem {R} J. Ratzkin. {\\it On constant $Q$-curvature metrics with \nisolated singularities.} preprint, {\\tt arXiv:2001.07984}. \n\n\\bibitem {Rob} F. Robert. {\\it Fourth order equations with critical \ngrowth in Riemannian geometry.} private notes, available at \n{\\tt http:\/\/www.iecl.univ-lorraine.fr\/$\\sim$Frederic.Robert\/}\n\n\\bibitem{Sch} R. Schoen. {\\it Conformal deformation of a Riemannian \nmetric to constant scalar curvature.} J. Diff. Geom. {\\bf 20} (1984), 479--495. \n\n\\bibitem{Sch_var} R. Schoen. {\\it Variational theory for the total scalar \ncurvature functional for Riemannian metrics and related topics.} \nin {\\it Topics in calculus of variations.} Lecture Notes in Math. {\\bf 1365}, \nSpringer-Verlag (1989), 120--154. \n\n\\bibitem {Y} H. Yamabe. {\\it On the deformation of Riemannian structures \non a compact manifold.} Osaka Math. J. {\\bf 12} (1960), 21--37. \n\n\\end {thebibliography}\n\n\\end {document} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\t\n\tIn this paper, we are mainly concerned with the existence and nonexistence of positive solutions to the following semilinear elliptic inequalities\n\t\\begin{equation}\\label{ieq}\n\t\t\\Delta u+u^p\\left|\\nabla u\\right|^q\\leq0,\\quad\\mbox{on $G$},\n\t\\end{equation}\n\twhere $(p, q)\\in \\mathbb{R}^2$, $G=(V, E)$ is an infinite connected locally finite graph, and $V$\n\tis the collection of vertices, and $E$ is the collection of edges.
Throughout the paper,\n\tthere is at most one edge between any two\n\tdistinct vertices, and no edge from a vertex to itself.\n\t\n\tIf there is an edge connecting $x, y\\in V$, we say\n\t$x\\sim y$. On each edge, let us define an edge weight $\\mu: E \\to(0,\\infty)$, which satisfies\n\t$\\mu_{xy} = \\mu_{yx}$, and $\\mu_{xy} > 0$ if $x\\sim y$. Here $\\mu$ can be understood as a map from\n\t$V\\times V\\to[0,\\infty)$ by adding that $\\mu_{xy}=0$ provided that there is no edge connecting $x, y$.\n\tSuch a graph $G=(V, E, \\mu)$ with edge weight $\\mu$ is called a weighted graph. Sometimes, we use $(V, \\mu)$ to denote\n\tthe weighted graph $G$ for brevity.\n\t\n\tIn this paper, we say condition $(p_0)$ is satisfied on $G$ if there exists a constant $p_0\\geq 1$ such that for any $x\\sim y$ in $V$,\n\\begin{equation}\n\t\\frac{\\mu_{xy}}{\\mu(x)}\\geq \\frac{1}{p_0}.\\tag{$p_0$}\n\\end{equation}\n\t\n\t\n\tFor each vertex $x\\in V$, let us define the vertex measure $\\mu(x)=\\sum\\limits_{x \\sim y} \\mu_{xy}$,\n\tand the Laplace operator $\\Delta$ on $G$ (see \\cite{G2}) as\n\t\\begin{equation}\\label{lap}\n\t\\Delta u(x)=\\sum\\limits_{y \\sim x} \\frac{\\mu_{xy}}{\\mu(x)}(u(y)-u(x)),\\quad\\mbox{for $u\\in \\mathcal{l}(V)$},\n\t\\end{equation}\n\tand define the gradient form $\\Gamma$ (see \\cite{BHLLMY}) as\n\t$$\\Gamma(f,g)=\\sum\\limits_{y \\sim x}\\frac{\\mu_{xy}}{2\\mu(x)}(f(y)-f(x))(g(y)-g(x)), \\quad\\mbox{for $f,g\\in \\mathcal{l}(V)$},$$\nthen the norm of the gradient is defined by\n\t\\begin{align}\\label{gra}\n\t\t|\\nabla u(x)|=\\sqrt{\\Gamma(u,u)}=\\sqrt{\\sum\\limits_{y \\sim x}\\frac{\\mu_{xy}}{2\\mu(x)}(u(y)-u(x))^2},\n\t\t\\quad\\mbox{for $u\\in \\mathcal{l}(V)$},\n\t\\end{align}\n\twhere $\\mathcal{l}(V)$ is the collection\n\tof all real functions on $V$.
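The definitions \eqref{lap} and \eqref{gra} are straightforward to compute; the following sketch evaluates them on a hypothetical three-vertex path graph with unit edge weights (our own toy example, not taken from the references).

```python
# The weighted graph Laplacian (lap) and gradient norm (gra) on a small
# example: the path graph 0 -- 1 -- 2 with unit edge weights.  On this
# graph mu_xy / mu(x) >= 1/2, so condition (p_0) holds with p_0 = 2.
from math import sqrt

edge = {(0, 1): 1.0, (1, 2): 1.0}
mu = {}
for (x, y), w in edge.items():
    mu.setdefault(x, {})[y] = w
    mu.setdefault(y, {})[x] = w

def vertex_measure(x):
    # mu(x) = sum over edges incident to x of mu_xy
    return sum(mu[x].values())

def laplacian(u, x):
    # Delta u(x) = sum_{y ~ x} (mu_xy / mu(x)) (u(y) - u(x))
    m = vertex_measure(x)
    return sum(w / m * (u[y] - u[x]) for y, w in mu[x].items())

def grad_norm(u, x):
    # |grad u|(x) = sqrt( sum_{y ~ x} mu_xy / (2 mu(x)) (u(y) - u(x))^2 )
    m = vertex_measure(x)
    return sqrt(sum(w / (2 * m) * (u[y] - u[x]) ** 2 for y, w in mu[x].items()))

u = {0: 0.0, 1: 1.0, 2: 4.0}
print(laplacian(u, 1))   # (1/2)(0-1) + (1/2)(4-1) = 1.0
print(grad_norm(u, 1))   # sqrt(1/4 * 1 + 1/4 * 9) = sqrt(2.5)
```

Note that, in contrast to the continuum case, the discrete $|\nabla u|$ is built from squared differences over neighbors, which is why the gradient term in \eqref{ieq} interacts with the Laplacian in a genuinely nonlocal way.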
Fix some referenced vertex $o\\in V$, and for integer $n\\geq1$, let\n\t$$B(o,n):=\\{x \\in V: d(o,x) \\leq n\\}$$\n\tbe the closed ball centered at $o$ with radius $n$, and the volume of $B(o, n)$ be\n\t$$\\mu(B(o, n))=\\sum_{x\\in B(o,n)}\\mu(x).$$\n\t\nRecently, the study on elliptic equation on weighted graphs has attracted a lot of attentions, see \\cite{CM} \\cite{GHJ}, \\cite{GLY2} \\cite{GLY3}\n\\cite{HSZ}, \\cite{HWY} \\cite{LiuY}.\n\tIn this paper we would like to solve the following problem: what kind of sharp\n\tassumptions on $\\mu(B(o,n))$ can suffice the nonexistence of nontrivial positive solution $u$ to (\\ref{ieq})?\n\tThere are two folds in this problem: the first one is to find these volume growth and to prove the nonexistence\n\tresults; the second is to show these volume assumptions are sharp.\n\t\n\t\n\tBy using the volume assumption to obtain the nonexistence and existence of solution to elliptic differential inequalities\n\tis widely used in the literature. Recall the famous Nash-Williams' test (e.g. \\cite{W}): if\n\t\\begin{eqnarray}\\label{votest}\n\t\t\\sum^{\\infty}_{n=1}\\frac{n}{\\mu(B(o,n))}=\\infty,\n\t\\end{eqnarray}\n\tthen any nonnegative solution on $(V, \\mu)$ is identically equal to a constant, or equivalently to say, $(V, \\mu)$ is parabolic\n\tor recurrent.\n\t\n\n\t\n\tThe notion of parabolicity of graph can be regarded as a generalization of the parabolicity of manifolds,\n\tsee Cheng-Yau's paper \\cite{CY}, and Grigor'yan \\cite{G85}, Karp\\cite{K}, Varopoulos \\cite{V} for further developments.\n\t\n\t\n\tRecently, Gu, Sun, and Huang in \\cite{GSH} proved that, for $p>1$, if condition $(p_0)$ is satisfied, and if\n\t\\begin{eqnarray}\n\t\t\\mu(B(o,n))\\lesssim n^{\\frac{2p}{p-1}}(\\ln n)^{\\frac{1}{p-1}},\n\t\\end{eqnarray}\n\tthen $\\Delta u+u^p\\leq0$ admits no positive solution. 
While for $0<p\leq 1$, the inequality $\Delta u+u^p\leq0$ admits no positive solution on any weighted graph, so a volume restriction is only needed for $p>1$.

Gu-Sun-Huang's result can be considered as a discrete version of the one obtained by Grigor'yan-Sun in the manifold case \cite{GS1}, where they proved that if
\begin{eqnarray*}
\mu(B(o,r))\lesssim r^{\frac{2p}{p-1}}(\ln r)^{\frac{1}{p-1}}, \quad\mbox{for all large enough $r$},
\end{eqnarray*}
then there exists no nonnegative nontrivial solution to $\Delta u+u^p\leq0$ on a geodesically complete noncompact manifold. Here $\mu$ is the Riemannian measure on the manifold.

Motivated by these results, we would like to study problem (\ref{ieq}) involving gradient terms on weighted graphs. To express our classification more clearly, let us divide $\mathbb{R}^2$ into six parts (Figure \ref{fig1}):
\begin{align*}
	&G_1=\{(p,q)\,|\,p\geq 0,\ 1-p<q<2\},\quad G_2=\{(p,q)\,|\,q\geq 2\},\\
	&G_3=\{(p,q)\,|\,p<0,\ 1<q<2\},\qquad\quad\ G_4=\{(p,q)\,|\,p<0,\ q=1\},\\
	&G_5=\{(p,q)\,|\,p+q=1,\ 0<q\leq 1,\;\text{or}\;p+q=1,\ q<0\},\\
	&G_6=\{(p,q)\,|\,p<1-q,\ q<1,\;\text{or}\; (p,q)=(1,0)\}.
\end{align*}

\begin{figure}[h]
	\begin{tikzpicture}[x={(0.8cm,0cm)},y={(0cm,0.8cm)}]\label{fig1}
		\draw[->] (-6,0)--(6,0) ;
		\draw[->] (0,-4)--(0,6);
		\fill[green,opacity=0.7] (-6,6)--(6,6)--(6,4)--(-6,4) ;
		\fill[blue,opacity=0.7] (0,4)--(6,4)--(6,-4)--(0,2) ;
		\fill[yellow,opacity=0.7] (0,4)--(-6,4)--(-6,2)--(0,2);
		\fill[red,opacity=0.7] (0,2)--(-6,2)--(-6,-4)--(6,-4);
		\draw[very thick,dashed] (-6,2)--(0,2);
		\draw[very thick,dotted] (6,-4)--(2,0) (2,0)--(0,2);
		\fill[blue,opacity=0.7] (7.7,5.4) rectangle(8.3,4.8);
		\fill[green,opacity=0.7] (7.7,3.6) rectangle(8.3,3);
		\fill[yellow,opacity=0.7] (7.7,1.8) rectangle(8.3,1.2);
		\fill[red,opacity=0.7] (7.7,0) rectangle(8.3,-0.6);
		\fill[red,opacity=1] (2,0) circle(0.05);
		\draw[dashed] (7.5,-1.9)--(8.5,-1.9);
		\draw[dotted] (7.5,-3.3)--(8.5,-3.3);
		\node[above] at (0,6) {$q$};
		\node[right] at (6,0) {$p$};
		\node[below] at (-5,4) {\tiny{$q=2$}};
		\node[below] at (-5,2) {\tiny{$q=1$}};
		\node[below] at (1.7,0)
{\\tiny{$(1,0)$}};\n\t\t\t\\node[below] at (3,-1.6) {\\tiny{$p+q=1$}};\n\t\t\t\\node[right] at (8.3,5.1) {\\small{$G_1$}};\n\t\t\t\\node[right] at (8.3,3.3) {\\small{$G_2$}};\n\t\t\t\\node[right] at (8.3,1.6) {\\small{$G_3$}};\n\t\t\t\\node[right] at (8.3,-0.3) {\\small{$G_6$}};\n\t\t\t\\node[right] at (8.5,-1.9) {\\small{$G_4$}};\n\t\t\t\\node[right] at (8.5,-3.3) {\\small{$G_5$}};\n\t\t\\end{tikzpicture}\n\t\\caption{}\n\t\\end{figure}\n\t\n\n\tOur main results are as follows:\n\t\\begin{theorem}\\label{thm1} \\rm{\n\t\t\tLet $G=(V,E, \\mu)$ be an infinite, connected, locally finite graph on which condition $(p_0)$\n\t\t\tis satisfied. Fix some $o\\in V$.\n\t\t\t\\begin{enumerate}\n\t\t\t\t\\item[(I).]{Assume $(p,q)\\in G_1$. If\n\t\t\t\t\t\\begin{align}\\label{vol-1}\n\t\t\t\t\t\t\\mu(B(o,n)) \\lesssim n^{\\frac{2p+q}{p+q-1}}(\\ln{n})^{\\frac{1}{p+q-1}},\n\t\t\t\t\t\t\\quad\\mbox{for all $n>>1$},\n\t\t\t\t\t\\end{align}\n\t\t\t\t\tthen (\\ref{ieq}) admits no nontrivial positive solution.}\n\t\t\t\t\n\t\t\t\t\\item[(II).]{Assume $(p,q)\\in G_2$. If\n\t\t\t\t\t\\begin{align}\\label{vol-2}\n\t\t\t\t\t\t\\mu(B(o,n)) \\lesssim n^{2}\\ln{n},\\quad \\mbox{for all $n>>1$},\n\t\t\t\t\t\\end{align}\n\t\t\t\t\tthen (\\ref{ieq}) admits no nontrivial positive solution.}\n\t\t\t\t\n\t\t\t\t\\item[(III).]{Assume $(p,q)\\in G_3$. If\n\t\t\t\t\t\\begin{align}\\label{vol-3}\n\t\t\t\t\t\t\\mu(B(o,n)) \\lesssim n^{\\frac{q}{q-1}}(\\ln{n})^{\\frac{1}{q-1}},\\quad \\mbox{for all $n>>1$},\n\t\t\t\t\t\\end{align}\n\t\t\t\t\tthen (\\ref{ieq}) admits no nontrivial positive solution.}\n\t\t\t\t\n\t\t\t\t\\item[(IV).]{Assume $(p,q)\\in G_4$. For any given $\\alpha>0$, if\n\t\t\t\t\t\\begin{align}\\label{vol-4}\n\t\t\t\t\t\t\\mu(B(o,n)) \\lesssim n^{\\alpha}, \\quad \\mbox{for all $n>>1$},\n\t\t\t\t\t\\end{align}\n\t\t\t\t\tthen (\\ref{ieq}) admits no nontrivial positive solution.}\n\t\t\t\t\n\t\t\t\t\\item[(V).]{Assume $(p,q)\\in G_5$. 
There exists some $k_0>0$ such that for any given $\kappa$ satisfying $0<\kappa<1$, if
					\begin{align}\label{vol-5}
						\mu(B(o,n)) \lesssim e^{k_0 n^{\kappa}},\quad \mbox{for all $n>>1$},
					\end{align}
					then (\ref{ieq}) admits no nontrivial positive solution.}

		\end{enumerate}}
	\end{theorem}
	\begin{remark}
		In Theorem \ref{thm1} (II), by the Nash-Williams test, condition (\ref{vol-2}) can be relaxed to (\ref{votest}).
	\end{remark}

	\textbf{Notations.} In the above and below, the letters $C,C',C_0,C_1,c_0,c_1,\dots$ denote positive constants whose values are unimportant and may vary at different occurrences. $A\lesssim B$ means that the quotient of $A$ and $B$ is bounded from above, $A\gtrsim B$ means that the quotient of $A$ and $B$ is bounded from below, and $A\asymp B$ means that both $A\lesssim B$ and $A\gtrsim B$ hold.

\vskip1ex

For $(p,q)\in G_6$, we have the following nonexistence result.
\begin{theorem}\label{thm1-1} \rm{
Let $G=(V,E, \mu)$ be an infinite, connected, locally finite graph.
Under either of the following two assumptions:
\begin{enumerate}
\item[(1).]{condition $(p_0)$ is satisfied, and $p<1-q$, $q<0$;}
\item[(2).]{either $p<1-q$, $0\leq q<1$, or $(p,q)=(1,0)$;}
\end{enumerate}
(\ref{ieq}) admits no nontrivial positive solution.}
\end{theorem}

\begin{remark}\rm{
Let us compare our results with the ones obtained by Sun-Xiao-Xu on manifolds in \cite{SXX}.
On manifolds, $(p,q)\in\mathbb{R}^2$ is also divided into six parts
\begin{eqnarray*}
G_1^{\prime}=\{p\geq0,\ 1-p<q<2\},\quad G_2^{\prime},\ \dots,\ G_6^{\prime},
\end{eqnarray*}
defined analogously to $G_1,\dots,G_6$ above, and the volume conditions obtained there are parallel to those in Theorem \ref{thm1}.}
\end{remark}

To show that the volume conditions in Theorem \ref{thm1} are sharp, we construct suitable weights on trees. In the following, $T_N$ ($N\geq 2$) denotes a homogeneous tree, and we fix some $o\in T_N$ as its root.
	\begin{theorem}\label{thm2} \rm{
		Let $G=T_N$.
			\begin{enumerate}
				\item[(I).]{Assume $(p,q)\in G_1$. For any arbitrarily small $\epsilon>0$, there exists a weight $\mu$ on $T_N$ satisfying
					\begin{align}\label{e-vol-1}
						\mu(B(o,n)) \asymp n^{\frac{2p+q}{p+q-1}}(\ln{n})^{\frac{1}{p+q-1}+\epsilon},
						\quad\mbox{for $n\geq 2$,}
					\end{align}
					such that (\ref{ieq}) admits a nontrivial positive solution on $(V, \mu)$.}

				\item[(II).]{Assume $(p,q)\in G_2$.
For any arbitrarily small $\epsilon>0$, there exists a weight $\mu$ on $T_N$ satisfying
					\begin{align}\label{e-vol-2}
						\mu(B(o,n)) \asymp n^{2}(\ln{n})^{1+\epsilon},\quad \mbox{for $n\geq 2$,}
					\end{align}
					such that (\ref{ieq}) admits a nontrivial positive solution on $(V, \mu)$.}

				\item[(III).]{Assume $(p,q)\in G_3$. For any arbitrarily small $\epsilon>0$, there exists a weight $\mu$ on $T_N$ satisfying
					\begin{align}\label{e-vol-3}
						\mu(B(o,n)) \asymp n^{\frac{q}{q-1}}(\ln{n})^{\frac{1}{q-1}+\epsilon},\quad \mbox{for $n\geq 2$,}
					\end{align}
					such that (\ref{ieq}) admits a nontrivial positive solution on $(V, \mu)$.}

				\item[(IV).]{Assume $(p,q)\in G_4$. Given $\lambda>0$, there exists a weight $\mu$ on $T_N$ satisfying
					\begin{align}\label{e-vol-4}
						\mu(B(o,n)) \asymp e^{\lambda n}, \quad \mbox{for $n\geq 2$,}
					\end{align}
					such that (\ref{ieq}) admits a nontrivial positive solution on $(V, \mu)$.}

				\item[(V).]{Assume $(p,q)\in G_5$. Then there exist a weight $\mu$ on $T_N$ and a positive constant $\lambda$ satisfying
					\begin{align}\label{e-vol-5}
						\mu(B(o,n)) \asymp e^{\lambda n},\qquad \mbox{for $n\geq 2$,}
					\end{align}
					such that (\ref{ieq}) admits a nontrivial positive solution on $(V, \mu)$.}
		\end{enumerate}}
	\end{theorem}

	\section{Proofs of Theorems \ref{thm1} and \ref{thm1-1}}
	Before proceeding to the proof of Theorem \ref{thm1}, we first introduce Lemmas \ref{lem1} and \ref{lem2}, which play important roles in the proof of Theorem \ref{thm1}.
		\begin{lemma}\label{lem1}\rm{
			Let $(V,\mu)$ satisfy condition $(p_0)$.
If $u$ is a nonnegative solution to (\ref{ieq}), then either $u\equiv 0$ or $u>0$ and
			\begin{align}\label{ep}
				\frac{1}{p_0}\leq\frac{u(x)}{u(y)} \leq p_0, \quad\mbox{if $y\sim x$}.
		\end{align}}
	\end{lemma}
	\begin{proof}
		The proof is similar to that of \cite[Lemma 3.1]{GSH}.
	\end{proof}
In the following, for brevity, we denote
 $$\nabla_{xy}f=f(y)-f(x),\quad \mbox{for $f\in \mathcal{l}(V)$}.$$

Let $\Omega$ be a non-empty subset of $V$. We say that $u$ satisfies
\begin{align}\label{vieq}
	\Delta u(x)
	+ u(x)^{p} |\nabla u(x)|^{q}\leq 0,\quad\mbox{ for $x\in \Omega $},
\end{align}
if the inequality holds for all vertices $x\in \Omega$, where $\Delta u$ and $|\nabla u|$ are still defined by (\ref{lap}) and (\ref{gra}), respectively, with respect to the whole graph $V$.
\begin{lemma}\label{lem2}\rm{
		Assume that $p+q\neq 1$, that $(V,\mu)$ satisfies condition $(p_0)$,
 and that $\Omega$ is a non-empty subset of $V$.
		Let $u$ be a nontrivial positive function on $V$ which satisfies (\ref{vieq}), and $\frac{1}{p_0}\leq\frac{u(x)}{u(y)} \leq p_0$ for any $y\sim x$.
Furthermore, when $\Omega\not=V$, assume $u$ also satisfies $u(y)-u(x)\geq 0$ for any $(x,y) \in\{(x,y)|y\sim x, x\in\Omega\mbox{ and }y\in \Omega^c\}$.
		Then there exists a pair of positive numbers
		$(s,t)$ such that for any $0\leq\varphi\leq 1$ with compact support in $\Omega $, the following estimates hold:
		\begin{align}\label{est-1}
			&\sum\limits_{x \in \Omega }\mu(x) u(x)^{p-t}|\nabla u(x)|^q\varphi(x)^s \nonumber\\
			\leq& C_{p_0,t} (2s)^{\frac{2p+q+t(q-2)}{p+q-t}} t^{- \frac{p+t(q-1)}{p+q-t}}
			\left(\sum_{\substack {x,y\in \Omega \\ \nabla_{xy} \varphi \neq 0}} \mu_{xy}\varphi(x)^s u(x)^{p-t}|\nabla u(x)|^q\right)^{\frac{1-t}{p+q-t}}\nonumber\\
			&\times\left(\sum\limits_{x,y\in \Omega } \mu_{xy} |\nabla_{xy}\varphi|^{\frac{2p+q+t(q-2)}{p+q-1}}\right)^{\frac{p+q-1}{p+q-t}},
		\end{align}
		and
		\begin{align}\label{est-2}
			\sum\limits_{x \in \Omega }\mu(x) u(x)^{p-t} |\nabla u(x)|^q \varphi(x)^s
			\leq& C'_{p_0,t} (2s)^{\frac{2p+q+t(q-2)}{p+q-1}} t^{-\frac{p+t(q-1)}{p+q-1}}\nonumber\\
			&\times\sum\limits_{x,y\in \Omega } \mu_{xy} |\nabla_{xy}\varphi|^{\frac{2p+q+t(q-2)}{p+q-1}},
		\end{align}
		where
		$C_{p_0,t}= \frac{(\sqrt{2p_0}(1+p^t_0))^{\frac{p + t(q-1)}{p + q-t}+1} (p_0^{t+1})^{\frac{p + t(q-1)}{p + q-t}}}{4}$, $C'_{p_0,t} =( C_{p_0,t})^{\frac{p+q-t}{p+q-1}}$,
		and $s,t$ satisfy
		\begin{equation}\label{st-cond}
			\left\{
			\begin{array}{lr}
				\frac{2p + q + t(q-2)}{p + q-t} > 1, \\
				\frac{p+q-t}{1-t} > 1,\\
				s > \frac{2p+q+t(q-2)}{p+q-1}.
			\end{array}
			\right.
	\end{equation}}
\end{lemma}

\begin{proof}
	For $ \varphi \in \mathcal{l}(\Omega) $ with compact support in $\Omega $, define $\psi=\varphi^s u^{-t}$, where $(s,t)$ are to be chosen later.
	
	Multiplying both sides of (\ref{vieq})
by $\\mu(x)\\psi(x)$ and summing up over all $x\\in \\Omega $, we obtain\n\t\\begin{equation*}\n\t\t\\sum\\limits_{x \\in \\Omega,y\\in V }\\mu_{xy}(\\nabla_{xy}u)\\psi(x)\n\t\t+\\sum\\limits_{x \\in \\Omega }\\mu(x) u(x)^{p} |\\nabla u(x)|^{q}\\psi(x)\\leq 0,\n\t\\end{equation*}\nIt follows that\n\\begin{align}\\label{lem2-1}\n\t&\\sum\\limits_{x,y \\in \\Omega}\\mu_{xy}(\\nabla_{xy}u)\\psi(x)+\\sum\\limits_{x \\in \\Omega, y\\in \\Omega^c}\\mu_{xy}(\\nabla_{xy}u)\\psi(x)\\nonumber \\\\\n\t&+\\sum\\limits_{x \\in \\Omega }\\mu(x) u(x)^{p} |\\nabla u(x)|^{q}\\psi(x)\\leq 0.\n\\end{align}\nSpecially, when $\\Omega=V$, we have $\\sum\\limits_{x \\in \\Omega, y\\in \\Omega^c}\\mu_{xy}(\\nabla_{xy}u)\\psi(x) =0$.\n\n\tNoting\n\t\\begin{equation*}\n\t\t\\sum\\limits_{x,y \\in \\Omega }\\mu_{xy}(\\nabla_{xy}u)\\psi(x)=\n\t\t-\\frac{1}{2}\\sum\\limits_{x,y \\in \\Omega }\\mu_{xy}(\\nabla_{xy}u)(\\nabla_{xy}\\psi),\n\t\\end{equation*}\n\tand\n\t\\begin{equation*}\n\t\t\\nabla_{xy}\\psi=\\nabla_{xy}(\\varphi^s u^{-t})=\n\t\tu(y)^{-t} \\nabla_{xy}(\\varphi^s)+ \\varphi(x)^s \\nabla_{xy}(u^{-t}),\n\t\\end{equation*}\n\twe obtain\n\t\\begin{align*}\n\t\n\t\t\t\\sum\\limits_{x,y \\in \\Omega }\\mu_{xy}(\\nabla_{xy}u)\\psi(x)=&\n\t\t\t-\\frac{1}{2}\\sum\\limits_{x,y \\in \\Omega }\\mu_{xy} u(y)^{-t} (\\nabla_{xy}u) \\nabla_{xy}(\\varphi^s)\\nonumber\\\\\n\t\t\t&-\\frac{1}{2}\\sum\\limits_{x,y \\in \\Omega}\\mu_{xy} \\varphi(x)^s (\\nabla_{xy}u)\n\t\t\t\\nabla_{xy}(u^{-t}).\n\t\n\t\\end{align*}\n\tThen (\\ref{lem2-1}) is transformed to\n\t\\begin{align}\\label{lem2-2}\n\t\t&-\\frac{1}{2}\\sum\\limits_{x,y \\in \\Omega }\\mu_{xy} \\varphi(x)^s (\\nabla_{xy}u)\n\t\t\\nabla_{xy}(u^{-t})+\\sum\\limits_{x \\in \\Omega, y\\in \\Omega^c}\\mu_{xy}(\\nabla_{xy}u)\\varphi(x)^su(x)^{-t}\\nonumber \\\\\n\t\t& +\\sum\\limits_{x \\in \\Omega }\\mu(x) u(x)^{p-t} |\\nabla u(x)|^{q} \\varphi(x)^s\n\t\t\\leq \\frac{1}{2}\\sum\\limits_{x,y \\in \\Omega }\\mu_{xy} u(y)^{-t} (\\nabla_{xy}u) 
\\nabla_{xy}(\\varphi^s).\n\t\\end{align}\nUsing the mid-value theorem, we have some $\\xi$ which is between $u(y)$ and $u(x)$, such that\n\t\\begin{equation*}\n\t\t\\nabla_{xy}(u^{-t})=u(y)^t-u(x)^t=-t\\xi^{-t-1}(u(y)-u(x))\n\t\t=-t\\xi^{-t-1}\\nabla_{xy}u,\n\t\\end{equation*}\n\tBy $\\frac{1}{p_0}\\leq\\frac{u(x)}{u(y)} \\leq p_0$,\n\n\twe have\n\t$\\frac{u(x)}{p_0} \\leq \\xi \\leq u(x) p_0$, and\n\t\\begin{align}\\label{lem2-3-1}\n\t\t&-\\frac{1}{2}\\sum\\limits_{x,y \\in \\Omega }\\mu_{xy} \\varphi(x)^s (\\nabla_{xy}u)\n\t\t\\nabla_{xy}(u^{-t})\\nonumber\\\\\n\t\t&=\\frac{t}{2}\\sum\\limits_{x,y \\in \\Omega }\\mu_{xy} \\varphi(x)^s (\\nabla_{xy}u)^2\n\t\t\\xi^{-t-1}\\nonumber\\\\\n\t\t&\\geq \\frac{t}{2p_0^{t+1}}\\sum\\limits_{x,y \\in \\Omega }\\mu_{xy} \\varphi(x)^su(x)^{-t-1} (\\nabla_{xy}u)^2,\n\t\\end{align}\nBy $0<\\frac{u(y)-u(x)}{u(x)}\\leq \\frac{p_0u(x)-u(x)}{u(x)}=p_0-1$, we obtain\n\\begin{align}\\label{lem2-3-2}\n\t&\\sum\\limits_{x \\in \\Omega, y\\in \\Omega^c}\\mu_{xy}(\\nabla_{xy}u)\\varphi(x)^su(x)^{-t}\n\t\\nonumber\\\\&\n\t\\geq \\frac{1}{p_0-1}\\sum\\limits_{x \\in \\Omega, y\\in \\Omega^c}\\mu_{xy}(\\nabla_{xy}u)^2\\varphi(x)^su(x)^{-t-1}\n\t\\nonumber\\\\&\n\t\\geq \\frac{t}{2p_0^{t+1}}\\sum\\limits_{x \\in \\Omega, y\\in \\Omega^c}\\mu_{xy}(\\nabla_{xy}u)^2\\varphi(x)^su(x)^{-t-1}.\n\\end{align}\nwhere we have used that if $\\Omega\\neq V$, $u(y)>u(x)$ for $x\\in\\Omega$, $y\\in\\Omega^c$, and $2p_0^{t+1}\\geq t(p_0-1)$ holds for all $t\\geq0$.\n\t\n\tCombining (\\ref{lem2-3-1}) with (\\ref{lem2-3-2}), we get\n\t\\begin{align}\\label{lem2-3}\n\t\t-\\frac{1}{2}\\sum\\limits_{x,y \\in \\Omega }&\\mu_{xy} \\varphi(x)^s (\\nabla_{xy}u)\n\t\t\\nabla_{xy}(u^{-t})+\\sum\\limits_{x \\in \\Omega, y\\in \\Omega^c}\\mu_{xy}(\\nabla_{xy}u)\\varphi(x)^su(x)^{-t}\n\t\t\\nonumber\\\\&\n\t\t\\geq \\frac{t}{2p_0^{t+1}}\\sum\\limits_{x \\in \\Omega, y\\in 
V}\\mu_{xy}(\\nabla_{xy}u)^2\\varphi(x)^su(x)^{-t-1}\n\t\t\\nonumber\\\\&\n\t\t=\\frac{t}{p_0^{t+1}}\\sum\\limits_{x \\in \\Omega}\\mu(x)|\\nabla u(x)|^2\\varphi(x)^su(x)^{-t-1}.\n\t\\end{align}\nEspecially, when $\\Omega=V$, (\\ref{lem2-3}) can be deduced from (\\ref{lem2-3-1}) directly.\n\n\tBy the mid-value theorem, there is some $\\eta$ between $\\varphi(x)$ and $\\varphi(y)$ such that\n\t\\begin{equation}\\label{lem2-4}\n\t\t\\nabla_{xy}(\\varphi^s)=s\\eta^{s-1}(\\varphi(y)-\\varphi(x))\n\t\t=s\\eta^{s-1}\\nabla_{xy}\\varphi.\n\t\\end{equation}\n\tSubstituting (\\ref{lem2-4}) and (\\ref{lem2-3}) into (\\ref{lem2-2}), we have\n\t\\begin{align}\\label{lem2-5}\n\t\t&\\frac{t}{p_0^{t+1}}\\sum\\limits_{x \\in \\Omega }\\mu(x) \\varphi(x)^s u(x)^{-t-1} |\\nabla u(x)|^2\n\t\t+\\sum\\limits_{x \\in \\Omega }\\mu(x) u(x)^{p-t} |\\nabla u(x)|^{q} \\varphi(x)^s\\nonumber\\\\&\n\t\t\\leq \\frac{s}{2}\\sum\\limits_{x,y \\in \\Omega }\\mu_{xy} u(y)^{-t}\\eta^{s-1} (\\nabla_{xy}u)( \\nabla_{xy}\\varphi).\n\t\\end{align}\n\tObserving that $|\\nabla u(x)|^2=\\sum\\limits_{y \\in V }\\frac{\\mu_{xy}}{2\\mu(x)}(\\nabla_{xy}u)^2$, and\n\t$ \\frac{1}{2p_0} \\leq \\frac{\\mu_{xy}}{2\\mu(x)} \\leq \\frac{1}{2}$, we derive\n\t\\begin{align}\\label{grd}\n\t\t|\\nabla_{xy}u| \\leq \\sqrt{2p_0}|\\nabla u(x)|,\\quad \\mbox{for any $y\\sim x$.}\n\t\\end{align}\n\t\n\t Since $\\eta^{s-1} \\leq\\varphi(x)^{s-1}+\\varphi(y)^{s-1} $, $\\frac{u(x)}{p_0} \\leq \\xi \\leq u(x) p_0 $, and (\\ref{grd}), we have\n\t\\begin{align}\\label{lem2-6}\n\t\t\\frac{s}{2}&\\sum\\limits_{x,y \\in \\Omega }\\mu_{xy} u(y)^{-t}\\eta^{s-1} (\\nabla_{xy}u)( \\nabla_{xy}\\varphi)\\nonumber \\\\\n\t\t&\\leq \\frac{s}{2}\\sum\\limits_{x,y \\in \\Omega }\\mu_{xy} u(y)^{-t}(\\varphi(x)^{s-1}+\\varphi(y)^{s-1}) (\\nabla_{xy}u)( \\nabla_{xy}\\varphi)\\nonumber \\\\\n\t\t&\\leq \\frac{s}{2}(1+p^t_0)\\sum\\limits_{x,y \\in \\Omega }\\mu_{xy} u(x)^{-t}\\varphi(x)^{s-1} (\\nabla_{xy}u)( \\nabla_{xy}\\varphi)\\nonumber 
\\\\\n\t\t&\\leq \\frac{s}{2}(1+p^t_0)\\sum\\limits_{x,y \\in \\Omega }\\mu_{xy} u(x)^{-t}\\varphi(x)^{s-1} |\\nabla_{xy}u|| \\nabla_{xy}\\varphi|\\nonumber\\\\\n\t\t&\\leq\\frac{s}{2}\\sqrt{2p_0} (1+p^t_0)\\sum\\limits_{x,y \\in \\Omega }\\mu_{xy} u(x)^{-t}\\varphi(x)^{s-1} |\\nabla u(x)||\\nabla_{xy}\\varphi|.\n\t\\end{align}\n\tLet\n\t\\begin{equation}\\label{def-ab}\n\t\ta=\\frac{2p + q + t(q-2)}{p + q-t},\\quad b=\\frac{2p + q + t(q-2)}{p + t(q-1)},\n\t\\end{equation}\n\tand $t$ to be chosen later such that $a, b\\geq 1$.\n\t\n\tBy applying Young's inequality, we obtain\n\t\\begin{align}\\label{lem2-7}\n\t\t\\frac{s}{2}&\\sum\\limits_{x,y \\in \\Omega }\\mu_{xy} u(x)^{-t}\\varphi(x)^{s-1} |\\nabla u(x)||\\nabla_{xy}\\varphi|\n\t\t\\nonumber \\\\\n\t\t=&\\frac{1}{2}\\sum\\limits_{x,y \\in \\Omega }\\left(\\mu_{xy}^{\\frac{1}{b}}(\\frac{t}{2})^{\\frac{1}{b}}\n\t\tu(x)^{-\\frac{t+1}{b}} |\\nabla u(x)|^{\\frac{2}{b}} \\varphi(x)^{\\frac{s}{b}}\\right)\\nonumber \\\\\n\t\t&\\quad\\quad\\times \\left(\\mu_{xy}^{\\frac{1}{a}}s(\\frac{t}{2})^{-\\frac{1}{b}}\n\t\tu(x)^{-t+\\frac{t+1}{b}} |\\nabla u(x)|^{1-\\frac{2}{b}} \\varphi(x)^{s-1-\\frac{s}{b}} |\\nabla_{xy}\\varphi|\\right) \\nonumber\\\\\n\t\t\\leq& \\frac{\\epsilon t}{4}\\sum\\limits_{x,y \\in \\Omega } \\mu_{xy} u(x)^{-t-1} |\\nabla u(x)|^2 \\varphi(x)^s\\nonumber \\\\\n\t\t&+\\epsilon ^{-\\frac{a}{b}}s^a 2^{a-1} t^{1-a}\\sum\\limits_{x,y \\in \\Omega } \\mu_{xy} u(x)^{-t+a-1} |\\nabla u(x)|^{2-a} \\varphi(x)^{s-a}|\\nabla_{xy}\\varphi|^a\\nonumber \\\\\n\t\t\\leq & \\frac{\\epsilon t}{4}\\sum\\limits_{x \\in \\Omega } \\mu(x) u(x)^{-t-1} |\\nabla u(x)|^2 \\varphi(x)^s\\nonumber \\\\\n\t\t&+\\epsilon ^{-\\frac{a}{b}}s^a 2^{a-1} t^{1-a}\\sum\\limits_{x,y \\in \\Omega } \\mu_{xy} u(x)^{-t+a-1} |\\nabla u(x)|^{2-a} \\varphi(x)^{s-a}|\\nabla_{xy}\\varphi|^a.\n\t\\end{align}\nwhere we used $\\sum\\limits_{y \\in \\Omega} \\mu_{xy}\\leq \\mu(x)$, for any $x\\in \\Omega$.\n\n\tLetting $\\epsilon 
=\\frac{2}{\\sqrt{2p_0} (1+p^t_0) p_0^{t+1}}$, and substituting (\\ref{lem2-7}) into (\\ref{lem2-6}), we obtain\n\t\\begin{align}\\label{lem2-8}\n\t\t\\frac{s}{2}&\\sum\\limits_{x,y \\in V}\\mu_{xy} u(y)^{-t}\\eta^{s-1} (\\nabla_{xy}u)( \\nabla_{xy}\\varphi)\\nonumber \\\\\n\t\t\\leq& \\frac{t}{2p_0^{t+1}}\\sum\\limits_{x \\in \\Omega } \\mu(x) u(x)^{-t-1} |\\nabla u(x)|^2 \\varphi(x)^s\\nonumber \\\\\n\t\t&+C_{p_0,t}(2s)^a t^{1-a}\\sum\\limits_{x,y \\in \\Omega } \\mu_{xy} u(x)^{-t+a-1} |\\nabla u(x)|^{2-a} \\varphi(x)^{s-a} |\\nabla_{xy}\\varphi|^a,\n\t\\end{align}\n\twhere\n\t$$C_{p_0,t}= \\frac{(\\sqrt{2p_0}(1+p^t_0))^{\\frac{p + t(q-1)}{p + q-t}+1} (p_0^{t+1})^{\\frac{p + t(q-1)}{p + q-t}}}{4}.$$\n\tCombining (\\ref{lem2-8}) with (\\ref{lem2-5}), we have\n\t\\begin{align*}\n\t\t&\\frac{t}{2p_0^{t+1}} \\sum\\limits_{x \\in \\Omega }\\mu(x)\\varphi(x)^s u(x)^{-t-1} |\\nabla u(x)|^2\n\t\t+\\sum\\limits_{x \\in \\Omega }\\mu(x) u(x)^{p-t} |\\nabla u(x)|^{q} \\varphi(x)^s \\\\\n\t\t&\n\t\t\\leq C_{p_0,t}(2s)^a t^{1-a}\\sum\\limits_{x,y \\in \\Omega } \\mu_{xy} u(x)^{-t+a-1} |\\nabla u(x)|^{2-a} \\varphi(x)^{s-a} |\\nabla_{xy}\\varphi|^a.\n\t\\end{align*}\n\tIt follows that\n\t\\begin{align}\\label{lem2-10}\n\t\t&\\sum\\limits_{x \\in \\Omega }\\mu(x) u(x)^{p-t} |\\nabla u(x)|^{q} \\varphi(x)^s \\nonumber\\\\\n\t\t\\leq& C_{p_0,t}(2s)^a t^{1-a}\\sum\\limits_{x,y \\in \\Omega } \\mu_{xy} u(x)^{-t+a-1} |\\nabla u(x)|^{2-a} \\varphi(x)^{s-a} |\\nabla_{xy}\\varphi|^a.\n\t\\end{align}\n\t\n\tDefining $\\gamma=\\frac{p+q-t}{1-t},\\rho=\\frac{p+q-t}{p+q-1}$, and choosing $t$ to make $\\gamma,\\rho>1$, and applying H\\\"{o}lder's inequality to RHS of (\\ref{lem2-10}), we obtain\n\t\\begin{align}\\label{lem2-11}\n\t\t&\\sum\\limits_{x,y \\in \\Omega } \\mu_{xy} u(x)^{-t+a-1} |\\nabla u(x)|^{2-a} \\varphi(x)^{s-a}|\\nabla_{xy}\\varphi|^a \\nonumber\\\\&\n\t\t=\\sum\\limits_{x,y\\in \\Omega \\atop \\nabla_{xy} \\varphi \\neq 0} \\mu_{xy}\\left(u(x)^{-t+a-1} |\\nabla 
u(x)|^{2-a}\\varphi(x)^{\\frac{s}{\\gamma}}\\right)\\left(\\varphi(x)^{s-a-\\frac{s}{\\gamma}}|\\nabla_{xy}\\varphi|^a \\right) \\nonumber\\\\&\n\t\t\\leq \\left(\\sum\\limits_{x,y\\in \\Omega \\atop \\nabla_{xy} \\varphi \\neq 0} \\mu_{xy}u(x)^{p-t}\n\t\t|\\nabla u(x)|^q \\varphi(x)^s\\right)^{\\frac{1}{\\gamma}}\n\t\t\\left(\\sum\\limits_{x,y\\in \\Omega } \\mu_{xy} \\varphi(x)^{s-a\\rho} |\\nabla_{xy}\\varphi|^{a\\rho}\\right)^{\\frac{1}{\\rho}},\n\t\\end{align}\n\t\n\t\n\tChoosing large enough $s$ to let $ s \\geq a\\rho$, and noticing $0 \\leq \\varphi \\leq 1$, we derive\n\t\\begin{align}\\label{lem2-12}\n\t\t&\\sum\\limits_{x,y \\in \\Omega } \\mu_{xy} u(x)^{-t+a-1} |\\nabla u(x)|^{2-a} \\varphi(x)^{s-a}|\\nabla_{xy}\\varphi|^a \\nonumber\\\\&\n\t\t\\leq \\left(\\sum\\limits_{x,y\\in \\Omega \\atop \\nabla_{xy} \\varphi \\neq 0} \\mu_{xy}u(x)^{p-t}\n\t\t|\\nabla u(x)|^q \\varphi(x)^s\\right)^{\\frac{1}{\\gamma}}\n\t\t\\left(\\sum\\limits_{x,y\\in \\Omega } \\mu_{xy} |\\nabla_{xy}\\varphi|^{a\\rho}\\right)^{\\frac{1}{\\rho}}.\n\t\\end{align}\nSubstituting (\\ref{lem2-12}) into (\\ref{lem2-10}), we get\n\t\\begin{align*}\n\t\t&\\sum\\limits_{x \\in \\Omega }\\mu(x) u(x)^{p-t} |\\nabla u(x)|^{q} \\varphi(x)^s \\nonumber\\\\&\n\t\t\\leq C_{p_0,t}(2s)^a t^{1-a} \\left(\\sum\\limits_{x,y\\in \\Omega \\atop \\nabla_{xy} \\varphi \\neq 0} \\mu_{xy}u(x)^{p-t}\n\t\t|\\nabla u(x)|^q \\varphi(x)^s\\right)^{\\frac{1}{\\gamma}}\n\t\t\\left(\\sum\\limits_{x,y\\in \\Omega } \\mu_{xy} |\\nabla_{xy}\\varphi|^{a\\rho}\\right)^{\\frac{1}{\\rho}}.\n\t\\end{align*}\n\tCombining the above with (\\ref{def-ab}), we derive\n\t\\begin{align*}\n\t\t&\\sum\\limits_{x \\in \\Omega }\\mu(x) u(x)^{p-t} |\\nabla u(x)|^{q} \\varphi(x)^s \\nonumber\\\\\n\t\t\\leq& C_{p_0,t}(2s)^{\\frac{2p + q + t(q-2)}{p + q-t}} t^{-\\frac{p + t(q-1)}{p + q-t}}\\left(\\sum\\limits_{x,y\\in \\Omega \\atop \\nabla_{xy} \\varphi \\neq 0} \\mu_{xy}u(x)^{p-t}\n\t\t|\\nabla u(x)|^q 
\\varphi(x)^s\\right)^{\\frac{1-t}{p+q-t}} \\nonumber\\\\&\n\t\t\\times \\left(\\sum\\limits_{x,y\\in \\Omega } \\mu_{xy} |\\nabla_{xy}\\varphi|^{\\frac{2p+q+t(q-2)}{p+q-1}}\\right)^{\\frac{p+q-1}{p+q-t}}.\n\t\\end{align*}\n\tthen (\\ref{est-1}) follows.\n\t\n\tNoting $\\sum\\limits_{x \\in \\Omega }\\mu(x)u(x)^{p-t} |\\nabla u(x)|^{q} \\varphi(x)^s$ is finite and\n\t\\begin{align*}\n\t\t&\\sum\\limits_{x,y\\in \\Omega \\atop \\nabla_{xy} \\varphi \\neq 0} \\mu_{xy}u(x)^{p-t}\n\t\t|\\nabla u(x)|^q \\varphi(x)^s\\leq \\sum\\limits_{x\\in \\Omega,y\\in V} \\mu_{xy}u(x)^{p-t}\n\t\t|\\nabla u(x)|^q \\varphi(x)^s\\\\&\n\t\t=\\sum\\limits_{x \\in \\Omega }\\mu(x)u(x)^{p-t} |\\nabla u(x)|^{q} \\varphi(x)^s,\n\t\\end{align*}\nwe obtain\n\t\\begin{align*}\n&\\left( \\sum\\limits_{x \\in \\Omega }\\mu(x) u(x)^{p-t} |\\nabla u(x)|^{q} \\varphi(x)^s \\right)^{\\frac{p+q-1}{p+q-t}} \\nonumber\\\\&\n\\leq C_{p_0,t}(2s)^{\\frac{2p + q + t(q-2)}{p + q-t}} t^{-\\frac{p + t(q-1)}{p + q-t}} \\left(\\sum\\limits_{x,y\\in \\Omega } \\mu_{xy} |\\nabla_{xy}\\varphi|^{\\frac{2p+q+t(q-2)}{p+q-1}}\\right)^{\\frac{p+q-1}{p+q-t}}.\n\\end{align*}\nHence we have\n\t\\begin{align*}\n\t\t\\sum\\limits_{x \\in \\Omega }\\mu(x) u(x)^{p-t} |\\nabla u(x)|^{q} \\varphi(x)^s\n\t\t\\leq&( C_{p_0,t})^{\\frac{p+q-t}{p+q-1}} (2s)^{\\frac{2p+q+t(q-2)}{p+q-1}} t^{-\\frac{p+t(q-1)}{p+q-1}}\\nonumber\\\\\n\t\t&\\times\n\t\t\\sum\\limits_{x,y\\in \\Omega } \\mu_{xy} |\\nabla_{xy}\\varphi|^{\\frac{2p+q+t(q-2)}{p+q-1}},\n\t\\end{align*}\n\twhich implies (\\ref{est-2}). Hence, we complete the proof.\n\\end{proof}\n\n\n\\begin{remark}\\label{rem}\n\tIn Lemma \\ref{lem2}, since $s$ is only needed to be chosen large enough, it\n\tsuffices to verify that such $t$ exists. 
For our convenience, let us divide $\mathbb{R}^2\setminus \{p+q=1\}$ into four different parts $K_1$, $K_2$, $K_3$, $K_4$ (see Figure \ref{fig2}):
	\begin{align*}
		&K_1=\{(p,q)|p<1-q,\ q\leq1\},\quad K_2=\{(p,q)|p\geq0,\ 1-p<q\leq 1\},\\
		&K_3=\{(p,q)|p>1-q,\ q>1\}, \quad K_4=\{(p,q)|p<0,\ 1<q<1-p\}.
	\end{align*}
\begin{figure}[h]
	\begin{tikzpicture}[x={(0.8cm,0cm)},y={(0cm,0.8cm)}]
		\draw[->] (-6,0)--(4,0) ;
		\draw[->] (0,-2)--(0,6);
		\fill[green,opacity=0.7] (0,2)--(4,2)--(4,6)--(-4,6) ;
		\fill[blue,opacity=0.7] (0,2)--(4,2)--(4,-2);
		\fill[yellow,opacity=0.7] (0,2)--(-6,2)--(-6,6)--(-4,6);
		\fill[red,opacity=0.7] (0,2)--(-6,2)--(-6,-2)--(4,-2);
		\fill[red,opacity=0.7] (5.7,5.4) rectangle(6.3,4.8);
		\fill[blue,opacity=0.7] (5.7,3.6) rectangle(6.3,3);
		\fill[green,opacity=0.7] (5.7,1.8) rectangle(6.3,1.2);
		\fill[yellow,opacity=0.7] (5.7,0) rectangle(6.3,-0.6);
		\node[above] at (0,6) {$q$};
		\node[right] at (4,0) {$p$};
		\node[below] at (-5,2) {\tiny{$q=1$}};
		\node[below] at (-2,3.5) {\tiny{$p+q=1$}};
		\node[right] at (6.3,5.1) {\small{$K_1$}};
		\node[right] at (6.3,3.3) {\small{$K_2$}};
		\node[right] at (6.3,1.6) {\small{$K_3$}};
		\node[right] at (6.3,-0.3) {\small{$K_4$}};
	\end{tikzpicture}
	\caption{}
	\label{fig2}
\end{figure}
According to the location of $(p, q)$, we choose $t$ in the following way:
\begin{enumerate}
\item[1.]{When $(p, q)\in K_1$, take $t>1$; }
\item[2.]{When $(p, q)\in K_2$, take $0<t<1$;}
\item[3.]{When $(p, q)\in K_3$, take $\max\{0,-\frac{p}{q-1}\}<t<1$;}
\item[4.]{When $(p, q)\in K_4$, take $1<t<-\frac{p}{q-1}$.}
\end{enumerate}
In each case, one can check directly that $(s,t)$ fulfill (\ref{st-cond}) once $s$ is chosen large enough.
\end{remark}

\begin{proof}[\rm\textbf{Proof of Theorem \ref{thm1} (I)}]
Suppose that (\ref{ieq}) admits a nontrivial positive solution $u$; by Lemma \ref{lem1}, $u>0$ and (\ref{ep}) holds. For $k\geq 0$ write $B_k=B(o,2^k)$, and for $i\geq 2$ define the cutoff functions
\begin{align}\label{def-phi}
	\varphi_i(x)=\left\{
	\begin{array}{ll}
		1, & d(o,x)\leq 2^{i},\\
		\frac{2i-\log_2 d(o,x)}{i}, & 2^{i}<d(o,x)<2^{2i},\\
		0, & d(o,x)\geq 2^{2i},
	\end{array}
	\right.
\end{align}
so that $0\leq \varphi_i\leq 1$, $\varphi_i$ has compact support, and
$$|\nabla_{xy}\varphi_i|\lesssim \frac{2^{-k}}{i},\quad \mbox{when $x\sim y$ and $x\in B_k\setminus B_{k-1}$, $i-1\leq k\leq 2i$},$$
while $\nabla_{xy}\varphi_i$ vanishes on all other edges.

Since $(p,q)\in G_1\subset K_2\cup K_3$, we may take $t=\frac{1}{i}$, and let $s$ be some large fixed constant. Letting $\Omega=V$ in Lemma \ref{lem2} and substituting $\varphi=\varphi_i$ into (\ref{est-2}), we obtain
\begin{align*}
	\sum\limits_{x \in V}\mu(x) u(x)^{p-1/i} |\nabla u(x)|^q \varphi_i(x)^s
	\lesssim t^{-\frac{p+t(q-1)}{p+q-1}}\, i^{-\frac{2p+q+t(q-2)}{p+q-1}}
	\sum\limits^{2i}_{k=i-1} \mu(B_k)\, 2^{-k\frac{2p+q+t(q-2)}{p+q-1}}.
\end{align*}
Combining this with (\ref{vol-1}), which gives $\mu(B_k)\lesssim 2^{\frac{k(2p+q)}{p+q-1}} k^{\frac{1}{p+q-1}}$, and substituting $t=\frac{1}{i}$, since $p+q>1$ and $q<2$, we obtain
 \begin{align}\label{I-3}
		&\sum\limits_{x \in V}\mu(x) u(x)^{p-1/i} |\nabla u(x)|^q \varphi_i(x)^s
		\nonumber \\&
		\lesssim i^{-1-\frac{1-1/i}{p+q-1}} \sum\limits^{2i}_{k=i-1}
		2^{\frac{k(2-q)}{i(p+q-1)}} k^{\frac{1}{p+q-1}}
		\nonumber \\&
		\lesssim i^{-1+\frac{1/i}{p+q-1}} \sum\limits^{2i}_{k=i-1}
		2^{\frac{k(2-q)}{i(p+q-1)}}
		\nonumber \\&
		\lesssim i^{\frac{1/i}{p+q-1}}.
	\end{align}
Consequently, letting $i \rightarrow \infty$ in (\ref{I-3}) gives
	$$\sum\limits_{x \in
V}\\mu(x) u(x)^{p} |\\nabla u(x)|^q <\\infty. $$\n\n\n\tSubstituting $\\varphi=\\varphi_i$ and $t=\\frac{1}{i}$ into (\\ref{est-1}), and repeating the same procedures, we have\n\t$$\\lim_{i\\to\\infty}\\sum\\limits_{x \\in V}\\mu(x) u(x)^{p-1\/i} |\\nabla u(x)|^q \\varphi_i(x)^s\n\t= 0,$$\n\tnamely\n\t$$\\sum\\limits_{x \\in V}\\mu(x) u(x)^{p} |\\nabla u(x)|^q = 0,$$\n\twhich is a contradiction to the assumption that $u$ is nontrivial. Hence, the proof of Theorem \\ref{thm1} (I) is complete.\n\\end{proof}\n\\begin{proof}[\\rm\\textbf{Proof of Theorem \\ref{thm1} (II)}]\n\tLet us divide the proof into three cases:\n\t\\begin{enumerate}\n\t\t\\item[(II-1).]{$(p,q)\\in \\{p+q>1,q>2\\};$}\n\t\t\\item[(II-2).]{$(p,q)\\in \\{p+q=1,q>2\\};$}\n\t\t\\item[(II-3).]{$(p,q)\\in \\{p+q<1,q>2\\}.$}\n\t\\end{enumerate}\n\t\n\tIn case (II-1), it follows that $(p,q)\\in K_2$. Hence let\n\t\\begin{align*}\n\t\tt=1-\\frac{1}{i},\n\t\\end{align*}\n\tand $s$ be some large fixed constant.\n\t\n\tLetting $\\Omega=V$ in Lemma \\ref{lem2}, substituting $\\varphi=\\varphi_i$ from (\\ref{def-phi}) into (\\ref{est-2}), and using the same technique as in (\\ref{I-3}), we obtain\n\t\\begin{align}\\label{2-1}\n\t\t&\\sum\\limits_{x \\in V}\\mu(x) u(x)^{p-t} |\\nabla u(x)|^q \\varphi_i(x)^s\n\t\t\\nonumber \\\\&\n\t\t\\lesssim C_{p_0,t} (2s)^{\\frac{2p+q+t(q-2)}{p+q-1}} \\frac{t^{-\\frac{p+t(q-1)}{p+q-1}}}{i^{\\frac{2p+q+t(q-2)}{p+q-1}}}\n\t\t\\sum\\limits^{2i}_{k=i-1}\n\t\t\\mu(B_k) 2^{-k\\frac{2p+q+t(q-2)}{p+q-1}},\n\t\\end{align}\n\tCombining with (\\ref{vol-3}) and (\\ref{2-1}), and noting that $ C_{p_0,t} (2s)^{\\frac{2p+q+t(q-2)}{p+q-1}}t^{-\\frac{p+t(q-1)}{p+q-1}}$ is uniformly bounded\n\tfor $ i $, we obtain\n\t\\begin{align}\\label{2-2}\n\t\t&\\sum\\limits_{x \\in V}\\mu(x) u(x)^{p-t} |\\nabla u(x)|^q \\varphi_i(x)^s\n\t\t\\nonumber \\\\&\n\t\t\\lesssim i^{-\\frac{2p+q+t(q-2)}{p+q-1}}\n\t\t\\sum\\limits^{2i}_{k=i-1}\n\t\t\\mu(B_k) 2^{-k\\frac{2p+q+t(q-2)}{p+q-1}}\n\t\t\\nonumber 
\\\\&\n\t\t\\lesssim i^{-\\frac{2p+q+t(q-2)}{p+q-1}}\n\t\t\\sum\\limits^{2i}_{k=i-1} 2^{k(2-\\frac{2p+q+t(q-2)}{p+q-1})}k\n\t\t\\nonumber \\\\&\n\t\t\\lesssim i^{1-\\frac{2p+q+t(q-2)}{p+q-1}}\n\t\t\\sum\\limits^{2i}_{k=i-1} 2^{\\frac{k(q-2)}{i(p+q-1)}}.\n\t\\end{align}\n\tSubstituting $ t=1-\\frac{1}{i}$ into (\\ref{2-2}), we get\n\t\\begin{align}\\label{2-3}\n\t\t&\\sum\\limits_{x \\in V}\\mu(x) u(x)^{p-1+\\frac{1}{i}} |\\nabla u(x)|^q \\varphi_i(x)^s\n\t\t\\nonumber \\\\&\n\t\t\\lesssim i^{1-\\frac{2p+q+t(q-2)}{p+q-1}}\n\t\t\\sum\\limits^{2i}_{k=i-1} 2^{\\frac{k(q-2)}{i(p+q-1)}}\n\t\t\\nonumber \\\\&\n\t\t\\lesssim i^{\\frac{q-2}{i(p+q-1)}}.\n\t\\end{align}\n\tLetting $i \\rightarrow \\infty$ in (\\ref{2-3}), we have\n\t$$\\sum\\limits_{x \\in V}\\mu(x) u(x)^{p-1} |\\nabla u(x)|^q <\\infty. $$\n\tSubstituting $\\varphi=\\varphi_i$ and $t=1-\\frac{1}{i}$ into (\\ref{est-1}), and repeating the same procedures as in the proof of Theorem \\ref{thm1} (I), we derive\n\t\\begin{align}\n\t\t\\sum\\limits_{x \\in V}\\mu(x) u(x)^{p-1} |\\nabla u(x)|^q = 0.\n\t\\end{align}\n\twhich yields a contradiction with the nontrivialness of $u$.\n\t\n\tIn case (II-2),\n\tdenote $\\Omega_k=\\{ x\\in V|00 $ for any $k>k_0$. 
Now fix such $k$, let $v=\frac{u}{k}$, and $v$ satisfies
	$$ \Delta v+v^p\left|\nabla v\right|^q\leq0, \quad\mbox{ on $V$}.$$
	Obviously $0<v<1$ on $\Omega_k$, hence $v$ satisfies $\Delta v+v^{p^{\prime}}|\nabla v|^{q}\leq 0$ on $\Omega_k$, where $p^{\prime}=p+\epsilon$ with some fixed $\epsilon>0$; thus $(p^{\prime},q)\in \{ p^{\prime}+q=p+q+\epsilon>1,q>2 \} $, consequently
	$(p^{\prime},q)$ falls into case (II-1).
	
	From the definition of $v(x)$ and $\Omega_k$, we know $\frac{1}{p_0}\leq\frac{v(x)}{v(y)}=\frac{u(x)}{u(y)} \leq p_0$ and $v(y)-v(x)\geq 0$ when $x\in \Omega_k$, $y\in \Omega_k^c$.
Hence by Lemma \ref{lem2}, and by taking the same procedure as in case (II-1) except replacing $V$ with $\Omega_k$, we arrive at
	\begin{align}\label{2-2-4}
		\sum\limits_{x \in \Omega_k}\mu(x) v(x)^{p'-1} |\nabla v(x)|^q = 0.
	\end{align}
	Letting $k_i=\max\{u(x)\,|\,d(o,x)\leq i\}+k_0$, we have
	$B(o,i) \subset \Omega_{k_i} $. Taking $k=k_i$ in (\ref{2-2-4}), we obtain that
	$v\equiv \mathrm{const}$ in $B(o,i)$, which implies that $u\equiv \mathrm{const}$ in $B(o,i)$.
	
	Letting $i\rightarrow \infty$, we get $u\equiv \mathrm{const}$ in $V$, which contradicts the nontriviality of $u$.
	
	In case (II-3), by taking the same argument as in case (II-1) and letting
	$$t=1+\frac{1}{i},$$
	we finish the proof of Theorem \ref{thm1} (II).
\end{proof}

\begin{proof}[\rm\textbf{Proof of Theorem \ref{thm1} (\uppercase\expandafter{\romannumeral3})}]
	Let us divide the proof into three cases:
	\begin{enumerate}
		\item[(III-1).]{$(p,q)\in G_3 \cap \{p+q>1\}$;}
		\item[(III-2).]{$(p,q)\in G_3 \cap \{p+q=1\};$}
		\item[(III-3).]{$(p,q)\in G_3 \cap \{p+q<1\}.$}
	\end{enumerate}
	
	In case (III-1), since $(p,q)\in K_3$, we take
	\begin{align*}
		t=-\frac{p}{q-1}+\frac{1}{i},
	\end{align*}
	and $s$ to be some large fixed constant.
	
	Letting $\Omega=V$ in Lemma \ref{lem2}, substituting $\varphi=\varphi_i$ into (\ref{est-2}), and using the same procedure as before, we
obtain
	\begin{align*}
		\sum\limits_{x \in V}\mu(x) u(x)^{p-t} |\nabla u(x)|^q \varphi_i(x)^s
		\lesssim i^{-\frac{2p+q+t(q-2)}{p+q-1}}
		\sum\limits^{2i}_{k=i-1}
		\mu(B_k) 2^{-k\frac{2p+q+t(q-2)}{p+q-1}}.
	\end{align*}
	Combining with (\ref{vol-3}), we obtain
	\begin{align}\label{3-1-10}
		&\sum\limits_{x \in V}\mu(x) u(x)^{\frac{pq}{q-1}-\frac{1}{i}} |\nabla u(x)|^q \varphi_i(x)^s
		\nonumber \\ &
		\lesssim i^{-\frac{2p+q+t(q-2)}{p+q-1}}
		\sum\limits^{2i}_{k=i-1}
		2^{k\left(\frac{q}{q-1}-\frac{2p+q+t(q-2)}{p+q-1}\right)}k^{\frac{1}{q-1}}
		\nonumber \\ &
		\lesssim i^{-\frac{2p+q+t(q-2)}{p+q-1}}
		\sum\limits^{2i}_{k=i-1}
		2^{-\frac{k(q-2)}{i(p+q-1)}}k^{\frac{1}{q-1}}
		\nonumber \\ &
		\lesssim i^{\frac{1}{q-1}-\frac{2p+q+t(q-2)}{p+q-1}+1}
		\nonumber \\ &
		=i^{-\frac{q-2}{i(p+q-1)}},
	\end{align}
	where we have used that
	$$\frac{q}{q-1}-\frac{2p+q+t(q-2)}{p+q-1}=-\frac{q-2}{i(p+q-1)}.$$
	Then letting $i\to \infty$ in (\ref{3-1-10}), we obtain
	$$\sum\limits_{x \in V}\mu(x) u(x)^{\frac{pq}{q-1}} |\nabla u(x)|^q <\infty.$$
	Repeating the same procedure as in the proof of Theorem \ref{thm1} (I), we derive
	$$\sum\limits_{x \in V}\mu(x) u(x)^{\frac{pq}{q-1}} |\nabla u(x)|^q=0,$$
	which contradicts the assumption that $u$ is a nontrivial positive solution.
	
	In case (III-2), we take the same procedure as in case (II-2) except letting
	$0<\epsilon<-\frac{p}{2}$, thus $ (p^{\prime},q)=(p+\epsilon,q)$ falls into case (III-1).
	
	In case (III-3), we repeat the same argument as in case (III-1) except
 taking
	\begin{align*}
		t=-\frac{p}{q-1}-\frac{1}{i}.
	\end{align*}
	Hence, we complete the proof of Theorem \ref{thm1} (III).
\end{proof}
\begin{proof}[\rm\textbf{Proof of Theorem \ref{thm1} (IV)}]
	Since here $p<0, q=1$, we choose
	$$t=l+\frac{1}{i},\qquad s=-\frac{l}{p}+2+\frac{1}{i},$$
	where
$l>1$ is to be chosen later.\n\t\n\tLetting $\\Omega=V$ in Lemma \\ref{lem2}, substituting $\\varphi=\\varphi_i$ into (\\ref{est-2}), and repeating the same procedure, we obtain\n\t\\begin{align*}\n\t\t&\\sum\\limits_{x \\in V}\\mu(x) u(x)^{p-t} |\\nabla u(x)|^q \\varphi_i(x)^s\n\t\t\\nonumber \\\\&\n\t\t\\lesssim i^{-\\frac{2p+q-t}{p}}\n\t\t\\sum\\limits^{2i}_{k=i-1} \\mu(B_k) 2^{-k\\frac{2p+q-t}{p}}\n\t\t\\nonumber \\\\&\n\t\t\\lesssim i^{-\\frac{2p+q-t}{p}}\n\t\t\\sum\\limits^{2i}_{k=i-1} 2^{k(\\alpha-\\frac{2p+q-t}{p})}.\n\t\\end{align*}\n\tLetting $l$ be a fixed large enough constant such that for all $i$\n\t$$\\alpha-\\frac{2p+q-t}{p}<0,$$\n\twe obtain\n\t\\begin{align}\\label{4-1-3}\n\t\t\\sum\\limits_{x \\in V}\\mu(x) u(x)^{p-t} |\\nabla u(x)|^q \\varphi_i(x)^s\n\t\t\\lesssim i^{1-\\frac{2p+q-t}{p}}.\n\t\\end{align}\n\tFurther, we require that $l$ satisfies\n\t$$1-\\frac{2p+q-t}{p}<0.$$\n\tThen, letting $i\\to \\infty$ in (\\ref{4-1-3}), we obtain\n\t$$\\sum\\limits_{x \\in V}\\mu(x) u(x)^{p-l} |\\nabla u(x)|^q=0,$$\n\twhich contradicts the assumption that $u$ is a nontrivial positive solution. Hence we complete the proof of Theorem \\ref{thm1} (IV).\n\\end{proof}\n\n\\begin{proof}[\\rm\\textbf{Proof of Theorem \\ref{thm1} (V)}]\n\tLet us divide the proof into two cases:\n\t\\begin{enumerate}\n\t\t\\item[(V-1).]{$(p,q)\\in \\{p+q=1,p\\geq0,q>0\\};$}\n\t\t\\item[(V-2).]{$(p,q)\\in \\{p+q=1,p>1, q<0\\}$.}\n\t\\end{enumerate}\n\tFrom (\\ref{ieq}), we have\n\t$$\\sum\\limits_{y \\in V}\\frac{\\mu_{xy}}{\\mu(x)}u(y)-u(x)\n\t+u(x)^{p} |\\nabla u(x)|^{q}\\leq 0,$$\n\tthat is\n$$\\sum\\limits_{y \\in V}\\frac{\\mu_{xy}}{\\mu(x)}u(y)\\leq u(x)(1-u(x)^{p-1} |\\nabla u(x)|^{q}),$$\n\twhich implies\n\t\\begin{align}\\label{5-ieq}\n\t\tu(x)^{p-1} |\\nabla u(x)|^{q}\\leq 1.\n\t\\end{align}\n\t\n\tIn case (V-1), since $p+q=1$, and $q>0 $, we obtain\n\t\\begin{align}\\label{5-1}\n\t\t|\\nabla u(x)|\\leq u(x).\n\t\\end{align}\n\tCombining this with (\\ref{ieq}), noting $p\\geq0$, we derive\n\t\\begin{align*}\n\t\t\\Delta u(x)+(u(x)^{-p} |\\nabla u(x)|^{p}) u(x)^{p} |\\nabla u(x)|^{q}\\leq \\Delta u(x)+u(x)^{p} |\\nabla u(x)|^{q}\\leq 0,\n\t\\end{align*}\n\twhich is\n\t\\begin{align}\n\t\t\\Delta u(x)+\\left|\\nabla u(x)\\right|\\leq 0.\n\t\\end{align}\n\t\nSet $\\Omega'_k:=\\{ x\\in V|0<u(x)<k,\\,|\\nabla u(x)|\\neq0\\}$. Since $u$ is nontrivial, there exists some $k_0>0$ such that for any $k>k_0$, $\\mu(\\Omega'_k)>0 $. 
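In case (V-1), the passage from (\ref{5-1}) to $\Delta u(x)+\left|\nabla u(x)\right|\leq 0$ uses the elementary pointwise fact that $g\leq u$, $p\geq0$ and $q=1-p$ imply $u^{p}g^{q}\geq g$ (applied with $g=|\nabla u(x)|$). A quick numerical spot check of this inequality (an illustrative sketch with random values, not part of the proof):

```python
import random

random.seed(0)
for _ in range(1000):
    u = random.uniform(0.01, 10.0)   # u(x) > 0
    g = random.uniform(0.0, u)       # g = |grad u(x)| <= u(x), as in (5-1)
    p = random.uniform(0.0, 1.0)     # case (V-1): p >= 0 and q = 1 - p
    q = 1.0 - p
    # since (u/g)^p >= 1, we get u^p g^q = (u/g)^p * g >= g
    assert u ** p * g ** q >= g - 1e-12
print("ok")
```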
Now let $v=\\frac{u}{k}$; it follows that $v$ satisfies\n\t$$ \\Delta v(x)+\\left|\\nabla v(x)\\right|\\leq0.$$\nNoting $0<v<1$ on $\\Omega'_k$ and, by (\\ref{5-1}), $|\\nabla v(x)|\\leq v(x)<1$ there, we have, for any $\\lambda>1$,\n\t\\begin{align}\\label{5-2}\n\t\t\\Delta v(x)+\\left|\\nabla v(x)\\right|^{\\lambda}\\leq0, \\quad\\mbox{ on $\\Omega'_k$},\n\t\\end{align}\n\twhere $\\lambda>1$ is to be chosen later.\n\t\n\tFrom the definition of $\\Omega'_k$, we know that when $\\mu_{xy}\\neq 0$, $y\\in V \\setminus \\Omega'_k$, $x\\in \\Omega'_k$, we have\n\t\\begin{align*}\n\t\t\\left\\{\n\t\t\\begin{array}{lr}\n\t\t\tv(y)>v(x), \\quad \\mbox{when $v(y)\\geq1$,}\\\\\n\t\t\tv(y)=v(x), \\quad \\mbox{when $v(y)<1$, and $|\\nabla v(y)|=0$.}\n\t\t\\end{array}\n\t\t\\right.\n\t\\end{align*}\nIn both cases, we have $v(y)-v(x)\\geq 0$.\n\n\tThen, for any $x\\in \\Omega'_k$, we obtain\n\t\\begin{align}\\label{5-3}\n\t\t\\Delta v(x)&=\\sum\\limits_{y\\in V} \\frac{\\mu_{xy}}{\\mu(x)}(v(y)-v(x))\\nonumber\\\\&\n\t\t=\\sum\\limits_{y\\in \\Omega'_k} \\frac{\\mu_{xy}}{\\mu(x)}(v(y)-v(x))+\n\t\t\\sum\\limits_{y\\in V \\setminus \\Omega'_k} \\frac{\\mu_{xy}}{\\mu(x)}(v(y)-v(x))\\nonumber\\\\&\n\t\t\\geq \\sum\\limits_{y\\in \\Omega'_k} \\frac{\\mu_{xy}}{\\mu(x)}(v(y)-v(x)).\n\t\\end{align}\n\tSimilarly\n\t\\begin{align}\\label{5-4}\n\t\t|\\nabla v(x)|&=\\sqrt{\\sum\\limits_{y \\in V}\\frac{\\mu_{xy}}{2\\mu(x)}(\\nabla_{xy}v)^2} \\nonumber\\\\&\n\t\t=\\sqrt{\\sum\\limits_{y \\in\\Omega'_k}\\frac{\\mu_{xy}}{2\\mu(x)}(\\nabla_{xy}v)^2+\n\t\t\t\\sum\\limits_{y \\in(\\Omega'_k)^c}\\frac{\\mu_{xy}}{2\\mu(x)}(\\nabla_{xy}v)^2}\n\t\t\\nonumber\\\\&\n\t\t\\geq \\sqrt{\\sum\\limits_{y\\in \\Omega'_k}\\frac{\\mu_{xy}}{2\\mu(x)}(\\nabla_{xy}v)^2}\n\t\t:=|\\nabla_{\\Omega'_k} v(x)|.\n\t\\end{align}\n\tIt should be noted that $|\\nabla_{\\Omega'_k} u(x)|$ is not the norm of the gradient of $ u $ in $\\Omega'_k$, since $\\mu(x)$ there is still the measure of $x$ in $ V $, instead of in $\\Omega'_k$. 
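The operators in (\ref{5-3})-(\ref{5-4}) are straightforward to compute; a minimal illustration on a toy weighted graph (all weights and values here are hypothetical), checking the defining formulas and the comparison $|\nabla_{\Omega} v(x)|\leq|\nabla v(x)|$, with $\mu(x)$ kept as the measure in $V$ exactly as remarked above:

```python
import math

# hypothetical symmetric edge weights mu_xy on vertices 0..3
mu = {(0, 1): 2.0, (1, 2): 1.0, (2, 3): 3.0}
mu.update({(b, a): w for (a, b), w in mu.items()})
V = [0, 1, 2, 3]
mu_x = {x: sum(w for (a, _), w in mu.items() if a == x) for x in V}

v = {0: 1.0, 1: 0.8, 2: 0.5, 3: 0.9}  # sample function on V

def laplacian(x):
    # Delta v(x) = sum_y (mu_xy / mu(x)) (v(y) - v(x))
    return sum(w / mu_x[x] * (v[y] - v[x]) for (a, y), w in mu.items() if a == x)

def grad_norm(x, subset=None):
    # |nabla v(x)| with the sum over y restricted to `subset` (all of V if None);
    # mu(x) stays the measure of x in V, not in the subset
    ys = V if subset is None else subset
    return math.sqrt(sum(w / (2 * mu_x[x]) * (v[y] - v[x]) ** 2
                         for (a, y), w in mu.items() if a == x and y in ys))

assert abs(laplacian(0) + 0.2) < 1e-12          # (2/2)(0.8 - 1.0) = -0.2
omega = [1, 2]
for x in omega:
    assert grad_norm(x, omega) <= grad_norm(x)  # discrete analogue of (5-4)
print("ok")
```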
Noticing $\\Omega'_k$ is a subset of $V$, we have $\\sum\\limits_{y\\in \\Omega'_k}\\mu_{xy}\\leq\\sum\\limits_{y\\in V}\\mu_{xy}=\\mu(x)$.\n\t\n\tSubstituting (\\ref{5-3}) and (\\ref{5-4}) into (\\ref{5-2}), we obtain\n\t\\begin{align}\\label{5-5}\n\t\t\\sum\\limits_{y\\in \\Omega'_k} \\frac{\\mu_{xy}}{\\mu(x)}(v(y)-v(x))+|\\nabla_{\\Omega'_k} v(x)|^{\\lambda}\\leq0, \\quad \\mbox{on $\\Omega'_k$}.\n\t\\end{align}\n\t\n\tMultiplying (\\ref{5-5}) by $\\mu(x)h_n(x)^z$, with $h_n$ as in (\\ref{hn}), and summing up over $x \\in \\Omega'_k$, we have\n\t\\begin{align}\\label{5-6}\n\t\t\\sum\\limits_{x \\in \\Omega'_k}\\mu(x)|\\nabla_{\\Omega'_k} v(x)|^{\\lambda}h_n(x)^z\n\t\t&\\leq -\\sum\\limits_{x,y \\in \\Omega'_k} \\mu_{xy} h_n(x)^z(\\nabla_{xy}v) \\nonumber \\\\&\n\t\t=\\frac{1}{2}\\sum\\limits_{x,y \\in \\Omega'_k} \\mu_{xy} (\\nabla_{xy}h_n^z)(\\nabla_{xy}v) \\nonumber \\\\&\n\t\t=\\frac{z}{2}\\sum\\limits_{x,y \\in \\Omega'_k} \\mu_{xy} \\eta^{z-1}(\\nabla_{xy}h_n)(\\nabla_{xy}v),\n\t\\end{align}\n\twhere $ \\eta>0$ is between $h_n(x)$ and $h_n(y)$.\n\t\n\tNoting that $|\\nabla_{\\Omega'_k} v(x)|^2=\\sum\\limits_{y\\in \\Omega'_k}\\frac{\\mu_{xy}}{2\\mu(x)}(\\nabla_{xy}v)^2$, and\n\t$ \\frac{1}{2p_0} \\leq \\frac{\\mu_{xy}}{2\\mu(x)} \\leq \\frac{1}{2}$, we derive\n\t\\begin{align}\\label{grd2}\n\t\t|\\nabla_{xy}v| \\leq \\sqrt{2p_0}|\\nabla_{\\Omega'_k} v(x)|\\qquad \\mbox{for any $y\\sim x$.}\n\t\\end{align}\n\t\n\tCombining (\\ref{grd2}) and $ \\eta^{z-1} \\leq \\max(h_n(x)^{z-1},h_n(y)^{z-1})\\leq h_n(x)^{z-1}+h_n(y)^{z-1} $ with (\\ref{5-6}), and applying H\\\"{o}lder's inequality, we obtain\n\t\\begin{align}\n\t\t&\\sum\\limits_{x \\in \\Omega'_k}\\mu(x)|\\nabla_{\\Omega'_k} v(x)|^{\\lambda}h_n(x)^z\n\t\t\\nonumber \\\\&\n\t\t\\leq \\frac{z}{2}\\sum\\limits_{x,y \\in \\Omega'_k} \\mu_{xy} (h_n(x)^{z-1}+h_n(y)^{z-1})(\\nabla_{xy}h_n)(\\nabla_{xy}v) \\nonumber \\\\&\n\t\t=z\\sum\\limits_{x,y \\in \\Omega'_k} \\mu_{xy} h_n(x)^{z-1}(\\nabla_{xy}h_n)(\\nabla_{xy}v)\t\t\\nonumber 
\\\\&\n\t\t\\leq z\\sum\\limits_{x,y \\in \\Omega'_k} \\mu_{xy} h_n(x)^{z-1}|\\nabla_{xy}h_n||\\nabla_{xy}v| \\nonumber \\\\&\n\t\t\\leq\\sqrt{2p_0}z\\sum\\limits_{x,y \\in \\Omega'_k} \\mu_{xy} h_n(x)^{z-1}|\\nabla_{xy}h_n||\\nabla_{\\Omega'_k} v(x)| \\nonumber \\\\&\n\t\t\\leq \\sqrt{2p_0}z\\left( \\sum\\limits_{x,y \\in \\Omega'_k} \\mu_{xy}|\\nabla_{\\Omega'_k} v(x)|^{\\lambda}h_n(x)^{z}\\right)^{\\frac{1}{\\lambda}}\n\t\t\\left( \\sum\\limits_{x,y \\in \\Omega'_k} \\mu_{xy}|\\nabla_{xy}h_n|^{z}\\right)^{\\frac{\\lambda-1}{\\lambda}}\\nonumber \\\\&\n\t\t\t\\leq \\sqrt{2p_0}z\\left( \\sum\\limits_{x \\in \\Omega'_k} \\mu(x)|\\nabla_{\\Omega'_k} v(x)|^{\\lambda}h_n(x)^{z}\\right)^{\\frac{1}{\\lambda}}\n\t\t\\left( \\sum\\limits_{x,y \\in \\Omega'_k} \\mu_{xy}|\\nabla_{xy}h_n|^{z}\\right)^{\\frac{\\lambda-1}{\\lambda}},\n\t\\end{align}\n\twhere we take\n\t$$z=\\dfrac{\\lambda}{\\lambda-1}.$$\n\t\n\tBy the boundedness of $\\sum\\limits_{x \\in \\Omega'_k}\\mu(x)|\\nabla_{\\Omega'_k} v(x)|^{\\lambda}h_n^z$, and $h_n=1$ in $ B(o,n) $, and (\\ref{vol-5}), we obtain\n\t\\begin{align}\\label{5-7}\n\t\t&\\sum\\limits_{x \\in \\Omega'_k\\cap B(o,n) }\\mu(x)|\\nabla_{\\Omega'_k}v(x)|^{\\lambda}\n\t\t\\nonumber \\\\&\n\t\t\\leq (\\sqrt{2p_0}z)^z \\sum\\limits_{x,y \\in \\Omega'_k} \\mu_{xy}|\\nabla_{xy}h_n|^{z}\n\t\t\\nonumber \\\\&\n\t\t\\leq (\\frac{\\sqrt{2p_0}z}{n})^z \\mu(B(o,2n))\n\t\t\\nonumber \\\\&\n\t\t\\lesssim (\\frac{\\sqrt{2p_0}z}{n})^z e^{2\\kappa n},\n\t\\end{align}\n\twhere we have used that $|\\nabla_{xy}h_n|\\leq \\frac{1}{n}$, and $|\\nabla_{xy}h_n|=0$ for $x,y\\in B(o,2n)^c$.\n\tSet\n\t\\begin{align}\\label{5-8}\n\t\tz=\\theta n,\n\t\\end{align}\n\twhere $\\theta$ is a fixed positive constant to be determined later. 
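With $z=\theta n$, the right-hand side of (\ref{5-7}) behaves like $e^{n(2\kappa+\theta\ln(\sqrt{2p_0}\theta))}$, so the best choice of $\theta$ minimizes $\theta\ln(\sqrt{2p_0}\theta)$. A numerical sanity check (with an illustrative value of $p_0$, not from the text) that the minimizer is $\theta=\frac{1}{\sqrt{2p_0}e}$, with minimum value $-\frac{1}{\sqrt{2p_0}e}$:

```python
import math

p0 = 3.0                          # illustrative value of p_0
c = math.sqrt(2 * p0)

def f(theta):
    # the exponent coefficient theta * ln(sqrt(2 p0) * theta)
    return theta * math.log(c * theta)

theta_star = 1.0 / (c * math.e)   # claimed minimizer
grid = [k / 100000 for k in range(1, 200000)]
best = min(grid, key=f)
assert abs(best - theta_star) < 1e-3
assert abs(f(theta_star) + 1.0 / (c * math.e)) < 1e-12
print("ok")
```

Accordingly, $2\kappa+\theta\ln(\sqrt{2p_0}\theta)<0$ is solvable in $\theta$ precisely when $\kappa<\frac{1}{2\sqrt{2p_0}e}$.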
It is easy to see that $\\lambda \\to 1_+$ is equivalent to $n \\to \\infty$.\n\t\n\tNow let $\\lambda \\to 1_+$ in (\\ref{5-7}); by (\\ref{5-8}), we obtain\n\t\\begin{align}\\label{5-9}\n\t\t\\sum\\limits_{x \\in \\Omega'_k }\\mu(x)|\\nabla_{\\Omega'_k}v(x)|&\\leq \\lim_{\\lambda \\to 1_+}\\sum\\limits_{x \\in \\Omega'_k\\cap B(o,n) }\\mu(x)|\\nabla_{\\Omega'_k}v(x)|^{\\lambda}\n\t\t\\nonumber \\\\&\n\t\t\\lesssim \\lim_{n\\to \\infty} (\\frac{\\sqrt{2p_0}z}{n})^z e^{2\\kappa n}\\nonumber \\\\&\n\t\t\\asymp \\lim_{n\\to \\infty} e^{n(2\\kappa+\\theta\\ln(\\sqrt{2p_0}\\theta))}.\n\t\\end{align}\n\tWe need\n\t\\begin{align}\\label{5-10}\n\t\t2\\kappa+\\theta\\ln(\\sqrt{2p_0}\\theta)<0,\n\t\\end{align}\n\twhich is equivalent to\n\t$$e^{2\\kappa}<\\frac{1}{(\\sqrt{2p_0}\\theta)^\\theta}.$$\n\t\n\tThe function $\\frac{1}{(\\sqrt{2p_0}\\theta)^\\theta}$ attains its maximum at $\\theta=\\frac{1}{\\sqrt{2p_0}e}$. Hence,\n\tif $\\kappa$ satisfies\n\t\\begin{align}\n\t\t0<\\kappa<\\kappa_0=\\frac{1}{2\\sqrt{2p_0}e},\n\t\\end{align}\n\tthere always exists $\\theta>0$ such that (\\ref{5-10}) holds.\n\t\n\tUnder the above choice of $\\kappa$ and $\\theta$, from (\\ref{5-9}), we obtain\n\t\\begin{align}\\label{5-11}\n\t\t\\sum\\limits_{x \\in \\Omega'_k }\\mu(x)|\\nabla_{\\Omega'_k}v(x)|=0.\n\t\\end{align}\n\t\n\tHence, for any $x\\in \\Omega'_k$, $|\\nabla_{\\Omega'_k}v(x)|=0$. We claim that, moreover, $|\\nabla v(x)|=0$ for any $x\\in \\Omega'_k$; then (\\ref{5-11}) contradicts the definition of $\\Omega'_k$.\n\t\n\tNow assume there exists some $x_0\\in \\Omega'_k$ satisfying $ |\\nabla_{\\Omega'_k}v(x_0)|=0 $ but $|\\nabla v(x_0)|\\not=0$.\n\t\n\tWe define $ U=\\{y| y\\sim x_0 \\mbox{ and } u(y) \\not= u(x_0)\\}$; it is easy to see $U\\subset (\\Omega'_k)^c$ since\n\t$ |\\nabla_{\\Omega'_k}v(x_0)|=0 $. For any point $y\\in U$, we derive $v(y)>v(x_0)$. Otherwise $v(y)<v(x_0)<1$; combining this with $|\\nabla v(y)|>0$,\n\twe derive that $y\\in \\Omega'_k$, which contradicts $y\\in U$.\n\t\n\tNow we consider the Laplacian of $v(x_0)$. 
Noticing $u(y)= u(x_0)$ for any $y\\sim x_0$ with $y\\notin U$, we can obtain\n\t\\begin{align*}\n\t\t&\\Delta v(x_0)=\\sum\\limits_{y\\in V} \\frac{\\mu_{x_0y}}{\\mu(x_0)}(v(y)-v(x_0))\\nonumber\\\\&\n\t\t=\\sum\\limits_{y\\in U} \\frac{\\mu_{x_0y}}{\\mu(x_0)}(v(y)-v(x_0))+\n\t\t\\sum\\limits_{y\\in U^c} \\frac{\\mu_{x_0y}}{\\mu(x_0)}(v(y)-v(x_0))\\nonumber\\\\&\n\t\t=\\sum\\limits_{y\\in U} \\frac{\\mu_{x_0y}}{\\mu(x_0)}(v(y)-v(x_0))>0,\n\t\\end{align*}\n\twhich contradicts (\\ref{5-2}).\n\t\n\tIn case (V-2), since $p+q=1$ and $q<0$, from (\\ref{5-ieq}), we obtain\n\t\\begin{align}\\label{5-2-1}\n\t\tu(x)\\leq|\\nabla u(x)|.\n\t\\end{align}\n\t\n\tDefine $\\Omega_k=\\{ x\\in V|0<u(x)<k\\}$. Since $u$ is nontrivial, there exists some $k_0>0$ such that $\\mu(\\Omega_k)>0 $ for any $k>k_0$. Now fix $k$, let $v=\\frac{u}{k}$, which satisfies\n\t$$ \\Delta v+v^p\\left|\\nabla v\\right|^q\\leq0, \\quad\\mbox{ on $V$}.$$\n\tIt is easy to see that $0<v\\leq1$ on $\\Omega_k$. Take $p^{\\prime}=p+\\epsilon$, where $\\epsilon>0$ is to be determined later.\n\t\t\n\tSince $(p',q):=(p+\\epsilon,q)\\in K_2$, $p'+q=1+\\epsilon$, we fix $0<t\\leq1$ and seek $\\theta>0$ such that $2\\kappa+\\theta\\ln(\\frac{2\\theta C_1}{t})<0$.\n\t\n\tNoticing $p+q=1$, since $\\frac{p+t(q-1)}{p+q-t}=1-q$, we have\n\t$$C_1=\\frac{(\\sqrt{2p_0}(1+p^t_0))^{\\frac{p+ t(q-1)}{p + q-t}+1} (p_0^{t+1})^{\\frac{p + t(q-1)}{p+ q-t}}}{4}=\\frac{(\\sqrt{2p_0}(1+p^t_0))^{2-q} (p_0^{t+1})^{1-q}}{4}.$$\n\t\n\tThe function $\\frac{t}{4C_1e}=\\frac{t}{(e\\sqrt{2p_0}(1+p^t_0))^{2-q} (p_0^{t+1})^{1-q}}$, $t\\in[0,1]$, attains its maximum at $t=1$. Then for any $0<\\kappa<\\kappa_0:=\\frac{1}{(e\\sqrt{2p_0}(1+p_0))^{2-q} (p_0^{2})^{1-q}}$, there exist some $0<t\\leq 1$ and $\\theta>0$ such that $2\\kappa+\\theta\\ln(\\frac{2\\theta C_1}{t})<0$. Repeating the argument of case (V-1), we derive a contradiction, which completes the proof of Theorem \\ref{thm1} (V).\n\\end{proof}\n\n\\begin{proof}[\\rm\\textbf{Proof of Theorem \\ref{thm1-1}}]\n\tSuppose that $u$ is a nontrivial positive solution of (\\ref{ieq}). Then there exists some $x_0\\in V$ with $|\\nabla u(x_0)|\\neq 0$, and hence $\\Delta u(x_0)<0$ by (\\ref{ieq}). Taking $x_1\\in V$ such that $u(x_1)=\\min\\limits_{y \\sim x_0}u(y)$, we have\n\t$$u(x_0)> u(x_1), \\quad|\\nabla u(x_1)|\\neq 0,\\quad \\text{and}\\quad \\Delta u(x_1)<0.$$\n\tInductively, for $x_i$, we can find $x_{i+1}\\in V$ such that $u(x_{i+1})=\\min\\limits_{y \\sim x_i}u(y)$ and\n\t$$u(x_i)> u(x_{i+1}),\\quad|\\nabla u(x_{i+1})|\\neq 0,\\quad\\text{and}\\quad \\Delta u(x_{i+1})<0. 
$$\n\tCombining $u(x_{i+1})< u(x_i)<\\cdots< u(x_0)$ with $u(x)>0$, by the monotone convergence theorem, we obtain that there exists a nonnegative constant $u_0$ such that\n\t\\begin{align}\\label{6-lim}\n\t\t\\lim\\limits_{i\\to \\infty}u(x_i)=u_0.\n\t\\end{align}\n\tSince\n\\begin{equation}\\label{6-lap}\n\t0>\\Delta u(x_i)=\\sum\\limits_{y \\sim x_i} \\frac{\\mu_{x_iy}}{\\mu(x_i)}u(y)-u(x_i)\\geq u(x_{i+1})-u(x_i),\n\\end{equation}\n\tby the squeeze theorem, we know\n\t\\begin{align}\\label{6-lim-1}\n\t\t\\lim_{i\\to \\infty}\\Delta u(x_i)=0.\n\t\\end{align}\n\tIt follows from (\\ref{ieq}) that\n\t\\begin{align}\\label{6-lim-2}\n\t\t\\lim_{i\\to \\infty}u(x_i)^p|\\nabla u(x_i)|^q=0.\n\t\\end{align}\n\tApplying Jensen's inequality, we obtain\n\t\\begin{align*}\n\t\t|\\Delta u(x_i)|^2&=\\left| \\sum\\limits_{y \\sim x_i}\\frac{\\mu_{x_iy}}{\\mu(x_i)}(u(y)-u(x_i))\\right|^2\\nonumber \\\\&\n\t\t\\leq\\left( \\sum\\limits_{y \\sim x_i}\\frac{\\mu_{x_iy}}{\\mu(x_i)}|u(y)-u(x_i)|\\right)^2\\nonumber \\\\&\n\t\t\\leq \\sum\\limits_{y \\sim x_i}\\frac{\\mu_{x_iy}}{\\mu(x_i)}(u(y)-u(x_i))^2=2|\\nabla u(x_i)|^2.\n\t\\end{align*}\n\nConsequently\n\t\\begin{align}\\label{6-ieq}\n\t\t|\\Delta u(x_i)|\\leq\\sqrt{2}|\\nabla u(x_i)|.\n\t\\end{align}\n\tCombining (\\ref{ieq}) with (\\ref{6-ieq}), we derive\n\t\\begin{align*}\n\t\tu(x_i)^p|\\nabla u(x_i)|^q\\leq -\\Delta u(x_i)\\leq|\\Delta u(x_i)|\\leq\\sqrt{2}|\\nabla u(x_i)|.\n\t\\end{align*}\n\tNoticing $u(x_i)>0$ and $|\\nabla u(x_i)|\\not=0$, we obtain\n\t\\begin{align}\\label{6-ieq-2}\n\t\t1\\leq\t\\sqrt{2}u(x_i)^{-p}|\\nabla u(x_i)|^{1-q}.\n\t\\end{align}\n\nLet us finish the proof by dividing into the following cases:\n\t\\begin{enumerate}\n\t\\item[(1).]{$q=0$, $0<p<1$;}\n\t\\item[(2).]{$q=0$, $p=0$;}\n\t\\item[(3).]{$q=0$, $p<0$;}\n\t\\item[(4).]{$0<q<1$, $p+q<1$;}\n\t\\item[(5).]{$q<0$, $p>0$;}\n\t\\item[(6).]{$q<0$, $p=0$;}\n\t\\item[(7).]{$q<0$, $p<0$.}\n\t\\end{enumerate}\n\nIn case (1), since $q=0$ and $p>0$, (\\ref{6-lim-2}) gives $\\lim\\limits_{i\\to \\infty}u(x_i)^p=0$, hence $u_0=0$. Then for $i$ large enough, $u(x_i)<1$, and combining (\\ref{ieq}) with (\\ref{6-lap}) yields\n$$u(x_i)\\leq u(x_i)^p\\leq -\\Delta u(x_i)\\leq u(x_i)-u(x_{i+1}),$$\nwhich implies $u(x_{i+1})\\leq0$, contradicting $u(x)>0$.\n\nIn case (2), by (\\ref{ieq}), we have\n\\begin{align*}\n\t\\Delta u(x_i)+1\\leq 0,\n\\end{align*}\nwhich contradicts (\\ref{6-lim-1}).\n\nIn case (3), since $q=0$, $p<0$, we derive a contradiction from (\\ref{6-lim})-(\\ref{6-lim-2}).\n\nIn case (4), we 
divide the proof into two cases:\n\\begin{enumerate}\n\t\\item[(4-1).]{there exists some $k_0$, such that when $i>k_0$, $|\\nabla u(x_i)|\\leq \\lambda(u(x_i)-u(x_{i+1}))$, where $\\lambda=u(x_0)^{\\frac{1-p-q}{q}}$;}\n\t\\item[(4-2).]{there exists a sequence $\\{i_k\\}$, such that $i_k\\to\\infty$ as $k\\to \\infty$ and $|\\nabla u(x_{i_k})|> \\lambda(u(x_{i_k})-u(x_{i_k+1}))$.}\n\\end{enumerate}\n\nIn case (4-1), since $1-q>0$, by substituting $|\\nabla u(x_i)|\\leq \\lambda(u(x_i)-u(x_{i+1}))$ into (\\ref{6-ieq-2}), we obtain\n$$1\\leq \\sqrt{2}\\lambda^{1-q}u(x_i)^{-p}(u(x_i)-u(x_{i+1}))^{1-q},$$\nwhich implies by (\\ref{6-lim}) that $p>0$ and $\\lim\\limits_{i\\to \\infty}u(x_i)=u_0=0$.\n\nNoticing $u(x_i)-u(x_{i+1})<u(x_i)$, we further obtain\n$$1\\leq \\sqrt{2}\\lambda^{1-q}u(x_i)^{1-p-q},$$\nwhose right-hand side tends to $0$ as $i\\to \\infty$ since $1-p-q>0$, a contradiction.\n\nIn case (4-2), combining (\\ref{ieq}) and (\\ref{6-lap}) with $|\\nabla u(x_{i_k})|> \\lambda (u(x_{i_k})-u(x_{i_k+1}))$, we obtain\n$$\\lambda (u(x_{i_k})-u(x_{i_k+1}))\\leq |\\nabla u(x_{i_k})|\\leq (u(x_{i_k})-u(x_{i_k+1}))^{\\frac{1}{q}}u(x_{i_k})^{-\\frac{p}{q}},$$\nwhich implies\n$$\\lambda\\leq (u(x_{i_k})-u(x_{i_k+1}))^{\\frac{1}{q}-1}u(x_{i_k})^{-\\frac{p}{q}}\\leq u(x_{i_k})^{\\frac{1-p-q}{q}} .$$\nNoting $\\lambda=u(x_0)^{\\frac{1-p-q}{q}}$, $u(x_{i_k})<u(x_0)$ and $\\frac{1-p-q}{q}>0$, we have $u(x_{i_k})^{\\frac{1-p-q}{q}}<\\lambda$, a contradiction. Hence we complete the proof in case (4).\n\nBefore treating cases (5)-(7), let us first establish (\\ref{6-lim-3}) below. Since\n\\begin{align*}\n\t0>\\Delta u(x_i)=\\sum_{y\\sim x_i\\atop u(y)>u(x_i)}\\frac{\\mu_{x_iy}}{\\mu(x_i)}(u(y)-u(x_i))+\\sum_{y\\sim x_i\\atop u(y)\\leq u(x_i)}\\frac{\\mu_{x_iy}}{\\mu(x_i)}(u(y)-u(x_i)),\n\\end{align*}\nit follows that\n\\begin{align*}\n\t\\sum_{y\\sim x_i\\atop u(y)>u(x_i)}\\frac{\\mu_{x_iy}}{\\mu(x_i)}(u(y)-u(x_i))\\leq\\sum_{y\\sim x_i\\atop u(y)\\leq u(x_i)}\\frac{\\mu_{x_iy}}{\\mu(x_i)}(u(x_i)-u(y)).\n\\end{align*}\nHence, for any $y_0\\sim x_i$ with $u(y_0)>u(x_i)$, using the $(p_0)$ condition, we have\n\\begin{align}\\label{6-5}\n\t\\frac{1}{p_0}(u(y_0)-u(x_i))&\\leq\\frac{\\mu_{x_iy_0}}{\\mu(x_i)}(u(y_0)-u(x_i))\\nonumber\\\\\n\t&\\leq\t\\sum_{y\\sim x_i\\atop u(y)>u(x_i)}\\frac{\\mu_{x_iy}}{\\mu(x_i)}(u(y)-u(x_i))\\nonumber\\\\\n\t&\\leq \\sum\\limits_{y \\sim x_i \\atop u(y)\\leq u(x_i)}\\frac{\\mu_{x_iy}}{\\mu(x_i)}(u(x_i)-u(y)) \\leq u(x_i)-u(x_{i+1}).\n\\end{align}\nHere we have also used that 
$u(x_{i+1})=\\min\\limits_{y\\sim x_i}u(y)\\leq u(y)$ for any $y\\sim x_i$.\n\nCombining (\\ref{6-5}) with the definition of $|\\nabla u(x_i)|$, we obtain\n\\begin{align*}\n\t|\\nabla u(x_i)|^2 &= \\sum\\limits_{y \\sim x_i \\atop u(y)> u(x_i)}\\frac{\\mu_{x_iy}}{2\\mu(x_i)}(u(y)-u(x_i))^2+\\sum\\limits_{y \\sim x_i \\atop u(y)\\leq u(x_i)}\\frac{\\mu_{x_iy}}{2\\mu(x_i)}(u(y)-u(x_i))^2\n\t\\\\ &\n\t\\leq \\frac{p_0^2}{2}(u(x_{i+1})-u(x_i))^2+\\frac{1}{2}(u(x_{i+1})-u(x_i))^2\n\t\\\\ &\n\t= \\frac{1+p_0^2}{2}(u(x_{i+1})-u(x_i))^2,\n\\end{align*}\nwhich is\n\\begin{align}\\label{6-6}\n\t|\\nabla u(x_i)|\\leq \\sqrt{\\frac{1+p_0^2}{2}}(u(x_i)-u(x_{i+1})).\n\\end{align}\nIt follows that\n\\begin{align}\\label{6-lim-3}\n\t\\lim_{i\\to \\infty}|\\nabla u(x_i)|=0.\n\\end{align}\n\n\tIn case (5), noticing $p>0$, $q< 0$ and combining (\\ref{6-lim}), (\\ref{6-lim-2}) and (\\ref{6-lim-3}), we derive\n$$\\lim_{i\\to \\infty}u(x_i)=0.$$\n\tBy (\\ref{ieq}), we have\n\t\\begin{align*}\n\t\t0&\\geq\\sum\\limits_{y \\sim x_i} \\frac{\\mu_{x_iy}}{\\mu(x_i)}(u(y)-u(x_i))+u(x_i)^{p}|\\nabla u(x_i)|^q\\\\&\n\t\t=\\sum\\limits_{y \\sim x_i} \\frac{\\mu_{x_iy}}{\\mu(x_i)}u(y)-u(x_i)(1-u(x_i)^{p-1}|\\nabla u(x_i)|^q),\n\t\\end{align*}\n\twhich implies\n\t\\begin{align}\\label{6-0}\n\t\t1-u(x_i)^{p-1}|\\nabla u(x_i)|^q\\geq 0.\n\t\\end{align}\n\tCombining (\\ref{6-0}) with (\\ref{6-6}), we obtain\n\t\\begin{align}\\label{6-7}\n\t\t1\\geq u(x_i)^{p-1}|\\nabla u(x_i)|^q\\geq (\\frac{p_0^2+1}{2})^{\\frac{q}{2}}u(x_i)^{p-1}(u(x_i)-u(x_{i+1}))^q.\n\t\\end{align}\n Since $q<0$, we have\n \\begin{align}\\label{6-8}\n \t1\\geq (\\frac{p_0^2+1}{2})^{\\frac{q}{2}}u(x_i)^{p-1}(u(x_i)-u(x_{i+1}))^q\\geq(\\frac{p_0^2+1}{2})^{\\frac{q}{2}} u(x_i)^{p+q-1}.\n \\end{align}\n\t Using $p<1-q$ and letting $i\\to \\infty$ in (\\ref{6-8}), we obtain a contradiction with $\\lim\\limits_{i\\to \\infty}u(x_i)=0$.\n\t\n\t\n\tIn case (6), noticing $p=0$, $q<0$, we obtain a contradiction by (\\ref{6-lim-2}) and (\\ref{6-lim-3}).\n\t\n\tIn case (7), since $p<0$, $q< 0$, substituting (\\ref{6-lim}) and (\\ref{6-lim-3}) into\n\t(\\ref{6-lim-2}), we derive a contradiction.\n\tHence, we complete the proof 
of Theorem \\ref{thm1-1}.\n\\end{proof}\n\n\\section{Proof of Theorem \\ref{thm2}}\n\nBefore presenting the proof of Theorem \\ref{thm2}, for our convenience, let us introduce some notations. For fixed integer $n\\geq0$, let us denote by $D_n: = \\{x \\in T_N : d(o, x) = n\\}$ the collection of all the vertices with distance $n$ from $o$, and denote by $E_n$ the collection of all the edges from vertices in $D_n$ to vertices in $D_{n+1}$.\n\n\\begin{proof}[\\rm{Proof of Theorem \\ref{thm2} (I)}]\n\tWhen $(p, q)\\in G_1$, let us take $\\mu$ and $u$ as follows\n\t\\begin{align}\\label{mu1}\n\t\t&\\mu_{xy}=\\mu_n=\\frac{(n+n_0)^{\\frac{p+1}{p+q-1}}(\\ln{(n+n_0)})^{\\frac{1}{p+q-1}+\\epsilon}}{(N-1)^n},\\quad \\mbox{for any $(x,y)\\in E_n$, $n\\geq0$}.\n\t\\end{align}\n\t\\begin{align}\\label{u1}\n\t\t&u(x)=u_n=\\frac{\\delta}{(n+n_0)^{\\frac{2-q}{p+q-1}}(\\ln{(n+n_0)})^{\\frac{1}{p+q-1}}},\n\t\t\\quad \\mbox{for any $x\\in D_n$, $n\\geq0$}.\n\t\\end{align}\n\twhere $n_0 \\geq 2$ and $\\delta > 0$ are to be chosen later.\n\t\nFirst, under the above choice of $\\mu$,\nfor $n \\geq 2$, we obtain\n\t$$\\mu(B(o,n))=\\sum\\limits^n_{k=0}\\mu(D_k)\\asymp \\sum\\limits^n_{k=0}(N-1)^k \\mu_k\\asymp n^{\\frac{2p+q}{p+q-1}}(\\ln{n})^{\\frac{1}{p+q-1}+\\epsilon},$$\nwhich implies that (\\ref{e-vol-1}) holds.\n\nNext we verify that (\\ref{ieq}) holds for the above choice of $\\mu$ and $u$, namely the following two inequalities hold:\n\t\\begin{align}\\label{en0}\n\t\tu_1-u_0+u_0^p\\left[\\frac{(u_0-u_1)^2}{2}\\right]^{\\frac{q}{2}} \\leq 0,\n\t\\end{align}\nand\n\t\\begin{align}\\label{en}\n\t\t&\\frac{(N-1)\\mu_n u_{n+1}+\\mu_{n-1} u_{n-1}}{(N-1)\\mu_n+\\mu_{n-1}}-u_n\\nonumber\\\\\n&+u_n^p\\left[\\frac{(N-1)\\mu_n(u_{n+1}-u_n)^2+\\mu_{n-1}(u_{n-1}-u_n)^2}{2(N-1)\\mu_n+2\\mu_{n-1}}\\right]^{\\frac{q}{2}} \\leq 0.\n\t\\end{align}\nFor brevity, we denote\n$$\\lambda=\\frac{p+1}{p+q-1},\\quad \\beta=\\frac{1}{p+q-1}, \\quad\\sigma=\\frac{2-q}{p+q-1}.$$\nNow let us deal with cases of 
$n = 0$ and $n\\geq1$.\n\t\n\t\\textbf{Case of $n = 0$}. Combining with (\\ref{u1}) and (\\ref{mu1}), then (\\ref{en0}) is equivalent to\n\t\\begin{align*}\n\t\t&\\frac{\\delta}{(n_0+1)^{\\sigma}(\\ln{(n_0+1)})^{\\beta}}-\\frac{\\delta}{n_0^{\\sigma}(\\ln{n_0})^{\\beta}}+\\left( \\frac{\\delta}{n_0^{\\sigma}(\\ln{n_0})^{\\beta}}\\right) ^p \\nonumber \\\\&\n\t\t\\times \\left[ \\frac{1}{2}\\left( \\frac{\\delta}{n_0^{\\sigma}(\\ln{n_0})^{\\beta}}-\\frac{\\delta}{(n_0+1)^{\\sigma}(\\ln{(n_0+1)})^{\\beta}}\\right) ^2\\right] ^{\\frac{q}{2}} \\leq 0,\n\t\\end{align*}\n\tthe above is satisfied if we choose $\\delta \\leq \\delta_0$ with\n\t\\begin{align*}\n\t\t\\delta_0&= 2^{\\frac{q}{2(p+q-1)}} \\left[\\frac{1}{n_0^{\\sigma}(\\ln{n_0})^{\\beta}}-\\frac{1}{(n_0+1)^{\\sigma}(\\ln{(n_0+1)})^{\\beta}} \\right]^{\\frac{1}{p+q-1}} \\\\ &\n\t\t\\times\\left( n_0^{\\sigma}(\\ln{n_0})^{\\beta}\\right)^{\\frac{p}{p+q-1}} \\left[ \\frac{1}{n_0^{\\sigma}(\\ln{n_0})^{\\beta}}-\\frac{1}{(n_0+1)^{\\sigma}(\\ln{(n_0+1)})^{\\beta}}\\right]^{-\\frac{q}{p+q-1}}.\n\t\\end{align*}\n\t\n\t\\textbf{Case of $n\\geq1$}. 
Combining with (\\ref{u1}) and (\\ref{mu1}), then (\\ref{en}) is equivalent to\n\t\\begin{align*}\n\t\t&\\frac{\\delta\\frac{(n+n_0)^{\\lambda}(\\ln{(n+n_0)})^{\\beta+\\epsilon}}{(n+n_0+1)^{\\sigma}(\\ln{(n+n_0+1)})^{\\beta}}+\\delta\\frac{(n+n_0-1)^{\\lambda}(\\ln{(n+n_0-1)})^{\\beta+\\epsilon}}{(n+n_0-1)^{\\sigma}(\\ln{(n+n_0-1)})^{\\beta}}}{(n+n_0)^{\\lambda}(\\ln{(n+n_0)})^{\\beta+\\epsilon}+(n+n_0-1)^{\\lambda}(\\ln{(n+n_0-1)})^{\\beta+\\epsilon}}\t\\nonumber \\\\ &\n\t\t-\\frac{\\delta}{(n+n_0)^{\\sigma}(\\ln{(n+n_0)})^{\\beta}}+\\left(\\frac{\\delta}{(n+n_0)^{\\sigma}(\\ln{(n+n_0)})^{\\beta}} \\right)^p \\nonumber \\\\ &\n\t\t\\times\\delta^q\\left[ \\frac{(n+n_0)^{\\lambda}(\\ln{(n+n_0)})^{\\beta+\\epsilon}\\left(\\frac{1}{(n+n_0)^{\\sigma}(\\ln{(n+n_0)})^{\\beta}}-\\frac{1}{(n+n_0+1)^{\\sigma}(\\ln{(n+n_0+1)})^{\\beta}} \\right)^2 }{2(n+n_0)^{\\lambda}(\\ln{(n+n_0)})^{\\beta+\\epsilon}+2(n+n_0-1)^{\\lambda}(\\ln{(n+n_0-1)})^{\\beta+\\epsilon}}\\right.\\nonumber \\\\& \\left.\n\t\t+\\frac{(n+n_0-1)^{\\lambda}(\\ln{(n+n_0-1)})^{\\beta+\\epsilon}\\left(\\frac{1}{(n+n_0-1)^{\\sigma}(\\ln{(n+n_0-1)})^{\\beta}}-\\frac{1}{(n+n_0)^{\\sigma}(\\ln{(n+n_0)})^{\\beta}} \\right)^2 }{2(n+n_0)^{\\lambda}(\\ln{(n+n_0)})^{\\beta+\\epsilon}+2(n+n_0-1)^{\\lambda}(\\ln{(n+n_0-1)})^{\\beta+\\epsilon}}\n\t\t\\right]^\\frac{q}{2}\\leq0,\n\t\\end{align*}\n\tThe above is equivalent to the following estimate holds for all $n\\geq1$\n\\begin{align}\\label{3-1-1}\n\\delta^{p+q-1}\\leq \\Lambda_1(n+n_0),\n\\end{align}\nwhere\n\t\\begin{align}\\label{3-1-2}\n\t\t\\Lambda_1(n):=&(n^{\\sigma}(\\ln{n})^{\\beta})^{p-1}\\left[ 1-\\frac{n^{\\sigma}(\\ln{n})^{\\beta}\\left( \\frac{n^{\\lambda}(\\ln{n})^{\\beta+\\epsilon}}{(n+1)^{\\sigma}(\\ln{(n+1)})^{\\beta}}+\\frac{(n-1)^{\\lambda}(\\ln{(n-1)})^{\\beta+\\epsilon}}{(n-1)^{\\sigma}(\\ln{(n-1)})^{\\beta}}\\right) }{n^{\\lambda}(\\ln{n})^{\\beta+\\epsilon}+(n-1)^{\\lambda}(\\ln{(n-1)})^{\\beta+\\epsilon}}\\right] \\nonumber 
\\\\\n&\n\t\t\\times\\left[ \\frac{n^{\\lambda}(\\ln{n})^{\\beta+\\epsilon}\\left(\\frac{1}{n^{\\sigma}(\\ln{n})^{\\beta}}-\\frac{1}{(n+1)^{\\sigma}(\\ln{(n+1)})^{\\beta}} \\right)^2 }{2n^{\\lambda}(\\ln{n})^{\\beta+\\epsilon}+2(n-1)^{\\lambda}(\\ln{(n-1)})^{\\beta+\\epsilon}}\\right.\\nonumber \\\\& \\left.\n\t\t+\\frac{(n-1)^{\\lambda}(\\ln{(n-1)})^{\\beta+\\epsilon}\\left(\\frac{1}{(n-1)^{\\sigma}(\\ln{(n-1)})^{\\beta}}-\\frac{1}{n^{\\sigma}(\\ln{n})^{\\beta}} \\right)^2 }{2n^{\\lambda}(\\ln{n})^{\\beta+\\epsilon}+2(n-1)^{\\lambda}(\\ln{(n-1)})^{\\beta+\\epsilon}}\n\t\t\\right]^{-\\frac{q}{2}}.\n\t\\end{align}\nHence, we have\n\t\\begin{align}\\label{3-1-lim}\n\t\t\\lim_{n\\to \\infty} \\Lambda_1(n)=(\\frac{2}{\\sigma})^{\\frac{q}{2}-1} \\epsilon.\n\t\\end{align}\nThe detailed calculation of (\\ref{3-1-lim}) is as follows: by using the facts that\n$$\\frac{\\ln{(n-1)}}{\\ln{n}}=1-\\frac{1}{n\\ln{n}}-\\frac{1}{2n^2\\ln{n}}+o(\\frac{1}{2n^2\\ln{n}}),$$\nand\n\t$$(1-\\frac{1}{n})^{\\alpha}=1-\\frac{\\alpha}{n}-\\frac{\\alpha(\\alpha-1)}{2n^2}+O(\\frac{1}{n^3}).$$\nand let us first deal with term\n\t\\begin{align}\\label{A1}\n\t\tA_1(n):=&\\left( \\frac{n^{\\lambda}(\\ln{n})^{\\beta+\\epsilon}\\left(\\frac{1}{n^{\\sigma}(\\ln{n})^{\\beta}}-\\frac{1}{(n+1)^{\\sigma}(\\ln{(n+1)})^{\\beta}} \\right)^2 }{2n^{\\lambda}(\\ln{n})^{\\beta+\\epsilon}+2(n-1)^{\\lambda}(\\ln{(n-1)})^{\\beta+\\epsilon}}\\right.\\nonumber \\\\& \\left.\n\t\t+\\frac{(n-1)^{\\lambda}(\\ln{(n-1)})^{\\beta+\\epsilon}\\left(\\frac{1}{(n-1)^{\\sigma}(\\ln{(n-1)})^{\\beta}}-\\frac{1}{n^{\\sigma}(\\ln{n})^{\\beta}} \\right)^2 }{2n^{\\lambda}(\\ln{n})^{\\beta+\\epsilon}+2(n-1)^{\\lambda}(\\ln{(n-1)})^{\\beta+\\epsilon}}\n\t\t\\right)^{-\\frac{q}{2}},\n\t\\end{align}\n\twhich is equal to\n\t\\begin{align}\\label{3-1-3}\n\t\t2^{\\frac{q}{2}}\\left( 
\\frac{1+(1-\\frac{1}{n})^{\\lambda}(\\frac{\\ln{(n-1)}}{\\ln{n}})^{\\beta+\\epsilon}}{\\left(\\frac{1}{n^{\\sigma}(\\ln{n})^{\\beta}}-\\frac{1}{(n+1)^{\\sigma}(\\ln{(n+1)})^{\\beta}} \\right)^2+(1-\\frac{1}{n})^{\\lambda}(\\frac{\\ln{(n-1)}}{\\ln{n}})^{\\beta+\\epsilon}\\left(\\frac{1}{(n-1)^{\\sigma}(\\ln{(n-1)})^{\\beta}}-\\frac{1}{n^{\\sigma}(\\ln{n})^{\\beta}} \\right)^2}\\right) ^{\\frac{q}{2}}.\n\t\\end{align}\n\tNotice that\n\t\\begin{align}\\label{3-1-4}\n\t\t\\frac{1}{n^{\\sigma}(\\ln{n})^{\\beta}}-\\frac{1}{(n+1)^{\\sigma}(\\ln{(n+1)})^{\\beta}} &=\\frac{1}{n^{\\sigma}(\\ln{n})^{\\beta}}\\left( 1-(1-\\frac{1}{n+1})^{\\sigma}(\\frac{\\ln{n}}{\\ln{(n+1)}})^{\\beta}\\right) \\nonumber\\\\&\n\t\t=\\frac{1}{n^{\\sigma}(\\ln{n})^{\\beta}}\\left(\\frac{\\sigma}{n+1} +\\frac{\\beta}{(n+1)\\ln{(n+1)}}+o(\\frac{1}{n\\ln{n}})\\right)\\nonumber\\\\&\n\t\t=\\frac{1}{n^{\\sigma}(\\ln{n})^{\\beta}}\\left(\\frac{\\sigma}{n} +o(\\frac{1}{n})\\right).\n\t\\end{align}\n\tSimilarly\n\t\\begin{align}\\label{3-1-5}\n\t\t\\frac{1}{(n-1)^{\\sigma}(\\ln{(n-1)})^{\\beta}}-\\frac{1}{n^{\\sigma}(\\ln{n})^{\\beta}}\n\t\t=\\frac{1}{n^{\\sigma}(\\ln{n})^{\\beta}}\\left(\\frac{\\sigma}{n} +o(\\frac{1}{n})\\right).\n\t\\end{align}\n\tCombining (\\ref{3-1-4}) and (\\ref{3-1-5}) with (\\ref{3-1-3}), we obtain\n\t\\begin{align}\\label{3-1-6}\n\t\tA_1(n)=(\\frac{2}{\\sigma})^{\\frac{q}{2}}(n^{\\sigma+1}(\\ln{n})^{\\beta})^q\\left( \\frac{2+o(1)}{2+o(1)}\\right)^{\\frac{q}{2}}\n\t\t\\asymp (\\frac{2}{\\sigma})^{\\frac{q}{2}}(n^{\\sigma+1}(\\ln{n})^{\\beta})^q.\n\t\\end{align}\n\tSubstituting (\\ref{3-1-6}) into (\\ref{3-1-2}), we have\n\t\\begin{align*}\n\t\t\\lim_{n\\to\\infty}\\Lambda_1(n) =&\\lim_{n\\to\\infty} (\\frac{2}{\\sigma})^{\\frac{q}{2}}(n^{\\sigma}(\\ln{n})^{\\beta})^{p+q-1}n^q \\left( 1-\\frac{n^{\\sigma}(\\ln{n})^{\\beta}\\left( 
\\frac{n^{\\lambda}(\\ln{n})^{\\beta+\\epsilon}}{(n+1)^{\\sigma}(\\ln{(n+1)})^{\\beta}}+\\frac{(n-1)^{\\lambda}(\\ln{(n-1)})^{\\beta+\\epsilon}}{(n-1)^{\\sigma}(\\ln{(n-1)})^{\\beta}}\\right) }{n^{\\lambda}(\\ln{n})^{\\beta+\\epsilon}+(n-1)^{\\lambda}(\\ln{(n-1)})^{\\beta+\\epsilon}}\\right)\n\t\t\\nonumber\\\\\n\t\t= &\\lim_{n\\to\\infty}(\\frac{2}{\\sigma})^{\\frac{q}{2}}(n^{\\sigma}(\\ln{n})^{\\beta})^{p+q-1}n^q \\left( 1-\\frac{(1-\\frac{1}{n+1})^{\\sigma}(\\frac{\\ln{n}}{\\ln{(n+1)}})^{\\beta}\n+(1-\\frac{1}{n})^{\\lambda-\\sigma}(\\frac{\\ln{(n-1)}}{\\ln{n}})^{\\epsilon}}{1+(1-\\frac{1}{n})^{\\lambda}(\\frac{\\ln{(n-1)}}{\\ln{n}})^{\\beta+\\epsilon}}\\right)\n\\nonumber\\\\\n=&\\lim_{n\\to\\infty}(\\frac{2}{\\sigma})^{\\frac{q}{2}}n^{2}\\ln{n}\n\t\t\\left( 1-\\frac{(1-\\frac{1}{n+1})^{\\sigma}(\\frac{\\ln{n}}{\\ln{(n+1)}})^{\\beta}+(1-\\frac{1}{n})(\\frac{\\ln{(n-1)}}\n{\\ln{n}})^{\\epsilon}}{1+(1-\\frac{1}{n})^{\\lambda}(\\frac{\\ln{(n-1)}}{\\ln{n}})^{\\beta+\\epsilon}}\\right).\n\t\\end{align*}\nHere we have used that $\\lambda=\\frac{p+1}{p+q-1}$, $\\beta=\\frac{1}{p+q-1}$, $\\sigma=\\frac{2-q}{p+q-1}=\\lambda-1$.\n\n\tApplying the Taylor expansion technique, we obtain\n\t\\begin{align*}\n\t\t\\lim_{n\\to\\infty}\\Lambda_1(n)\n\t\t=(\\frac{2}{\\sigma})^{\\frac{q}{2}-1} \\epsilon,\n\t\\end{align*}\n\twhich yields (\\ref{3-1-lim}).\n\tThis implies that there exists some large enough $n_0$ such that for all $n\\geq 0$, the RHS of (\\ref{3-1-1}) is bounded from below by $\\frac{1}{2}(\\frac{2}{\\sigma})^{\\frac{q}{2}-1} \\epsilon$.\n\n\tFinally, take $n_0$ as above and $\\delta=\\min\\{\\delta_0, \\delta_1\\}$, where we choose $\\delta_1=\\left[\\frac{(\\frac{2}{\\sigma})^{\\frac{q}{2}-1} \\epsilon}{2}\\right]^{\\frac{1}{p+q-1}}$.\nIt follows that when $(p, q)\\in G_1$, $u$ is a solution to (\\ref{ieq}). 
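The expansions (\ref{3-1-4})-(\ref{3-1-5}) driving this limit can be sanity-checked numerically; a sketch with illustrative exponents ($p=2$, $q=1$, hence $\sigma=\beta=\frac{1}{2}$, chosen only for the check), confirming that the difference equals $\frac{\sigma}{n}\cdot\frac{1}{n^{\sigma}(\ln n)^{\beta}}$ to leading order:

```python
import math

p, q = 2.0, 1.0                  # illustrative exponents with p + q > 1
beta = 1.0 / (p + q - 1.0)
sigma = (2.0 - q) / (p + q - 1.0)

def w(n):
    # w(n) = 1 / (n^sigma (ln n)^beta)
    return 1.0 / (n ** sigma * math.log(n) ** beta)

n = 10 ** 6
diff = w(n) - w(n + 1)
leading = (sigma / n) * w(n)     # predicted leading term, as in (3-1-4)
assert abs(diff / leading - 1.0) < 0.15   # ratio tends to 1, error O(1/ln n)
print("ok")
```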
Hence we complete the proof for Theorem \\ref{thm2} (I).\n\\end{proof}\n\\vskip1ex\n\n\\begin{proof}[\\rm{Proof of Theorem \\ref{thm2} (II)}]\n\tWhen $(p, q)\\in G_2$, take $\\mu$ and $u$ as follows\n\t\\begin{align}\\label{mu2}\n\t\t&\\mu_{xy}=\\mu_n=\\frac{(n+n_0)(\\ln{(n+n_0)})^{1+\\epsilon}}{(N-1)^n},\\quad \\mbox{for any $(x,y)\\in E_n$, $n\\geq0$},\n\t\\end{align}\n\t\\begin{align}\\label{u2}\n\t\t&u(x)=u_n=\\frac{1}{(\\ln{(n+n_0)})^{\\frac{\\epsilon}{2}}}+1,\n\t\t\\quad \\mbox{for any $x\\in D_n$, $n\\geq0$}.\n\t\\end{align}\n\tFor $n \\geq 2$, it is easy to verify that\n\t$$\\mu(B(o,n))\\asymp n^{2}(\\ln{n})^{1+\\epsilon},$$\nwhich yields (\\ref{e-vol-2}) holds.\n\n\nNow we need to verify that (\\ref{en0}) and (\\ref{en}) holds under the above choice of $\\mu$ and $u$.\n\t\n\t\\textbf{Case of $n = 0$}. Substituting (\\ref{u2}) and (\\ref{mu2}) into (\\ref{en0}), we obtain\n\t\\begin{align}\\label{3-2-1}\n\t\t&\\frac{1}{(\\ln{(n_0+1)})^{\\frac{\\epsilon}{2}}}-\\frac{1}{(\\ln{n_0})^{\\frac{\\epsilon}{2}}}+\\left( \\frac{1}{(\\ln{n_0})^{\\frac{\\epsilon}{2}}}+1\\right) ^p\\nonumber \\\\&\n\t\t\\times \\left[\\frac{1}{2} \\left( \\frac{1}{(\\ln{n_0})^{\\frac{\\epsilon}{2}}}-\\frac{1}{(\\ln{(n_0+1)})^{\\frac{\\epsilon}{2}}}\\right)^2 \\right] ^{\\frac{q}{2}} \\leq 0.\n\t\\end{align}\n\n\tIt is easy to verify that the above holds for $q\\geq 2$ when $n_0$ is large enough.\n\t\n\t\\textbf{Case of $n\\geq1$}. 
By substituting (\\ref{u2}) and (\\ref{mu2}), we know (\\ref{en}) is equivalent to\n\t\\begin{align*}\n\t\t&\\frac{\\frac{(n+n_0)(\\ln{(n+n_0)})^{1+\\epsilon}}{ (\\ln{(n+n_0+1)})^{\\frac{\\epsilon}{2}}}+\\frac{(n+n_0-1)(\\ln{(n+n_0-1)})^{1+\\epsilon}}{(\\ln{(n+n_0-1)})^{\\frac{\\epsilon}{2}}}}{(n+n_0)(\\ln{(n+n_0)})^{1+\\epsilon}+(n+n_0-1)(\\ln{(n+n_0-1)})^{1+\\epsilon}}\t\\nonumber \\\\ &\n\t\t-\\frac{1}{(\\ln{(n+n_0)})^{\\frac{\\epsilon}{2}}}+\\left(\\frac{1}{(\\ln{(n+n_0)})^{\\frac{\\epsilon}{2}}}+1 \\right)^p \\nonumber \\\\ &\n\t\t\\times \\left( \\frac{(n+n_0)(\\ln{(n+n_0)})^{1+\\epsilon}\\left(\\frac{1}{ (\\ln{(n+n_0)})^{\\frac{\\epsilon}{2}}}-\\frac{1}{(\\ln{(n+n_0+1)})^{\\frac{\\epsilon}{2}}} \\right)^2}{2(n+n_0)(\\ln{(n+n_0)})^{1+\\epsilon}+2(n+n_0-1)(\\ln{(n+n_0-1)})^{1+\\epsilon}}\\right.\\nonumber \\\\& \\left.\n\t\t+\\frac{(n+n_0-1)(\\ln{(n+n_0-1)})^{1+\\epsilon}\\left(\\frac{1}{(\\ln{(n+n_0-1)})^{\\frac{\\epsilon}{2}}}-\\frac{1}{(\\ln{(n+n_0)})^{\\frac{\\epsilon}{2}}} \\right)^2}{2(n+n_0)(\\ln{(n+n_0)})^{1+\\epsilon}+2(n+n_0-1)(\\ln{(n+n_0-1)})^{1+\\epsilon}}\n\t\t\\right)^\\frac{q}{2} \\leq0,\n\t\\end{align*}\n\tnamely\n\\begin{equation}\\label{3-2-2}\n1\\leq \\Lambda_2(n+n_0),\n\\end{equation}\nwhere\n\t\\begin{align}\\label{3-2-3}\n\t\t\\Lambda_2(n):=&\\left(1-\\frac{(\\ln{n})^{\\frac{\\epsilon}{2}}\\left( \\frac{n(\\ln{n})^{1+\\epsilon}}{(\\ln{(n+1)})^{\\frac{\\epsilon}{2}}}+\n\t\t\t\\frac{(n-1)(\\ln{(n-1)})^{1+\\epsilon}}{(\\ln{(n-1)})^{\\frac{\\epsilon}{2}}}\\right)}{n(\\ln{n})^{1+\\epsilon}+(n-1)(\\ln{(n-1)})^{1+\\epsilon}}\\right)\n\t\t\\left( \\frac{1}{(\\ln{n})^{\\frac{\\epsilon}{2}}}+1\\right)^{-p} ((\\ln{n})^{\\frac{\\epsilon}{2}})^{-1}\\nonumber \\\\ &\n\t\t\\times\\left(\n\t\t\\frac{2n(\\ln{n})^{1+\\epsilon}+2(n-1)(\\ln{(n-1)})^{1+\\epsilon}}{n(\\ln{n})^{1+\\epsilon}\\left(\\frac{1}{(\\ln{n})^{\\frac{\\epsilon}{2}}}-\\frac{1}{(\\ln{(n+1)})^{\\frac{\\epsilon}{2}}} 
\\right)^2+(n-1)(\\ln{(n-1)})^{1+\\epsilon}\\left(\\frac{1}{(\\ln{(n-1)})^{\\frac{\\epsilon}{2}}}-\\frac{1}{(\\ln{n})^{\\frac{\\epsilon}{2}}} \\right)^2}\n\t\t\\right)^{\\frac{q}{2}}.\n\t\\end{align}\nApplying the same argument as in the proof of Theorem \\ref{thm2} (I), we have\n\t\\begin{align}\\label{3-2-lim}\n\t\t\\lim_{n\\to \\infty} \\Lambda_2(n)=\\infty.\n\t\\end{align}\n\tThis implies that there exists some large $n_0$ such that both (\\ref{3-2-1}) and (\\ref{3-2-2}) hold, hence we complete the proof for Theorem \\ref{thm2} (II).\n\\end{proof}\n\\vskip1ex\n\n\\begin{proof}[\\rm{Proof of Theorem \\ref{thm2} (III)}]\n\tWhen $(p, q)\\in G_3$, take $\\mu$ and $u$ as follows\n\t\\begin{align}\\label{mu3}\n\t\t&\\mu_{xy}=\\mu_n=\\frac{(n+n_0)^{\\frac{1}{q-1}}(\\ln{(n+n_0)})^{\\frac{1}{q-1}+\\epsilon}}{(N-1)^n},\\quad \\mbox{for any $(x,y)\\in E_n$, $n\\geq0$},\n\t\\end{align}\n\t\\begin{align}\\label{u3}\n\t\t&u(x)=u_n=\\frac{\\delta}{(n+n_0)^{\\frac{2-q}{q-1}}(\\ln{(n+n_0)})^{\\frac{1}{q-1}}}+1,\n\t\t\\quad \\mbox{for any $x\\in D_n$, $n\\geq0$},\n\t\\end{align}\n\twhere $n_0 \\geq 2$ and $0<\\delta <1 $ are to be chosen later.\n\t\nUnder the choice of $\\mu$ in (\\ref{mu3}),\nit is easy to verify that (\\ref{e-vol-3}) holds.\n\n For brevity, let us denote $\\beta=\\frac{1}{q-1}$, $\\sigma=\\frac{2-q}{q-1}$.\n\n Let us deal with the cases $n = 0$ and $n \\geq1$.\n\n\t\\textbf{Case of $n = 0$}. 
Substituting (\\ref{u3}) and (\\ref{mu3}) into (\\ref{en0}), we need\n\t\\begin{align*}\n\t\t&\\frac{\\delta}{(n_0+1)^{\\sigma}(\\ln{(n_0+1)})^{\\beta}}-\\frac{\\delta}{n_0^{\\sigma}(\\ln{n_0})^{\\beta}}+\\left( \\frac{\\delta}{n_0^{\\sigma}(\\ln{n_0})^{\\beta}}+1\\right) ^p \\nonumber \\\\&\n\t\t\\times \\left( \\frac{1}{2}\\left( \\frac{\\delta}{n_0^{\\sigma}(\\ln{n_0})^{\\beta}}-\\frac{\\delta}{(n_0+1)^{\\sigma}(\\ln{(n_0+1)})^{\\beta}}\\right) ^2\\right) ^{\\frac{q}{2}} \\leq 0,\n\t\\end{align*}\n\tand such a $\\delta$ can be obtained by taking $\\delta \\leq \\delta_0$, where\n\t\\begin{align*}\n\t\t\\delta_0&= 2^{\\frac{q}{2(q-1)}} \\left(\\frac{1}{n_0^{\\sigma}(\\ln{n_0})^{\\beta}}-\\frac{1}{(n_0+1)^{\\sigma}(\\ln{(n_0+1)})^{\\beta}}\\right)^{\\frac{1}{q-1}} \\\\ &\n\t\t\\times \\left( \\frac{1}{n_0^{\\sigma}(\\ln{n_0})^{\\beta}}+1\\right) ^{\\frac{-p}{q-1}}\\left( \\frac{1}{n_0^{\\sigma}(\\ln{n_0})^{\\beta}}-\\frac{1}{(n_0+1)^{\\sigma}(\\ln{(n_0+1)})^{\\beta}}\\right)^{-\\frac{q}{q-1}},\n\t\\end{align*}\n\twhere we used $\\delta<1$ and $p<0$.\n\t\n\t\\textbf{Case of $n\\geq1$}. 
By substituting (\\ref{u3}) and (\\ref{mu3}), (\\ref{en}) is equivalent to\n\t\\begin{align*} &\\frac{\\delta\\frac{(n+n_0)^{\\beta}(\\ln{(n+n_0)})^{\\beta+\\epsilon}}{(n+n_0+1)^{\\sigma}(\\ln{(n+n_0+1)})^{\\beta}}+\\delta\\frac{(n+n_0-1)^{\\beta}(\\ln{(n+n_0-1)})^{\\beta+\\epsilon}}{(n+n_0-1)^{\\sigma}(\\ln{(n+n_0-1)})^{\\beta}}}{(n+n_0)^{\\beta}(\\ln{(n+n_0)})^{\\beta+\\epsilon}+(n+n_0-1)^{\\beta}(\\ln{(n+n_0-1)})^{\\beta+\\epsilon}}\t\\nonumber \\\\ &\n\t\t-\\frac{\\delta}{(n+n_0)^{\\sigma}(\\ln{(n+n_0)})^{\\beta}}+\\left(\\frac{\\delta}{(n+n_0)^{\\sigma}(\\ln{(n+n_0)})^{\\beta}} +1\\right)^p \\nonumber \\\\ &\n\t\t\\times\\delta^q\\left( \\frac{(n+n_0)^{\\beta}(\\ln{(n+n_0)})^{\\beta+\\epsilon}\\left(\\frac{1}{(n+n_0)^{\\sigma}(\\ln{(n+n_0)})^{\\beta}}-\\frac{1}{(n+n_0+1)^{\\sigma}(\\ln{(n+n_0+1)})^{\\beta}} \\right)^2 }{2(n+n_0)^{\\beta}(\\ln{(n+n_0)})^{\\beta+\\epsilon}+2(n+n_0-1)^{\\beta}(\\ln{(n+n_0-1)})^{\\beta+\\epsilon}}\\right.\\nonumber \\\\& \\left.\t\t+\\frac{(n+n_0-1)^{\\beta}(\\ln{(n+n_0-1)})^{\\beta+\\epsilon}\\left(\\frac{1}{(n+n_0-1)^{\\sigma}(\\ln{(n+n_0-1)})^{\\beta}}-\\frac{1}{(n+n_0)^{\\sigma}(\\ln{(n+n_0)})^{\\beta}} \\right)^2 }{2(n+n_0)^{\\beta}(\\ln{(n+n_0)})^{\\beta+\\epsilon}+2(n+n_0-1)^{\\beta}(\\ln{(n+n_0-1)})^{\\beta+\\epsilon}}\n\t\t\\right)^\\frac{q}{2}\\leq0,\n\t\\end{align*}\n\twhich is equivalent to\n\t\\begin{equation*}\\label{3-3-1}\n\t\t\\delta^{q-1}\\leq \\Lambda_3(n+n_0),\n\t\\end{equation*}\nwhere\n\\begin{eqnarray*}\\label{3-3-2}\n\t\\Lambda_3(n):=&\\left( 1-\\frac{n^{\\sigma}(\\ln{n})^{\\beta}\\left( \\frac{n^{\\beta}(\\ln{n})^{\\beta+\\epsilon}}{(n+1)^{\\sigma}(\\ln{(n+1)})^{\\beta}}+\\frac{(n-1)^{\\beta}(\\ln{(n-1)})^{\\beta+\\epsilon}}{(n-1)^{\\sigma}(\\ln{(n-1)})^{\\beta}}\\right) }{n^{\\beta}(\\ln{n})^{\\beta+\\epsilon}+(n-1)^{\\beta}(\\ln{(n-1)})^{\\beta+\\epsilon}}\\right) \\\\ &\n\t\t\\times \\left(\\frac{1}{(n)^{\\sigma}(\\ln{(n)})^{\\beta}} +1\\right)^{-p}((n)^{\\sigma}(\\ln{(n)})^{\\beta})^{-1} \\\\ 
&\n\t\t\\times\\left( \\frac{n^{\\beta}(\\ln{n})^{\\beta+\\epsilon}\\left(\\frac{1}{n^{\\sigma}(\\ln{n})^{\\beta}}-\\frac{1}{(n+1)^{\\sigma}(\\ln{(n+1)})^{\\beta}} \\right)^2 }{2n^{\\beta}(\\ln{n})^{\\beta+\\epsilon}+2(n-1)^{\\beta}(\\ln{(n-1)})^{\\beta+\\epsilon}}\\right.\\\\& \\left.\n\t\t+\\frac{(n-1)^{\\beta}(\\ln{(n-1)})^{\\beta+\\epsilon}\\left(\\frac{1}{(n-1)^{\\sigma}(\\ln{(n-1)})^{\\beta}}-\\frac{1}{n^{\\sigma}(\\ln{n})^{\\beta}} \\right)^2 }{2n^{\\beta}(\\ln{n})^{\\beta+\\epsilon}+2(n-1)^{\\beta}(\\ln{(n-1)})^{\\beta+\\epsilon}}\n\t\t\\right)^{-\\frac{q}{2}}.\n\t\\end{eqnarray*}\n\tBy a computation similar to that in the proof of Theorem \\ref{thm2} (I), we have\n\t\\begin{align*}\n\t\t\\lim_{n\\to \\infty} \\Lambda_3(n)=\\left(\\frac{2}{\\sigma}\\right)^{\\frac{q}{2}-1} \\epsilon.\n\t\\end{align*}\n\n\nThis implies that there exists some large $n_0$ such that for all $n\\geq 0$, the RHS of (\\ref{3-3-1}) is bounded from below by $\\delta_1^{q-1}$, where $\\delta_1=\\left[\\frac{1}{2}\\left(\\frac{2}{\\sigma}\\right)^{\\frac{q}{2}-1} \\epsilon\\right]^{\\frac{1}{q-1}}$.\n\t\n\tFinally, choosing $n_0$ as above and $\\delta=\\min\\{\\delta_0, \\delta_1\\}$, we obtain that when $(p, q)\\in G_3$, $u$ is a solution to (\\ref{ieq}). Hence we complete the proof for Theorem \\ref{thm2} (III).\n\\end{proof}\n\\vskip1ex\n\n\\begin{proof}[\\rm{Proof of Theorem \\ref{thm2} (IV)}]\n\tWhen $(p, q)\\in G_4$, we take $\\mu$ and $u$ as follows:\n\t\\begin{equation}\\label{mu4}\n\t\t\\mu_{xy}=\\mu_n=\\frac{\\lambda e^{\\lambda(n+n_0)}}{(N-1)^n},\\quad \\mbox{for any $(x,y)\\in E_n$, $n\\geq0$},\n\t\\end{equation}\n\t\\begin{equation}\\label{u4}\n\t\tu(x)=u_n=\\frac{\\delta}{(n+n_0)}+\\delta,\n\t\t\\quad \\mbox{for any $x\\in D_n$, $n\\geq0$},\n\t\\end{equation}\n\twhere $n_0 \\geq 2$ and $\\delta>0$ are to be determined later.\n\t\n\tUnder the above choice of $\\mu$, it follows that (\\ref{e-vol-4}) holds.\n\n\t\n\t\\textbf{Case of $n = 0$}. 
By substituting (\\ref{u4}) and (\\ref{mu4}), (\\ref{en0}) is equivalent to\n\t\\begin{align*}\n\t\t\\frac{\\delta}{n_0+1}-\\frac{\\delta}{n_0}+\\left( \\frac{\\delta}{n_0}+\\delta\\right) ^p\n\t\t\\times \\left( \\frac{1}{2}\\left( \\frac{\\delta}{n_0}-\\frac{\\delta}{n_0+1}\\right) ^2\\right) ^{\\frac{1}{2}} \\leq 0,\n\t\\end{align*}\n\twhich is satisfied by choosing $\\delta \\geq\\delta_0$ with\n\t\\begin{align*}\n\t\t\\delta_0&= 2^{\\frac{1}{2p}} \\left( \\frac{1}{n_0}+1\\right) ^{-1},\n\t\\end{align*}\n\twhere we used $p<0$.\n\t\n\t\\textbf{Case of $n\\geq1$}. By substituting (\\ref{u4}) and (\\ref{mu4}), (\\ref{en}) is equivalent to\n\t\\begin{align*}\n\t\t&\\frac{e^{\\lambda(n+n_0)} \\frac{\\delta}{n+n_0+1}+e^{\\lambda(n+n_0-1)} \\frac{\\delta}{n+n_0-1}}{e^{\\lambda(n+n_0)} +e^{\\lambda(n+n_0-1)} }-\\frac{\\delta}{n+n_0}\\nonumber\\\\+(\\frac{\\delta}{n+n_0}+\\delta)^p(&\\frac{e^{\\lambda(n+n_0)} (\\frac{\\delta}{n+n_0+1}-\\frac{\\delta}{n+n_0})^2+e^{\\lambda(n+n_0-1)} (\\frac{\\delta}{n+n_0-1}-\\frac{\\delta}{n+n_0})^2}{2e^{\\lambda(n+n_0)} +2e^{\\lambda(n+n_0-1)} })^{\\frac{1}{2}} \\leq 0,\n\t\\end{align*}\n\twhich is equivalent to requiring that $\\delta$ satisfies\n\\begin{equation}\\label{3-4-1}\n\t\\delta^{p}\\leq \\Lambda_4(n+n_0),\n\\end{equation}\n\twhere\n\t\\begin{align*}\n\t\t\\Lambda_4(n):=&\\left( \\frac{1}{n}-\\frac{e^{\\lambda n} \\frac{1}{n+1}+e^{\\lambda(n-1)} \\frac{1}{n-1}}{e^{\\lambda n} +e^{\\lambda(n-1)} }\\right)(\\frac{1}{n}+1)^{-p} \\nonumber\\\\&\n\t\t\\times\\left(\\frac{e^{\\lambda n} (\\frac{1}{n+1}-\\frac{1}{n})^2+e^{\\lambda(n-1)} (\\frac{1}{n-1}-\\frac{1}{n})^2}{2e^{\\lambda n} +2e^{\\lambda(n-1)} }\\right) 
^{-\\frac{1}{2}}.\n\t\\end{align*}\n\tSince\n\t\\begin{equation*}\n\t\t\\lim_{n\\to\\infty}\\Lambda_4(n)= \\sqrt{2}\\frac{1-e^{-\\lambda}}{1+e^{-\\lambda}},\n\t\\end{equation*}\n\twe obtain that there exists some large $n_0$ such that for all $n\\geq 0$, the RHS of (\\ref{3-4-1}) is bounded from below by $\\delta_1^{p}$, where $\\delta_1:=\\left[\\frac{\\sqrt{2}}{2}\\frac{1-e^{-\\lambda}}{1+e^{-\\lambda}}\\right]^{\\frac{1}{p}}$.\nLetting $\\delta=\\max\\{\\delta_0, \\delta_1\\}$,\nwe derive that when $(p, q)\\in G_4$, $u$ is a solution to (\\ref{ieq}). Hence we complete the proof for Theorem \\ref{thm2} (IV).\n\\end{proof}\n\\vskip1ex\n\\begin{proof}[\\rm{Proof of Theorem \\ref{thm2} (V)}]\tWe divide the proof into two cases:\n\t\\begin{enumerate}\n\t\t\\item[(V-1).]{$(p,q)\\in \\{p+q=1,p\\geq 0,q>0\\}$;}\n\t\t\\item[(V-2).]{$(p,q)\\in \\{p+q=1,p>1, q<0\\}$.}\n\t\\end{enumerate}\n\tIn case (V-1), we take $\\mu$ and $u$ as follows:\n\t\\begin{align}\\label{mu5-1}\n\t\t&\\mu_{xy}=\\mu_n=\\frac{\\lambda e^{\\lambda n}}{(N-1)^n},\\quad \\mbox{for any $(x,y)\\in E_n$, $n\\geq0$},\n\t\\end{align}\n\t\\begin{align}\\label{u5-1}\n\t\t&u(x)=u_n=e^{-\\frac{\\lambda}{4} n},\n\t\t\\quad \\mbox{for any $x\\in D_n$, $n\\geq0$}.\n\t\\end{align}\n\tUnder these choices of $\\mu$, we obtain that (\\ref{e-vol-5}) holds. The constant $\\lambda>0$ is to be chosen later.\n\t\n\t\n\t\\textbf{Case of $n = 0$}. Combining (\\ref{u5-1}) and (\\ref{mu5-1}), (\\ref{en0}) is equivalent to\n\t\\begin{align}\\label{3-5-1}\n\t\t1\\leq 2^{\\frac{q}{2}}(1-e^{-\\frac{\\lambda}{4}})^{-q}(1-e^{-\\frac{\\lambda}{4}})= 2^{\\frac{q}{2}}(1-e^{-\\frac{\\lambda}{4}})^{1-q}.\n\t\\end{align}\n\t\n\t\\textbf{Case of $n\\geq1$}. 
Combining (\\ref{u5-1}) and (\\ref{mu5-1}), (\\ref{en}) is equivalent to\n\t\\begin{align}\\label{3-5-2}\n\t\t1&\\leq\n2^{\\frac{q}{2}}(1-e^{-\\frac{\\lambda}{4}})^{1-q}\\left(\\frac{1+e^{-\\lambda}}{1+e^{-\\frac{\\lambda}{2}}} \\right) ^{\\frac{q}{2}}\\left(\\frac{1-e^{-\\frac{3\\lambda}{4}}}{1+e^{-\\lambda}} \\right).\n\t\\end{align}\n\tIt is easy to see that both (\\ref{3-5-1}) and (\\ref{3-5-2}) hold by choosing some large $\\lambda$. Hence $u$ is a solution to (\\ref{ieq}).\n\t\n\tIn case (V-2), fix an arbitrary vertex $o\\in T_N$ as the root and choose a special vertex $p$ with $p\\sim o$. Let us define\n\t\\begin{align*}\n\t\t&P:=\\{x\\in T_N|\\mbox{$o$ is not in the path between $x$ and $p$}\\},\\\\\n\t\t&D'_{-n}:=\\{x\\in P|d(o, x)=n\\} \\qquad\\mbox{for $n\\geq 0$},\\\\\n\t\t&D'_{n}:=\\{x\\in T_N\\setminus P|d(o, x)=n\\}\\qquad\\mbox{for $n\\geq 0$}.\n\t\\end{align*}\n\tFinally, we denote by $E'_n$ the collection of all the edges from vertices in $D'_n$ to\n\tvertices in $D'_{n+1}$ for $n\\in \\mathbb{Z}$.\n\t\n\tTake $\\mu$ and $u$ as follows:\n\t\\begin{align}\\label{mu5-2-1}\n\t\t&\\mu_{xy}=\\mu_n=\\frac{\\lambda e^{\\lambda n}}{(N-1)^n},\\quad \\mbox{for any $(x,y)\\in E'_n$, $n\\geq0$},\n\t\\end{align}\n\t\\begin{align}\\label{mu5-2-2}\n\t\t&\\mu_{xy}=\\mu_n=\\frac{\\lambda e^{\\lambda n}}{(N-1)^{-n-1}},\\quad \\mbox{for any $(x,y)\\in E'_n$, $n\\leq -1$},\n\t\\end{align}\n\t\\begin{align}\\label{u5-2}\n\t\t&u(x)=u_n=e^{-(\\lambda-1) n},\n\t\t\\quad \\mbox{for any $x\\in D'_n$, $n\\in \\mathbb{Z}$}.\n\t\\end{align}\n\tNoticing $D_n=D'_{-n}\\cup D'_{n}$, for $n \\geq 2$, we have\n\t$$\\mu(B(o,n))=\\sum\\limits^n_{k=0}\\mu(D_k)\\asymp \\sum\\limits^n_{k=0}(N-1)^k (\\mu_k+\\mu_{-k})\\asymp e^{\\lambda n},$$\n\twhere $\\lambda>0$ is to be chosen later.\n\t\n\tThen we check that (\\ref{ieq}) holds for the $\\mu$ and $u$ given as above: for any $n\\geq 0$,\n\t\\begin{align}\\label{3-5-2-1}\n\t\t&\\frac{(N-1)\\mu_n u_{n+1}+\\mu_{n-1} 
u_{n-1}}{(N-1)\\mu_n+\\mu_{n-1}}-u_n\\nonumber\\\\&+u_n^p(\\frac{(N-1)\\mu_n(u_{n+1}-u_n)^2+\\mu_{n-1}(u_{n-1}-u_n)^2}{2(N-1)\\mu_n+2\\mu_{n-1}})^{\\frac{q}{2}} \\leq 0,\n\t\\end{align}\n\tand for any $n\\leq -1$,\n\t\\begin{align}\\label{3-5-2-2}\n\t\t&\\frac{\\mu_n u_{n+1}+(N-1)\\mu_{n-1} u_{n-1}}{\\mu_n+(N-1)\\mu_{n-1}}-u_n\\nonumber\\\\&+u_n^p(\\frac{\\mu_n(u_{n+1}-u_n)^2+(N-1)\\mu_{n-1}(u_{n-1}-u_n)^2}{2\\mu_n+2(N-1)\\mu_{n-1}})^{\\frac{q}{2}} \\leq 0,\n\t\\end{align}\nhold.\n\n\t\nCombining (\\ref{mu5-2-1})--(\\ref{u5-2}), we see that (\\ref{3-5-2-1}) and (\\ref{3-5-2-2}) are equivalent to\n\t\\begin{align*}\n\t\t&\\frac{e^{\\lambda n} e^{-(\\lambda-1) (n+1)}+e^{\\lambda (n-1)} e^{-(\\lambda-1) (n-1)}}{e^{\\lambda n} + e^{\\lambda (n-1)}}- e^{-(\\lambda-1) (n)} \\nonumber\\\\&+ e^{-(\\lambda-1) p n}\\left(\\frac{e^{\\lambda n} (e^{-(\\lambda-1) (n+1)}- e^{-(\\lambda-1)n})^2+ e^{\\lambda (n-1)} (e^{-(\\lambda-1) (n-1)}- e^{-(\\lambda-1) n})^2}{2e^{\\lambda n} +2 e^{\\lambda (n-1)} }\\right)^{\\frac{q}{2}}\\\\&\n\t\t\\leq 0,\\quad\\mbox{ for any $n\\in \\mathbb{Z}$},\n\t\\end{align*}\n\tnamely\n\t\\begin{align}\\label{3-5-2-3}\n\t\t1&\\leq\n\t\t(1-e^{-(\\lambda-1)})^{1-q}\\left(\\frac{2+e^{-\\lambda}}{1+e^{\\lambda-2}} \\right) ^{\\frac{q}{2}}\\left(\\frac{1-e^{-1}}{1+e^{-\\lambda}} \\right).\n\t\\end{align}\n\tNoticing $q<0$, we obtain that the RHS of (\\ref{3-5-2-3}) tends to infinity as $\\lambda\\to\\infty$. This implies that there exists some large $\\lambda$ such that $u$ is a solution to (\\ref{ieq}).\n\tHence we complete the proof for Theorem \\ref{thm2} (V).\n\\end{proof}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe electric daily peak load is the maximum of the electricity power demand curve over one day. 
Having an accurate forecast of the daily peak enables independent system operators (ISOs) and energy providers to better deliver electricity and optimise power plant schedules. The importance of such a forecast is increasing as the integration of intermittent renewable production sources progresses. In particular, renewable energy sources are at the bottom of the merit order curve, which makes them (currently) the most economical source of energy used to serve the market. However, they are intermittent and provide time-varying levels of power generation, which are only partially under human control. If electricity demand is high and renewables cannot provide for it alone, ISOs have to deliver electricity from sources with higher marginal costs (e.g., gas-fired plants), at a higher cost for the stakeholders as well as for the environment in terms of CO$_2$ emissions. In such a context, accurately forecasting the peak demand magnitude and timing is essential for determining the generation capacity that must be held in reserve. \n\nElectrical equipment is tailored to support a specific peak load. If the demand comes close to or exceeds the network capacity, it can lead to distribution inefficiencies and ultimately power system failures, such as blackouts. With the increasing number of electric vehicles (EVs) in circulation, a further source of stress is added to the electricity system. For instance, 46\\% of vehicles sold in Norway in 2019 were EVs \\citep*{international_energy_agency_global_2019}. The challenge posed by the additional EV demand must be met by more tailored management systems and policies, if expensive infrastructural works are to be avoided. Dynamic electricity pricing schemes, for example, the Triads in the UK or the Global Adjustment in Ontario, Canada, have been developed to reduce the system peak load. Consumers who can correctly estimate and cut their use during peak events can unlock great savings. 
Peak demand forecasts will thus be key for the development of such policies.\n\n\nTo account for the increasing demand for electricity and to prevent system failures, smart grid technologies and policies are being implemented to foster communication between the various stakeholders of the electricity supply chain to achieve a more efficient use of energy. One major objective is to maximise the load factor. The load factor is the average load over a specific time period divided by the peak load over the same period. Maximising it leads to a more even use of energy through time, thus preventing system failures and surges in electricity prices. One of the most common ways to achieve load factor maximisation is peak shaving (Figure \\ref{Shaving}), which refers to the flattening of electrical load peaks. Three major strategies have been proposed for peak shaving, namely integration of Energy Storage System (ESS), integration of Vehicle-to-Grid (V2G) and Demand Side Management (DSM) \\citep*{uddin_review_2018}. ESS and V2G integration provide ancillary sources to balance the grid through batteries while DSM shifts consumer demand to flatten the peak. To be activated adequately, all these strategies require accurate forecasts of the demand peak magnitude (DP) and of the instant at which it occurs (IP).\n\n\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=\\linewidth,keepaspectratio]{Shaving.jpg}\n \\caption{Illustration of peak shaving}\n \\label{Shaving}\n\\end{figure}\n\n\nThis article proposes novel methods to forecast the DP and the IP by leveraging information at different time resolutions. In particular, the multi-resolution approach proposed here is illustrated in the context of two model classes: Generalised Additive Models (GAMs) and Neural Networks (NNs). Both are state of the art predictive models, widely used to forecast electrical load in industry and academia. 
The performance of the multi-resolution framework under both model classes is assessed using aggregate UK electricity demand data from the National Grid \\citep*{nationalgrid_eso_2021}.\n\nThe rest of the paper is structured as follows: Section 2 presents a literature review of daily peak forecasting methodologies. Section 3 introduces multi-resolution modelling using GAMs and neural networks. Section 4 explains how the different models were set up in the high-resolution, low-resolution and multi-resolution settings. Section 5 analyses the results of the models described in Section 4, using UK demand data.\n\n\\section{Related work}\n\nThis section provides an extensive literature review of peak forecasting methods and was conducted to identify gaps in the field. It includes methods ranging from probabilistic approaches to deep learning.\n\nProbabilistic forecasts have been widely adopted in the context of load forecasting applications \\citep*[e.g.,][for an overview]{hong_probabilistic_2016}, but little has been done on probabilistic peak demand forecasting. Two probabilistic set-ups, commonly used for peak load forecasting, were outlined by \\cite*{jacob_forecasting_2020}. The first is block maxima (BM), where data is separated into time chunks of equal lengths and the maximum of each chunk is assumed to approximately follow a generalised extreme value (GEV) distribution. The second is peaks over threshold (POT), which approximates the distribution of the excess load over a threshold by a generalised Pareto distribution. While the POT and BM settings can be unified via point processes \\citep*{boano-danquah_analysis_2020}, in this work we are mainly interested in the BM case.\n\nIn a long-term forecasting setting, \\cite*{mcsharry_probabilistic_2005} used demand data at the daily resolution to forecast the magnitude and timing of the yearly peak (i.e., the day characterised by the largest total demand). 
They considered a forecasting lead time of one full year and obtained a probabilistic forecast by simulating year-long trajectories for the weather variables and plugging them into a deterministic linear regression model. Similarly, \\cite*{hyndman_density_2010} considered a long-term forecasting application, where the aim was to forecast the probability distribution of the annual and weekly peak electricity demand. They used semi-parametric additive models to capture the effect of covariates, such as temperature, on the demand and obtained a probabilistic forecast by adopting a simulation and scenario-based approach. \\cite*{elamin_quantile_2018} used quantile regression methods to forecast the DP one day ahead. Even though they used quantile regression to obtain an upper bound on demand, quantile estimates at several probability levels could be used to estimate the full peak demand distribution. Also \\cite*{gibbons_quantile_2014} modelled the DP via a quantile regression model, but their objective was post-processing daily estimates to forecast the annual demand peak, rather than modelling the DP probabilistically.\n\nMultivariate regression models using multivariate adaptive regression splines (MARS) were proposed by \\cite*{sigauke_daily_2010} to forecast the DP in South Africa. Explanatory variables including meteorological variables are aggregated at the daily resolution (e.g., average, minimum and maximum temperature). The model outperforms piecewise polynomial regression models with an autoregressive error term. \\cite*{sigauke_prediction_2011} studied time series of the DP and illustrated its heteroscedastic structure. A SARIMA\u2013GARCH errors model and a regression-SARIMA\u2013GARCH model are then proposed to forecast it at a short-term horizon. Results show that SARIMA-like models produce forecasts with an accuracy around 1.4 in mean absolute percentage error on a testing period. 
\n\n\\cite*{saxena_hybrid_2019} proposed a hybrid model to forecast whether the following day will be a peak load day for the billing period for customers subject to demand charge structure. They apply their model to optimise the electricity bill of an American University. Load data is provided every five minutes from January 2013 to April 2016. Here, the POT set-up was used with a threshold depending on a monthly average and variance of the daily load. An original combination of 4 forecasts was proposed. First, a linear model is used to forecast the maximum daily load at a monthly horizon which is then coupled to short-term load forecasting models (NN and ARIMA) to provide two forecasts. Two other forecasts were computed using binary classifiers (logistic regression and NN) and a synthetic minority over-sampling technique (SMOTE) was used to balance the classes. The authors demonstrated that their methods led to better statistical accuracy and to reduced electricity bills.\n\n\nNNs are one of the most popular algorithms for peak load forecasting tasks because of their strong performance in non-linear modelling. Their flexibility is remarkable, but it is difficult to pick the right architecture and hyper-parameters for a specific problem. One of the first papers proposing a NN peak load forecasting method was produced by \\cite*{dash_peak_1995}. According to the authors, NNs performed well on load forecasting problems, but they were much less performant on peak load forecasting tasks. A fuzzy NN was found to be more robust and accurate than a traditional NN structure. It involved an additional layer of fuzzification of the inputs before entering the only hidden layer of the network. \n\nIn a more traditional set-up, \\cite*{saini_artificial_2002} tested a Fully Connected Neural Network (FCNN) with different variants of back-propagation algorithms where training was conducted separately in four periods of time during a year. 
Their work was further developed by \\cite*{saini_peak_2008}, where numerous weather variables were included (e.g., temperature, rainfall, wind speed, evaporation per day, sunshine hours and associated statistics). Similarly, different optimisation procedures were considered and it was found that an adaptive learning method based on the learning rate and momentum was the most performant. \\cite*{amin-naseri_combined_2008} combined a self-organising map with an NN to find better clusters of training data to improve forecasting performance. Some authors considered other forms of networks. For instance, \\cite*{abdel-aal_modeling_2006} adopted abductive networks with the aim of obtaining a better intuition and a more automated way to address peak load forecasting. In particular, these networks split the overall problem into smaller and simpler ones along the network with abductive reasoning. This is achieved via an automated procedure which organises the available data into different chunks and deals with them separately. \n\nMore recently, recurrent Neural Networks (RNNs) have been used by \\cite*{yu_deep_2019} in the form of Gated Recurrent Units (GRU). In particular, a dynamic time warping (DTW) analysis was used to produce the GRU inputs. The DTW distance was used to find the most similar load curve to the one observed before the targeted load curve. Assuming that subsequent load curves are also very similar, they used the subsequent load curve from the training data to encode the inputs of the GRU network. A Long Short-Term Memory (LSTM) architecture has been used by \\cite*{ibrahim_lstm_2020} and was found to be more computationally efficient than FCNNs and other RNNs. Three statistical metrics were used to evaluate model performance: Mean Absolute Percentage Error (MAPE), Root-Mean Squared Error (RMSE) and mean bias error. 
In our work, statistical metrics including MAPE and RMSE will also be used to avoid introducing any bias towards a particular operational application. \n\nThe literature on deep learning peak load forecasting is sparse, but deep learning probabilistic load forecasting is much more common (e.g., \\citealp*{guo_deep_2018}, \\citealp*{yang_deep_2019} and \\citealp*{yang_bayesian_2020}). Such models do not explicitly focus on the DP or the IP as the objective functions used to estimate their parameters are based on demand observed at a higher frequency (intra-day). The high-frequency forecasts thus obtained can be post-processed to produce a forecast for the DP. \n\nSupport Vector Regression (SVR) is another popular class of load forecasting method, based on structural risk minimisation instead of empirical risk minimisation as in NNs. \\cite*{el-attar_forecasting_2009} used SVR in a local prediction framework. Recently, \\cite*{kim_peak-load_2020} used an ensemble forecasting approach with other Machine Learning algorithms such as boosting machines, tree-based methods and bagging techniques. A compensation process based on an isolation forest is later added by analysing the predicted values of the ensemble models to detect outliers in the peak data. SVR are compared to NNs by \\cite*{li_analysis_2018} for a control strategy of peak load and frequency regulation. LSTM NNs were used to forecast power load and improve the control strategy considered in this particular use case.\n\nFrom this literature review, it can be concluded that a wide range of methodologies have been adopted in peak load forecasting applications. In most short-term applications, model inputs are manually chosen features that are defined at the same (daily) time resolution as the peak demand, which is the variable to be forecasted. 
Conversely, in long-term applications, weather variables are simulated at the original (high) resolution to produce demand forecasts at the same resolution, which are then post-processed to obtain low resolution (e.g., yearly) peak forecasts. Hence, to the best of our knowledge, the existing literature on peak forecasting has not explored methods that are able to integrate both low- and high-resolution signals in a single model. However, in the field of functional data analysis, hybrid approaches have been used for clustering and forecasting functional data (e.g., \\citealp*{antoniadis_functional_2006} and \\citealp*{cho_modeling_2013}). Therefore, this paper aims to exploit functional methods to tackle multi-resolution problems. From a feature engineering point of view, the goal is to automate feature extraction of high-resolution signals, that is to let the model decide which hidden features to extract from the signal. This can be done with signal processing procedures such as tensor product decomposition, wavelets or Fourier transforms \\citep*{amin_feature_2015}. \n\nThe literature review also suggests that not much effort has been directed towards forecasting the IP, which is surprising because forecasting the IP is at least as important as forecasting the DP, for the purpose of short-term smart grid management and operational planning \\citep*{soman_peak_2020}. To fill this gap, the performance of multi-resolution methods will be illustrated in this paper on both a DP and an IP forecasting problem. \n\n\\section{Multi-resolution modelling}\n\nIn this section, the multi-resolution modelling approach is introduced with its general principles. 
It is then developed formally and illustrated with GAMs and NNs.\n\n\\subsection{General idea}\n\nThe main idea behind multi-resolution modelling is to build a parsimonious model that is able to handle input and output variables that are available at different resolutions.\nIn the context of DP load forecasting, low-resolution variables (e.g., day of the week, maximum daily temperature) are observed daily, while high-resolution variables (e.g., temperatures or raw demand) are updated every hour or half-hour. Such problems are usually handled by manually placing all variables at the same resolution. In particular, one option is to take a high-resolution approach, which consists in doing the modelling at the highest available resolution, which might require interpolating some of the low-resolution variables. Such an approach often lacks in parsimony, as the low-resolution variables are brought to the higher resolution, thus increasing the size of the data that needs to be processed, while adding no extra useful information. Another option is to take a low-resolution approach, that is to transform the high-resolution variables into a set of manually chosen daily summaries or features. In this approach, the size of the data is reduced, but feature engineering is time consuming and some of the information contained in the high-resolution variables is lost in the process. \n\nThe multi-resolution approach proposed here aims at capturing all the information contained in the high-resolution variable, while avoiding explicit feature engineering and retaining the parsimony of the low-resolution approach. To describe the multi-resolution idea more formally, let us consider $\\textbf{y}_i = \\{y_i(t)\\}_{t\\in\\{1,\\ldots,T\\}}$ the vector of electricity demand at each time step $t > 0$ of the day $i \\in \\mathbb{N}$ . $T$ is the total number of daily steps (e.g., T=48 for half-hourly steps). 
Then, the DP of day $i$ is $\\textrm{DP}_i = \\max(\\textbf{y}_i)$ and $\\textrm{IP}_i$ is the time step corresponding to $\\textrm{DP}_i$. Let $\\textbf{x}^{low}_i$ be the $i$-th vector of covariates observed daily and let $\\textbf{x}^{high}_i$ be the corresponding vector of covariates containing information at the intra-day resolution. The multi-resolution approach exploits both sets of covariates as model inputs to obtain the forecasts $\\hat{\\textrm{DP}}_i$ and $\\hat{\\textrm{IP}}_i$, that is\n\\begin{align}\n \\hat{\\textrm{DP}}_i &= \\psi_1(\\textbf{x}^{low}_i,\\textbf{x}^{high}_i) \\\\\n \\hat{\\textrm{IP}}_i &= \\psi_2(\\textbf{x}^{low}_i,\\textbf{x}^{high}_i) \n\\end{align}\nwhere $\\psi_1$ and $\\psi_2$ represent the models for the DP and the IP, respectively.\nThis general definition does not specify how the high-resolution inputs should be dealt with in practice. Several approaches could be considered, the aim being to process the information contained in a (possibly high-dimensional) signal vector, while avoiding information loss and retaining computational efficiency. In this paper, two options are considered. In particular, a description of how high-resolution covariates can be handled within GAMs and NNs is given below. \n\n\\subsection{Particular instances of the multi-resolution approach}\n\nThe multi-resolution approach is detailed first for GAMs which, due to their performance and interpretability \\citep*{amato_forecasting_2021}, are widely used in industry for load forecasting. Then, the multi-resolution approach is extended to NNs, which often perform well on load forecasting problems and enable the flexible handling of heterogeneous model inputs \\citep*{gao_matrix_2017}. 
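Before turning to the two model classes, the construction of the targets $\\textrm{DP}_i$ and $\\textrm{IP}_i$ and of the two input sets can be illustrated with a minimal Python sketch; the demand matrix below is synthetic, and the daily summary chosen for $\\textbf{x}^{low}_i$ is an arbitrary placeholder rather than the covariates used in the paper:

```python
import numpy as np

# Synthetic stand-in for half-hourly demand: 7 days x T = 48 settlement periods.
rng = np.random.default_rng(0)
demand = rng.uniform(20.0, 35.0, size=(7, 48))

# DP_i = max_t y_i(t): the daily peak magnitude.
DP = demand.max(axis=1)
# IP_i = argmax_t y_i(t): the half-hour slot at which the peak occurs.
IP = demand.argmax(axis=1)

# Inputs for a multi-resolution model psi(x_low, x_high):
x_low = demand.mean(axis=1, keepdims=True)  # e.g., a daily summary feature
x_high = demand                             # the full intra-day curve, unreduced
```

The low-resolution approach would keep only `x_low`-type summaries, while the multi-resolution approach feeds both `x_low` and `x_high` to the model.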
\n\n\\subsubsection{Generalised Additive Models}\n\n First introduced by \\cite*{hastie_generalized_1999}, GAMs are a semi-parametric extension of generalised linear models (GLMs) where the response variable, $y_i$, is assumed to follow a parametric probability distribution. That is, $y_i \\sim \\text{Dist}(\\mu_i, \\bm \\theta)$ where $\\mu_i$ and $\\bm \\theta$ are model parameters. While the elements of $\\bm \\theta$ do not depend on $i$, parameter $\\mu_i$ is modelled as follows \\citep*{wood_generalized_2017}:\n\\begin{equation}\n g(\\mu_{i})=\\mathbf{x}_{i}^T \\bm{\\gamma}+\\sum_{j} f_{j}(\\mathbf{x}_{i})\n\\end{equation}\nwhere $g$ is a monotonic transformation, which is simply the identity function in this paper. Two separate terms can be distinguished on the right-hand side of this equation: a parametric part $\\mathbf{x}_{i}^T \\bm{\\gamma}$, where $\\mathbf{x}_{i}$ is a vector of covariates while $\\bm \\gamma$ is a vector of regression coefficients, and a non-parametric part $\\sum_{j} f_{j}(\\bm{x}_{i})$ which is a sum of smooth functions of covariates. The smooth effects are built via linear combinations of $K_j$ basis functions, while the corresponding basis coefficients are penalised via generalised ridge penalties. The strength of the penalties is controlled via smoothing hyper-parameters, which are selected using criteria such a generalised cross-validation.\n\nIn the context of forecasting $\\textrm{DP}_i$, it is interesting to consider for $\\text{Dist}(\\mu_i, \\bm \\theta)$ a generalised extreme value (GEV) distribution. In fact, the GEV model is asymptotically justified for block-maxima as $T \\rightarrow \\infty$ \\citep*{jacob_forecasting_2020}. Thus, when enough steps are available throughout the day, the GEV distribution is particularly attractive for modelling the DP. The scaled-T (a scaled version of Student's t) distribution provides an alternative, which is particularly suited for heavy tailed data such as peak load. 
The Gaussian distribution can be used as a baseline model. As for the IP, an ordered categorical (ocat) distribution based on a logistic regression latent variable is used. All of these distributions as well as GAM building and fitting methods are implemented in the \\textit{mgcv} R package \\citep*{wood_mgcv_2020}.\n\nWithin the additive structure of GAMs, $\\mathbf{x}^{low}_i$ and $\\mathbf{x}^{high}_i$ can be treated as inputs for different smooth functions. The elements of $\\mathbf{x}^{low}_i$ can be handled via separate standard smooth effects, which take scalars as inputs, while the joint effect of several elements of $\\mathbf{x}^{low}_i$ can be captured via standard multivariate smooth effects. However, the $\\mathbf{x}^{high}_i$ covariates have to be treated via functional smooth effects. The latter are smooth functions which take the vectors of high-resolution covariates as inputs and output a scalar. Therefore, functional GAMs permit the handling of each covariate at its original resolution, thus avoiding interpolation and guaranteeing parsimony.\n\nIn addition to the principle of parsimony, the goal is also to retain the time dependence of the covariates. In fact, it is important to ensure that the model is aware that each element of the high-resolution covariates has a different impact on the peak load distribution, as it belongs to a different time of day. A way to retain the time dependence of each high-resolution series of covariates is to make them interact with the time of day sequence via tensor product effects. Such effects can easily be integrated in GAMs, as explained in the following.\n\nIn continuous time, the smooth effect for a high-resolution (functional) covariate, $x_i(u)$, can be written as follows:\n\\begin{align}\nf(x_i) = \\int_{0}^{T}\\phi({x}_i(u),u)du\n\\end{align}\nwhere $\\phi$ is the time-dependent effect of the covariate, which needs to be estimated, while $u$ is the time of day. 
In practice, on the $i$-th day, ${x}_i(u)$ is observed at $F$ discrete instants $0 \\leq t_1 \\leq \\cdots \\leq t_F \\leq T$ and the corresponding values of ${x}_i(u)$ are stored in the vector $\\bm x_i$. Hence, approximating the integral with a summation and constructing $\\phi$ via a tensor product expansion leads to:\n\\begin{align}\n\\hat{f}(\\bm{x}_i) & = \\sum_{r=1}^{F}\\hat{\\phi}({x}_i(t_r),t_r) \\nonumber\\\\\n & = \\sum_{r=1}^{F} \\sum_{k=1}^{K} \\sum_{l=1}^{L} \\beta_{kl}a_k({x}_i(t_r))b_{l}(t_r)\n\\end{align}\nwhere $\\{a_{k}\\}_{k\\in\\{1,\\ldots,K\\}}$ and $\\{b_{l}\\}_{l\\in\\{1,\\ldots,L\\}}$ are known spline basis functions and $\\{\\beta_{kl}\\}_{(k,l) \\in\\{1,\\ldots,K\\} \\times \\{1,\\ldots,L\\}}$ are parameters to be estimated. By using such effects, high-resolution information can be parsimoniously incorporated into the model, while retaining the temporal information contained in the covariates. \n\n\\subsubsection{Neural Networks}\n\nNNs are convenient machine learning algorithms for implementing a multi-resolution model. In fact, common architectures such as Convolutional Neural Networks (CNN) and RNNs already make use of inputs from different scales. Recent work on MatNet \\citep*{gao_matrix_2017} has made tensor inputs available to multi-layer perceptrons, further demonstrating their versatility. 
From scalars to tensors, the flexibility of NNs is hard for other machine learning models to match.\n\nA FCNN or CNN architecture, without its output layer, can be generally written as follows:\n\\begin{align}\n H_k(\\mathbf{x},\\Theta) = h_k (\\ldots h_3(h_2(h_1(\\mathbf{x},\\theta_1),\\theta_2),\\theta_3)\\ldots, \\theta_k ) \n\\end{align}\nwhere $k$ is the number of hidden layers of the NN, $\\{h_i\\}_{i \\in \\{1, \\ldots, k\\}}$ are the transformations made by the hidden layers (e.g., linear operation, activation and dropout) and $\\Theta = \\{\\theta_i\\}_{i \\in \\{1, \\ldots, k\\}}$ is the sequence of parameter vectors (weights and biases). In a multi-resolution approach, one part of the architecture will contain low-resolution information feeding a FCNN branch and the other one will contain the reshaped high-resolution data feeding a CNN or RNN branch. In this paper, only CNNs were considered in depth for this latter branch, with the lags of the response provided as model inputs. The CNN enables a very close replication of the tensor product construction used for GAMs, thus creating a consistent set-up for comparing both algorithms.\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=\\linewidth,keepaspectratio]{MR_for_NN.jpg}\n \\caption{Multi-resolution architecture for NNs with a Multi-Layer Perceptron (MLP) taking low-resolution inputs, and a CNN with high-resolution inputs}\n \\label{MRNN}\n\\end{figure}\n\nEven though the CNN and FCNN branches do not have similarly shaped inputs and outputs, the unit shapes can be transformed along the network to interact and be brought together without losing consistency. This process consists of flattening the tensor shapes in order to recover vector inputs within some layer of the network. 
It is precisely this flexibility that can be leveraged to build a multi-resolution architecture (Figure \\ref{MRNN}).\nMore precisely, the CNN branch contains one convolutional block for each of the high-resolution time series. In this way, each tensor product of the GAM formula can find its equivalent in the CNN branch of the network. In fact, the multi-resolution NN architecture can be concisely written as follows: \n\\begin{align}\n \\mu_i = F_j(H_k(\\mathbf{x}^{low}_i,\\Theta), H'_l(\\mathbf{x}^{high}_i,\\Theta'))\n\\end{align}\nIn (7), $H_k$ is the FCNN which handles low-resolution terms while $H'_l$ is the CNN which deals with the high-resolution information. Then, in the final part of the network, both outputs are concatenated (after flattening the CNN branch) and enter another FCNN $F_j$ which can be reduced to the output layer when $j = 1$. Here, $\\mu_i$ is the mean of the random output variable considered. This multi-resolution architecture is summarised in Figure \\ref{MRNN}.\n\n\\section{Experiments}\n\nOn the DP and the IP forecasting tasks, the multi-resolution approach is compared to two alternative modelling approaches: a high-resolution approach and a low-resolution approach (Figure \\ref{High-, low- and multi-resolution modelling setting}). The low-resolution approach uses inputs aggregated at the daily level (e.g., maximum daily temperature, day of the week) to forecast the DP and the IP separately. The high-resolution approach uses inputs at the half-hourly level to forecast the half-hourly demand and then extracts the DP and the IP by taking the maximum of the half-hourly forecasted values and the corresponding time of day. Therefore, the high-resolution approach leverages all the information available by taking half-hourly inputs and outputs while the low-resolution approach directly models the variables of interest (DP and IP) with fewer parameters to be estimated. 
The multi-resolution approach can be seen as a compromise, aimed at integrating the advantages of both approaches, and the following experiments are designed to assess whether it can outperform them. \n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=\\linewidth,keepaspectratio]{Modelling-settings.jpg}\n \\caption{The different modelling settings compared in this work}\n \\label{High-, low- and multi-resolution modelling setting}\n\\end{figure}\n\nThe comparison includes baseline models: a naive persistence model, which simply consists of forecasting the DP and the IP based on the value taken by the target variable on the previous day; a low-resolution ARIMA (on daily peaks with horizon 1); a high-resolution ARIMA aggregated forecast composed of 48 ARIMA models, each fitted on the half-hourly load of a specific time of day with horizon 1. That is, the high-resolution ARIMA produces 48 forecasts at horizon 1 instead of one forecast at horizon 48. All ARIMA models are fitted using the \\cite*{hyndman_automatic_2007} algorithm without using exogenous information.\n\nThe performance metrics chosen for DP models are the mean absolute percentage error (MAPE), the mean absolute error (MAE) and the root mean squared error (RMSE). As for IP models, the same metrics are used except for the MAPE, which is substituted with a relaxed accuracy (R-Accuracy) metric, a binary score averaged over the test set (equal to 1 if the forecasted IP is within 2 instants of the observed IP and 0 otherwise). While the R-Accuracy metric is relevant in operational settings where it is crucial to know the IP within a small time window, the RMSE and the MAE penalise forecasts proportionally to their distance from the observed IP.\n\n\nA rolling-origin forecasting procedure is used to replicate a realistic short-term load forecasting set-up (Figure \\ref{Rolling Origin}). 
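The rolling-origin procedure can be sketched as follows (a generic split generator; the day counts and refit step are illustrative, not the exact splits used in the experiments):

```python
def rolling_origin_splits(n_days, initial_train, step=30):
    """Yield (train_index, test_index) pairs: the model is refitted every
    `step` days on all data up to the forecast origin, then predicts the
    next block of days."""
    origin = initial_train
    while origin < n_days:
        train_idx = list(range(origin))
        test_idx = list(range(origin, min(origin + step, n_days)))
        yield train_idx, test_idx
        origin += step

# five years of daily data, with the first year used for the initial fit
splits = list(rolling_origin_splits(n_days=1826, initial_train=365))
```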
Model parameters are updated on a monthly basis with consolidated data since, in an operational setting, threats to data validity and computational constraints can emerge when refitting a model too often using real-time data.\n\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=\\linewidth,keepaspectratio]{Rolling-origin.jpg}\n \\caption{Rolling-origin forecasting procedure}\n \\label{Rolling Origin}\n\\end{figure}\n\nThe data used in the experiments is the half-hourly load consumption (total national demand) between 2011-07-01 00:00:00 and 2016-06-30 23:30:00, available via the UK \\cite*{nationalgrid_eso_2021} website. Temperature data at different locations (London, Sheffield, Manchester, Leeds, Cardiff, Bristol, Birmingham, Liverpool, Crosby and Glasgow) was downloaded from the \\cite*{noaa_national_2021} website. The temperature data is at an hourly resolution. It is interpolated (natural cubic spline interpolation) to obtain half-hourly data. Furthermore, demographic information $pop_s$ is compiled around each station $s$ and a weighted mean temperature is calculated as follows:\n\\begin{equation*}\n \\mathrm{temp}(t) = \\frac{1}{\\sum_{s=1}^{10} pop_s} \\sum_{s=1}^{10} pop_s T_{s,t}\n\\end{equation*}\nwhere $T_{s,t}$ is the temperature recorded at time $t$ by station $s$ and $\\mathrm{temp}(t)$ is the weighted mean temperature which will be used in the modelling experiments. An exponentially smoothed version of the weighted mean will also be included in the model features. It was computed using a smoothing parameter equal to 0.95.\n\n\\subsection{High-resolution approach}\n\nForecasting the hourly or half-hourly electricity demand is a problem that has been extensively studied in the literature \\citep*{kuster_electrical_2017}. It is well known that a common driver of electrical load is weather and, in particular, temperature. In addition, calendar information can be used to explain the seasonal variation of the demand. 
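The population-weighted mean temperature and its exponentially smoothed version can be sketched as follows (the smoothing recursion and its initialisation are assumptions, since only the smoothing parameter of 0.95 is stated; station values are toy data):

```python
def weighted_mean_temp(temps, pops):
    """Population-weighted mean of the station temperatures at one instant."""
    return sum(p * t for p, t in zip(pops, temps)) / sum(pops)

def exp_smooth(series, alpha=0.95):
    """Exponential moving average: s_t = alpha * s_{t-1} + (1 - alpha) * x_t."""
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * smoothed[-1] + (1 - alpha) * x)
    return smoothed

# two hypothetical stations: the larger population dominates the average
temp_now = weighted_mean_temp(temps=[10.0, 20.0], pops=[1.0, 3.0])  # 17.5
```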
Finally, lagged demand values are highly informative for the subsequent values. These variables are summarised in Table 1.\n\n\\begin{table}[H]\n\\caption{High-resolution model inputs}\n\\centering\n\\begin{tabular}{@{}cccc@{}}\n\\toprule\nType & Name & Unit & Description \\\\ \\midrule\n\\multirow{2}{*}{Weather} & temp & {[}\u00b0C{]} & Half-hourly temperature \\\\ \\cmidrule(l){2-4} \n & temp95 & {[}\u00b0C{]} & Half-hourly smoothed temperature \\\\ \\midrule\n\\multirow{3}{*}{Calendar} & dow & Categorical & Day of the week \\\\ \\cmidrule(l){2-4} \n & toy & None & Time of year (between 0 and 1) \\\\ \\cmidrule(l){2-4} \n & t & Categorical & Time of day (between 0 and 47) \\\\ \\midrule\nLag & load24 & {[}$10^{1}$ GW{]} & Half-hourly load on the previous day \\\\ \\midrule\nOutput & load & {[}$10^{1}$ GW{]} & Half-hourly load \\\\ \\bottomrule\n\\end{tabular}%\n\\end{table}\n\nThe GAM chosen to implement this approach is $y_i(t) \\sim N(\\mu_i(t), \\sigma^2)$ where the mean of the Gaussian distribution is modelled by:\n\\begin{align}\n \\mu_{i}(t) = & \\, \\psi_{1}(\\mathrm{dow}_{i})+ \\psi_{2}(\\mathrm{t}) + f_{1}^{20}(\\mathrm{toy}_{i}(t)) + f_{2}^{20}(\\mathrm{temp}_{i}(t)) + f_{3}^{24}(\\mathrm{temp95}_{i}(t)) \\nonumber \\\\ \n & +\\mathrm{ti}_{1}^{5,5}(\\mathrm{temp}_{i}(t), t)+\\mathrm{ti}_{2}^{5,5}(\\mathrm{temp95}_{i}(t), t)+\\mathrm{ti}_{3}^{5,5}(\\mathrm{load24}_{i}(t), t) \\\\ \\nonumber\n & + \\mathrm{ti}_{4}^{5,5}(\\mathrm{toy}_{i}(t), t)\n\\end{align}\nIn (8), the $\\psi$ functions are parametric effects, while the $f$ functions are univariate smooth effects and the $\\mathrm{ti}$ functions are bivariate tensor product smooth interactions. The number of basis functions used is indicated in the exponents. For instance, $f_{1}^{20}$ uses 20 basis functions and $\\mathrm{ti}_{1}^{5,5}$ uses 5 basis functions for each marginal. Thin-plate spline bases are used to build all smooth effects \\citep*{wood_thin_2003}. 
The model structure (8) was decided on the basis of previous experience in the field and the statistical significance of each effect. \n\nThere are many NN architectures which could be considered for this problem. We want an architecture with the minimum possible number of layers, using the same model inputs as the GAM. Adding too many layers would lead to a drastic difference in degrees of freedom between the NN and the GAM, which is not realistic in a short-term load forecasting scenario. Furthermore, as we are not in the big data regime, adding too many layers may actually worsen the performance of the network.\n\nGiven that the universal approximation theorem (\\citealp*{cybenko_approximation_1989} and \\citealp*{hornik_approximation_1991}) guarantees that a two-layer FCNN can approximate any measurable function on a compact support, a carefully built FCNN can approximate any non-linear function of the input variables with only one hidden layer. Therefore, a FCNN architecture was used to build an NN analogue of the high-resolution GAM baseline model.\n\nIn practice, there is no bound on the number of hidden units, which can lead to poor generalisation of the model when assessed on the test set. Therefore, a dropout layer was added after the hidden layer to foster the network's generalisation. 
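Such a one-hidden-layer FCNN with dropout can be sketched as follows (a minimal forward pass with illustrative weights and sizes, not the tuned architecture used in the experiments):

```python
import random

def relu(v):
    return [max(0.0, x) for x in v]

def dense(x, weights, biases):
    """Fully connected layer: one output unit per row of `weights`."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def forward(x, w1, b1, w2, b2, dropout_rate=0.1, training=False, rng=None):
    """One ReLU hidden layer followed by (inverted) dropout, then a linear output."""
    h = relu(dense(x, w1, b1))
    if training:  # dropout is only active while training
        rng = rng or random.Random(0)
        keep = 1.0 - dropout_rate
        h = [hi / keep if rng.random() < keep else 0.0 for hi in h]
    return dense(h, w2, b2)

# tiny illustrative network: 2 inputs, 2 hidden units, 1 output
out = forward([2.0, -3.0], w1=[[1.0, 0.0], [0.0, 1.0]], b1=[0.0, 0.0],
              w2=[[1.0, 1.0]], b2=[0.0])  # evaluation mode: dropout off
```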
The outcome of the optimisation of hyperparameters led to the architecture shown in Figure \\ref{HRFCNN}, which contains 50 neurons in the hidden layer and a dropout layer with a 10\\% dropout rate.\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=\\linewidth,keepaspectratio]{HRFCNN.jpg}\n \\caption{High-resolution FCNN architecture (input variable names are detailed in Table 1)}\n \\label{HRFCNN}\n\\end{figure}\n\nAfter obtaining the half-hourly demand forecast from the GAM and the NN, $\\hat{\\textrm{DP}}_i$ is estimated as the maximum daily value forecasted and $\\hat{\\textrm{IP}}_i$ is estimated as the half-hour of the day during which $\\hat{\\textrm{DP}}_i$ occurred.\n\n\\subsection{Low-resolution approach}\n\n\\begin{table}[H]\n\\caption{Low-resolution model inputs}\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{@{}cccc@{}}\n\\toprule\nType & Name & Unit & Description \\\\ \\midrule\n\\multirow{4}{*}{Weather} & tempMax & {[}\u00b0C{]} & Daily maximum temperature \\\\ \\cmidrule(l){2-4} \n & temp95Max & {[}\u00b0C{]} & Daily maximum smoothed temperature \\\\ \\cmidrule(l){2-4} \n & tempMin & {[}\u00b0C{]} & Daily minimum temperature \\\\ \\cmidrule(l){2-4} \n & temp95Min & {[}\u00b0C{]} & Daily minimum smoothed temperature \\\\ \\midrule\n\\multirow{2}{*}{Calendar} & dow & Categorical & Day of the week \\\\ \\cmidrule(l){2-4} \n & toy & None & Time of year (between 0 and 1) \\\\ \\midrule\n\\multirow{2}{*}{Lag} & DP24 & {[}$10^{1}$ GW{]} & Previous day peak demand \\\\ \\cmidrule(l){2-4} \n & IP24 & Categorical & Previous day instant of peak \\\\ \\midrule\nOutput & DP or IP & {[}$10^{1}$ GW{]} or Categorical & Daily demand peak or Daily instant of peak \\\\ \\bottomrule\n\\end{tabular}%\n}\n\\end{table}\n\nIn the low-resolution approach, all input variables are at the daily resolution (Table 2). Here, several distributions could be considered for GAMs. 
In particular, the scaled-T distribution, which is particularly suited for heavy-tailed data, as well as the GEV family, which encompasses several extreme value distributions (Weibull, Gumbel and Fr\u00e9chet), are used to model the DP. For the IP forecasting task, the ordered-logit model implemented in the \\textit{mgcv} R package \\citep*{wood_mgcv_2020} is used. The low-resolution GAM can be written as follows:\n\\begin{align}\n \\mu_{i} =& \\psi_{1}(\\mathrm{dow}_{i})+ f_{1}^{10}(\\mathrm{IP24}_{i}) + f_{2}^{20}(\\mathrm{toy}_{i}) + f_{3}^{20}(\\mathrm{DP24}_{i}) \\nonumber \\\\ \n & + f_{4}^{20}(\\mathrm{tempMax}_{i}) + f_{5}^{20}(\\mathrm{temp95Max}_{i}) \\\\ \\nonumber\n & + f_{6}^{20}(\\mathrm{tempMin}_{i}) + f_{7}^{20}(\\mathrm{temp95Min}_{i})\n\\end{align}\nFor the DP, $\\mu_{i}$ is the location parameter of the estimated distributions, while the other parameters are assumed to be constant. For the IP, $\\mu_{i}$ is also the location parameter of a latent logistic distribution. Cut-off points are estimated in the course of model fitting and do not depend on the covariates. See \\cite*{wood_smoothing_2016} for details. \n\nThe same FCNN architecture as for the high-resolution approach was used (Figure \\ref{LRFCNN}). The only difference between them is the response variable which here is directly the DP or the IP. Furthermore, the hyperparameters chosen are different. In particular, the number of epochs and the batch size are much larger. The response structure for the DP is 1 neuron with a ReLU activation while 48 neurons are used for the IP. Instead of the traditional softmax output used in classification problems, an ordinal output structure, more suited to model the IP, is implemented as formalised by \\cite*{jianlin_cheng_neural_2008}. The observed response is structured as a vector of ones and zeros. If the peak was observed at $t \\in \\{1,\\ldots,T\\}$, all neurons before and including the $t$-th one will be 1 and all neurons after will be 0. 
Therefore, sigmoidal activation functions are used.\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=\\linewidth,keepaspectratio]{LRFCNN.jpg}\n \\caption{Low-resolution FCNN architecture (input variable names are detailed in Table 2)}\n \\label{LRFCNN}\n\\end{figure}\n\n\\subsection{Multi-resolution approach}\n\nThe multi-resolution GAMs leverage the same level of information for model inputs as in the high-resolution GAMs. In addition, they directly target the DP (or IP) response variable, as in the low-resolution approach.\n\\begin{table}[H]\n\\caption{Multi-resolution model inputs}\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{@{}cccc@{}}\n\\toprule\nType & Name & Unit & Description \\\\ \\midrule\n\\multirow{2}{*}{Weather} & matTem & {[}\u00b0C{]} & Vector of half-hourly temperatures \\\\ \\cmidrule(l){2-4} \n & matTem95 & {[}\u00b0C{]} & Vector of half-hourly smoothed temperatures \\\\ \\midrule\n\\multirow{3}{*}{Calendar} & dow & Categorical & Day of the week \\\\ \\cmidrule(l){2-4} \n & toy & None & Time of year (between 0 and 1) \\\\ \\cmidrule(l){2-4} \n & matInt & Categorical & Vector of time steps (between 0 and 47) \\\\ \\midrule\nLag & matLag & {[}$10^{1}$ GW{]} & Vector of half-hourly load from previous day \\\\ \\midrule\nOutput & DP or IP & {[}$10^{1}$ GW{]} or Categorical & Daily demand peak or Daily instant of peak \\\\ \\bottomrule\n\\end{tabular}%\n}\n\\end{table}\nTensor products defined in Section 3.2.1 are used to capture high-resolution information. The \\textit{mat} covariates presented in Table 3 are matrices of dimension $(N \\times 48)$, $N$ being the number of observations of the response variable DP. 
The multi-resolution GAM model is:\n\\begin{align}\n \\mu_{i} = & \\, \\psi_{1}(\\mathrm{dow}_{i})+ f_{1}^{20}(\\mathrm{toy}_{i}) + \\mathrm{ti}_{1}^{15,10}(\\mathrm{matTem}_{i}, \\mathrm{matInt}_{i}) \\nonumber \\\\ \n & + \\mathrm{ti}_{2}^{5,5}(\\mathrm{matTem95}_{i}, \\mathrm{matInt}_{i}) + \\mathrm{ti}_{3}^{5,5}(\\mathrm{matLag}_{i}, \\mathrm{matInt}_{i})\n\\end{align}\nUnlike previous approaches, IP and DP lags are not directly included as they can be captured by the model through the $\\mathrm{ti}_{3}$ tensor interaction. As for the low-resolution approach, Gaussian, scaled-T and GEV distributions are considered for the DP and the ordered categorical distribution for the IP. \n\nFor the multi-resolution NN, the tensor product interactions will be replaced by convolution layers. The mechanism sought through these convolution layers is essentially the same as for tensor products: extracting high-resolution information to directly model the DP or the IP. The high-resolution (half-hourly) data will be passed on to the convolution layers while the low-resolution (daily) data will go through the same FCNN architecture used in the previous approaches. As shown in Figure \\ref{MRCNN}, these two sections of the architecture are then concatenated to produce the final forecast of the DP load. The output structures for the DP and the IP are the same as detailed in Section 4.3, with one neuron for the DP and 48 neurons for the IP. \n\nThe convolutions used for the high-resolution information are 1D convolutions on two channels. Usually, only one convolutional block is used to capture interactions between all inputs. Here, each tensor product interaction will be replicated as a separate convolutional block. Thus, three convolution blocks will independently extract the three high-resolution terms: matTem, matTem95 and matLag. 
The second channel of each block is the matrix containing the vectors of time steps matInt.\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=\\linewidth,keepaspectratio]{MRCNN.jpg}\n \\caption{Multi-resolution CNN architecture (input variable names are detailed in Table 3)}\n \\label{MRCNN}\n\\end{figure}\n\n\\section{Results}\n\nThe performance of the models for the DP and the IP forecasting tasks is evaluated using three statistical metrics. As a rolling-origin forecasting procedure was chosen, a transitional regime can be observed in the first few iterations, particularly for NNs, which usually perform better with a large amount of training data. Therefore, Table 4 (DP) and Table 5 (IP) present the models' performances on the last year of data, that is, from 2015-07-01 to 2016-06-30 inclusive.\n\n\\begin{table}[H]\n\\caption{Performance on the last year of data for the DP (best model and associated metrics are in \\textbf{bold})}\n\\centering\n\\begin{tabular}{@{}ccccc@{}}\n\\toprule\n\\multirow{2}{*}{Resolution} & \\multirow{2}{*}{Model} & \\multicolumn{3}{c}{Metrics} \\\\ \\cmidrule(l){3-5} \n & & MAPE [\\%] & MAE [MW] & RMSE [MW] \\\\ \\midrule\nNA & Persistence & 4.38 & 23.0 & 34.3 \\\\ \\midrule\n\\multirow{3}{*}{High} \n & ARIMA & 4.08 & 21.0 & 27.8 \\\\ \\cmidrule(l){2-5} \n & Gaussian GAM & 2.43 & 13.0 & 15.5 \\\\ \\cmidrule(l){2-5} \n & FCNN & 1.47 & 7.77 & 10.3 \\\\ \\midrule\n\\multirow{5}{*}{Low} & ARIMA & 3.85 & 20.0 & 26.7 \\\\ \\cmidrule(l){2-5} \n & Scat GAM & 1.92 & 10.5 & 12.9 \\\\ \\cmidrule(l){2-5} \n & GEV GAM & 2.67 & 14.5 & 16.9 \\\\ \\cmidrule(l){2-5} \n & Gaussian GAM & 2.26 & 12.3 & 14.4 \\\\ \\cmidrule(l){2-5} \n & FCNN & 2.11 & 11.2 & 14.4 \\\\ \\midrule\n\\multirow{4}{*}{Multi} & GEV GAM & 1.52 & 8.19 & 10.3 \\\\ \\cmidrule(l){2-5} \n & \\textbf{Scat GAM} & \\textbf{1.41} & \\textbf{7.55} & \\textbf{9.59} \\\\ \\cmidrule(l){2-5} \n & Gaussian GAM & 1.42 & 7.65 & 9.63 \\\\ \\cmidrule(l){2-5} \n & CNN & 1.56 & 8.44 & 10.5 
\\\\ \\bottomrule\n\\end{tabular}%\n\\end{table} \n\n\nWith the exception of the high-resolution FCNN, the multi-resolution models perform better than the alternatives across all metrics (Table 4). The relatively strong performance of the high-resolution FCNN can be explained by the large amount of high-resolution data available, which suits the needs of NNs. Further, the FCNN contains more parameters to estimate and is thus more flexible than the high-resolution GAMs, which require the user to manually specify how the effect of each input variable should be modelled. Nevertheless, the best model on all metrics is the scaled-T GAM, built using the multi-resolution approach. The GEV GAM performed worse than the other distributions, which is surprising given that the GEV distribution is asymptotically justified for block maxima. Interestingly, the estimated shape parameter was found to be close to 0, a value at which the GEV model reduces to a Gumbel distribution. \n\n\\begin{table}[H]\n\\caption{Performance on last year of data for the IP (best model and associated metrics are in \\textbf{bold})}\n\\resizebox{\\textwidth}{!}{%\n\\centering\n\\begin{tabular}{@{}ccccc@{}}\n\\toprule\n\\multirow{2}{*}{Resolution} & \\multirow{2}{*}{Model} & \\multicolumn{3}{c}{Metrics} \\\\ \\cmidrule(l){3-5} \n & & R-Accuracy [\\%] & MAE [half-hour] & RMSE [half-hour] \\\\ \\midrule\nNA & Persistence & 79.4 & 2.49 & 5.36 \\\\ \\midrule\n\\multirow{2}{*}{High} & Gaussian GAM & 82.6 & 2.01 & 4.59 \\\\ \\cmidrule(l){2-5} \n & FCNN & 81.8 & 1.93 & 4.39 \\\\ \\midrule\n\\multirow{2}{*}{Low} & Ocat GAM & 79.1 & 2.11 & 4.22 \\\\ \\cmidrule(l){2-5} \n & FCNN & 83.2 & 1.94 & 4.40 \\\\ \\midrule\n\\multirow{2}{*}{Multi} & Ocat GAM & 79.4 & 2.01 & 4.08 \\\\ \\cmidrule(l){2-5} \n & \\textbf{CNN} & \\textbf{83.5} & \\textbf{1.70} & \\textbf{3.85} \\\\ \\bottomrule\n\\end{tabular}%\n}\n\\end{table}\n\n\n\nIP multi-resolution models have a similar or better performance than high- and low-resolution 
alternatives within the same model class on the MAE and RMSE metrics (Table 5), and the multi-resolution CNN is the best model under all metrics. However, the metrics are affected by high sampling variability. The reasons for this are detailed later in this section, where we also argue that the mediocre performance of ocat GAMs for IP forecasting is not fundamental, but attributable to the insufficient flexibility of the specific ocat parametrisation adopted here.\n\nTo quantify the variability of the performance metrics considered so far, we used block-bootstrap resampling. As described by \\cite*{forecast_eval}, for a test set of size $N$, we sample with replacement data blocks of fixed size $B=7$ (i.e., one week) to obtain an evaluation set of size $N$. Repeating this procedure $K$ times creates $K$ metric samples, which can be used to estimate the metric's sampling variability. In particular, Figure \\ref{boxplots} shows block-bootstrapped boxplots for all metrics and models on the last year of data. Figures \\ref{boxplots} (a-c) clearly demonstrate that the improvement obtained by adopting a multi-resolution approach is substantial and robust within the GAM model class. The HR-FCNN is competitive in terms of prediction but, as we discuss below, it is not easily interpretable and does not have the computational advantages of multi-resolution GAMs. 
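The block-bootstrap resampling described above can be sketched as follows (a generic implementation with an illustrative MAE metric; block alignment details are assumptions):

```python
import random

def block_bootstrap_metric(errors, metric, block_size=7, n_boot=1000, seed=0):
    """Resample whole blocks of consecutive days (with replacement) and
    recompute the metric on each resampled evaluation set of size len(errors)."""
    rng = random.Random(seed)
    n = len(errors)
    n_blocks = -(-n // block_size)  # ceil(n / block_size)
    samples = []
    for _ in range(n_boot):
        resampled = []
        for _ in range(n_blocks):
            start = rng.randrange(0, max(1, n - block_size + 1))
            resampled.extend(errors[start:start + block_size])
        samples.append(metric(resampled[:n]))  # keep the evaluation set size at n
    return samples

def mae(e):
    return sum(abs(x) for x in e) / len(e)

# constant errors: every bootstrap replicate of the MAE must equal 1.0
samples = block_bootstrap_metric([1.0] * 14, mae, n_boot=10)
```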
For the IP problem, Figures \\ref{boxplots} (d-f) make clear that the sampling variability is substantial (reasons for this are discussed below).\n\n\n\\begin{figure}[H]\n \\centering\n \\begin{tabular}{c|c}\n \\textbf{DP} & \\textbf{IP} \\\\\n \\includegraphics[width=0.45\\linewidth,keepaspectratio,page=1]{DP-block-bootstrap.pdf} & \\includegraphics[width=0.45\\linewidth,keepaspectratio,page=2]{IP-block-bootstrap.pdf} \\\\\n (a) & (d) \\\\[6pt]\n \\includegraphics[width=0.45\\linewidth,keepaspectratio,page=2]{DP-block-bootstrap.pdf} &\n \\includegraphics[width=0.45\\linewidth,keepaspectratio,page=3]{IP-block-bootstrap.pdf} \\\\\n (b) & (e) \\\\[6pt]\n \\includegraphics[width=0.45\\linewidth,keepaspectratio,page=3]{DP-block-bootstrap.pdf} &\n \\includegraphics[width=0.45\\linewidth,keepaspectratio,page=4]{IP-block-bootstrap.pdf} \\\\\n (c) & (f) \\\\[6pt]\n \\multicolumn{2}{c}{\\includegraphics[width=0.65\\linewidth,keepaspectratio,page=7]{legend-block-bootstrap.pdf}}\n \\end{tabular}\n\\caption{Block-bootstrap boxplots of the three metrics considered for the DP models (a), (b), (c) and IP models (d), (e), (f) on the last year of data}\n\\label{boxplots}\n\\end{figure}\n\n\nAs mentioned above, the rolling-origin forecasting setting may present a transitional regime during the first few training iterations. Figures \\ref{cumulative_DP} and \\ref{cumulative_IP} show the evolution of the different cumulative metrics calculated on the prediction signal updated on a monthly basis. Interestingly, the multi-resolution CNN for the DP (Figure \\ref{cumulative_DP}) starts off with a very poor prediction error in the first months. With more data, its performance rapidly improves across all metrics. The other models have a less dramatic performance trend, with the multi-resolution GAMs consistently performing better than the other models. 
The prediction error of these models oscillates during the first few months, which can be explained by the fact that the models did not have enough information to adequately estimate the yearly cycle, because they were fitted to only one year of data. After a year, the prediction errors have stabilised. \n\n\\begin{figure}[H]\n \\centering\n \\begin{tabular}{>{\\centering\\arraybackslash}m{0.45\\linewidth} >{\\centering\\arraybackslash}m{0.45\\linewidth} }\n \\includegraphics[width=\\linewidth,keepaspectratio]{DP_MAPE.pdf} &\n \\includegraphics[width=\\linewidth,keepaspectratio]{DP_MAE.pdf} \\\\\n (a) & (b) \\\\[6pt]\n \\includegraphics[width=\\linewidth,keepaspectratio]{DP_RMSE.pdf} &\n \\includegraphics[width=\\linewidth,keepaspectratio,page=2]{legend-DP.pdf} \\\\\n (c) & \\\\[6pt]\n \\end{tabular}\n\\caption{Cumulative forecasting metrics evolution for each of the monthly updated DP models: (a) MAPE, (b) MAE, (c) RMSE}\n\\label{cumulative_DP}\n\\end{figure}\n\nFor the IP forecasting task, the different metrics evolve with similar patterns (Figure \\ref{cumulative_IP}), but the seasonal oscillations in performance persist beyond the first year. Figure \\ref{fig:ocat_issue_1} explains why predicting the IP is harder in summer than in winter. In particular, while winter daily demand profiles have a reliable evening peak, summer load profiles are flatter and on some days the peak distribution becomes bimodal. That is, the daily peak might occur in the morning or in the evening with roughly equal probability. This is also shown by the right plot in Figure \\ref{fig:ocat_issue_2}. Hence, it is clear that in the summer the IP point estimates might be unfairly penalised under the simple metrics considered here. This implies that a forecasting model might be better off providing an IP forecast that falls between the two peaks, as MR-CNN is occasionally doing (see Figure \\ref{fig:ocat_issue_1}). 
Such a forecast might improve the metrics but has little value in an operational setting. Note also that the ocat model struggles to capture an IP distribution that is unimodal or bimodal depending on the time of year. In particular, the ocat model used here is based on a standard ordered-logit parametrisation, which involves modelling the mean of a latent logistic random variable via an additive model. It is not possible to transform a unimodal distribution on the ordered categories (here, IP) into a bimodal one simply by controlling a location parameter. Hence, a more flexible model (e.g., \\citealp*{peterson1990partial}) would be preferable.\n\n\\begin{figure}[H]\n \\centering\n \\begin{tabular}{>{\\centering\\arraybackslash}m{0.45\\linewidth} >{\\centering\\arraybackslash}m{0.45\\linewidth} }\n \\includegraphics[width=\\linewidth,keepaspectratio]{IP_R-ACC.pdf} &\n \\includegraphics[width=\\linewidth,keepaspectratio]{IP_MAE.pdf} \\\\\n (a) & (b) \\\\[6pt]\n \\includegraphics[width=\\linewidth,keepaspectratio]{IP_RMSE.pdf} &\n \\includegraphics[width=\\linewidth,keepaspectratio,page=2]{legend-IP.pdf} \\\\\n (c) & \\\\[6pt]\n \\end{tabular}\n\\caption{Cumulative forecasting metrics evolution for each of the monthly updated IP models: (a) R-Accuracy, (b) MAE, (c) RMSE}\n\\label{cumulative_IP}\n\\end{figure}\n\n\\begin{figure}[H] \n \\centering\n \\includegraphics[width=\\linewidth,keepaspectratio]{ocat-1.pdf}\n \\caption{Left: observed IP as a function of the day of year (black) and corresponding predictions from MR-CNN (red, shifted downward for visibility) and MR-ocat (blue, shifted upward). 
Right: same plot for HR-FCNN (red, downward) and HR-Gauss (blue, upward).}\n \label{fig:ocat_issue_1}\n\end{figure}\n\n\begin{figure}[H] \n \centering\n \begin{tabular}{>{\centering\arraybackslash}m{0.45\linewidth} >{\centering\arraybackslash}m{0.45\linewidth} }\n \includegraphics[width=\linewidth,keepaspectratio,page=8]{IP-block-bootstrap.pdf} &\n \includegraphics[width=\linewidth,keepaspectratio]{ocat-2.pdf} \\\n \end{tabular}\n \caption{Left: Block-bootstrap boxplots of the d-RMSE metric for the IP problem. Right: daily demand profile curves during winter (shifted upward by 15 GW) and summer. The blue curves are profiles with a small absolute difference between the morning and evening peak ($<$ 50 MW).}\n \label{fig:ocat_issue_2}\n\end{figure}\n\nIt is interesting to verify the performance of each model for IP forecasting via a bespoke metric. In particular, let $t^{\text{m}}_i$ be the observed IP on day $i$ and let $\hat{t}^{\text{m}}_i$ be the corresponding forecast. We propose the following metric:\n\begin{equation*}\n\text{d-RMSE} = \left(\frac{1}{n}\sum_{i=1}^n(y_{t^{\text{m}}_i} - y_{\hat{t}^{\text{m}}_i})^2\right)^{1\/2}\n\end{equation*}\nwhich is based on the difference between the daily peak demand and the demand at the predicted IP (the d stands for demand). This metric is more relevant to operations than MSE or MAE. For instance, in peak shaving applications, providing a forecast $\hat{t}^{\text{m}}_i$ very different from $t^{\text{m}}_i$ might not be a problem if $y_{t^{\text{m}}_i}$ and $y_{\hat{t}^{\text{m}}_i}$ are similar, which is what d-RMSE quantifies. Figure \ref{fig:ocat_issue_2} shows a bootstrapped boxplot of d-RMSE for each model. Interestingly, high-resolution methods are best here, by a substantial margin in the case of HR-FCNN.\n\nThe results obtained so far do not provide reliable evidence in favour of or against the adoption of a multi-resolution approach for IP forecasting. 
In fact, the poor forecasting performance of MR-ocat is arguably attributable to the particular ordered-logit parametrisation used here. MR-CNN does well under standard, statistically motivated losses, but it is inferior to high-resolution approaches on an operationally relevant one (d-RMSE). It would be interesting to verify whether fitting the MR-CNN model by minimising d-RMSE directly (rather than MSE, as done here) would lead to better results. We leave this, and the search for a more flexible distribution for ordered categorical responses, for future work.\n\nImplementing the multi-resolution approach on the DP forecasting problem is more straightforward, hence the positive results discussed so far can be considered reliable. We further verify their significance by performing Diebold-Mariano \cite*{diebold_comparing_1995} (DM) tests on the absolute and squared error losses. The null hypothesis of the tests is: ``both forecasts have the same expected loss''. The results of the DM tests are shown in Figure \ref{DMTEST}, which confirms that, within the GAM class, the multi-resolution forecasts are significantly different from the low-resolution and high-resolution approaches under both metrics. 
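For completeness, the core DM statistic is simple to compute. The sketch below is a simplified illustration only (one-step-ahead forecasts, no small-sample or autocorrelation correction, normal approximation); the results in the paper were obtained with the \textit{multDM} R package, not with this code.

```python
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2, loss=np.abs):
    """Two-sided Diebold-Mariano test for equal expected forecast loss.

    e1, e2 : arrays of forecast errors from two competing models.
    loss   : elementwise loss function, e.g. np.abs or np.square.
    Returns the DM statistic and its p-value under a normal
    approximation (adequate for one-step-ahead forecasts; longer
    horizons would require an HAC variance estimate).
    """
    d = loss(np.asarray(e1)) - loss(np.asarray(e2))  # loss differential
    n = d.size
    dm = d.mean() / np.sqrt(d.var(ddof=1) / n)
    p_value = 2.0 * stats.norm.sf(abs(dm))  # H0: E[d] = 0
    return dm, p_value
```

A large negative statistic indicates that the first model has the smaller expected loss; a p-value above the chosen threshold means the two forecasts cannot be significantly differentiated.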
\n\n\\begin{figure}[H]\n \\centering\n \\begin{tabular}{c}\n \\resizebox{\\textwidth}{!}{%\n \\begin{tabular}{lllllllllllll}\nModel & HR-arima & HR-gauss & HR-FCNN & LR-arima & LR-gauss & LR-scat & LR-gev & LR-FCNN & MR-gauss & MR-scat & MR-gev & MR-CNN \\\\\nHR-arima & \\cellcolor[HTML]{C0C0C0} & 0 & 0 & \\textcolor{red}{0.116} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\nHR-gauss & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & 0 & 0 & 0.042 & 0 & 0 & 0.001 & 0 & 0 & 0 & 0 \\\\\nHR-FCNN & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & 0 & 0 & 0 & 0 & 0 & \\textcolor{red}{0.729} & \\textcolor{red}{0.549} & \\textcolor{red}{0.250} & 0.029\\\\\nLR-arima & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\nLR-gauss & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & 0 & 0 & { 0.010} & 0 & 0 & 0 & 0 \\\\\nLR-scat & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & 0 & \\textcolor{red}{0.122} & 0 & 0 & 0 & 0 \\\\\nLR-gev & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & 0 & 0 & 0 & 0 & 0 \\\\\nLR-FCNN & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & 0 & 0 & 0 & 0 \\\\\nMR-gauss & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & 
\\cellcolor[HTML]{C0C0C0} & \\textcolor{red}{0.063} & 0 & { 0.001} \\\\\nMR-scat & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & 0 & 0 \\\\\nMR-gev & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\textcolor{red}{0.143} \\\\\nMR-CNN & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} & \\cellcolor[HTML]{C0C0C0} \n\\end{tabular}} \\\\\n (a) \\\\ [6pt]\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{lllllllllllll}\nModel & HR-arima & HR-gauss & HR-FCNN & LR-arima & LR-gauss & LR-scat & LR-gev & LR-FCNN & MR-gauss & MR-scat & MR-gev & MR-CNN \\\\\nHR-arima & \\cellcolor[HTML]{C0C0C0}{ } & 0 & 0 & \\textcolor{red}{0.161} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\nHR-gauss & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & 0 & 0 & { 0.010} & 0 & { 0.003} & \\textcolor{red}{0.118} & 0 & 0 & 0 & 0 \\\\\nHR-FCNN & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & 0 & 0 & 0 & 0 & 0 & \\textcolor{red}{0.400} & \\textcolor{red}{0.362} & \\textcolor{red}{0.492} & { 0.016} \\\\\nLR-arima & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\nLR-gauss & \\cellcolor[HTML]{C0C0C0}{ } & 
\\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & 0 & 0 & \\textcolor{red}{0.938} & 0 & 0 & 0 & 0 \\\\\nLR-scat & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & 0 & { 0.026} & 0 & 0 & 0 & { 0.002} \\\\\nLR-gev & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & 0 & 0 & 0 & 0 & 0 \\\\\nLR-FCNN & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & 0 & 0 & 0 & 0 \\\\\nMR-gauss & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\textcolor{red}{0.468} & 0 & 0 \\\\\nMR-scat & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & 0 & 0 \\\\\nMR-gev & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & \\cellcolor[HTML]{C0C0C0}{ } & { 0.015} \\\\\nMR-CNN & 
\cellcolor[HTML]{C0C0C0}{ } & \cellcolor[HTML]{C0C0C0}{ } & \cellcolor[HTML]{C0C0C0}{ } & \cellcolor[HTML]{C0C0C0}{ } & \cellcolor[HTML]{C0C0C0}{ } & \cellcolor[HTML]{C0C0C0}{ } & \cellcolor[HTML]{C0C0C0}{ } & \cellcolor[HTML]{C0C0C0}{ } & \cellcolor[HTML]{C0C0C0}{ } & \cellcolor[HTML]{C0C0C0}{ } & \cellcolor[HTML]{C0C0C0}{ } & \cellcolor[HTML]{C0C0C0}{ }\n\end{tabular}} \\\n (b) \\[6pt]\n\n \end{tabular}\n\caption{P-values from the Diebold-Mariano test for DP forecasts. The test used is from the \textit{multDM} package in R \citep*{drachal_multdm_2020}. In black, the null hypothesis is rejected at the 5\% threshold and both forecasts are significantly different. In red, the null hypothesis is not rejected at the 5\% threshold and both forecasts cannot be significantly differentiated; (a) absolute errors (b) squared errors.}\n\label{DMTEST}\n\end{figure}\n\nIt is interesting to quantify the complexity or parsimony of the models considered so far. AIC can be interpreted as a parsimony measure, but it requires computing the effective number of model parameters, and we are not aware of any method that would allow estimating these across all the model classes considered here. Figure \ref{AIC} shows the AICs of the low- and multi-resolution GAMs. The multi-resolution approaches consistently have a smaller AIC than the low-resolution approaches. Furthermore, the slopes suggest that the gap widens as more data become available. \n\nFor NNs, parsimony is highly dependent on the chosen architecture. In our case, the low-resolution and high-resolution NNs have a very similar architecture, with only one hidden layer and a dropout layer (Figure \ref{HRFCNN} and Figure \ref{LRFCNN}). Only the input shapes and the number of observations vary. On the other hand, the multi-resolution NN (Figure \ref{MRCNN}) requires the use of convolutional layers, which are leveraged to extract the high-resolution information. 
The extraction process requires multiple layers, which forces the multi-resolution CNN to have a larger number of parameters than the low-resolution and high-resolution NNs.\n\n\begin{figure}[H]\n \centering\n \includegraphics[width=0.8\linewidth,keepaspectratio]{AIC.pdf}\n \caption{AIC for the low-resolution and multi-resolution DP GAMs}\n \label{AIC}\n\end{figure}\n\nThe results discussed in this section show that multi-resolution approaches are superior to low- and high-resolution alternatives for the DP forecasting problem. The forecasting performances of the high-resolution FCNN and the multi-resolution GAMs are not significantly different but, in an operational peak demand forecasting context, the multi-resolution GAM would be preferred because it can be decomposed into additive components, which can be more easily interpreted (and manually adjusted) by operational staff. In addition, note that adopting a multi-resolution approach can bring substantial computational advantages, which are easy to quantify within the GAM model class. In particular, the GAM model matrix $\bf X$ in the multi-resolution case has $T$ times fewer rows than in the high-resolution case, where $T$ is the number of daily observations (i.e., $T=48$ for half-hourly data). Therefore, memory usage is reduced by a factor of $T$, and many computations frequently required during GAM model fitting (such as ${\bf X}^T {\bf W} {\bf X}$, where $\bf W$ is a diagonal matrix) will take less time. \n\n\section{Conclusion}\n\nThis paper proposes a novel modelling approach, which uses both high-resolution and low-resolution information to forecast the daily electrical load peak magnitude and timing. The results demonstrate that this multi-resolution approach is flexible enough to be applied to different model classes and that it provides a competitive predictive performance. 
In particular, GAMs and NNs with similar input structures were used to implement the multi-resolution approach and to compare its performance with that of low-resolution, high-resolution and persistence alternatives. On UK aggregate demand data, the multi-resolution models performed significantly better across all metrics when forecasting peak magnitude. In addition to improved predictions, adopting a multi-resolution approach enables faster computation via data compression and leads to more parsimonious models, as demonstrated by the consistently lower AIC scores achieved by multi-resolution models within the GAM model class. \n\nThe results on the peak timing forecasting problem are mixed, but interesting. A multi-resolution neural network does marginally better than the alternatives when performance is assessed via standard statistical metrics. However, the corresponding forecast is occasionally inappropriate (falling between the morning and evening peaks) and inferior to high-resolution alternatives when assessed via an operationally motivated metric. The results suggest that the multi-resolution neural network should be fitted to data by minimising a problem-specific performance metric directly. For instance, one could consider financial metrics on billing periods, as done by \cite*{saxena_hybrid_2019}. The multi-resolution GAM does poorly on the peak timing problem, but this is attributable to the insufficient flexibility of the ordered logit parametrisation used here. Obtaining stronger evidence in favour of or against the use of multi-resolution methods for the peak timing problem would require solving the issues just mentioned, which could be the subject of further work.\n\nThe forecasting methods presented here could be extended in several ways. The set of models described in this paper could be used within an aggregation of experts or ensemble methods, which might lead to more accurate forecasts. 
The benefits of multi-resolution methods have been demonstrated in a context where covariates were available at different temporal resolutions, but they could be generalised to other multi-resolution settings, such as spatio-temporal data or individual customer data (see e.g., \citealp*{fasiolo_qgam_2020} for an example application of the functional quantile GAMs of \citealp*{fasiolo_fast_2020} to residential electricity demand data). Finally, this paper focused on day-ahead daily peak magnitude and time forecasting, but multi-resolution methods could be applied to other short-term windows (e.g., weekly). However, estimating monthly or yearly peaks would require a different approach, because the number of observed demand peaks would be too low. \n\n\n\section*{Acknowledgments}\nMatteo Fasiolo was partially funded by EPSRC grant EP\/N509619\/1. The datasets used in this paper are available on the National Grid and National Oceanic and Atmospheric Administration websites. The R code as well as the data prepared for the experiments in this paper are available at the following link: \href{https:\/\/cutt.ly\/CYvgIP3}{https:\/\/cutt.ly\/CYvgIP3}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction} \n\subsection{Introduction.} Given a compact operator $T:H \rightarrow H$ on a Hilbert space $H$, compactness implies that the inversion problem, i.e. reconstructing $x$ from $y$ in\n$$ Tx = y,$$\nis ill-posed: small changes in $y$ may lead to arbitrarily large changes in $x$. The simplest example is perhaps that of integral operators on $L^2(\mathbb{R})$, where integration acts as a smoothing\nprocess and makes inversion of the operator difficult. Of particular importance is the Hilbert transform\n$$ (Hf)(x) = \frac{1}{\pi}\mbox{p.v.}\int_{\mathbb{R}}{\frac{f(y)}{x-y}dy},$$\nwhich satisfies $\| Hf\|_{L^2(\mathbb{R})} = \|f\|_{L^2(\mathbb{R})}$. 
However, in practice, measurements\nhave to be taken from a compact interval and this motivates the definition of the truncated Hilbert transform: using $\chi_I$ to\ndenote the characteristic function on an interval $I \subset \mathbb{R}$, the truncated Hilbert transform $H_T:L^2(I) \rightarrow L^2(J)$ on the\nintervals $I,J \subset \mathbb{R}$ is given by\n$$ H_T f = \chi_{J} H(f \chi_{I}).$$\nWhenever the intervals $I$ and $J$ are disjoint, the singularity of the kernel never comes into play and the operator is highly smoothing: indeed, if $I$ and $J$\nare disjoint, the inversion problem becomes \textit{severely} ill-posed and the singular values decay exponentially fast. The inversion problem is ill-behaved even on finite-dimensional subspaces: \textit{every} finite-dimensional subspace $V \subset L^2(I)$ contains some $0 \neq f \in V$ with\n$$ \| H_T f\|_{L^2(J)} \leq c_1 e^{-c_2 \dim(V)} \| f \|_{L^2(I)} \qquad \mbox{for some}~c_1, c_2 > 0~\mbox{depending only on}~I,J.$$\n\n\begin{figure}[h!]\n\begin{center}\n\begin{tikzpicture}[xscale=9,yscale=1.1]\n\draw [ultra thick, domain=0:1, samples = 300] plot (\x, {-0.15269*sin(2*pi*\x r) + 0.4830*sin(3*pi*\x r) + 0.3084*sin(4*pi*\x r) + 0.80509*sin(5*pi*\x r)} );\n\draw [thick, domain=0:1] plot (\x, {0} );\n\filldraw (0,0) ellipse (0.006cm and 0.048cm);\n\node at (0,-0.3) {0};\n\filldraw (1,0) ellipse (0.006cm and 0.048cm);\n\node at (1,-0.3) {1};\n\end{tikzpicture}\n\caption{A function $f$ on $[0,1]$ with $\|Hf\|^2_{L^2([2,3])} \sim 10^{-7}\|f\|^2_{L^2([0,1])}$} \n\end{center}\n\end{figure}\n\nThis strong form of ill-posedness makes it very easy to construct bad examples: take any finite\northonormal set $\left\{\phi_1, \phi_2, \dots, \phi_n \right\} \subset L^2(I)$. By linearity, we have for any scalars $a_1, \dots, a_n$ that\n$$ \left\| H_T \left(\sum_{k=1}^{n}{a_k \phi_k}\right)\right\|_{L^2(J)}^2 = \sum_{i, j = 1}^{n}{a_i a_j \left\langle H_T \phi_i, H_T \phi_j 
\\right\\rangle_{L^2(J)}}$$\nwhich is a simple quadratic form. Finding the eigenvector corresponding to the smallest eigenvalue of the Gramian $G = (\\left\\langle H_T \\phi_i, H_T \\phi_j \\right\\rangle)_{i,j=1}^{n}$\nproduces a suitable linear combination of $\\left\\{\\phi_1, \\phi_2, \\dots, \\phi_n \\right\\}$ for which $\\|H_Tf\\|_{L^2(J)} \\ll \\|f\\|_{L^2(I)}$. The strong degree of ill-posedness guarantees that the smallest eigenvalue decays\nexponentially in $n$ independently of the orthonormal basis. Recently, Alaifari, Pierce and the second author \\cite{al} showed that it is nonetheless possible to guarantee some control by proving a new type of stability\nestimate for the Hilbert transform: for disjoint intervals $I,J \\subset \\mathbb{R}$\n$$ \\|H f\\|_{L^2(J)} \\geq c_1 \\exp{\\left(-c_2\\frac{ \\|f_x\\|_{L^2(I)}}{\\|f\\|_{L^2(I)}}\\right)} \\| f \\|_{L^2(I)},$$\nwhere the constants $c_1, c_2$ depend only on the intervals $I,J$.\nThis estimate guarantees that the only way for $Hf$ to be substantially smaller than $f$ is the presence of oscillations. If one reconstructs data $f$ from\nmeasurements $g$ (the equation being $H_T f = g$), then a small error $f + h$ yields\n$$ H_T(f +h) = H_Tf + H_T h= g + H_T h.$$\nThe stability estimate implies that one can guarantee to distinguish $f$ from $f+h$ when $h$ has few oscillations.\nThe only existing result in this direction is \\cite{al} for the Hilbert transform.\n\n\n\n\n\\section{Main results}\n\nThe purpose of our paper is to combine the argument developed by Alaifari, Pierce and the second author \\cite{al} with classical results of Bertero \\& Gr\\\"unbaum \\cite{gru1}, Landau \\& Pollak \\cite{pr2, pr3} and Slepian \\& Pollak \\cite{pr1} to establish such stability estimate in three other cases: we give essentially sharp stability estimates for the Truncated Laplace Transform, the Adjoint Truncated Laplace Transform and the Truncated Fourier Transform. 
While this shows that this class of stability estimates exists in a wider context, the question of whether such results could be 'generically' true (i.e. for a wide class of integral operators) remains open.\n\n\n\n\subsection{Truncated Laplace Transform} \n The truncated Laplace transform $\mathcal{L}_{a,b}:L^2[a,b] \rightarrow L^2[0,\infty]$ is defined via\n$$\n (\mathcal{L}_{a,b}f)(s) = \int_{a}^{b}{e^{-s t} f(t) dt},\n$$\nwhere $0 < a < b < \infty$. \nThe operator $\mathcal{L}_{a,b}$ is compact and its image is dense in $L^2[0,\infty]$. We show\nthat if $\|\mathcal{L}_{a,b} f\|_{L^2[0, \infty]} \ll \|f\|_{L^2[a,b]}$,\nthen this is due to the presence of oscillations. \n\n\n\begin{figure}[h!]\n\begin{center}\n\begin{tikzpicture}[xscale=9,yscale=1.1]\n\draw [ultra thick, domain=1:2, samples = 300] plot (\x, {-0.0707*sin(pi*\x r) - 0.421*sin(2*pi*\x r) + 0.2137*sin(3*pi*\x r) + 0.8783*sin(4*pi*\x r)} );\n\draw [thick, domain=1:2] plot (\x, {0} );\n\filldraw (1,0) ellipse (0.006cm and 0.048cm);\n\node at (1,-0.3) {1};\n\filldraw (2,0) ellipse (0.006cm and 0.048cm);\n\node at (2,-0.3) {2};\n\end{tikzpicture}\n\caption{A function $f$ on $[1,2]$ with $\| \mathcal{L}_{1,2} f \|^2_{L^2[0,\infty]} \sim 10^{-8}\|f\|^2_{L^2([1,2])}$.} \n\end{center}\n\end{figure}\n\n\begin{theorem} There exist $c_1, c_2>0$, depending only on $a,b$, so that for all real-valued $f \in H^1[a,b]$\n$$ \| \mathcal{L}_{a,b} f \|_{L^2[0,\infty]} \geq c_1 \exp{\left(-c_2\frac{ \|f_x\|_{L^2[a,b]}}{\|f\|_{L^2[a,b]}}\right)}\|f\|_{L^2[a,b]}.$$\n\end{theorem}\nThe result is sharp up to constants: if $c_2$ is chosen sufficiently small, then for every $c_1 > 0$ there is an infinite orthonormal sequence of functions for which the inequality fails. 
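The ill-posedness illustrated in the figure above can be reproduced numerically. The snippet below is an illustration under our own choices (sine basis, Gauss-Legendre quadrature), not the construction used to generate the figure: it assembles the Gramian $G_{ij} = \langle \mathcal{L}_{1,2}\phi_i, \mathcal{L}_{1,2}\phi_j \rangle_{L^2[0,\infty]}$ using the identity $\int_0^{\infty}{e^{-s(t+u)} ds} = 1\/(t+u)$, and its smallest eigenvalue exhibits the exponential decay described above.

```python
import numpy as np

def laplace_gramian(n_basis, a=1.0, b=2.0, n_quad=400):
    """Gramian G_ij = <L phi_i, L phi_j>_{L^2[0,inf)} of the truncated
    Laplace transform on [a,b], for the basis phi_k(t) = sqrt(2) sin(k pi (t-a))
    (orthonormal on [a,b] when b-a = 1).  Uses
    <L f, L g> = int_a^b int_a^b f(t) g(u) / (t+u) dt du,
    obtained by carrying out the s-integration explicitly."""
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)
    t = 0.5 * (b - a) * nodes + 0.5 * (a + b)   # quadrature nodes mapped to [a,b]
    w = 0.5 * (b - a) * weights
    k = np.arange(1, n_basis + 1)
    phi = np.sqrt(2.0) * np.sin(np.outer(k, np.pi * (t - a)))
    kernel = 1.0 / (t[:, None] + t[None, :])    # 1/(t+u)
    W = phi * w                                  # fold weights into basis evaluations
    return W @ kernel @ W.T

eigenvalues = np.linalg.eigvalsh(laplace_gramian(8))
# The smallest eigenvalue lies many orders of magnitude below the largest,
# mirroring the exponential decay of the singular values.
```

The eigenvector attached to the smallest eigenvalue yields a linear combination of the basis functions with $\| \mathcal{L}_{1,2} f \|_{L^2[0,\infty]} \ll \|f\|_{L^2[1,2]}$, in the spirit of the Gramian construction described for the truncated Hilbert transform in the introduction.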
The proof proceeds similarly to that in \cite{al}, with a crucial ingredient for Laplace transforms coming from a 1985 paper of Bertero \& Gr\"unbaum \cite{gru1}.\n\n\n\n\n\subsection{Adjoint Truncated Laplace Transform.} The adjoint operator $\mathcal{L}_{a,b}^*:L^2[0,\infty] \rightarrow L^2[a,b]$ \n$$ (\mathcal{L}_{a,b}^*f)(s) = \int_{0}^{\infty}{e^{-s t} f(t) dt}$$\nis very different in structure. We seek a lower bound on $\|\mathcal{L}_{a,b}^*f\|_{L^2[a,b]}$ in terms of $\| f \|_{L^2[0, \infty]}$: if $f$ is supported far away from the\norigin, then the exponentially decaying kernel will induce rapid decay even if no oscillations are present (additional oscillations can, of course, further decrease the size of $\|\mathcal{L}_{a,b}^*f\|_{L^2[a,b]}$).\nAny lower bound will therefore have to incorporate where the function is localized, and the natural framework for this is that of weighted estimates.\n\n\n\n\begin{theorem} There exist $c_1, c_2>0$, depending only on $a,b$, so that for all real-valued $f \in H^2[0, \infty]$\n$$ \| \mathcal{L}_{a,b}^* f \|_{L^2[a,b]} \geq c_1 \exp{\left(-c_2\frac{ \|x f_{xx}\|_{L^2[0,\infty]} + \|x f_{x}\|_{L^2[0,\infty]} + \|x f\|_{L^2[0,\infty]} + \| f\|_{L^2[0,\infty]} }{\|f\|_{L^2[0,\infty]}}\right)}\|f\|_{L^2[0,\infty]}.$$\n\end{theorem}\nThe result is again sharp in the sense that there are counterexamples for every $c_1 > 0$ if the constant $c_2$ is smaller than some fixed positive\nconstant depending on $a,b$.\n\n\n\n\subsection{Truncated Fourier Transform} Let $\mathcal{F}_T: L^2[-1,1] \rightarrow L^2[-1,1]$ be given by\n$$ \mathcal{F}_T f = \chi_{[-1,1]}\mathcal{F}\left(\chi_{[-1,1]} f\right)$$\nwhere, as usual, $\mathcal{F}$ denotes the Fourier transform\n$$ (\mathcal{F} f)(\xi) = \int_{\mathbb{R}}{f(x) e^{i \xi x}dx}.$$\nThe Fourier transform of a compactly supported function is analytic and cannot vanish on an open set. 
In particular, this yields\n$$ \int_{-1}^{1}{|\widehat{f}(\xi)|^2d\xi} > 0$$\nfor every nonzero $f \in L^2[-1,1]$. The expression can certainly be small because $\widehat{f}$ can have all its $L^2-$mass far away from the origin: however, if $\widehat{f}$ has its $L^2-$mass\nfar away from the origin, then $f$ oscillates on $[-1,1]$. We give a quantitative description of this phenomenon.\n\n\begin{theorem} There exist $c_1, c_2 > 0$ such that for all real-valued $f \in H^1[-1,1]$\n$$ \int_{-1}^{1}{|\widehat{f}(\xi)|^2d\xi}\geq c_1\left(c_2 \frac{ \left\| f_x \right\|_{L^2[-1,1]}}{\|f\|_{L^2[-1,1]}} \right)^{-c_2\frac{\left\| f_x \right\|_{L^2[-1,1]}}{\|f\|_{L^2[-1,1]}} } \int_{-1}^{1}{|f(x)|^2dx}.$$\n\end{theorem}\n\n\begin{figure}[h!]\n\begin{center}\n\begin{tikzpicture}[xscale=4.5,yscale=1.1]\n\draw [ultra thick, domain=-1:1, samples = 300] plot (\x, {0.00055*cos(pi*\x r) + 0.0824*cos(2*pi*\x r) + 0.6196*cos(3*pi*\x r) + 0.7805*cos(4*pi*\x r)} );\n\draw [thick, domain=-1.05:1.05] plot (\x, {0} );\n\filldraw (-1,0) ellipse (0.012cm and 0.048cm);\n\node at (-1,-0.3) {-1};\n\filldraw (0,0) ellipse (0.012cm and 0.048cm);\n\node at (0,-0.3) {0};\n\filldraw (1,0) ellipse (0.012cm and 0.048cm);\n\node at (1,-0.3) {1};\n\end{tikzpicture}\n\caption{A function $f$ on $[-1,1]$ with $\| \mathcal{F}_T f \|^2_{L^2[-1,1]} \sim 10^{-18}\|f\|^2_{L^2([-1,1])}$.} \n\end{center}\n\end{figure}\n\n\n\nWe are not aware of any such results in the literature;\nhowever, the result is certainly close in spirit to the question of the degree to which simultaneous localization in space and frequency is possible. 
An example is Nazarov's\nquantitative form \cite{naz} of the Amrein-Berthier theorem \cite{am} (see also \cite{bene}): for any $S, \Sigma \subset \mathbb{R}$ with\nfinite measure and any $f \in L^2(\mathbb{R})$ it is not possible for $f$ to be too strongly localized in $S$ and $\widehat{f}$ to be too\nstrongly localized in $\Sigma$:\n$$ \left\|f \chi_{\mathbb{R} \setminus S} \right\|^2_{L^2(\mathbb{R})} + \left\|\widehat{f} \chi_{\mathbb{R} \setminus \Sigma} \right\|^2_{L^2(\mathbb{R})} \geq \frac{e^{-133 |S| |\Sigma|}}{133} \| f\|^2_{L^2(\mathbb{R})}.$$\nThe proof of Theorem 3 makes use of \textit{prolate spheroidal wave functions} introduced by Landau, Pollak and Slepian \cite{pr2, pr3, pr1, pr4, pr5}. They appear naturally in the\nLandau-Pollak uncertainty principle \cite{pr3}, which states that if\n$\mbox{supp}(\widehat{f}) \subset [-1,1]$\nand\n$$ \int_{|x| \geq T}{|f(x)|^2 dx} \leq \varepsilon \|f\|_{L^2(\mathbb{R})},~\mbox{then}\n\qquad \|f - \pi(f) \|_{L^2} \leq 49\varepsilon^2 \|f\|_{L^2},$$\nwhere $\pi$ is the projection onto a $(4\left\lfloor T \right\rfloor +1)-$dimensional subspace spanned by the first elements of a particular \textit{universal} orthonormal basis $(\phi_n)_{n \in \mathbb{N}}$ (these are the prolate spheroidal wave functions). \\\n\n\n\textbf{Outline of the paper.} \S 3 gives a high-level overview of the argument and provides two easy inequalities for real functions that will be used in the proofs. \S 4 explains the underlying machinery specifically required to prove Theorem 1 and gives the full proof. A very similar argument allows us to prove Theorem 2 and we describe the necessary modifications in \S 5. \S 6 gives a proof of Theorem 3. 
$c_1, \\dots, c_5$ are positive constants, $\\sim$ denotes equivalence up to constants.\n\n\n\n\n\n\n\n\\section{Outline of the arguments}\n\n\n\\subsection{The overarching structure.}\nThe proofs (also for the result in \\cite{al}) have the same underlying structure: we use a $T^* T$ argument and the fact that\nthere is a differential operator $D$ whose eigenfunctions coincide with the eigenfunctions of $T^* T$. This allows us to exploit the structure of\nthe differential operator to analyze the decomposition of a generic function into the orthonormal basis of singular functions.\nMore precisely: we are interested in establishing lower bounds for an \ninjective operator between two Hilbert spaces $T:H_1 \\rightarrow H_2$. In all these cases, we assume that\n\\begin{enumerate}\n\\item we control the decay of the eigenvalues of $T^*T$ from below,\n\\item there is a differential operator $D:H_1 \\rightarrow H_1$ with the same eigenfunctions as $T^*T$\n\\item and we can control the growth of eigenvalues $\\lambda_n$ of $D$.\n\\end{enumerate}\nLet us denote the $L^2-$normalized eigenfunctions of $D$ (which are also eigenfunctions of $T^*T$) by $(u_n)_{n=1}^{\\infty}$. They form an orthonormal basis\nof $L^2$ in all situations that are of interest to us. Furthermore, we will use the spectral theorem\n$$ \\left\\langle D f, f\\right\\rangle = \\sum_{n=1}^{\\infty}{\\lambda_n \\left| \\left\\langle f, u_n \\right\\rangle \\right|^2}$$\nand explicit information on the growth of the eigenvalues $\\lambda_n$. We can furthermore, using integration by parts and the structure of $D$, control the action of $D$ in the Sobolev space $H^{s}$\n$$ \\left\\langle D f, f\\right\\rangle \\sim \\| f\\|^2_{H^s}.$$\nThe useful insight is that this implies that the eigenfunction $(u_n)_{n=1}^{\\infty}$ explore the phase space in a way that is analogous to classical eigenfunctions of the Laplacian: low-energy eigenfunctions\nhave small derivatives. 
In particular, if $Df$ is small, then at least some of the projections $|\left\langle f, u_n \right\rangle|$ have to be big for $n$ somewhat small. Conversely, functions whose $L^2-$energy\nis mostly concentrated on high-frequency eigenfunctions $(u_n)_{n \geq N}$ have $ |\left\langle D f, f\right\rangle|$ large. The next Lemma makes this precise.\n\n\n\begin{lemma}[Low oscillation implies low frequency] If $\lambda_n \geq c_1 n^{2}$ and $|\left\langle D f, f\right\rangle| \leq c_2 \|f_x\|_{L^2}^2$ for some $0 < c_1, c_2 < \infty$, then there exists a constant $0 < c < \infty$ such that\n$$ \sum_{n \leq c \frac{ \|f_x\|_{L^2}}{\|f\|_{L^2}} }^{}{ \left| \left\langle f, u_n \right\rangle\right|^2} \geq \frac{ \| f\|^2_{L^2}}{2}.$$\n\end{lemma}\n\begin{proof} Both inequalities have the same scaling under multiplication by scalars $f \rightarrow \lambda f$, so we can assume w.l.o.g. that $\|f\|_{L^2} = 1$. Trivially,\n\begin{align*}\n \sum_{n \geq c_3\|f_x\|_{L^2}}^{}{ \lambda_n \left| \left\langle f, u_n \right\rangle\right|^2} &\geq \sum_{n \geq c_3 \|f_x\|_{L^2}}^{}{ c_1 n^{2} \left| \left\langle f, u_n \right\rangle\right|^2} \\\n&\geq c_1 \left(c_3 \|f_x\|_{L^2}\right)^2 \sum_{n \geq c_3 \|f_x\|_{L^2}}^{}{ \left| \left\langle f, u_n \right\rangle\right|^2}. \n\end{align*}\nHowever, we also clearly have that\n$$ \sum_{n \geq c_3 \|f_x\|_{L^2}}^{}{\lambda_n \left| \left\langle f, u_n \right\rangle\right|^2} \leq \sum_{n=1}^{\infty}{\lambda_n \left| \left\langle f, u_n \right\rangle\right|^2}= |\left\langle D f, f\right\rangle| \leq c_2\| f_x\|^2_{L^2}.$$\nAs a consequence\n$$ \sum_{n \geq c_3 \|f_x\|_{L^2}}^{}{ \left| \left\langle f, u_n \right\rangle\right|^2} \leq \frac{c_2}{c_1 c_3^2},$$\nwhich can be made smaller than $1\/2$ for a suitable choice of $c_3$ (depending on $c_1,c_2$). 
Since the $(u_n)_{n=1}^{\\infty}$ form an orthonormal system\n$$ 1 = \\|f\\|_{L^2}^2 = \\sum_{n=1 }^{\\infty}{\\left| \\left\\langle f, u_n \\right\\rangle\\right|^2},~\\mbox{we get} \\quad \\sum_{n \\leq \\sqrt{\\frac{2c_2}{c_1}} \\frac{ \\|f_x\\|_{L^2}}{ \\|f\\|_{L^2} } }^{}{ \\left| \\left\\langle f, u_n \\right\\rangle\\right|^2} \\geq \\frac{ \\| f\\|^2_{L^2}}{2}.$$\n\\end{proof}\n\n\n\n\nWe may not know the eigenfunctions $(u_n)_{n=1}^{\\infty}$ but we can ensure that for any function $f$ half\nof its $L^2-$mass of the expansion will be contained in the subspace\n$$ \\mbox{span}\\left\\{u_n: n \\leq c \\frac{ \\|f_x\\|_{L^2}}{\\|f\\|_{L^2}} \\right\\} \\subset H_1.$$\nThe second step of the argument invokes decay of the eigenvalues $\\mu_n$ of $T^* T$ via\n\\begin{align*}\n\\| T f\\|^2_{H_2} &= \\left\\langle Tf, Tf \\right\\rangle_{H_2} = \\left\\langle T^*Tf, f \\right\\rangle_{H_1} = \\sum_{n =1 }^{\\infty}{ \\mu_n |\\left\\langle f, u_n \\right\\rangle|^2}\n\\end{align*}\nand combining this with the previous argument to obtain\n$$ \\sum_{n =1 }^{\\infty}{ \\mu_n |\\left\\langle f, u_n \\right\\rangle|^2} \\geq \\sum_{n \\leq c \\frac{ \\|f_x\\|_{L^2}}{\\|f\\|_{L^2}} }^{\\infty}{ \\mu_n |\\left\\langle f, u_n \\right\\rangle|^2} \\geq \n\\mu_{ c \\frac{ \\|f_x\\|_{L^2}}{\\|f\\|_{L^2}} } \\sum_{n \\leq c \\frac{ \\|f_x\\|_{L^2}}{\\|f\\|_{L^2}} }^{\\infty}{|\\left\\langle f, u_n \\right\\rangle |^2} \\geq \\frac{\\mu_{ c \\frac{ \\|f_x\\|_{L^2}}{\\|f\\|_{L^2} }} \\|f\\|^2_{L^2(H_1)} }{2}. \n$$\n\\textit{Sharpness of results.} It is not difficult to see that these types of arguments are actually sharp (up to constant) if $f=u_n$. This will immediately imply sharpness of our results: if constants\nin the statement are chosen too small, then the inequality will fail for $(u_n)_{n \\geq N}$ for some $N$ sufficiently large. 
While this is not our main focus, there is quite\na bit of additional research on precise asymptotics of the constants and how they depend on the intervals (see \\cite{led0}).\n\n\\subsection{An easy inequality.} All our proofs will have a natural case-distinction: either the function changes sign on the interval $[a,b]$ or it does not. If it changes sign, then\nwe can use standard arguments to bound all arising terms by $\\|f_x\\|_{L^2[a,b]}$, which simplifies the expressions. \n\n\\begin{lemma} Let $[a,b] \\subset \\mathbb{R}$. If $f:[a,b] \\rightarrow \\mathbb{R}$ is differentiable and changes sign on $[a,b]$, then\n$$ \\|f\\|_{L^{\\infty}[a,b]} \\leq \\sqrt{b-a} \\|f_x\\|_{L^2[a,b]}.$$\n\\end{lemma}\n\\begin{proof} Since $f$ changes sign and is continuous, we have $f(x_0) = 0$ for some $x_0 \\in [a,b]$. Then, for every $x \\in [a,b]$, using Cauchy-Schwarz,\n$$ |f(x)| = \\left| \\int_{x_0}^{x}{f'(z) dz} \\right| \\leq \\int_{x_0}^{x}{|f'(z)| dz} \\leq \\sqrt{b-a} \\|f_x\\|_{L^2[a,b]}.$$\n\\end{proof}\n\nIf $f$ does \\textit{not} change sign, then we cannot bound low-regularity terms like $\\|f\\|_{L^2}$ by high-regularity terms like $\\|f_x\\|_{L^2[a,b]}$. However, there is also no cancellation in\nthe integral operator, and arguments specifically tailored to the integral operators will admit easy lower bounds in terms of the $L^1-$norm. The next inequality shows that the lower bounds we obtain in the Theorems are much smaller than the $L^1-$norm, so that we may treat both cases at the same time.\n\n\\begin{lemma} Let $[a,b] \\subset \\mathbb{R}$. 
Then, for every $c_2 > 0$, there exists a $c_1 > 0$ (depending on $c_2, a, b$) such that for all nonnegative, differentiable $f:[a,b] \\rightarrow \\mathbb{R}_{+}$ \n$$ \\int_{a}^{b}{ f(x) dx} \\geq c_1 \\exp{\\left(-c_2\\frac{ \\|f_x\\|_{L^2[a,b]}}{\\|f\\|_{L^2[a,b]}}\\right)}\\|f\\|_{L^2[a,b]}.$$\n\\end{lemma}\n\\begin{proof} Squaring both sides of the desired inequality and using\n$$ \\|f\\|_{L^2[a,b]}^2 = \\int_{a}^{b}{f(x)^2 dx} \\leq \\|f\\|_{L^{\\infty}} \\int_{a}^{b}{f(x) dx}$$\nshows that the desired statement is implied by the stronger inequality\n$$ \\|f\\|_{L^{\\infty}[a,b]} \\leq \\frac{1}{c_1^2} \\exp{\\left(c_2\\frac{ \\|f_x\\|_{L^2[a,b]}}{\\|f\\|_{L^2[a,b]}}\\right)} \\int_{a}^{b}{ f(x) dx}.$$\nThe inequality is invariant under multiplication by scalars $f \\rightarrow c f$, which allows us to assume w.l.o.g. that $\\|f\\|_{L^{\\infty}[a,b]} = 1$.\nLet us now take $J \\subset [a,b]$ to be an interval of maximal length such that $f$ assumes the value $1$ at one boundary point of $J$ and\nthe value $1\/2$ at the other boundary point. If no such interval exists, then $f > 1\/2$ everywhere on $[a,b]$ and the original inequality trivially holds with $c_1 = \\sqrt{b-a}\/2$ since\n$$ \\int_{a}^{b}{ f(x) dx} \\geq \\frac{b-a}{2} \\geq \\frac{ \\sqrt{b-a}}{2} \\|f\\|_{L^2[a,b]} \\geq \\frac{ \\sqrt{b-a}}{2} \\exp{\\left(-c_2\\frac{ \\|f_x\\|_{L^2[a,b]}}{\\|f\\|_{L^2[a,b]}}\\right)} \\|f\\|_{L^2[a,b]}.$$\nSuppose now that $J$ exists. Clearly, \n$$ \\int_{a}^{b}{f(x)dx} \\geq \\int_{J}^{}{f(x)dx} \\geq \\frac{|J|}{2} \\qquad \\mbox{and} \\qquad \\|f\\|_{L^2[a,b]} \\leq \\sqrt{b-a}.$$\nIt remains to bound $\\|f_x\\|_{L^2[a,b]}$ from below. We use the trivial estimate $\\|f_x\\|_{L^2[a,b]} \\geq \\|f_x\\|_{L^2(J)}$ and argue that\namong all functions on the interval $J$ assuming the values $1$ and $1\/2$ on the boundary, the linear function yields the smallest value for $\\|f_x\\|_{L^2(J)}$.\nThe existence of a minimizing function is obvious because of compactness. 
The minimizer $g$ has to satisfy the Euler-Lagrange equation, which simplifies to $g_{xx} = 0$.\nThis implies \n$$\\|f_x\\|_{L^2(J)} \\geq \\left\\| \\left(1 - \\frac{x}{2|J|}\\right)_{x}\\right\\|_{L^2[0, |J|]} = \\frac{1}{2\\sqrt{|J|}}.$$\nAltogether, we have\n$$ \\frac{1}{c_1^2} \\exp{\\left(c_2\\frac{ \\|f_x\\|_{L^2[a,b]}}{\\|f\\|_{L^2[a,b]}}\\right)} \\int_{a}^{b}{ f(x) dx} \\geq \\frac{1}{c_1^2} \\exp{\\left(\\frac{c_2}{2 \\sqrt{|J|} \\sqrt{b-a} }\\right)} \\frac{|J|}{2}.$$\nHowever, for every choice of $a,b,c_2>0$ such that $a < b$, the right-hand side is bounded from below by a positive constant uniformly in $|J| \\in (0, b-a]$, so it can be made at least $1 = \\|f\\|_{L^{\\infty}[a,b]}$ by choosing $c_1$ sufficiently small. This proves the claim.\n\\end{proof}\n\nWe also require a lower bound on the growth of the eigenvalues of the differential operator.\n\\begin{lemma} There exists a constant $c_3 > 0$ depending on $a,b$ such that the eigenvalues of $D_t$ on $[a,b]$\nsatisfy\n$$ \\lambda_n \\geq c_3 n^2 .$$\n\\end{lemma}\n\n\\subsection{Proof of Theorem 1.} \n\\begin{proof}\nThe proof combines the various ingredients. We assume w.l.o.g. that $\\| f\\|_{L^2[a,b]} = 1$. Integration by parts gives, for differentiable $f$,\n\\begin{align*} \\left\\langle D f, f \\right\\rangle &\\leq \\int_{a}^{b}{ (t^2-a^2)(b^2-t^2) \\left( \\frac{d}{dt} f(t)\\right)^2 + 2(t^2-a^2) f(t)^2 dt } \\\\\n&\\leq (b^2-a^2)^2\\| f_x\\|^2_{L^2[a,b]} + 2(b^2-a^2) \\| f\\|^2_{L^2[a,b]}.\\end{align*}\nWe distinguish two cases: (1) $f$ has a root in $[a,b]$ or (2) $f$ has no roots in $[a,b]$.\nWe start with the first case. 
Then Lemma 2 implies\n$$ \\| f\\|^2_{L^2[a,b]} \\leq (b-a) \\| f\\|^2_{L^{\\infty}[a,b]} \\leq (b-a)^2 \\| f_x\\|^2_{L^2[a,b]}$$\nand thus\n\\begin{align*} \\left| \\left\\langle D f, f \\right\\rangle \\right| &\\leq (b^2-a^2)^2\\| f_x\\|^2_{L^2[a,b]} + 2(b^2-a^2) \\| f\\|^2_{L^2[a,b]} \\\\\n&\\leq \\left( (b^2-a^2)^2 + 2(b^2-a^2)(b-a)^2 \\right)\\|f_x\\|_{L^2[a,b]}^2.\\end{align*}\nAt the same time, since the eigenfunctions form a basis, we may also write\n\\begin{align*} \\left\\langle D f, f \\right\\rangle\n= \\sum_{n=1}^{\\infty}{\\lambda_n |\\left\\langle f, v_n \\right\\rangle|^2}\n \\end{align*}\nAltogether, we have, using the lower bound $\\lambda_n \\geq c_3 n^2$ that\n$$ \\sum_{n=1}^{\\infty}{c_3 n^2 |\\left\\langle f, v_n \\right\\rangle|^2} \\leq \\sum_{n=1}^{\\infty}{\\lambda_n |\\left\\langle f, v_n \\right\\rangle|^2} = |\\left\\langle Df, f\\right\\rangle| \\leq c_4 \\| f_x\\|^2_{L^2[a,b]} .$$\nAs a consequence, we can use Lemma 1 to deduce that the Littlewood-Paley projection onto low frequencies contains a positive fraction of the $L^2-$mass\n$$ \\sum_{n \\leq c_5 \\|f_x\\|_{L^2[a,b]}}^{}{ |\\left\\langle f, v_n \\right\\rangle|^2} \\geq \\frac{1}{2}\\|f\\|_{L^2[a,b]}^2.$$ \nThe argument can now be concluded as follows: it is known that the eigenvalues of $\\mathcal{L}_{a,b}^* \\mathcal{L}_{a,b}$ decay exponentially (for estimates, see \\cite{led0,led3})\n and we have also just established that a positive proportion of the $L^2-$mass lies at suitably small frequencies. 
We write \n\\begin{align*}\n\\| \\mathcal{L}_{a,b} f\\|^2_{L^2[0, \\infty]} &= \\left\\langle \\mathcal{L}_{a,b} f, \\mathcal{L}_{a,b} f \\right\\rangle_{L^2[0, \\infty]} = \\left\\langle \\mathcal{L}_{a,b}^* \\mathcal{L}_{a,b} f, f \\right\\rangle_{L^2[a,b]} = \\sum_{n =1 }^{\\infty}{ \\mu_n |\\left\\langle f, u_n \\right\\rangle|^2},\n\\end{align*}\nwhere $(\\mu_n)_{n=1}^{\\infty}$ are the eigenvalues of $ \\mathcal{L}_{a,b}^* \\mathcal{L}_{a,b}: L^2[a,b] \\rightarrow L^2[a,b]$ and $(u_n)_{n=1}^{\\infty}$ is the associated sequence of eigenfunctions. We bound\n\\begin{align*}\n \\sum_{n =1 }^{\\infty}{ \\mu_n |\\left\\langle f, u_n \\right\\rangle|^2} &\\geq \\sum_{n \\leq c_5 \\|f_x\\|_{L^2[a,b]} }^{}{ \\mu_n |\\left\\langle f, u_n \\right\\rangle|^2} \\\\\n&\\geq \\mu_{ c_5 \\|f_x\\|_{L^2[a,b]}} \\sum_{n \\leq c_5 \\|f_x\\|_{L^2[a,b]}}^{}{|\\left\\langle f, u_n \\right\\rangle |^2} \\\\\n&\\geq \\frac{\\mu_{ c_5 \\|f_x\\|_{L^2[a,b]}}}{2}.\n\\end{align*}\nIt is well-known (see e.g. \\cite{led0}) that the singular values decay exponentially\n$$ \\mu_n \\geq c_1 e^{-c_2 n},$$\nwhere the constants $c_1, c_2$ only depend on the interval. This yields \n$$ \\| \\mathcal{L}_{a,b} f \\|_{L^2[0,\\infty]}^2 = \\left\\langle \\mathcal{L}_{a,b}^* \\mathcal{L}_{a,b}f, f \\right\\rangle \\geq c_1 \\exp{\\left(-c_2 \\|f_x\\|_{L^2[a,b]}\\right)}\\|f\\|^2_{L^2[a,b]}$$\nfor functions satisfying $\\|f\\|_{L^2[a,b]} = 1$ which, in turn, implies that for general $f \\in L^2[a,b]$ \n$$ \\| \\mathcal{L}_{a,b} f \\|_{L^2[0,\\infty]}^2 \\geq c_1 \\exp{\\left(-c_2\\frac{ \\|f_x\\|_{L^2[a,b]}}{\\|f\\|_{L^2[a,b]}}\\right)}\\|f\\|^2_{L^2[a,b]}.$$\nIt remains to consider the second case. In that case, $f$ cannot change sign. We assume w.l.o.g. 
that it is always positive and bound\n\\begin{align*}\n\\| \\mathcal{L}_{a,b}f \\|_{L^2[0,\\infty]}^2 = \\int_{a}^{b}{ \\left( \\int_{a}^{b}{ \\frac{f(r)}{ r + t} dr} \\right) f(t) dt} \\geq \\int_{a}^{b}{ \\left( \\int_{a}^{b}{ \\frac{f(r)}{ b+b} dr} \\right) f(t) dt}\n= \\frac{1}{2b} \\left( \\int_{a}^{b}{ f(t) dt} \\right)^2.\n\\end{align*}\nHowever, here Lemma 3 immediately yields that for every $c_2 > 0$ and all $a < b$ there exists $c_1 > 0$ such that\n$$ \\frac{1}{2b} \\left( \\int_{a}^{b}{ f(t) dt} \\right)^2 \\geq c_1 \\exp{\\left(-c_2\\frac{ \\|f_x\\|_{L^2[a,b]}}{\\|f\\|_{L^2[a,b]}}\\right)}\\|f\\|^2_{L^2[a,b]},$$\nwhich concludes the proof in the second case.\n\\end{proof}\n\n\\section{Proof of Theorem 2}\n\\begin{proof}\nWe argue as in the proof of Theorem 1 and assume w.l.o.g. that $\\|f\\|_{L^2} = 1$. The eigenvalues $\\lambda_n$ of the associated differential operator again satisfy $\\lambda_n \\geq c_4 n^2$ for some $c_4 > 0$.\nTherefore \n\\begin{align*}\n \\sum_{n=1}^{\\infty}{c_4 n^2 |\\left\\langle f, v_n \\right\\rangle|^2} \\leq \\sum_{n=1}^{\\infty}{\\lambda_n |\\left\\langle f, v_n \\right\\rangle|^2} \\leq &\\| x f_{xx} \\|^2_{L^2[0,\\infty]} + c_1 \\|x f_x\\|^2_{L^2[0,\\infty]} \\\\\n&+ c_2\\|x f\\|^2_{L^2[0,\\infty]} + c_3 \\|f\\|^2_{L^2[0,\\infty]}.\\end{align*}\nLet \n$$ J = \\| x f_{xx} \\|^2_{L^2[0,\\infty]} + c_1 \\|x f_x\\|^2_{L^2[0,\\infty]} + c_2\\|x f\\|^2_{L^2[0,\\infty]} + c_3 \\|f\\|^2_{L^2[0,\\infty]}.$$\nUsing the argument from the proof of Lemma 1 in conjunction with\n$$ \\|f\\|_{L^2} = 1 = \\sum_{n=1}^{\\infty}{|\\left\\langle f, v_n \\right\\rangle|^2},$$\nwe can conclude the existence of a constant $ 0 < c_5 < \\infty$ depending only on $c_4$ such that\n$$ \\sum_{n \\leq c_5 \\sqrt{J}}^{}{ |\\left\\langle f, v_n \\right\\rangle|^2} \\geq \\frac{\\|f\\|^2_{L^2}}{2}.$$\nThe argument now follows from the exponential decay of the singular values (see \\cite{led0}) and the elementary inequality $(a^2+b^2+c^2+d^2)^{1\/2} \\leq a+b+c+d$ for $a,b,c,d \\in \\mathbb{R}_{\\geq 0}$, which gives\n $$ \\sqrt{J} \\leq \\| x f_{xx} \\|_{L^2[0,\\infty]} + c_1 \\|x f_x\\|_{L^2[0,\\infty]} + c_2\\|x f\\|_{L^2[0,\\infty]} + c_3 \\|f\\|_{L^2[0,\\infty]}.$$\n\\end{proof}\n\n\n\\section{Proof of Theorem 3}\n\n\\subsection{The Differential Operator} \nConsider the self-adjoint operator\n $\\mathcal{F}_T: L^2[-1,1] \\rightarrow L^2[-1,1]$\n$$ (\\mathcal{F}_Tf)(\\xi) = \\int_{-1}^{1}{f(x) e^{i \\xi x}dx}.$$\nThe crucial ingredient, which the monograph of Osipov, Rokhlin 
\\& Xiao \\cite{mono} ascribes to Landau \\& Pollak \\cite{pr2, pr3} and Slepian \\& Pollak \\cite{pr1}, is that\nthe eigenfunctions of $\\mathcal{F}_T$ coincide with the eigenfunctions of a differential operator.\n\\begin{lemma}[\\cite{pr2, pr3, pr1}] The eigenfunctions $(u_n)_{n=1}^{\\infty}$ of $\\mathcal{F}_T$ coincide with the eigenfunctions of \n$$D = -(1-x^2)\\frac{d^2}{dx^2} + 2x\\frac{d}{dx} + x^2 \\qquad \\mbox{on} ~ [-1,1].$$\n\\end{lemma}\nIt is classical that the eigenvalues of the differential operator grow asymptotically as $\\lambda_n \\sim_{} n^2$, in particular, we have $\\lambda_n \\geq c_3 n^2$ for some $c_3 > 0$.\n\n\\subsection{Proof of Theorem 3} \n\\begin{proof} Let $f \\in H^1[-1,1]$ be arbitrary. We have\n$$\n \\sum_{n=1}^{\\infty}{c_3 n^2 |\\left\\langle f, u_n \\right\\rangle|^2} \\leq \\sum_{n=1}^{\\infty}{\\lambda_n |\\left\\langle f, u_n \\right\\rangle|^2} = \\left\\langle Df, f\\right\\rangle.\n$$\nRepeated integration by parts gives that\n\\begin{align*} \\left\\langle Df, f\\right\\rangle &= \\int_{-1}^{1}{(1-x^2)f_x(x)^2 + x \\frac{d}{dx}(f(x)^2) + x^2 f(x)^2 dx}\\\\\n&= f(1)^2- f(-1)^2 + \\int_{-1}^{1}{(1-x^2)f_x(x)^2 + (x^2-1) f(x)^2 dx} \\\\\n&\\leq \\left[ f(1)^2 - f(-1)^2 \\right] + \\int_{-1}^{1}{f_x(x)^2 + f(x)^2 dx}.\n \\end{align*}\nWe again distinguish cases: either $f$ changes sign or it does not. If $f$ has a root somewhere, then with Lemma 2 we may conclude that\n$$ \\max \\left( f(1)^2, f(-1)^2, \\int_{-1}^{1}{f(x)^2 dx} \\right) \\leq 4 \\int_{-1}^{1}{f_x(x)^2 dx}$$\nand the result follows as before. The difference in the final result is a result of the different asymptotical behaviour of the eigenvalues (see e.g. Widom \\cite{widom}, a very\nprecise description of the asymptotic behavior can be found in Fuchs \\cite{fuchs})\n$$ \\log{ \\lambda_n} \\sim - n \\log{n}.$$\nIf $f$ does not change sign, we have to argue differently. Assume w.l.o.g. that $f \\geq 0$. 
Then\n\\begin{align*}\n\\| \\mathcal{F}_{T} f\\|_{L^2[-1,1]}^2 &= \\left\\langle \\mathcal{F}_T f, \\mathcal{F}_T f\\right\\rangle_{L^2[-1,1]} =\n \\int_{-1}^{1}{ \\left( \\int_{-1}^{1}{ f(x) e^{i x \\xi} dx} \\right) \\overline{ \\left( \\int_{-1}^{1}{ f(x) e^{i x \\xi} dx} \\right) } d\\xi}\\\\\n&= \\int_{-1}^{1}{ \\int_{-1}^{1}{ \\int_{-1}^{1}{ f(x) f(y) e^{i \\xi (x-y)} d\\xi} d x} dy} \\\\\n&= \\int_{-1}^{1}{ \\int_{-1}^{1}{ \\frac{2 \\sin{(x-y)}}{x-y} f(x) f(y) dx dy}} \\\\\n&\\geq \\frac{1}{2} \\int_{-1}^{1}{ \\int_{-1}^{1}{ f(x) f(y) dx dy}} = \\frac{1}{2} \\left(\\int_{-1}^{1}{f(x) dx}\\right)^2.\n\\end{align*}\nIt remains to show that for every $c_2 > 0$ there exists $c_1 > 0$ such that for all differentiable $f:[-1,1] \\rightarrow \\mathbb{R}$\nthat do not change sign\n$$ \\frac{1}{2} \\left(\\int_{-1}^{1}{f(x) dx}\\right)^2 \\geq c_1 \\exp{\\left(-c_2\\frac{ \\|f_x\\|_{L^2[a,b]}}{\\|f\\|_{L^2[a,b]}}\\right)}\\|f\\|^2_{L^2[a,b]}$$\nwhich follows from Lemma 3.\n\\end{proof}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{Introduction}\nWe use the concept of probability extensively in science, and very\nbroadly in everyday life. Many probabilistic tools used to ``quantify our ignorance'' seem \nintuitive even to non-scientists. For example, if we consider\nthe value of one bit which we know nothing about,\nwe are inclined to assign probabilities to each value. Furthermore,\nit seems natural to give it a ``$50$-$50$'' chance of being $0$ or\n$1$. This everyday \nintuition is often believed to have deep theoretical justification based in ``classical\nprobability theory'' (developed in famous works such as~\\cite{Laplace:1774zz}). \n\nHere we argue that the success of such\nintuition is fundamentally rooted in specific physical properties of the\nworld around us. 
In our view the things we call ``classical\nprobabilities'' can be seen as originating in the quantum probabilities that govern the microscopic\nworld, suitably propagated by physical processes so as to be\nrelevant on classical scales. From this perspective the validity of\nassigning equal probabilities to the two states of an unknown bit\ncan be quantified by understanding the particular physical processes\nthat connect quantum fluctuations in the microscopic world to that\nparticular bit. The fact that we have simple beliefs about how to\nassign probabilities that do not directly refer to complicated\nprocesses of physical propagation is simply a\nreflection of the intuition we have built up by living in a world\nwhere these processes behave in a particular\nway. Our position has implications for how we use probabilities in general,\nbut here we emphasize \napplications to cosmology which originally motivated our interest in\nthis topic. Specifically, we question a number of applications of\nprobabilities to cosmology that are popular today.\n\nMany physicists view\nclassical physics as something that emerges from a fundamentally\nquantum world under the right conditions (for example in systems\nlarge enough to have negligible quantum fluctuations and with suitable\ndecohering behavior) without the need for new fundamental physics\noutside of the quantum theory\\footnote{We personally take this ``fundamentally\n quantum'' view but our arguments go\n through for some (but not all) other interpretations of\n quantum mechanics}. Taking that point of view does not make the\nclaims in this paper trivial ones. Yes, in that picture ``all physics is\nfundamentally quantum'', but here we focus specifically on the origin of\nrandomness. Consider a classical computer well engineered to prevent\nquantum fluctuations of its constituent particles from\naffecting the classical steps of the computation. 
One could\nmodel a fluctuating classical system on such a computer (e.g.\na gas of perfect classical billiards), but the fluctuations in such an\nidealized classical gas would indeed be classical ones. The appearance\nof a given fluctuation would reflect information already encoded in\nclassical features of the initial state of the computation and\nwould {\\em not} come from quantum fluctuations of the particles\nmaking up the physical computer. \nWe argue that the real physical world does not contain \nsuch perfectly isolated \nclassical systems and that quantum uncertainty, not ignorance of\nclassical information dominates probabilistic behavior we\nobserve. (For the computer example just given, the quantum\nuncertainties will enter when setting up the \ninitial state.)\n\nIn Bayesian language, the probability of a theory $T$ being true given\na dataset $D$ is computed by combining the probability of $D$ given\n$T$ (``$P(D|T)$'') with the ``prior probability'' ($P(T)$) assigned to\n$T$. Often $P(T)$ will include other data combined in a\nsimilar way. Inputting new data over time produces a \nlist of updated probabilities. The start of such a list always\nrequires a ``model uncertainty'' (MU) prior that provides a personal\nstatement about which model(s) you prefer. Expressions for $P(D|T)$ can be tested by\nstatistical analysis of data and good scientists (discussing well\ndesigned experiments) should agree on how\nto compute $P(D|T)$. The MU prior is a personal choice which is not\nbuilt from a scientifically \nrigorous process. The quantity $P(D|T)$ describes randomness in\nphysical systems, whereas MU priors represent states of mind of\nindividual scientists. This paper only treats $P(D|T)$ probabilities, not\nMU priors. 
A further indication of the deep differences between\n$P(D|T)$ and MU priors is that the goal of science is to produce\nsufficiently high quality data (and sufficient consensus about the\ntheories) that the choice of MU priors made by members of the community is of no consequence to the\nresult. On the other hand, results will always depend strongly on at least some parts of\n$P(D|T)$. \n\n\n\\section{The Page Problem}\n\\label{Page}\nWe outline the relevance of this question to cosmology using a simple\ntoy model. It is commonplace in cosmology to\ncontemplate a ``multiverse'' (e.g. in the context of ``eternal\ninflation''~\\cite{Guth:2007ng}) in which many equivalent copies of a given observer\n appear in the theory. \n\nAs pointed\nout by Page~\\cite{Page:2009qe}, even if one knew the full wavefunction for\nsuch a theory it would be impossible to make predictions about\nfuture observations using probabilities derived from that\nwavefunction. The problem arises because multiverse theories\nare expected to contain many copies of the observer (sometimes said to\nbe in different\n``pocket universes'') that are identical in terms of\nall current data, but which differ in details of their environments\nthat affect outcomes of future\nexperiments (e.g. experiments measuring neutrino masses or\ncosmological perturbations). In these theories it is impossible\nto construct appropriate projection operators to describe measurements\nwhere one does not know which part of the Hilbert space (i.e. which copy of\nus and our world) is being measured. 
Thus, the outcomes of future\nmeasurements are ill-posed quantum questions which cannot be answered\nwithin the theory.\n\nTo illustrate this problem consider a\nsystem comprised of two two-state subsystems called ``$A$'' and ``$B$''.\nThe whole system is spanned by the four basis states constructed as\nproducts of basis states of the two subsystems: $\\left\\{ {{\\left| 1\n \\right\\rangle }^{A}}{{\\left| 1 \\right\\rangle }^{B}},{{\\left| 1\n \\right\\rangle }^{A}}{{\\left| 2 \\right\\rangle }^{B}},{{\\left| 2\n \\right\\rangle }^{A}}{{\\left| 1 \\right\\rangle }^{B}},{{\\left| 2\n \\right\\rangle }^{A}}{{\\left| 2 \\right\\rangle }^{B}} \\right\\}$.\nFor the whole system in state $\\left| \\psi \\right\\rangle$, the\nprobability assigned to measurement outcome ``$i$'' can be\nexpressed as $\\left\\langle \\psi \\right|\\hat{P}_i\\left| \\psi\n\\right\\rangle $ for a suitably chosen projection operator\n$\\hat{P}_i$. One can readily construct projection\noperators corresponding to measuring system ``$A$'' in the ``$1$'' state\n(regardless of the state of the ``$B$'' subsystem):\n\\begin{equation}\n\\hat{P}_{1}^{A}\\equiv \\left( {{\\left| 1 \\right\\rangle\n }^{A}}{{\\left| 1 \\right\\rangle }^{B}}{}^{B}\\left\\langle 1\n\\right|{}^{A}\\left\\langle 1 \\right| \\right)+\\left( {{\\left| 1\n \\right\\rangle }^{A}}{{\\left| 2 \\right\\rangle\n }^{B}}{}^{B}\\left\\langle 2 \\right|{}^{A}\\left\\langle 1 \\right|\n\\right).\n\\end{equation} A similar operator $\\hat{P}_{1}^{B}$ represents\nmeasurements of only subsystem ``$B$''. Operators such as $\\hat{P}_{12} \\equiv {{\\left| 1 \\right\\rangle\n }^{A}}{{\\left| 2 \\right\\rangle }^{B}}{}^{B}\\left\\langle 2\n\\right|{}^{A}\\left\\langle 1 \\right|$ represent measurements of {\\em\n both} subsystems.\n\nThe problem arises because there is no projection operator that\ngives the probability of outcome~``$1$'' when the subsystem to be\nmeasured (``$A$'' or ``$B$'') is undetermined. 
That is an ill-posed\nquestion in the quantum theory. Page emphasizes that this\nkind of question apparently needs to be addressed in order to make\npredictions in the multiverse, where our lack of knowledge about which\npocket universe we occupy corresponds to ``$A$'' vs. ``$B$'' not being\ndetermined in the toy model. Such ill-posed quantum questions exist\nin laboratory situations as well. We tend not to be concerned about\nthese questions however, since there are also plenty of well-posed problems\non which to focus our attention. Also, in the laboratory one might\nresolve the problem by adding a measurable ``label'' to the setup that\ndoes identify ``$A$'' vs. ``$B$''. But such a resolution is believed\nnot to be \npossible in many cosmological cases. \n\nA natural response to this issue is to appeal to classical\nideas about probabilities to ``fill in the gap''. In\nparticular, if one could assign classical probabilities $p_A$ and\n$p_B$ \nfor the measurement to be made on the respective subsystems,\nthen one could answer the question posed above (the probability of the\noutcome ``$1$'' with the \nsubsystem to be measured undetermined) by giving:\n\\begin{equation}\n{{p}_{1}}={{p}_{A}}\\left\\langle \\psi \\right|\\hat{P}_{1}^{A}\\left|\n\\psi \\right\\rangle +{{p}_{B}}\\left\\langle \\psi\n\\right|\\hat{P}_{1}^{B}\\left| \\psi \\right\\rangle.\n\\label{p1}\n\\end{equation}\nNote that the values of $p_A$ and $p_B$ are {\\em not} determined from\n$\\left| \\psi \\right\\rangle$, and instead provide additional\ninformation introduced to write\nEqn. \\ref{p1}. Although $p_1$ can be written as the expectation value\nof\n${{\\hat{P}}_{1}}={{p}_{A}}\\hat{P}_{1}^{A}+{{p}_{B}}\\hat{P}_{1}^{B}$,\nthe operator $\\hat{P}_1$ is not a projection operator\n($\\hat{P}_1\\hat{P}_1\\neq\\hat{P}_1$), confirming that $p_1$ does not give\nprobabilities of fully quantum origin.\n\nAuthors who apply expressions like Eqn. 
\\ref{p1} to\ncosmology~\\cite{Srednicki:2009vb,*Page:2012gh} do not claim this gives \na quantum probability. Instead they appeal to classical\nnotions of probability along \nthe lines we have discussed at the start of this paper. Surely one \nsuccessfully introduces classical probabilities such as $p_A$ and\n$p_B$ all the time in everyday situations to quantify our ignorance,\nso why should the same approach not be used in the cosmological case?\n\nOur view is that the two cases are completely different. We\nbelieve that in every situation where we use ``classical''\nprobabilities successfully to describe physical randomness these probabilities could in principle be\nderived from a wavefunction describing the full \nphysical situation. In this context classical probabilities are just ways to\nestimate quantum probabilities when calculating\nthem directly is inconvenient. Our\nextensive experience using classical probabilities in this way (really\nquantifying our {\\em quantum} ignorance) cannot \nbe used to justify the use of classical \nprobabilities in situations where quantum probabilities have been\nclearly shown to be ill-defined and uncomputable. Translating the\nformal framework from one situation to the other is not an extrapolation\nbut the creation of a brand new conceptual framework that needs\nto be justified on its own\\footnote{Cooperman~\\cite{Cooperman:2010zc}\n has explored the interpretation of these matters in the context of\n the Positive Operator Valued Measure (POVM) formalism. In our view\n this does not really resolve the problem, since one has to introduce\n new probabilities equivalent to $p_A$ and $p_B$ in an equally ad hoc\n way. We definitely do agree with the connections he draws to the\n standard treatment of identical particles, which we find quite intriguing.}.\n\nWe are only challenging the ad hoc introduction\nof classical probabilities such as $p_A$ and $p_B$. 
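The failure of $\\hat{P}_1$ to be a projection operator is easy to check numerically; the following is a minimal sketch of the toy model of Section \\ref{Page} (the basis ordering and the $50$-$50$ choice of $p_A$, $p_B$ are our illustrative assumptions):

```python
# Minimal numerical sketch of the two-subsystem toy model.
# Basis ordering: |1>^A|1>^B, |1>^A|2>^B, |2>^A|1>^B, |2>^A|2>^B.
import numpy as np

P1A = np.diag([1.0, 1.0, 0.0, 0.0])  # "subsystem A found in state 1"
P1B = np.diag([1.0, 0.0, 1.0, 0.0])  # "subsystem B found in state 1"

# P1A and P1B are genuine projection operators (P^2 = P):
print(np.allclose(P1A @ P1A, P1A), np.allclose(P1B @ P1B, P1B))

# The classically weighted operator p_A P1A + p_B P1B from the text,
# with an ad hoc 50-50 choice of the classical weights:
pA, pB = 0.5, 0.5
P1 = pA * P1A + pB * P1B

# ...but P1 is not a projection operator: P1 @ P1 differs from P1.
print(np.allclose(P1 @ P1, P1))
```

Here $\\hat{P}_1$ has eigenvalues $1$, $1\/2$, $1\/2$, $0$ rather than only $0$ and $1$, which is exactly the statement that $p_1$ is not the expectation value of any projection operator.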
We are not criticizing\nthe use of standard ideas from probability theory to manipulate and\ninterpret probabilities that have a physical origin.\nOf course we never know the wavefunction completely (and thus often\nwrite states as density matrices). Our claim is that probabilities are only\nproven and reliable tools if they have clear values determined from the quantum\nstate, despite our uncertainties about it. \n\n\n\n\\section{Billiards}\n\\label{Billiards}\nWe next use simple calculations to argue that it is realistic\nto expect all probabilities we normally use to have a quantum origin.\nConsider a gas of idealized billiards with radius $r$, mean free path\n$l$,average speed ${\\bar v}$ and mass $m$. If two of these billiards\napproach each other with impact parameter $b$, the uncertainties in the\ntransverse momentum ($\\delta {{p}_{\\bot }}$) and position ($\\delta\n{{x}_{\\bot }}$) contribute to an uncertainty in the impact parameter given by:\n\\begin{equation}\n \\Delta b \n =\\delta {{x}_{\\bot }}+\\frac{\\delta {{p}_{\\bot}}}{m}\\Delta t\n =\\sqrt{2}\\left( a+\\frac{\\hbar }{2a}\\frac{l}{m\\bar{v}} \\right) \n\\label{Eqn:Deltab}\n\\end{equation}\nwhere the second equality is achieved using $\\Delta t = l\/{\\bar\n v}$ and assuming a minimum uncertainty wavepacket of width $a$ in\neach transverse direction. The value of $\\Delta b$ is\nminimized by $a=\\sqrt{\\hbar l \/(2m{\\bar v})} \\equiv \\sqrt{l\n \\lambdabar_{dB}\/2}$. We will show that $\\Delta b$ is\nsignificant even when minimized.\n\nThe local nature of subsequent collisions creates a distribution of entangled\nlocalized states reflecting the range of possible collision points\nimplied by $\\Delta b$. We estimate the width of this distribution as\nit fans out toward the next collision by classically propagating\ncollisions that occur at either side of the range $\\Delta\nb$. 
(Neglecting additional quantum effects increases the\nrobustness of our argument.)\nThe geometry of the collision amplifies uncertainties in a manner\nfamiliar from many chaotic \nprocesses~\\cite{Birk27a,Zurek:1994wd}. The quantity \n$\\Delta b_{n} =\\Delta b( 1+(2l)\/r)^{n}$\ngives the uncertainty in $b$ after $n$ collisions.\n\nSetting $\\Delta b_n=r$ and solving for $n$ determines $n_Q$,\nthe number of collisions after which the quantum spread is so large that\nthere is significant quantum uncertainty as to which billiard takes\npart in the next collision:\n\\begin{equation}\n{{n}_{Q}}=-\\frac{\\log \\left( \\frac{\\Delta b}{r} \\right)}{\\log \\left(\n 1+\\frac{2l}{r} \\right)}.\n\\label{Eqn:nQ}\n\\end{equation}\n For Table\n\\ref{Table} we evaluated Eqn. \\ref{Eqn:nQ} with different input\nparameters chosen to represent various physical\nsituations.\\footnote{Raymond~\\cite{Raymond:1967aa} presents similar\n result, applied only to actual billiards. He also makes some\n general points about the implications of his result that overlap\n with some of the points we are making here.}\n\\begin{table*}[htbp]\n\n \\begin{tabular}{l|r|r|r|r|r|r|r|}\n\n & \\multicolumn{1}{|c|}{$r$ {\\it(m)}} \n & \\multicolumn{1}{|c|}{$l$ {\\it(m)}} \n & \\multicolumn{1}{|c|}{$m$ {\\it (kg)}} \n & \\multicolumn{1}{|c|}{${\\bar v}$ {\\it (m\/s)}} \n & \\multicolumn{1}{|c|}{$\\lambdabar_{dB}$ {\\it (m)}}\n & \\multicolumn{1}{|c|}{$\\Delta b$ {\\it (m)}}\n & \\multicolumn{1}{|c}{$n_Q$} \\\\ \\hline\n\n Nitrogen at STP (Air) & $1.6 \\times 10^{-10}$ & $3.4\\times\n 10^{-07}$ & $4.7\\times 10^{-26}$ & $360$\n &$ 6.2 \\times 10^{-12}$ & $2.9\\times 10^{-9}$ & $-0.3$ \\\\ \\hline\n Water at body temp & $3.0\\times 10^{-10}$ & $5.4 \\times 10^{-10}$\n & $3.0\\times 10^{-26}$ & $460$ &\n $7.6\\times 10^{-12}$ & $1.3 \\times 10^{-10}$ & $0.6$ \\\\ \\hline\n Billiards game& $0.029$ & $1$ & $0.16$ & $1$ & $6.6 \\times\n 10^{-34}$ & $5.1 \\times 10^{-17}$ & $8$\n \\\\ \\hline\n Bumper car ride & 
$1$ & $2$ & $150$ & $0.5$ & $1.4\\times\n 10^{-36}$ & $3.4\\times 10^{-18}$& $25$\n \\\\ \\hline\n\n \\end{tabular}%\n \\caption{The number of collisions ($n_Q$ from Eqn. \\ref{Eqn:nQ})\n before quantum uncertainty dominates, evaluated for physical\n systems modeled as a ``gas'' of billiards with\n different properties. Values $n_Q < 1$\n indicate that quantum fluctuations are so dominant that\n Eqn. \\ref{Eqn:nQ} breaks down. All randomness in\n these quantum dominated systems is fundamentally quantum in nature. \\label{Table}}\n\\end{table*}%\n\nTable \\ref{Table} shows that water and air are so dominated by quantum fluctuations\nthat $n_Q < 1$, indicating the breakdown of Eqn. \\ref{Eqn:nQ}, but\nall the more strongly supporting our view that {\\em all} randomness in\nthese systems is fundamentally quantum. This result strongly indicates\nthat if one were able to fully\nmodel the molecules in these macroscopic systems one would find that the\nintrinsic quantum uncertainties of the molecules, amplified by\nprocesses of the sort we just described, would be fully\nsufficient to account for all the fluctuations.\nOne would not be required to ``quantify our ignorance'' using \nclassical probability arguments to fully understand the system. For\nexample, the Boltzmann distribution for one of these systems in a\nthermal state should really be derivable as a feature dynamically\nachieved by the wavefunction without appeal to formal arguments about\nequipartition etc. \n\nThis argument that the randomness in collections of molecules in the world\naround us has a fully quantum origin lies at the core of our case. We\nexpect that all practical applications of probabilities can be traced\nto this intrinsic randomness in the physical world. As an \nillustration, we next trace the randomness of a coin\nflip to Brownian motion of polypeptides in the human nervous system. 
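The entries of Table \\ref{Table} follow directly from Eqns. \\ref{Eqn:Deltab} and \\ref{Eqn:nQ}. A short script reproduces them (a sketch, not from the paper; it uses the minimized value $\\Delta b = 2\\sqrt{\\hbar l \/ (m \\bar{v})}$ obtained by inserting the optimal wavepacket width $a$ into Eqn. \\ref{Eqn:Deltab}):

```python
# Reproduce the table: lambda_dB = hbar/(m*vbar), the minimized impact-parameter
# uncertainty Delta b = 2*sqrt(hbar*l/(m*vbar)), and n_Q from Eqn. (n_Q).
import math

hbar = 1.055e-34  # J s

def billiards(r, l, m, vbar):
    lambda_dB = hbar / (m * vbar)
    delta_b = 2.0 * math.sqrt(hbar * l / (m * vbar))
    n_Q = -math.log(delta_b / r) / math.log(1.0 + 2.0 * l / r)
    return lambda_dB, delta_b, n_Q

rows = {  # name: (r [m], l [m], m [kg], vbar [m/s]) as in the table
    "Nitrogen at STP (Air)": (1.6e-10, 3.4e-7, 4.7e-26, 360.0),
    "Billiards game":        (0.029,   1.0,    0.16,    1.0),
    "Bumper car ride":       (1.0,     2.0,    150.0,   0.5),
}
for name, params in rows.items():
    lam, db, nq = billiards(*params)
    print(f"{name}: lambda_dB = {lam:.1e} m, Delta b = {db:.1e} m, n_Q = {nq:.1f}")
```

Running this recovers the table values, e.g. $n_Q \\approx -0.3$ for nitrogen, $n_Q \\approx 8$ for a billiards game and $n_Q \\approx 25$ for bumper cars.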
\n\n\\section{Coin Flip}\n\\label{Coin}\n\nRandomness in a coin flip comes from a lack of correlation between the\nstarting and ending coin positions. The\nsignal triggering the flip travels along\nhuman neurons which have an intrinsic\ntemporal uncertainty of $\\delta t_n \\approx 1ms$~\\cite{Faisal2008}. \nIt has been argued that fluctuations in the number of open neuron ion channels can account for the\nobserved values of $\\delta t_n$~\\cite{Faisal2008}. These molecular fluctuations are due to random Brownian motion\nof polypeptides in their surrounding fluid. Based on our assessment that the\nprobabilities for fluctuations in water are\nfundamentally quantum, we argue that the value of $\\delta t_n$\nrealized in a given situation is also fundamentally quantum. Quantum\nfluctuations in the water drive the motion \nof the polypeptides, resulting in different numbers of ion\nchannels being open or closed at a given moment in each instance\nrealized from the many quantum possibilities. \n\nConsider a coin flipped and caught at about the same height,\nby a hand moving at speed $v_h$ in the direction of \nthe toss and with a flip\nimparting an additional speed $v_f$ to the coin. A neurological\nuncertainty in the time of the flip, $\\delta t_n$, results in a\nchange in flight time $\\delta t_f = \\delta t_n \\times v_h\/(v_h+v_f)$. A\nsimilar catch time uncertainty gives a total flight time uncertainty\n$\\delta t_t = \\sqrt{2} \\delta t_f$. A coin flipped upward by an\nimpact at its edge has a rotation frequency \n$f=4v_f\/(\\pi d)$ where $d$ is the coin diameter. \nThe uncertainty in the\nnumber of spins is $\\delta N = f \\delta\nt_t$. Using $v_h=v_f=5m\/s$ and $d=0.01m$ (and $\\delta t_n = 1ms$) gives $\\delta N = 0.5$, \nenough to make the outcome of the coin toss completely dependent on\nthe time uncertainty in the neurological signal which we \nhave argued is fully quantum.\n\nNo doubt we have neglected significant factors in \nmodeling the coin flip. 
The point here
is that even with all our simplifications, we
have a plausibility argument that the outcome of a coin flip is truly
a quantum measurement (really, a Schr\"{o}dinger cat) and that the
$50$--$50$ outcome of a coin toss may in principle be derived from
the quantum physics of a realistic coin toss, with no reference to
classical notions of how we must ``quantify our ignorance''.
Estimates such as this one illustrate how the quantum nature of
fluctuations in the gases and fluids around us
can lead to a fundamental quantum basis for the probabilities we
care about in the macroscopic world.

\section{Digits of $\pi$}
\label{Digits}

The view that all practical applications of probabilities are
based on physical quantum probabilities seems a challenging proposition
to verify. As we have illustrated with the coin flip, the path from
microscopic quantum fluctuations to macroscopic phenomena is
complicated to track. And there are endless cases to check (rolling
dice, choosing a random card, etc.), most also too complicated to work
through conclusively. So arguing our position on a case-by-case basis
is certainly an impractical task.

On the other hand, our ideas are very easy to falsify. All one needs is
one illustration of a case where classical notions of probability are
useful in a physical system that is fully isolated from quantum
fluctuations. Once the practical value of purely classical
probabilities is established, there is no reason it should not be
applicable to other situations. One idea for such a counterexample was
proposed by Carroll.\footnote{S.~Carroll at the {\em PCTS
workshop on inflation} (Jan 2011).} One could place bets on, say,
the value of the millionth digit of $\pi$.
Since the digits of
$\pi$ are believed to be random~\cite{Bailey:1997xx}, one should be
able to use this apparently purely classical notion to win bets.
While on the face of it this appears to be an
ideal counterexample, further scrutiny reveals an essential quantum role.

Let us phrase this problem more systematically: one expects that if you
find someone who thinks the digits of $\pi$ are not randomly
distributed, you can make money betting against them. Equivalently,
the expected payout $P_\pi$ is zero if betting with someone who
{\em does} think the digits are random. A simple formula for such a
payout is given by
\begin{equation}
 P_\pi = \lim_{N_{tot} \to \infty} {1 \over N_{tot}} \sum_{\{i\}} \left( N_{\pi}^i - 4.5
\right) = 0 ,
\label{BetOnPi}
\end{equation}
where $\{i\}$ is the ensemble (of size $N_{tot}$) of the digit
positions chosen and $N_\pi^i$ is the actual
value of the $i$th digit of $\pi$. The result depends entirely on
the choice of ensemble. With enough knowledge of $\pi$ one can
come up with ensembles that give any answer you like (for example,
ensembles that only ever select positions whose digit is ``$1$''),
despite all the randomness
``intrinsic'' to $\pi$ (and in fact {\em because}
the properties of $\pi$ are classical and knowable). Thus
we argue that the outcomes of such bets are all about the ensemble
selected, and the choice of the ensemble is the only source of
randomness in the entire activity.

The reason the initial idea of betting on $\pi$ is so
compelling is that no one ever thinks an ensemble will be chosen with
attention to the actual values of the digits of $\pi$. One can see
how quantum mechanics comes in by scrutinizing the process of coming
up with ensembles.
It could be through the human neurons used in selecting a classical
random number seed\footnote{Similarly, the involvement of neurons etc.\ with the
initial setup prevents the classical computer example in
Sect.
\ref{Introduction} from being a counterexample.}, or through something
more systematic like a roulette wheel. Again this falls in the
category where one counterexample could ruin the argument, but so far
we have not found one. The bet really is
about the lack of correlation between the digit selection and the
digit value, and we argue it is quantum processes such as those
discussed here that are counted on to create the lack of correlation
that is crucial to the fairness of the bet.

Our analysis depends crucially on seemingly
``accidental'' levels of quantum noise in the physical world. Our
point is that, accidental or not, we count on
this quantum noise to produce the uncorrelated
microscopic states that lie at the heart of our understanding of
randomness and probabilities in the world around us. Extending this
understanding to domains where quantum noise cannot play this role is
not at all straightforward. Discussions
of the non-random behaviors of classical random number generators
(such as in~\protect\cite{Press:1992zz}) underscore the difficulty
of even imagining a classical source of randomness with the necessary
lack of correlations.

\section{Toward a solution of cosmic measure problems}
\label{Applications}

So far we have used our ideas about probability to critique
the introduction of purely classical probabilities into cosmological
theories, an approach advocated by
others~\cite{Srednicki:2009vb,*Page:2012gh}. In this section we
use the ideas introduced here to work out our own
approach to probabilities in the multiverse.
We embrace the idea advocated above,
that fundamentally classical probabilities have no place in
cosmological theories, and declare that questions that seem to
require classical probabilities for answers simply are not answered in
that theory.
We are basically advocating a stricter discipline
about which questions are actually addressed by a given
theory.\footnote{Although here we focus on cosmology, it appears that
our approach is relevant to other areas where there is confusion
about how to assign probabilities, such as the ``sleeping beauty
problem''~\cite{Elga01042000}.}
Then one can ask if there are multiverse theories with sufficient
predictive power to remain viable after this discipline is
imposed. Our first assessment of this question suggests that imposing
this discipline may reduce or completely eliminate the notorious
measure problems of eternal inflation and the multiverse.

One challenge one faces when exploring this matter is the fact that most
discussions of eternal inflation and the multiverse are approached in
a semiclassical manner (for example, assuming well-defined
classical spatial slices of infinite extent). A more careful attempt
to identify the full quantum nature of the picture may point to
additional ways proper quantum probabilities are assigned. We will not
try to address that aspect of the question here, and really just take a
first look at the impact of hewing to our proposed probability
discipline.

A general point immediately becomes clear: we are used to
linking counting with probabilities, but such connections are not
always direct or relevant. Counting up the heads and tails in a long
string of coin flips {\em is} connected with proper quantum
probabilities. Starting with our results of Sect.~\ref{Coin}, one can
see that a specific quantum probability is assigned to each different
possible heads/tails count, and thus counting can be tied to
well-defined quantum probabilities for that system.
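This contrast can be made concrete with a toy model of our own (not a claim about the underlying neural dynamics): for independent fair flips, each possible heads count carries a definite probability given by the binomial distribution.

```python
from math import comb

def heads_count_probability(n_flips, n_heads, p=0.5):
    """Probability of exactly n_heads heads in n_flips independent
    flips, each landing heads with probability p (binomial law)."""
    return comb(n_flips, n_heads) * p**n_heads * (1 - p)**(n_flips - n_heads)

# Every possible count gets a well-defined probability, and the
# distribution over all counts sums to one.
dist = [heads_count_probability(10, k) for k in range(11)]
assert abs(sum(dist) - 1.0) < 1e-12
```

Here the quantum branch weight of each flip plays the role of $p$; it is exactly this ingredient that, we argue below, has no counterpart for the question of which among equivalent observers you are.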
However, the fact that one
cosmology may have $3$ pocket universes of type $A$, while another may
have $10^{100}$, does not make a difference because, as we discussed in
Sect.~\ref{Page}, no quantum probabilities can be constructed to
determine which among different (so far equivalent) observers you
might be. While these numbers (by analogy with the flips of multiple
coins) may be linked to global properties of the state, they cannot be
used to determine which among equivalent patches a given observer occupies.

The insight that counting of observers is in itself insufficient to
lead to proper probabilities leads to some interesting conclusions.
One is immediately drawn to the question of ``volume factors'' that
give large volume regions more weight than small ones. To the extent
that volume factors are only a stand-in for counting observers, we
regard such counting as meaningless because it
cannot be related to true quantum probabilities.

This insight also relates to the ``young universe'' or ``end of time''
problem~\cite{Bousso:2010yn,Guth:2011ie}, which can be sketched as
follows: if one regulates the
cosmology with a time cutoff, inflation guarantees that most pocket
universes will be produced close to the cutoff. The time cutoff then
shows up at early times (relative to their time of
production, which is under strong pressure to happen late) for most
pocket universes. This problem persists even as one pushes the time
cutoff out to infinity. But there is no evidence
that this counting has anything to do with probabilities predicted by
the theory that are relevant to an observer. There is
no sign that such theories are able to assign a true quantum
probability to the time when a particular observer's pocket
universe was produced.
One is simply looking at different pocket
universes, and which one we occupy is not determined by the theory.

Our position appears to have significant implications for the Boltzmann
Brain problem~\cite{Albrecht:2014eaa,Albrecht:2004ke,Page:2006ys}. For
our purposes here, this problem is simply the case where pathological
observers, called Boltzmann Brains or BB's, vastly outnumber realistic
ones. (The pathology of the BB's is that they match all the data we
have so far, but in the next moment experience a catastrophic breakdown
of physicality, undergoing a rapid heat death.)
Again, we claim here that counting numbers of BB's vs.\ realistic
observers cannot be related to quantum probabilities predicting which
an observer is more likely to experience.
Thus, as long as there is at least one realistic pocket universe,
there will be no BB problem, no matter how many BB's are produced in
the theory.

Now let us look at this matter from a slightly different point of
view. The real problem arises when one does not know which part of
the Hilbert space one is about to measure. However, if one just takes
one piece of the Hilbert space in an eternally inflating universe,
that patch alone will have probabilities of tunneling into pocket
universe $A$ or $B$, and perhaps many other outcomes as well. If one
simply traces out the rest of the Hilbert space, one will have a
density matrix for what is going on in that patch. With that one {\em
can} take expectation values of operators, without introducing
classical probabilities to determine which pocket you are in. To the
extent that the BB problem can be phrased in this way (in terms of a
quantum branching into BB's vs.\ realistic cosmologies in a given patch),
we expect the BB problem will remain if realistic cosmologies are
sufficiently suppressed\footnote{In \cite{Albrecht:2014eaa} one of us (AA) treats
BB's in the traditional counting language in toy models.
However, we
expect that with a bit more realism the kind of quantum chaos discussed in
this paper would allow those BB discussions to go over nicely into
the (more legitimate) quantum branching form described here, without changing the
conclusions in~\cite{Albrecht:2014eaa}.}. And if
all patches are the same (as may well be the case for highly symmetric
theories such as eternal inflation), then it does not really matter
what patch you are in. The answer will still be the same.

While we
have yet to offer a rigorous demonstration, this set of ideas seems
promising to us as a way out of the measure problems in cosmology. A
more formal way to describe this picture is that if one does consider
a theory with multiple possible locations for the observer, one would
be obliged to give a ``prior'' on which location we occupy. These
priors would look very much the same as the classical probabilities
that show up, for example, in Eqn.~\ref{p1}. However, by viewing these
probabilities as priors,
our agenda would be to reach a point where their values do not matter
to our answers.\footnote{Note that while formally these priors look the
same as the classical probabilities discussed
in~\cite{Srednicki:2009vb,*Page:2012gh}, those authors emphasize
cases where results {\em do} depend in a fundamental way on the values
chosen for the classical probabilities. So they
are not really treating their classical probabilities
as prior probabilities, the values of which should ultimately not be
important.} It would
appear that for sufficiently
symmetric theories, independence from these priors would be easy to
achieve. Also, if certain observables are sufficiently correlated, the
measurement of one (which itself did not have a prediction for the
outcome, due to dependence on priors) could then lead to predictions
for the other observable.
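The single-patch construction described above can be sketched numerically. In the following toy example (our own illustration; the two-outcome patches and the branch amplitudes are invented for the sketch), we trace out the rest of a two-patch Hilbert space and take an expectation value in the resulting density matrix, with no classical probability over which patch we are in:

```python
import numpy as np

# Two "patches", each with two outcomes: tunneling into pocket
# universe A (|0>) or B (|1>). The branch weights 0.8 and 0.2 are
# invented for illustration.
amp_A, amp_B = np.sqrt(0.8), np.sqrt(0.2)

# Joint state on patch1 (x) patch2, correlated across the patches.
psi = amp_A * np.kron([1.0, 0.0], [1.0, 0.0]) \
    + amp_B * np.kron([0.0, 1.0], [0.0, 1.0])

# Reduced density matrix for patch 1: trace out patch 2.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_patch1 = np.trace(rho, axis1=1, axis2=3)

# Expectation value of a patch-1 observable (projector onto outcome A),
# computed entirely within the quantum formalism.
proj_A = np.diag([1.0, 0.0])
p_A = float(np.trace(rho_patch1 @ proj_A).real)  # approximately 0.8
```

Note that `rho_patch1` is obtained without ever assigning a classical probability to "which patch" the observer occupies; only the quantum branch weights enter.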
Both of the pictures outlined here could
lead to a substantial level of predictive power, despite the
restrictions imposed by our probability discipline.

\section{Conclusions}
\label{Conclusions}

In summary, we have argued that all successful applications of
probability to describe nature can be traced to quantum origins.
Because of this, there has not been any systematic validation
of purely classical probabilities, even though we appear to
use them all the time. These matters are of particular importance in
multiverse theories, where truly classical probabilities are used
to address critical questions not addressed by the quantum theory.
Such applications of classical probabilities need to be built
systematically on separate foundations and not be thought of as
extensions of already proven ideas.
We have yet to see purely classical probabilities motivated and
validated in a compelling way, and thus are skeptical of
multiverse theories that depend on classical probabilities for their
predictive power. Fundamentally finite
cosmologies~\cite{Banks:2003pt,*Albrecht:2011yg} that
do not have duplicate observers do not require classical
probabilities, and so seem a more promising path.

We are not the only ones who regard quantum
probabilities as most fundamental
(e.g.~\cite{Deutsch:1999gs,*Wallace:2010aa,*Zurek:2011zz,*Bousso:2011up}), but there
are also opposing views.\footnote{There are also some papers
where the degree of overlap is not so clear. Vilenkin appears to focus on
quantum probabilities in~\cite{Vilenkin:2013loa}, but then also
seems to embrace a fundamentally classical picture similar to that
advocated in~\cite{Aguirre:2010rw}. Some aspects
of~\cite{Nomura:2011rb} also seem to overlap, although other things
(such as the emphasis on holography) seem very different, so it is
hard to tell the overall degree of agreement.}
In addition to the
case already discussed, where classical probabilities are introduced in
multiverse theories to enhance predictive power (such
as in~\cite{Srednicki:2009vb,Page:2012gh}),
some theories insert classical
ideas for other reasons, often in hopes of allaying
interpretational concerns
(e.g.~\cite{'tHooft:2010zz,Weinberg:2011jg,Aguirre:2010rw}).
The arguments presented here make us generally
doubtful of such classical formulations, since our analysis reinforces the
fundamental role of quantum theory in our overall understanding
of probabilities. Perhaps some of these alternate theories integrate the
classical ideas sufficiently tightly with the quantum piece that the
everyday tests we have discussed could just as well be regarded as
tests of the classical ideas in the alternate theory. However, such
logic seems overly complex to us, and we prefer the simpler
interpretation: the strong connection between all our
experiences with probabilities and the quantum world means the quantum
theory really is the defining physical theory of probabilities. We
have offered suggestions that sticking only to quantum probabilities
to make predictions in the multiverse may not be all that debilitating to
the predictive power of multiverse theories, and may actually offer a
solution to the notorious measure problems of eternal inflation.

\acknowledgments
We thank E.~Anderes, A.~Arrasmith, S.~Carroll, J.~Crutchfield, D.~Deutsch,
B.~Freivogel, A.~Guth,
J.~Hartle, T.~Hertog, T.~Kibble, L.~Knox, Z.~Maretic, D.~Martin, J.~Morgan, J.~Preskill,
R.~Singh, A.~Scacco, M.~Srednicki, A.~Vilenkin and W.~Zurek for
helpful conversations. We were supported in part by
DOE Grant DE-FG03-91ER40674.