diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzffbr" "b/data_all_eng_slimpj/shuffled/split2/finalzzffbr" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzffbr" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThe past five decades have witnessed a sizeable amount of statistical research on adaptive randomized designs in the context of clinical trials for treatment comparison.\nThese are sequential procedures where at each step the accrued information is used to make decisions about the way of randomizing the allocation of the next subject.\n\nStarting from the pioneering work of Efron's Biased Coin Design (BCD) \\cite{Efr71}, several authors have suggested adaptive procedures that, by taking into account at each step only previous assignments, are aimed at achieving balance between two available treatments (see e.g. \\cite{Bag04,Smi84a,Smi84b,Soa83,Wei78}). We shall refer to these as Assignment-Adaptive methods. Since clinical trials usually involve additional information on the experimental units, expressed by a set of important covariates\/prognostic factors, Pocock and Simon \\cite{Poc75} and other authors (see for instance \\cite{Atk82,Baz11,Beg80,Tav74}) proposed Covariate-Adaptive designs. These methods modify the allocation probabilities\nat each step according to the assignments and the characteristics of previous statistical units, as well\nas those of the present subject, in order to ensure balance between the treatment groups\namong covariates for reducing possible sources of heterogeneity.\n\nMotivated by ethical demands, another different viewpoint is the Response-Adaptive randomization methods. These are allocation rules introduced with the aim of skewing the assignments towards the treatment that appears to be superior at each step (see e.g. \\cite{Atk05a}) or, more generally, of converging to a desired target allocation of the treatments which combines inferential and ethical concerns \\cite{Bag2010,Tym07}. The above mentioned framework has been recently extended in order to incorporate covariates, which has led to the introduction of the so-called Covariate-Adjusted Response-Adaptive (CARA) procedures, i.e. allocation methods that sequentially modify the treatment assignments\non the basis of earlier responses and allocations, past covariate profiles and the characteristics\nof the subject under consideration. See \\cite{RosCARA01,Zha07} and the cornerstone book by Hu and Rosenberger \\cite{Hu07}.\n\nIn general, given a desired target it is possible to adopt different procedures converging to it, such as the Sequential Maximum Likelihood design \\cite{Mel01}, the Doubly-adaptive BCD \\cite{Eis94,Hu04} and their extensions with covariates given by Zhang et al.'s CARA design \\cite{Zha07} and the Covariate-adjusted Doubly-adaptive BCD \\cite{Zha09}, having well established asymptotic properties. However, in the absence of a given target one of the main problems lies in providing the asymptotic behaviour of the suggested procedure. This is especially true in the presence of covariates, where theoretical results seem to be few and the properties of the suggested procedures have been explored extensively through simulations; indeed, as stated by Rosenberger and Sverdlov \\cite{Ros08} ``very little theoretical work has been done in this area, despite the proliferation of papers''. 
For instance, even though Pocock and Simon's minimization method is widely used in clinical practice, its theoretical properties are still largely unknown (indeed, Hu and Hu's results \cite{Hu12} do not apply to this procedure), and the same is true of several extensions of the minimization method and of Atkinson's Biased Coin Design \cite{Atk82}.

Moreover, although the large majority of the proposals are based on continuous and prefixed allocation rules, updated step by step on the basis of the current allocation proportion and some estimates of the unknown parameters (usually based on the sufficient statistics of the model), the recent literature tends to concentrate on discontinuous randomization functions, such as the Efficient Randomized-Adaptive Design (ERADE) \cite{Hu09}, because of their low variability.

In this paper we provide some general convergence results for adaptive allocation procedures, both in the absence and in the presence of covariates, continuous or categorical. By combining the concept of downcrossing (originally introduced in \cite{Hill80}) with stopping times of stochastic processes, we demonstrate the almost sure convergence of the treatment allocation proportion for a large class of adaptive procedures, even in the absence of a given target; our approach thus provides substantial insight for future suggestions as well as for several existing procedures that have not been explored theoretically \cite{Her05,Signor93}. In particular, we prove that Pocock and Simon's minimization method \cite{Poc75} is asymptotically balanced, both marginally and jointly, and we also show the convergence to balance of Atkinson's BCD \cite{Atk82}. The suggested approach allows us to prove, within a single mathematical framework, the convergence of continuous and discontinuous randomization functions (like e.g. the Doubly-Adaptive Weighted Differences design \cite{Ger06}, the Reinforced Doubly-adaptive BCD \cite{Baz12}, the ERADE \cite{Hu09} and Hu and Hu's procedure \cite{Hu12}), also taking into account designs based on Markov chain structures, such as the Adjustable BCD \cite{Bag04} and the Covariate-adaptive BCD \cite{Baz11}, which can be characterized by sequences of allocation rules. Moreover, by removing some unessential conditions usually assumed in the literature, our results allow us to provide suitable extensions of several existing procedures.

The paper is structured as follows. Although Assignment-Adaptive and Response-Adaptive procedures can be regarded as special cases of CARA designs, we treat them separately for the sake of clarity, in order to describe the general proof scheme in a simple setting, whereas Covariate-Adaptive methods are discussed as a particular case of CARA rules.
To treat CARA rules in the presence of solely categorical covariates we need to extend the concept of downcrossing to a vectorial framework; this generalization is not needed for CARA procedures with continuous prognostic factors and therefore these two cases are analyzed separately.
Starting from the notation in Section 2, Section 3 deals with Assignment-Adaptive designs, while Section 4 discusses Response-Adaptive procedures.
Sections 5 and 6 illustrate the asymptotic behavior of CARA methods in the case of continuous and categorical covariates, respectively.
To avoid cumbersome notation, the paper mainly deals with the case of just two treatments, but the suggested methodology is shown to extend to more than two (see Section \ref{several_treatments}).

\section{Notation}
Suppose that patients come to the trial sequentially and are assigned to one of two treatments, $A$ and $B$, that we want to compare.
At each step $i\geq1$, a subject will be assigned to one of the treatments and a response $Y_i$ will be observed. Typically, the outcome $Y_i$ will depend on the treatment, but it may also depend on some characteristics of the subject expressed by a vector $\boldsymbol{Z}_i$ of covariates/concomitant variables. We assume that $\{\boldsymbol{Z}_i\}_{i\geq1}$ are i.i.d. covariates that are not under the experimenters' control, but can be measured before assigning a treatment, and, conditionally on the treatments and the covariates (if present), patients' responses are assumed to be independent. Let $\delta_{i}$ denote the $i$th allocation, with $\delta_{i}=1$ if the $i$th subject is assigned to $A$ and $0$ otherwise; also, $\widetilde{N}_n=\sum_{i=1}^{n} \delta_{i}$ is the number of allocations to $A$ after $n$ assignments and $\pi_n$ the corresponding proportion, i.e. $\pi_n=n^{-1}\widetilde{N}_n$.

In general, adaptive allocation procedures can be divided into four different categories according to the experimental information used for allocating the patients to the treatments. Suppose that the $(n+1)$st subject is ready to be randomized; if the probability of assigning treatment $A$ depends on:
\begin{itemize}
\item[i)] the past allocations, i.e. $\Pr(\delta_{n+1}=1\mid \delta_1,\ldots,\delta_n)$, we call such a procedure Assignment-Adaptive (AA);
\item[ii)] earlier allocations and responses, i.e. $\Pr(\delta_{n+1}=1\mid \delta_1,\ldots,\delta_n; Y_1,\ldots,Y_n)$, then the design is Response-Adaptive (RA);
\item[iii)] the previous allocations and covariates, as well as the covariate of the present subject, i.e. $\Pr(\delta_{n+1}=1\mid \delta_1,\ldots,\delta_n; \boldsymbol{Z}_1,\ldots,\boldsymbol{Z}_{n},\boldsymbol{Z}_{n+1})$, the procedure is Covariate-Adaptive (CA);
\item[iv)] the assignments, the outcomes and the covariates of the previous statistical units, as well as the characteristics of the current subject that will be randomized, i.e. $\Pr(\delta_{n+1}=1\mid \delta_1,\ldots,\delta_n; Y_1,\ldots,Y_n; \boldsymbol{Z}_1,\ldots,\boldsymbol{Z}_{n+1})$, then the rule is called Covariate-Adjusted Response-Adaptive (CARA).
\end{itemize}
From now on we will denote by $\Im_n$ the $\sigma$-algebra representing the natural history of the experiment up to step $n$ associated with a given procedure belonging to each category (with $\Im_0$ the trivial $\sigma$-field). For instance, in the case of AA rules, $\Im_n=\sigma\left\{\delta_1,\ldots,\delta_n\right\}$, whereas for RA designs $\Im_n=\sigma\left\{\delta_1,\ldots,\delta_n; Y_1,\ldots,Y_n\right\}$.

The sequence of allocations is a stochastic process and a general way of representing it is by the sequence of the conditional probabilities of assigning treatment $A$ given the past information at every stage, i.e. $\Pr(\delta_{n+1}=1\mid \Im_n)$ for $n\in \mathds{N}$, which is called the allocation function.
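This sequential mechanism is easy to make concrete in code. The following minimal Python sketch (the function name \texttt{simulate\_design} and the constant rule in the usage line are merely illustrative, not part of any specific procedure discussed below) draws each assignment as a Bernoulli variable whose success probability is the allocation function evaluated at the current allocation proportion:
\begin{verbatim}
import random

def simulate_design(phi, n):
    """Simulate n assignments: delta_{i+1} ~ Bernoulli(phi(pi_i)),
    where pi_i is the current proportion of allocations to A."""
    n_a = 0  # number of allocations to treatment A so far
    for i in range(n):
        pi = n_a / i if i > 0 else 0.0  # convention pi_0 = 0
        n_a += random.random() < phi(pi)
    return n_a / n  # final allocation proportion pi_n

# e.g. a constant allocation function (every rule below fits this scheme)
print(simulate_design(lambda x: 0.5, 10_000))  # close to 1/2
\end{verbatim}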
Even if the large majority of suggested procedures assume continuous allocation rules, in this paper we also take into account designs with discontinuous randomization functions, provided that their set of discontinuities is nowhere dense.

\section{Assignment-Adaptive Designs}
Assignment-Adaptive rules, which depend on the past history of the experiment only through the sequence of previous allocations, were proposed as a suitable trade-off between balance (i.e. inferential optimality) and unpredictability in the context of randomized clinical trials. Indeed, if the main concern is maximum precision of the results (without ethical demands), as is well-known under the classical linear model assumptions, the balanced design is \emph{universally optimal} \cite{Sil80}, since it minimizes the most common inferential criteria for estimation and maximizes the power of the classical statistical tests.
The requirement of balance is considered particularly cogent for phase III trials, where patients are sequentially enrolled and the total sample size is often a-priori unknown, so that keeping a reasonable degree of balance at each step, even for small/moderate samples, is crucial for stopping the experiment at any time under an excellent inferential setting.

The simplest sequential randomized procedure that approaches balance is the completely randomized (CR) design, where every allocation is to either treatment with probability $1/2$ independently of the previous steps; thus, $\delta_{1},\delta_{2},\ldots$ are i.i.d. $Be(1/2)$ so that, as $n$ tends to infinity, $\pi_n \rightarrow 1/2$ almost surely by the SLLN for independent r.v.'s. Although CR could represent an ideal trade-off between balance and unpredictability, this holds only asymptotically. In fact, CR may generate large imbalances in small samples, since $\sqrt{n}(\pi_n-1/2)$ is asymptotically normal, and this may induce a considerable loss of precision. For this reason, starting from the pioneering work of Efron \cite{Efr71}, AA rules were introduced in the literature in order to force the allocations at each step towards balance while maintaining, at the same time, a suitable degree of randomness.

In this section we shall deal with AA procedures such that
\begin{equation}\label{AAdesigns}
\Pr(\delta_{n+1}=1\mid \Im_n)=\varphi^{AA}(\pi_n),\quad \text{for } n\geq1,
\end{equation}
where $\varphi^{AA}: [0;1] \rightarrow [0;1]$.
\begin{definition}\label{DC1}
For any function $\psi: [0;1] \rightarrow [0;1]$, a point $t \in [0;1]$ is called a \emph{downcrossing} of $\psi(\cdot)$ if
\begin{equation*}
\forall x<t, \quad \psi(x)\geq t \qquad \text{and} \qquad \forall x>t, \quad \psi(x)\leq t.
\end{equation*}
\end{definition}
\noindent Note that if the function $\psi(x)$ is decreasing, then there exists a single downcrossing $t\in (0;1)$, and if the equation $\psi(x)=x$ admits a solution then the downcrossing coincides with it. Clearly, if $\psi(\cdot)$ is a continuous and decreasing function, then $t$ can be found directly by solving the equation $\psi(x)=x$.
\begin{thm}\label{prop1}
If the allocation function $\varphi^{AA}(\cdot)$ in (\ref{AAdesigns}) has a unique downcrossing $t \in (0;1)$, then
$\lim_{n\rightarrow \infty} \pi_n=t$ a.s.
\end{thm}
\begin{proof}
By using a martingale decomposition of the number of assignments to treatment $A$, we will show that the asymptotic behavior of the allocation proportion $\pi_n$ coincides with that of the sequence of downcrossing points of the corresponding allocation function (i.e.
a constant sequence in the case of AA procedures). The same arguments will be generalized in the Appendix to the case of RA and CARA rules for random sequences of downcrossings.

At each step $n\geq1$,
\begin{equation}\label{SA}
\widetilde{N}_n = \sum_{i=1}^{n}\delta_i=\sum_{i=1}^{n}\{\delta_i-E(\delta_{i}| \Im_{i-1})\}+\sum_{i=1}^{n}E(\delta_{i}| \Im_{i-1})=\sum_{i=1}^{n}\Delta M_i + \sum_{i=1}^{n}\varphi^{AA} \left(\pi_{i-1} \right),
\end{equation}
where $\Delta M_i=\delta_i-E(\delta_i|\Im_{i-1})$, $\Im_n=\sigma\{\delta_1,\ldots,\delta_n\}$ and $\pi_0=0$. Then $\{\Delta M_i; i\geq 1\} $ is a sequence of bounded martingale differences with $|\Delta M_i |\leq 1$ for any $i\geq1$; thus the sequence $\{M_{n}=\sum_{i=1}^{n}\Delta M_i ; \Im_n \}$ is a martingale with $\sum_{k=1}^{n} E[(\Delta M_k)^2 |\Im_{k-1}] \leq n$, so that as $n$ tends to infinity $n^{-1}M_n\rightarrow 0 $ $a.s.$ Let $l_n=\max \left\{s: 1 \leq s \leq n, \pi_s \leq t \right\}$, with $\max \emptyset =0$; then at each step $i>l_n$ we have $\varphi^{AA} \left(\pi_{i} \right) \leq t$. Note that
\begin{displaymath}
\begin{split}
\widetilde{N}_n = &\widetilde{N}_{l_n+1} + \sum_{k=l_n+2}^{n} \Delta M_k+ \sum_{k=l_n+2}^{n} E(\delta_{k}| \Im_{k-1}) \\
\leq & \widetilde{N}_{l_n} +1 +M_n -M_{l_n+1} + \sum_{k=l_n+2}^{n} \varphi^{AA} \left(\pi_{k-1} \right)\\
 \leq & \widetilde{N}_{l_n} +1+M_n -M_{l_n+1} + \sum_{k=l_n+2}^{n} t
\end{split}
\end{displaymath}
and, since $\widetilde{N}_{l_n} \leq l_n t$, then
\begin{equation*}
 \widetilde{N}_n - n t\leq M_n -M_{l_n+1} +1-t\,,
\end{equation*}
namely
\begin{equation}\label{eq1}
 \pi_n - t\leq \frac {M_n -M_{l_n+1} +1-t}{n }\,.
\end{equation}
As $n\rightarrow \infty$, either $l_n\rightarrow \infty$ or $\sup_n l_n< \infty$, and in both cases the r.h.s. of (\ref{eq1}) goes to 0 $a.s.$
Thus $[\pi_n - t] ^+ \rightarrow 0$ a.s. and, analogously, $\left[(1-\pi_n) - \left(1- t\right)\right]^+ \rightarrow 0$ a.s. Therefore $\lim_{n\rightarrow \infty} \pi_n=t$ a.s.
\end{proof}

\begin{ex}
The completely randomized design is defined by letting $\Pr(\delta_{n+1}=1\mid \Im_n)=1/2$ for every $n$. This corresponds to assuming $\varphi^{CR}(x)=1/2\;$ for all $x\in[0;1]$, which is continuous and does not depend on $x$; therefore $\varphi^{CR}(\cdot)$ has a single downcrossing $t=1/2$ and thus $\pi_n \rightarrow 1/2$ $a.s.$ as $n\rightarrow \infty$. Clearly, this procedure can be naturally extended to any given desirable target allocation that is a-priori known.
\end{ex}

\begin{ex}
Efron's BCD \cite{Efr71} is defined by
\begin{equation*}
\Pr(\delta_{n+1}=1\mid \Im_n)=
 \begin{cases}
 p , & \text{ if} \quad D_n<0, \\
 1/2 , & \text{ if} \quad D_n=0, \quad \text{for } n\geq1,\\
 1-p, & \text{ if} \quad D_n>0,
 \end{cases}\,
\end{equation*}
where $D_n=2 \widetilde{N}_n-n$ is the difference between the allocations to $A$ and $B$ after $n$ steps and $p\in [1/2;1]$ is the bias parameter.
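Before recasting the rule in terms of $\pi_n$, a minimal simulation sketch may be useful (the helper \texttt{efron\_bcd} and the choice $p=2/3$ are merely illustrative):
\begin{verbatim}
import random

def efron_bcd(n, p=2/3):
    """Simulate Efron's BCD: assign A with probability p when A is
    under-represented (D < 0), 1-p when over-represented, 1/2 at ties."""
    n_a = 0
    for i in range(n):
        d = 2 * n_a - i  # imbalance D_i after i assignments
        prob_a = p if d < 0 else (0.5 if d == 0 else 1 - p)
        n_a += random.random() < prob_a
    return n_a / n

print(efron_bcd(10_000))  # close to the downcrossing t = 1/2
\end{verbatim}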
Since $\sgn D_n =\sgn (\pi_n-1/2)$, Efron's rule corresponds to
\begin{equation}\label{BCD}
\varphi^E(x)=\begin{cases}
 p , & \text{ if} \quad x<1/2, \\
 1/2 , & \text{ if} \quad x=1/2,\\
 1-p, & \text{ if} \quad x>1/2,
 \end{cases}
\end{equation}
which has a single downcrossing $t=1/2$ and therefore
$\lim_{n\rightarrow \infty} \pi_n=1/2$ $a.s.$
Clearly, Theorem \ref{prop1} allows us to provide suitable extensions of Efron's coin converging to any given desired target $t^*\in (0;1)$, namely
\begin{equation}\label{Efornext}
\varphi^{\tilde{E}}(x)=
 \begin{cases}
 p_2, & \text{ if} \quad x<t^*, \\
 t^*, & \text{ if} \quad x=t^*,\\
 p_1, & \text{ if} \quad x>t^*,
 \end{cases}\,
\end{equation}
where $0\leq p_1 \leq t^* \leq p_2 \leq 1$ and at least one of these inequalities must hold strictly.
\end{ex}
\begin{rem}
Note that, from Theorem \ref{prop1}, for the convergence to a given desired target $t^*$:
\begin{itemize}
\item[i)] the allocation function should be decreasing; this condition is quite intuitive, since it corresponds to assuming that, at each step, if the current allocation proportion $\pi_n$ is greater than $t^*$, then the next allocation is forced to treatment $B$ with probability greater than $1-t^*$, and this probability increases as the difference $\pi_n-t^*$ grows;
\item[ii)] the continuity of the allocation rule is not required and therefore it is possible to consider discontinuous randomization functions like, e.g., (\ref{BCD}) and (\ref{Efornext});
\item[iii)] the condition $\varphi^{AA}(t^*)=t^*$ is not requested; moreover, structures of symmetry of the allocation function are not needed (e.g., in (\ref{Efornext}) the condition $p_2=1-p_1$ is not required), even if they are typically assumed in order to treat $A$ and $B$ in the same way. For instance, the following AA procedure
\begin{equation*}
\varphi^{AA^*}(x)=
 \begin{cases}
 1, & \text{ if} \quad x\leq 1/2, \\
 1/2 , & \text{ if} \quad x> 1/2,
 \end{cases}\,
\end{equation*}
is asymptotically balanced, i.e. $\pi_n\rightarrow 1/2$ $a.s.$ as $n$ tends to infinity.
\end{itemize}

\end{rem}
\begin{corollary}\label{cor1}
Suppose that $\varphi^{AA}$ is a composite function such that $\varphi^{AA}(x)=h_1 \left[h_2 \left(x\right)\right]$, where $h_1: D\subseteq \mathds{R} \rightarrow [0;1]$ is decreasing and $h_2:[0;1] \rightarrow D$ is continuous and increasing. If $d \in D$ is such that $h_1(d)=h_2^{-1}(d)$, then $\lim_{n\rightarrow \infty} \pi_n=h_2^{-1}(d)$ a.s.
\end{corollary}
\begin{proof}
The proof follows easily from Theorem \ref{prop1}. Indeed, $\varphi^{AA}(\cdot)$ is a decreasing function with $\varphi^{AA}\left[h_2^{-1}(d)\right]=h_1(d)=h_2^{-1}(d)$ and therefore $\varphi^{AA}(\cdot)$ has a single downcrossing in $h_2^{-1}(d)$.
\end{proof}
\begin{ex}\label{weiabcd}
\citet{Wei78} defined his Adaptive BCD by letting
\begin{equation}\label{weiallfunc}
\Pr \left( \delta _{n+1}=1\mid \Im_n\right) =\mathfrak{f}\left(2\pi_n-1\right),\quad \text{for } n\geq1,
\end{equation}
where $\mathfrak{f}:[-1;1] \rightarrow [0;1]$ is a continuous and decreasing function s.t. $\mathfrak{f}(-x)=1-\mathfrak{f}(x)$.
Setting $g(w)=2w-1:[0;1] \rightarrow [-1;1]$, Wei's allocation function is $\varphi^W(x)=\mathfrak{f}\left[g(x)\right]$. Since $g^{-1}(w)=(w+1)/2\;$ for all $w\in [-1;1]$, then $g^{-1}(0)=1/2=\mathfrak{f}(0)$, i.e. $1/2$ is the only downcrossing of $\varphi^W(\cdot)$.
Therefore, from Corollary \ref{cor1} it follows that $ \pi_n\rightarrow1/2$ $a.s.$ as $n \rightarrow \infty$.
\end{ex}
\begin{rem}\label{remABCD}
Note that Theorem \ref{prop1} still holds even if we assume different randomization functions at each step by letting $\Pr(\delta_{n+1}=1\mid \Im_n)=\varphi_n^{AA}(\pi_n)$, provided that $t\in (0;1)$ is the unique downcrossing of $\varphi_n^{AA}(\cdot)$ for every $n\geq1$.
\end{rem}
\begin{ex}\label{exABCD}
The Adjustable Biased Coin Design (ABCD) proposed by \citet{Bag04} is defined as follows. Let $F(\cdot):\mathds{R}\rightarrow [0;1]$ be a decreasing function such that $F(-x)=1-F(x)$; the ABCD assigns the $(n+1)$st subject to treatment $A$ with probability
$\Pr(\delta_{n+1}=1\mid \Im_n) =F(D_n)$, for $n\geq1$.
This corresponds to letting $$\varphi_n^{ABCD}(x) =F[n(2x-1)], \qquad n\geq1,$$ and, from the properties of $F(\cdot)$, at each step $n$ the function $\varphi_n^{ABCD}(\cdot)$ is decreasing with $\varphi_n^{ABCD}\left(1/2\right)=1/2$. Thus $t=1/2$ is the only downcrossing of $\varphi_n^{ABCD}(\cdot)$ for every $n$, so that $\lim_{n \rightarrow \infty}\pi_n= 1/2$ a.s.
\end{ex}

\subsection{The case of several treatments}\label{several_treatments}
We now briefly discuss AA procedures in the case of several treatments, in order to show how the proposed downcrossing methodology can be extended to $K> 2$ treatments. Even if the same mathematical structure could also be applied to the other types of adaptive rules that will be presented in Sections 4-6, we restrict the presentation of multi-treatment adaptive procedures to AA designs for the sake of notational simplicity.

At each step $i\geq1$, let $\delta_{i\jmath}=1$ if the $i$th patient is assigned to treatment $\jmath$ (with $\jmath=1,\ldots,K$) and 0 otherwise, and set $\boldsymbol{\delta }_i^t=(\delta_{i1},\ldots,\delta_{iK})$ with $\boldsymbol{\delta }_i^t\boldsymbol{1}_K=1$ (where $\boldsymbol{1}_K$ is the $K$-dim vector of ones). After $n$ steps, let $\widetilde{N}_{n\jmath}=\sum_{i=1}^{n}\delta_{i\jmath}$ be the number of allocations to treatment $\jmath$ and $\pi_{n\jmath}$ the corresponding proportion, i.e.
$\pi_{n\jmath}=n^{-1}\widetilde{N}_{n\jmath}$; also, set $\boldsymbol{\widetilde{N}}_n^t=(\widetilde{N}_{n1},\ldots,\widetilde{N}_{nK})$ and $\boldsymbol{\pi}_n^t=(\pi_{n1},\ldots,\pi_{nK})$, where $\boldsymbol{\widetilde{N}}_n^t\boldsymbol{1}_K=n$ and $\boldsymbol{\pi}_n^t\boldsymbol{1}_K=1$.

In this setting we consider a class of AA designs that assigns the $(n+1)$st patient to treatment $\jmath$ with probability
\begin{equation}\label{randAAmulti}
\Pr(\delta_{n+1,\jmath}=1\mid \Im_n)=\varphi^{AA}_{\jmath}\left(\boldsymbol{\pi}_n \right), \quad \text{for } n\geq 1,
\end{equation}
where $\Im_n=\sigma(\boldsymbol{\delta }_1,\ldots,\boldsymbol{\delta }_n)$, $\varphi^{AA}_{\jmath}$ is the allocation function of the $\jmath$th treatment and from now on we set $\boldsymbol{\varphi}^{AA}(\boldsymbol{\pi}_n)= (\varphi^{AA}_{1}\left(\boldsymbol{\pi}_n \right),\ldots,\varphi^{AA}_{K}\left(\boldsymbol{\pi}_n \right) )$.
\begin{definition}\label{DC1bis}
Let $\mathbf{x}=\left(x_{1},\ldots,x_{K}\right)$, where $x_{\jmath}\in [0;1]$ for any $\jmath=1,\ldots,K$, $\psi_{\jmath}(\mathbf{x}): [0;1]^{K}\rightarrow [0;1]$ and set $\boldsymbol{\psi}(\mathbf{x})=\left(\psi_{1}(\mathbf{x}) ,\ldots, \psi_{K}(\mathbf{x})\right)$.
Then $\boldsymbol{t}= \left(t_{1},\ldots,t_{K}\right)\in [0;1]^K$ is called a \emph{vectorial} \emph{downcrossing} of $\boldsymbol{\psi}$ if for any $\jmath=1,\ldots,K$
\begin{displaymath}
\text{for all } x_{\jmath}<t_{\jmath},\; \; \psi_{\jmath}(\mathbf{x}) \geq t_{\jmath} \qquad \text{and} \qquad \text{for all } x_{\jmath}>t_{\jmath},\; \; \psi_{\jmath}(\mathbf{x}) \leq t_{\jmath}.
\end{displaymath}
\end{definition}
Clearly, if $\psi_{\jmath}(\mathbf{x})$ is decreasing in $\mathbf{x}$ (i.e. componentwise) for any $\jmath$, then the vectorial downcrossing $\boldsymbol{t}$ is unique, with $\boldsymbol{t}\in(0;1)^K$; furthermore $\boldsymbol{\psi}(\boldsymbol{t})=\boldsymbol{t}$, provided that the solution exists.
\begin{thm}\label{thm1bis}
If at each step $n$ the function $\varphi^{AA}_{\jmath}\left(\boldsymbol{\pi}_n \right)$ is decreasing in $\boldsymbol{\pi}_n$ (componentwise) for any $\jmath=1,\ldots,K$, then $\lim_{n\rightarrow \infty} \boldsymbol{\pi}_n=\boldsymbol{t}$ a.s.
\end{thm}
\begin{proof}
The proof follows easily from the one in Appendix \ref{A3}, where $K$ treatments should be considered instead of the strata induced by the categorical covariates.
\end{proof}
\begin{ex}\label{multitr}
In order to achieve balance, i.e. $\pi^{\ast}_{\jmath}=K^{-1}$ for any $\jmath=1,\ldots,K$, Wei et al.
\cite{Wei86} considered the following allocation rules:
\begin{equation}\label{weirule1}
\Pr(\delta_{n+1,\jmath}=1\mid \Im_n)= \frac{\pi_{n\jmath}^{-1} -1}{\sum_{k=1}^K(\pi_{nk}^{-1} -1) },
\end{equation}
and
\begin{equation}\label{weirule2}
\Pr(\delta_{n+1,\jmath}=1\mid \Im_n)= \frac{1-\pi_{n\jmath}}{K-1}.
\end{equation}
Both rules are decreasing in $\pi_{n\jmath}$ ($\jmath=1,\ldots,K$) and it is straightforward to see that $\boldsymbol{t}=K^{-1}\boldsymbol{1}_K$ is the only vectorial downcrossing of the functions $\boldsymbol{\psi}^{W_1}$ and $\boldsymbol{\psi}^{W_2}$ given by
\begin{equation*}
\psi^{W_1}_{\jmath}\left(\boldsymbol{x} \right)=\frac{x_\jmath^{-1} -1}{\sum_{k=1}^K(x_k^{-1} -1) } \qquad \text{ and } \qquad \psi^{W_2}_{\jmath}\left(\boldsymbol{x} \right)=\frac{1-x_\jmath}{K-1},
\end{equation*}
and therefore, by Theorem \ref{thm1bis}, $\lim_{n\rightarrow \infty} \pi_{n\jmath} =K^{-1}$ $a.s.$ for any $\jmath=1,\ldots,K$.

Note that, under rule $(\ref{weirule2})$, $\psi^{W_2}_{\jmath}\left(\boldsymbol{x} \right)=\psi^{W_2}_{\jmath}\left(x_\jmath \right)$ (i.e. at each step the allocation probability of each treatment depends only on the current allocation proportion of that treatment); in such a case it is sufficient to solve the system of equations $\psi^{W_2}_{\jmath}\left(x_\jmath \right)=x_\jmath$ ($\jmath=1,\ldots,K$).
\end{ex}


\section{Response-Adaptive designs}
RA rules, which change at each step the allocation probabilities on the basis of the previous assignments and responses, were originally introduced as a possible solution to local optimality problems in a parametric setup, where there exists a desired target allocation depending on the unknown model parameters \cite{Robb67}. Recently, they have also been suggested in the context of sequential clinical trials where ethical purposes are of primary importance, with the aim of maximizing the power of the test and, simultaneously, skewing the allocations towards the treatment that appears to be superior (e.g. minimizing exposure to the inferior treatment) \cite{Eis94,Ger06,Ros02}.

Suppose that the probability law of the responses under treatments $A$ and $B$ depends on a vector of unknown parameters ${\boldsymbol{\gamma}_A }$ and ${\boldsymbol{\gamma}_B}$, respectively, with $\boldsymbol{\gamma}^t=({\boldsymbol{\gamma}_A}^t,{\boldsymbol{\gamma}_B}^t)\in \Omega$, where $\Omega $ is an open convex subset of $\mathbb{R} ^{k}$.
Starting with $m$ observations on each treatment, usually assigned by using restricted randomization, an initial non-trivial parameter estimate $\widehat{\boldsymbol{\gamma}}_{2m}$ is derived. Then, at each step $n\geq 2m$ let $\widehat{\boldsymbol{\gamma}}_{n}$ be the estimator of the parameter $\boldsymbol{\gamma}$ based on the first $n$ observations, which is assumed to be consistent in the i.i.d. case (i.e. $\lim_{n\rightarrow\infty}\widehat{\boldsymbol{\gamma}}_{n}= \boldsymbol{\gamma}\; a.s.$).
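This two-stage scheme is easy to sketch in code; the following minimal example (binary responses, hypothetical success probabilities, and an illustrative SMLE-type rule targeting $p_A/(p_A+p_B)$, none of which is prescribed by the general framework) makes the role of the sequential estimates explicit:
\begin{verbatim}
import random

def simulate_ra(p_a, p_b, n, m=10):
    """Two-stage RA sketch: m subjects per arm by forced assignment,
    then each subject joins arm A with probability given by an
    illustrative rule evaluated at the current estimates."""
    trials, succ = [0, 0], [0, 0]  # per-arm sample sizes and successes
    def respond(arm):
        trials[arm] += 1
        succ[arm] += random.random() < (p_a if arm == 0 else p_b)
    for arm in (0, 1):             # initial stage: m observations each
        for _ in range(m):
            respond(arm)
    for _ in range(n - 2 * m):     # adaptive stage
        # shrunk estimates keep the rule well-defined in (0,1)
        est = [(succ[k] + 0.5) / (trials[k] + 1) for k in (0, 1)]
        prob_a = est[0] / (est[0] + est[1])
        respond(0 if random.random() < prob_a else 1)
    return trials[0] / n

print(simulate_ra(0.7, 0.4, 20_000))  # approaches 0.7/(0.7+0.4)
\end{verbatim}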
Obviously, the speed of convergence of the allocation proportion is strictly related to the convergence rate of the chosen estimators; however, their consistency is sufficient in order to establish the almost sure convergence of $\pi_n$.

In this section we shall deal with RA procedures such that
\begin{equation}\label{RAdesigns}
\Pr(\delta_{n+1}=1\mid \Im_n)=\varphi^{RA}\left(\pi_{n}\,;\widehat{\boldsymbol{\gamma}}_{n} \right), \quad \text{for } n\geq 2m.
\end{equation}
The following definition will help illustrate the asymptotic behaviour of RA rules, as well as of the CARA designs with continuous covariates treated in Section 5.
\begin{definition}\label{DC2}
Let $\dot{\psi}(x;\mathbf{y}): [0;1]\times \mathds{R}^d \rightarrow [0;1]$. The function $t(\mathbf{y}): \mathds{R}^d\rightarrow [0;1] $ is called a \emph{generalized downcrossing} of $\dot{\psi}$ if for any given $\mathbf{y}\in \mathds{R}^d$ we have
\begin{equation*}
\forall x<t(\mathbf{y}), \quad \dot{\psi}(x;\mathbf{y})\geq t(\mathbf{y}) \qquad \text{and} \qquad \forall x>t(\mathbf{y}), \quad \dot{\psi}(x;\mathbf{y})\leq t(\mathbf{y}).
\end{equation*}
\end{definition}
\noindent If the function $\dot{\psi}(x;\mathbf{y})$ is decreasing in $x$, then the generalized downcrossing $t(\mathbf{y})$ is unique and $t(\mathbf{y})\notin\{0;1\}$ for any $\mathbf{y}\in \mathds{R}^d$. Moreover, if there exists a solution of the equation $\dot{\psi}(x;\mathbf{y})=x$, then $t(\mathbf{y})$ coincides with this solution.
\begin{thm}\label{thm2}
Suppose that at each step $n$ the allocation rule $\varphi^{RA}\left(\pi_{n}\,;\widehat{\boldsymbol{\gamma}}_{n} \right)$ is decreasing in $\pi_{n}$. If the only generalized downcrossing $t(\widehat{\boldsymbol{\gamma}}_{n})$ is a continuous function, then
$\lim_{n\rightarrow \infty} \pi_n=t({\boldsymbol{\gamma}})$ a.s.
\end{thm}
\begin{proof}
See Appendix \ref{A1}.
\end{proof}

\begin{ex}
\citet{Ger06} introduced the Doubly Adaptive Weighted Differences Design (DAWD) for binary response trials. Let $\boldsymbol{\gamma}=(p_A,p_B)^t$ be the vector of the probabilities of success of $A$ and $B$ and $\widehat{\boldsymbol{\gamma}}_{n}=(\widehat{p}_{An},\widehat{p}_{Bn})^t$ the corresponding estimate after $n$ steps. When the $(n+1)$st patient is ready to be randomized, the DAWD allocates him/her to treatment $A$ with probability
\begin{equation} \label{dawd}
\Pr(\delta_{n+1}=1\mid \Im_n)=\rho g_1(\widehat{p}_{An}-\widehat{p}_{Bn})+(1-\rho) g_2\left(2 \pi_n -1\right), \quad \text{for } n\geq 2m,
\end{equation}
where $\rho \in [0;1)$ represents an ``ethical weight'' and $g_1, g_2: [-1;1] \rightarrow [0;1]$ are continuous functions s.t.
\begin{itemize}
\item[i)] $g_1(0)=g_2(0)=1/2$ and $g_1(1)=g_2(-1)=1$;
\item[ii)] $g_1(-x)=1-g_1(x)$ and $g_2(-x)=1-g_2(x)$ $\forall x\in [-1;1]$;
\item[iii)] $g_1(\cdot)$ is non-decreasing and $g_2(\cdot)$ is decreasing.
\end{itemize}
Regarded as a function of $\pi_{n}$ and $\widehat{\boldsymbol{\gamma}}_{n}$, rule (\ref{dawd}) corresponds to
$$\varphi^{DAWD} \left(\pi_{n}\,;\widehat{\boldsymbol{\gamma}}_{n}\right) =\rho g_1( (1; -1) \widehat{\boldsymbol{\gamma}}_{n} )+(1-\rho) g_2\left(2 \pi_n -1\right),$$ which is decreasing in $\pi_{n}$, so that the equation $\varphi^{DAWD} \left(\pi_{n}\,;\widehat{\boldsymbol{\gamma}}_{n}\right)=\pi_{n}$ has a unique solution $t(\widehat{\boldsymbol{\gamma}}_{n})$, i.e.
the generalized downcrossing, which is continuous in $\widehat{\boldsymbol{\gamma}}_{n}$ (see \cite{Ger06}). Thus
$\lim_{n\rightarrow \infty} \pi_n=t(\boldsymbol{\gamma})$ $a.s.$
\end{ex}
Often there is a desired target allocation $\pi^{\ast}$ to treatment $A$ that depends on the unknown model parameters, i.e. $\pi^{\ast}=\pi^{\ast}(\boldsymbol{\gamma})$, where $\pi^{\ast}:\Omega\rightarrow (0;1)$ is a mapping that transforms a $k$-dim vector of parameters into a scalar one. Thus, Theorem \ref{thm2} still holds even if, instead of (\ref{RAdesigns}), we assume
\begin{equation*}
\Pr(\delta_{n+1}=1\mid \Im_n)=\breve{\varphi}^{RA}\left(\pi_{n}\,;\pi^{\ast}(\widehat{\boldsymbol{\gamma}}_{n})\right), \quad \text{for } n\geq 2m,
\end{equation*}
provided that $\pi^{\ast}(\cdot)$ is a continuous function. In this case the generalized downcrossing could be more properly denoted by $t(\widehat{\boldsymbol{\gamma}}_n)=t(\pi^{\ast}(\widehat{\boldsymbol{\gamma}}_{n}))$.

\begin{ex}
The Doubly-adaptive Biased Coin Design (DBCD) \cite{Eis94,Hu04} is one of the most effective families of RA procedures aimed at converging to a desired target $\pi^{\ast}(\boldsymbol{\gamma})\in (0;1)$ that is a continuous function of the model parameters. The DBCD assigns treatment $A$ to the $(n+1)$st subject with probability
\begin{equation}\label{dbcd}
\Pr(\delta_{n+1}=1\mid \Im_n)=\breve{\varphi}^{DBCD}(\pi_n; \pi^{\ast }(\widehat{\boldsymbol{\gamma}}_n)), \quad \text{for } n\geq2m,
\end{equation}
where the allocation function $\breve{\varphi}^{DBCD}$ needs to satisfy the following conditions:
\begin{itemize}
\item[i)] $\quad$ $\breve{\varphi}^{DBCD}(x;y)$ is continuous on $(0;1)^2$;
\item[ii)] $\quad$ $\breve{\varphi}^{DBCD}(x;x)=x$;
\item[iii)] $\quad $ $\breve{\varphi}^{DBCD}(x;y)$ is decreasing in $x$ and increasing in $y$;
\item[iv)] $\quad $ $\breve{\varphi}^{DBCD}(x;y)=1-\breve{\varphi}^{DBCD}(1-x;1-y)$ for all $x,y\in (0;1)$.
\end{itemize}
The DBCD forces the allocation proportion towards the target since, from conditions ii) and iii), when $x>y$ then $\breve{\varphi}^{DBCD}(x;y)<x$ and when $x<y$ then $\breve{\varphi}^{DBCD}(x;y)>y$. However, condition i) is quite restrictive since it does not include several widely-known proposals based on discontinuous allocation functions, such as Efron's BCD and its extensions \cite{Hu09}, while condition iv) simply guarantees that $A$ and $B$ are treated symmetrically.

Since $\breve{\varphi}^{DBCD}(x;y)$ is decreasing in $x$ with $\breve{\varphi}^{DBCD}(x;x)=x$, then the generalized downcrossing is unique, given by $t(\pi^{\ast }(\widehat{\boldsymbol{\gamma}}_n))=\pi^{\ast }(\widehat{\boldsymbol{\gamma}}_n)$. Thus, from the continuity of the target $\pi^{\ast }(\cdot)$ it follows that $\lim_{n\rightarrow \infty} \pi_n=\pi^{\ast }(\boldsymbol{\gamma})$ a.s.
\end{ex}
\begin{ex}
In the same spirit as Efron's BCD, Hu, Zhang and He \cite{Hu09} have recently introduced the ERADE, which is a class of RA procedures based on discontinuous randomization functions.
Let again $\pi^{\ast}(\boldsymbol{\gamma})\in (0;1)$ be the desired target, which is assumed to be a continuous function of the unknown model parameters; the ERADE assigns treatment $A$ to the $(n+1)$st patient with probability
\begin{equation}\label{erade}
\Pr(\delta_{n+1}=1\mid \Im_n)=
 \begin{cases}
 \alpha \pi^{\ast }(\widehat{\boldsymbol{\gamma}}_n) , & \text{ if} \; \pi_n>\pi^{\ast }(\widehat{\boldsymbol{\gamma}}_n), \\
 \pi^{\ast }(\widehat{\boldsymbol{\gamma}}_n), & \text{ if} \; \pi_n=\pi^{\ast }(\widehat{\boldsymbol{\gamma}}_n),\\
 1- \alpha(1-\pi^{\ast }(\widehat{\boldsymbol{\gamma}}_n)), & \text{ if} \; \pi_n<\pi^{\ast }(\widehat{\boldsymbol{\gamma}}_n),
 \end{cases}\,
\end{equation}
where $\alpha \in [0;1)$ governs the degree of randomness. Clearly, rule (\ref{erade}) corresponds to
\begin{equation*}
\breve{\varphi}^{ERADE}(x; y)=
 \begin{cases}
 \alpha y, & \text{ if} \; x>y, \\
 y, & \text{ if} \; x=y,\\
 1- \alpha(1-y), & \text{ if} \; x<y,
 \end{cases}\,
\end{equation*}
whose only generalized downcrossing is $t(\pi^{\ast }(\widehat{\boldsymbol{\gamma}}_n))=\pi^{\ast }(\widehat{\boldsymbol{\gamma}}_n)$, and therefore $\pi_n\rightarrow \pi^{\ast}(\boldsymbol{\gamma})$ $a.s.$ as $n \rightarrow\infty$.
\end{ex}
\begin{rem}
Theorem \ref{thm2} also covers further discontinuous randomization functions; for instance, letting
\begin{equation*}
\Pr(\delta_{n+1}=1\mid \Im_n)=
 \begin{cases}
 1-[1-\pi^{\ast }(\widehat{\boldsymbol{\gamma}}_n)]^{1/\tau} , & \text{ if} \quad \pi_n>\pi^{\ast }(\widehat{\boldsymbol{\gamma}}_n), \\
 \pi^{\ast }(\widehat{\boldsymbol{\gamma}}_n)^{1/\tau}, & \text{ if} \quad \pi_n\leq\pi^{\ast }(\widehat{\boldsymbol{\gamma}}_n),
 \end{cases}\,
\end{equation*}
where the parameter $\tau\geq 1$ controls the degree of randomness, then $\pi_n\rightarrow \pi^{\ast}(\boldsymbol{\gamma})$ $a.s.$ as $n \rightarrow\infty$.
\end{rem}

\section{CARA designs with continuous covariates}
Since in actual clinical practice information on patients' covariates or prognostic factors is usually collected, in some circumstances it may not be suitable to base the allocation probabilities only on earlier responses and assignments. This is particularly true when ethical demands are cogent and the patients have different profiles that induce heterogeneity in the outcomes.

Starting from the pioneering work of Rosenberger et al. \cite{RosCARA01}, there has been a growing statistical interest in the topic of CARA randomization procedures. These designs change at each step the probabilities of allocating treatments by taking into account all the available information, namely previous responses, assignments and covariates, as well as the covariate profile of the current subject, with the aim of skewing the allocations towards the superior treatment or, in general, of converging to a desired target allocation depending on the covariates \cite{Zha07}.

Within this class of procedures, if past outcomes are not taken into account in the allocation process, then the corresponding class of rules is called Covariate-Adaptive.
The direct application of CA designs concerns clinical trials without ethical demands, where the experimental aim consists in balancing the assignments of the treatments across covariates in order to optimize inference \cite{Baz11}.

Since the proof scheme for CARA rules with categorical covariates requires extending the concept of downcrossing to a vectorial framework, which is not needed for CARA procedures with continuous prognostic factors, we treat these cases separately; the former will be analyzed in the next Section.

From now on we deal with CARA designs such that
\begin{equation}\label{phi general}
 \Pr(\delta_{n+1}=1\mid \Im_n, \boldsymbol{Z}_{n+1}=\boldsymbol{z}_{n+1})=\varphi^{CARA} \left(\pi_{n}\,;\widehat{\boldsymbol{\gamma}}_{n},\boldsymbol{S}_n,f(\boldsymbol{z}_{n+1}) \right), \; n\geq 2m,
\end{equation}
where $\Im_n=\sigma(\delta_1,\ldots,\delta_n;Y_1,\ldots,Y_n;\boldsymbol{Z}_1,\ldots,\boldsymbol{Z}_n)$, $f(\cdot)$ is a known vector function of the covariates of the $(n+1)$st patient (usually $f$ is the identity function, but it can also incorporate cross-products to account for interactions among covariates), $\widehat{\boldsymbol{\gamma}}_{n}$ depends on earlier allocations, covariates and responses, while $\boldsymbol{S}_n=\boldsymbol{S}(\boldsymbol{z}_{1},\ldots,\boldsymbol{z}_{n})$ is a function of the covariates of the previous patients. In general, it is a vector of sufficient statistics of the covariate distribution that incorporates the information on $\boldsymbol{Z}$ after $n$ steps, and from now on we always assume that, as $n\rightarrow \infty$,
\begin{equation}\label{ipotesicov}
\boldsymbol{S}_n=\boldsymbol{S}(\boldsymbol{Z}_{1},\ldots,\boldsymbol{Z}_{n})\rightarrow \boldsymbol{\varsigma }\qquad a.s.
\end{equation}
Often, $\boldsymbol{S}_n$ contains the moments up to a given order of the covariate distribution, and (\ref{ipotesicov}) is satisfied provided that these moments exist.
\begin{thm}\label{thmCARAc}
At each step $n$, suppose that the allocation function $\varphi^{CARA}$ in (\ref{phi general}) is decreasing in $\pi_{n}$ and let
\begin{equation*}
\tilde{\varphi}_{\boldsymbol{Z}} \left(\pi_{n}\,;\widehat{\boldsymbol{\gamma}}_{n},\boldsymbol{S}_n\right)=E_{\boldsymbol{Z}_{n+1}}\left[\varphi^{CARA} \left(\pi_{n}\,;\widehat{\boldsymbol{\gamma}}_{n},\boldsymbol{S}_n,f(\boldsymbol{Z}_{n+1}) \right)\right].
\end{equation*}
If the only generalized downcrossing $\tilde{t}_{\boldsymbol{Z}}(\widehat{\boldsymbol{\gamma}}_{n},\boldsymbol{S}_n)$ of $\tilde{\varphi}_{\boldsymbol{Z}}$ is jointly continuous, then
\begin{equation}\label{thmcovcont}
\lim_{n\rightarrow \infty} \pi_n= \tilde{t}_{\boldsymbol{Z}}({\boldsymbol{\gamma}},\boldsymbol{\varsigma}) \quad \text{a.s.}
\end{equation}
\end{thm}
\begin{proof}
See Appendix \ref{A2}.
\end{proof}
\begin{ex}
Consider the linear homoscedastic model with treatment/covariate interactions in the following form
\begin{equation*}
E(Y_{i}) =\delta _{i}\ \mu _{A}+(1-\delta _{i})\ \mu _{B}+ {z}_{i}\left[\delta _{i}\beta_{A}+(1-\delta _{i})\beta_{B}\right],\qquad i\geq1,
\end{equation*}
where $\mu_{A}$ and $\mu_{B}$ are the baseline treatment effects, $\beta_{A}\neq\beta_{B}$ are different regression parameters and $z_{i}$ is a scalar covariate observed on the $i$th individual, which is assumed to be standard normal.
Under this
model, adopting ``the-larger-the-better'' scenario, treatment $A$ is the best for patient $(n+1)$ if $\mu_{A}+z_{n+1} \beta_{A}>\mu_{B}+z_{n+1} \beta_{B}$; thus, if only ethical aims are taken into account it could be reasonable to consider the following allocation rule:
\begin{equation}\label{ruleetica}
\varphi^{ETH} \left(\pi_{n}\,;\widehat{\boldsymbol{\gamma}}_{n},\boldsymbol{S}_n,f(\boldsymbol{z}_{n+1}) \right)=
\mathds{1}_{\left\{\hat{\mu}_{An}-\hat{\mu}_{Bn} +z_{n+1} \left(\hat{\beta}_{An} -\hat{\beta}_{Bn} \right)>0 \right\}},
\end{equation}
where $\mathds{1}_{\{\cdot\}}$ is the indicator function and $\boldsymbol{\hat{\gamma}}_n=(\hat{\mu} _{An},\hat{\mu} _{Bn},\hat{\beta}_{An},\hat{\beta}_{Bn})^{t}$ is the least square estimator of $\boldsymbol{\gamma }=(\mu _{A},\mu _{B},\beta_{A},\beta_{B})^{t}$ after $n$ steps. Thus,
\begin{equation}\label{cont2}
\begin{split}
E_{\boldsymbol{Z}_{n+1}}& \left[\varphi^{ETH} \left(\pi_{n}\,;\widehat{\boldsymbol{\gamma}}_{n},\boldsymbol{S}_n,f(\boldsymbol{Z}_{n+1}) \right)\right]\\
= & \Pr {\left\{\hat{\mu}_{An}-\hat{\mu}_{Bn} +Z_{n+1} \left(\hat{\beta}_{An} -\hat{\beta}_{Bn} \right)>0 \right\}} =
1-\Phi\left(\frac{\hat{\mu}_{Bn}-\hat{\mu}_{An}}{\mid\hat{\beta}_{An}-\hat{\beta}_{Bn} \mid}\right)
\end{split}
\end{equation}
where $\Phi(\cdot)$ is the standard normal cdf.
Note that (\ref{cont2}) is constant in $\pi_{n}$, so it has a single generalized downcrossing and, from Theorem \ref{thmCARAc},
\begin{equation*}
\lim_{n\rightarrow \infty} \pi_n=1-\Phi\left(\frac{\mu_{B}-\mu_{A}}{\mid\beta_{A} -\beta_{B}\mid }\right).
\end{equation*}
\end{ex}

\begin{ex}
As in the case of RA procedures, for CARA rules too there is often a desired target allocation $\pi^{\ast}$ to treatment $A$ that is a function of the unknown model parameters and the covariates, i.e. $\pi^{\ast}=\pi^{\ast}(\boldsymbol{\gamma},\boldsymbol{z})$, which is assumed to be continuous in ${\boldsymbol{\gamma }}$ for any fixed covariate level $\boldsymbol{z}$.
In particular, Zhang et al. \cite{Zha07} assumed a generalized linear model setup and suggested allocating subject $(n+1)$ to $A$ with probability
\begin{equation}
 \Pr(\delta_{n+1}=1\mid \Im_n, \boldsymbol{Z}_{n+1}=\boldsymbol{z}_{n+1})=\pi^{\ast }(\widehat{\boldsymbol{\gamma}}_n, \boldsymbol{z}_{n+1} ), \quad \text{for } n\geq 2m, \label{CARAhu07}
\end{equation}
which represents an analog of the Sequential Maximum Likelihood design \cite{Mel01} in the presence of covariates. Assuming that the target function $\pi ^{\ast }$ is differentiable in $\boldsymbol{\gamma}$ under the expectation, with bounded derivatives, the authors showed that $\lim_{n\rightarrow \infty }\pi _{n}=E_{\boldsymbol{Z}}[\pi^{\ast }(\boldsymbol{\gamma}, \boldsymbol{Z} )]$ $a.s.$

Clearly, allocation rule (\ref{CARAhu07}) is constant in $\pi_{n}$ and therefore $\tilde{\varphi}_{\boldsymbol{Z}} \left(\pi_{n}\,;\widehat{\boldsymbol{\gamma}}_{n},\boldsymbol{S}_n\right)=E_{\boldsymbol{Z}_{n+1}}\left[\pi^{\ast }(\widehat{\boldsymbol{\gamma}}_n, \boldsymbol{Z}_{n+1} )\right]$ is also constant in $\pi_{n}$.
Thus, the generalized downcrossing of $\tilde{\varphi}_{\boldsymbol{Z}}$ is unique and obviously $\lim_{n\rightarrow \infty} \pi_n= E_{\boldsymbol{Z}}\left[\pi^{\ast }(\boldsymbol{\gamma}, \boldsymbol{Z} )\right]$ a.s.
\end{ex}

\begin{rem}
Some authors (see for instance \cite{Ban01}) suggested CARA designs that incorporate covariate information in the randomization process, but ignore the covariates of the current subject. Note that these methods can be regarded as special cases of $\varphi^{CARA}$ in (\ref{phi general}) and therefore Theorem \ref{thmCARAc} can still be applied by taking into account the generalized downcrossing of $\varphi^{CARA}$ directly.
\end{rem}
Although Theorem \ref{thmCARAc} proves the convergence of CARA designs in the case of continuous covariates, it could be difficult to obtain an analytical expression for $\tilde{\varphi}_{\boldsymbol{Z}}$ and therefore to find the corresponding generalized downcrossing. Nevertheless, the following Lemma allows us to obtain the generalized downcrossing in a simple manner in some circumstances.

\begin{lemma}\label{lemma1}
Let $\varphi^{CARA} \left(\pi_{n}\,;\widehat{\boldsymbol{\gamma}}_{n},\boldsymbol{S}_n,f(\boldsymbol{z}_{n+1}) \right)$ be jointly continuous and, assuming that $\varphi^{CARA}\left(x\,;\boldsymbol{\gamma},\boldsymbol{\varsigma },f(\boldsymbol{Z})\right)$ is decreasing in $x$, let $t^{*}_{\boldsymbol{Z}}(\boldsymbol{\gamma},\boldsymbol{\varsigma })$ be the unique solution of the equation
\begin{equation*}
\varphi^{CARA}\left(x\,;\boldsymbol{\gamma},\boldsymbol{\varsigma },E_{\boldsymbol{Z}}[f(\boldsymbol{Z})] \right)=x.
\end{equation*}
If $\varphi^{CARA} \left(t_{\boldsymbol{Z}}^{*}(\boldsymbol{\gamma} ,\boldsymbol{\varsigma }) \,;\boldsymbol{\gamma},\boldsymbol{\varsigma },f(\boldsymbol{Z}) \right)$ is linear in $f(\boldsymbol{Z})$ and $t^{*}_{\boldsymbol{Z}}$ is jointly continuous, then (\ref{thmcovcont}) still holds with $\tilde{t}_{\boldsymbol{Z}}(\boldsymbol{\gamma} ,\boldsymbol{\varsigma })=t_{\boldsymbol{Z}}^{*}(\boldsymbol{\gamma} ,\boldsymbol{\varsigma })$.
\end{lemma}
\begin{proof}
Assume that $\tilde{t}_{\boldsymbol{Z}}(\boldsymbol{\gamma} ,\boldsymbol{\varsigma })< t_{\boldsymbol{Z}}^{*}(\boldsymbol{\gamma} ,\boldsymbol{\varsigma })$.
From the properties of $\varphi^{CARA}$, the function $\tilde{\varphi}_{\boldsymbol{Z}} \left(x\,;\boldsymbol{\gamma} ,\boldsymbol{\varsigma }\right)$ is jointly continuous and decreasing in $x$, so that $\tilde{t}_{\boldsymbol{Z}}(\boldsymbol{\gamma} ,\boldsymbol{\varsigma })=\tilde{\varphi}_{\boldsymbol{Z}} \left(\tilde{t}_{\boldsymbol{Z}}(\boldsymbol{\gamma} ,\boldsymbol{\varsigma })\,;\boldsymbol{\gamma},\boldsymbol{\varsigma }\right)> \tilde{\varphi}_{\boldsymbol{Z}} \left(t_{\boldsymbol{Z}}^*(\boldsymbol{\gamma} ,\boldsymbol{\varsigma })\,;\boldsymbol{\gamma},\boldsymbol{\varsigma }\right)$.
However,
\begin{equation*}
\tilde{\varphi}_{\boldsymbol{Z}} \left(t_{\boldsymbol{Z}}^*(\boldsymbol{\gamma} ,\boldsymbol{\varsigma })\,;\boldsymbol{\gamma},\boldsymbol{\varsigma }\right)=
\varphi^{CARA} \left(t_{\boldsymbol{Z}}^{*}(\boldsymbol{\gamma} ,\boldsymbol{\varsigma }) \,;\boldsymbol{\gamma},\boldsymbol{\varsigma },E_{\boldsymbol{Z}}[f(\boldsymbol{Z})] \right)=t_{\boldsymbol{Z}}^*(\boldsymbol{\gamma} ,\boldsymbol{\varsigma }),
\end{equation*}
since $\varphi^{CARA} \left(t_{\boldsymbol{Z}}^{*}(\boldsymbol{\gamma} ,\boldsymbol{\varsigma }) \,;\boldsymbol{\gamma},\boldsymbol{\varsigma },f(\boldsymbol{Z}) \right)$ is linear in $f(\boldsymbol{Z})$, contradicting the assumption. An analogous argument applies if we assume $\tilde{t}_{\boldsymbol{Z}}(\boldsymbol{\gamma},\boldsymbol{\varsigma })> t_{\boldsymbol{Z}}^{*}(\boldsymbol{\gamma},\boldsymbol{\varsigma })$.
\end{proof}
\begin{ex}
The Covariate-adjusted Doubly-adaptive Biased Coin Design introduced by \citet{Zha09} is a class of CARA procedures intended to converge to a desired target $\pi^*(\boldsymbol{\gamma},\boldsymbol{z})$. When the $(n+1)$st subject with covariate $\boldsymbol{Z}_{n+1}=\boldsymbol{z}_{n+1}$ is ready to be randomized, he/she will be assigned to $A$ with probability
\begin{equation}\label{cinesacci}
\begin{split}
\Pr(\delta_{n+1}=& 1\mid \Im_n, \boldsymbol{Z}_{n+1}=\boldsymbol{z}_{n+1}) =\\
&\frac{\pi^*(\widehat{\boldsymbol{\gamma}}_n, \boldsymbol{z}_{n+1} ) \left( \frac{\widehat{\rho}_n}{\pi_n} \right)^\nu }{\pi^*(\widehat{\boldsymbol{\gamma}}_n, \boldsymbol{z}_{n+1} )\left( \frac{\widehat{\rho}_n}{\pi_n} \right)^\nu + [1-\pi^*(\widehat{\boldsymbol{\gamma}}_n, \boldsymbol{z}_{n+1} )] \left( \frac{1-\widehat{\rho}_n}{1-\pi_n} \right)^\nu },
\end{split}
\end{equation}
where $\widehat{\rho}_n=n^{-1} \sum_{i=1}^{n}\pi^*(\widehat{\boldsymbol{\gamma}}_n,\boldsymbol{z}_i)$.
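To see how (\ref{cinesacci}) tilts the estimated target, the following minimal numerical sketch may help (the function name \texttt{zh\_alloc\_prob} and all input values are merely illustrative):
\begin{verbatim}
def zh_alloc_prob(target, rho_hat, pi_n, nu=2):
    """Probability of assigning A under the rule above: the stratum
    target pi*(gamma_hat, z) is reweighted by the ratio between the
    average estimated target rho_hat and the current proportion pi_n."""
    num = target * (rho_hat / pi_n) ** nu
    den = num + (1 - target) * ((1 - rho_hat) / (1 - pi_n)) ** nu
    return num / den

# under-allocation (pi_n below rho_hat) pushes the probability above
# the target, over-allocation pushes it below
print(zh_alloc_prob(0.6, rho_hat=0.6, pi_n=0.5))  # approx. 0.77 > 0.6
print(zh_alloc_prob(0.6, rho_hat=0.6, pi_n=0.7))  # approx. 0.38 < 0.6
\end{verbatim}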
Assuming that
\begin{equation}\label{hpcin}
\Pr(\delta_{n+1}=1\mid \Im_n, \boldsymbol{Z}_{n+1}=\boldsymbol{z}) \rightarrow \pi^*(\boldsymbol{\gamma},\boldsymbol{z}) \quad \text{a.s.,}
\end{equation}
the authors proved that $\lim_{n\rightarrow \infty} \pi_n=E_{\boldsymbol{Z}}\left[\pi^*(\boldsymbol{\gamma},\boldsymbol{Z})\right]$ $a.s.$

Note that rule (\ref{cinesacci}) can be regarded as a special case of $\varphi^{CARA}$ after the transformation $(\widehat{\boldsymbol{\gamma}}_{n},\boldsymbol{S}_n,f(\boldsymbol{z}_{n+1}) )\mapsto (\widehat{\rho}_n,\pi^*(\widehat{\boldsymbol{\gamma}}_n, \boldsymbol{z}_{n+1} ))$ and thus, even if we remove condition (\ref{hpcin}), Lemma \ref{lemma1} can be applied to the allocation function
\begin{equation*}
\breve{\varphi}^{ZH}\left(x;a,b\right)= \left\{ 1+ \frac{1-b }{b} \left[\frac{(1-a) x}{a(1-x)}\right]^{\nu} \right\}^{-1},
\end{equation*}
which is decreasing in $x$ and continuous in all the arguments.
Indeed, since both $\widehat{\rho}_n$ and $E_{\boldsymbol{Z}_{n+1}}\left[\pi^*(\widehat{\boldsymbol{\gamma}}_n, \boldsymbol{Z}_{n+1} )\right]$ converge to $E_{\boldsymbol{Z}}\left[\pi^*(\boldsymbol{\gamma},\boldsymbol{Z})\right]$ a.s., the solution of the equation $\breve{\varphi}^{ZH}(x;E_{\boldsymbol{Z}}\left[\pi^*(\boldsymbol{\gamma},\boldsymbol{Z})\right],E_{\boldsymbol{Z}}\left[\pi^*(\boldsymbol{\gamma},\boldsymbol{Z})\right])=x$ is $t^{*}_{\boldsymbol{Z}}= E_{\boldsymbol{Z}}\left[\pi^*(\boldsymbol{\gamma},\boldsymbol{Z})\right]$. Furthermore, since $\breve{\varphi}^{ZH}\left(E_{\boldsymbol{Z}}\left[\pi^*(\boldsymbol{\gamma},\boldsymbol{Z})\right];E_{\boldsymbol{Z}}\left[\pi^*(\boldsymbol{\gamma},\boldsymbol{Z})\right],\pi^*(\boldsymbol{\gamma}, \boldsymbol{Z})\right)= \pi^*(\boldsymbol{\gamma}, \boldsymbol{Z})$, then $\lim_{n\rightarrow \infty} \pi_n=E_{\boldsymbol{Z}}\left[\pi^*(\boldsymbol{\gamma},\boldsymbol{Z})\right]$ $a.s.$
\end{ex}

\begin{rem}
Theorem \ref{thmCARAc} and Lemma \ref{lemma1} can be naturally applied to CA designs in the presence of continuous covariates by considering, instead of (\ref{phi general}), the following class of allocation rules:
\begin{equation*}
\Pr(\delta_{n+1}=1\mid \Im_n, \boldsymbol{Z}_{n+1}=\boldsymbol{z}_{n+1})=\varphi^{CA} \left(\pi_{n}\,;\boldsymbol{S}_n,f(\boldsymbol{z}_{n+1}) \right),
\end{equation*}
with $\Im_n=\sigma(\delta_1,\ldots,\delta_n;\boldsymbol{Z}_1,\ldots,\boldsymbol{Z}_{n})$. Clearly, $\tilde{t}_{\boldsymbol{Z}}({\boldsymbol{\gamma}},\boldsymbol{\varsigma})$ and $t^{*}_{\boldsymbol{Z}}(\boldsymbol{\gamma},\boldsymbol{\varsigma })$ should be replaced by $\tilde{t}_{\boldsymbol{Z}}(\boldsymbol{\varsigma})$ and $t^{*}_{\boldsymbol{Z}}(\boldsymbol{\varsigma })$, respectively.
\end{rem}


\section{CARA designs with categorical covariates}
We now provide a convergence result for CARA designs in the case of categorical covariates. In order to avoid cumbersome notation, from now on we assume without loss of generality two categorical covariates, i.e.
$\boldsymbol{Z}=(T,W)$, with levels $t_j$ $(j=0,\ldots,J)$ and $w_l$ $(l=0,\ldots,L)$, respectively.
Also, let $\boldsymbol{p} = [p_{jl}: j =0,\ldots,J; l = 0, \ldots,L]$ be the joint probability distribution of the categorical covariates, with $p_{jl}>0$ for any $j=0,\ldots,J$ and $l=0,\ldots,L$ and $\sum_{j=0}^J\sum_{l=0}^L p_{jl}=1$.

After $n$ steps, let $N_n(j,l)=\sum_{i=1}^{n}\mathds{1}_{\{Z_i=(t_j,w_l)\}}$ be the number of subjects within the stratum $(t_j,w_l)$, $\widetilde{N}_n(j,l)=\sum_{i=1}^{n}\delta_i \mathds{1}_{\{Z_i=(t_j,w_l)\}}$ the number of allocations to $A$ within this stratum and $\pi_n(j,l)$ the corresponding proportion, i.e. $\pi_n(j,l)=N_n(j,l)^{-1}\widetilde{N}_n(j,l)$, for any $j=0,\ldots,J$ and $l=0,\ldots,L$. Also, let $\boldsymbol{\pi}_n=\left[\pi_n(j,l): j =0,\ldots,J; l = 0, \ldots,L\right]$.

After an initial stage with $m$ observations on each treatment, performed to derive a non-trivial parameter estimate, we consider a class of CARA designs that assigns the $(n+1)$st patient with covariate profile $\boldsymbol{Z}_{n+1}=(t_j,w_l)$ to $A$ with probability
\begin{equation}\label{randcaracat}
\Pr(\delta_{n+1}=1\mid \Im_n, \boldsymbol{Z}_{n+1}=(t_j,w_l))=\varphi_{jl}\left(\boldsymbol{\pi}_n\,;\widehat{\boldsymbol{\gamma}}_{n},\boldsymbol{S}_{n} \right), \quad \text{for } n\geq 2m,
\end{equation}
where $\Im_n=\sigma(\delta_1,\ldots,\delta_n;Y_1,\ldots,Y_n;\boldsymbol{Z}_1,\ldots,\boldsymbol{Z}_n)$ and $\varphi_{jl}$ is the allocation function of the stratum $(t_j,w_l)$.

Let $\boldsymbol{\varphi}(\boldsymbol{\pi}_n;\widehat{\boldsymbol{\gamma}}_{n},\boldsymbol{S}_{n} )= [\varphi_{jl}(\boldsymbol{\pi}_n;\widehat{\boldsymbol{\gamma}}_{n},\boldsymbol{S}_{n} ): j =0,\ldots,J; l = 0, \ldots,L]$. Often the allocation rule at each stratum does not depend on the entire vector of allocation proportions $\boldsymbol{\pi}_n$ involving all the strata, but only on the current allocation proportion of this stratum, i.e.
\begin{equation}\label{stratrand}
\varphi_{jl}(\boldsymbol{\pi}_n;\widehat{\boldsymbol{\gamma}}_{n},\boldsymbol{S}_{n})=\varphi_{jl}(\pi_n(j,l);\widehat{\boldsymbol{\gamma}}_{n}, \boldsymbol{S}_{n}), \quad \forall j =0,\ldots,J; l = 0, \ldots,L.
\end{equation}
However, note that (\ref{stratrand}) does not correspond in general to a stratified randomization, due to the fact that the estimate $\widehat{\boldsymbol{\gamma}}_{n}$ usually involves the information accrued from all the strata up to that step, and thus the evolutions of the procedure at different strata are not independent.
\begin{definition}\label{DC3}
Let $\mathbf{x}=\left[x_{1},\ldots,x_{\mathcal{K}}\right]$, where $x_{\iota}\in [0;1]$ for any $\iota=1,\ldots,\mathcal{K}$ and $\mathcal{K}$ is a positive integer.
Also, let $\ddot{\psi}_{\iota}(\mathbf{x};\mathbf{y}): [0;1]^{\mathcal{K}}\times \mathds{R}^d \rightarrow [0;1]$ and set $\ddot{\boldsymbol{\psi}}(\mathbf{x};\mathbf{y})=\left[\ddot{\psi}_{1}(\mathbf{x};\mathbf{y}),\ldots,\ddot{\psi}_{\mathcal{K}}(\mathbf{x};\mathbf{y})\right]$.
Then $\boldsymbol{t}(\mathbf{y})= \left[t_{1}(\mathbf{y}),\ldots,t_{\mathcal{K}}(\mathbf{y})\right]$, with $t_{\iota}(\mathbf{y}): \mathds{R}^d \rightarrow [0;1] $ for $\iota=1,\ldots,\mathcal{K}$, is called a \emph{vectorial} \emph{generalized} \emph{downcrossing} of $\ddot{\boldsymbol{\psi}}$ if for all $\mathbf{y}\in \mathds{R}^d$ and for any $\iota=1,\ldots,\mathcal{K}$
\begin{displaymath}
\text{for all } x_{\iota}<t_{\iota}(\mathbf{y}),\; \; \ddot{\psi}_{\iota}(\mathbf{x};\mathbf{y}) \geq t_{\iota}(\mathbf{y}) \qquad \text{and} \qquad \text{for all } x_{\iota}>t_{\iota}(\mathbf{y}),\; \; \ddot{\psi}_{\iota}(\mathbf{x};\mathbf{y}) \leq t_{\iota}(\mathbf{y}).
\end{displaymath}
\end{definition}
Clearly, if the function $\ddot{\psi}_{\iota}(\mathbf{x};\mathbf{y})$ is decreasing in $\mathbf{x}$ (i.e. componentwise) for any $\iota$, then the vectorial generalized downcrossing $\boldsymbol{t}(\mathbf{y})$ is unique, with $\boldsymbol{t}(\mathbf{y})\in(0;1)^\mathcal{K}$ for any $\mathbf{y}\in \mathds{R}^d$; furthermore $\ddot{\boldsymbol{\psi}}(\boldsymbol{t}(\mathbf{y});\mathbf{y})=\boldsymbol{t}(\mathbf{y})$, provided that the solution exists.
Moreover, note that if $\ddot{\psi}_{\iota}(\mathbf{x};\mathbf{y})=\ddot{\psi}_{\iota}(x_{\iota};\mathbf{y})$ for any $\iota=1,\ldots,\mathcal{K}$, then each component $t_{\iota}(\mathbf{y})$ of $\boldsymbol{t}(\mathbf{y})$ is simply the single generalized downcrossing of $\ddot{\psi}_{\iota}(x_\iota;\mathbf{y})$, which can be found by solving the equation $\ddot{\psi}_{\iota}(x;\mathbf{y})=x$ (if the solution exists).
\begin{thm}\label{thm4}
At each step $n$, suppose that for any given stratum $(t_j,w_l)$ the allocation function $\varphi_{jl}\left(\boldsymbol{\pi}_n\,;\widehat{\boldsymbol{\gamma}}_{n},\boldsymbol{S}_{n} \right)$ is decreasing in $\boldsymbol{\pi}_n$ (componentwise).
If the unique vectorial generalized downcrossing $\boldsymbol{t}\left(\widehat{\boldsymbol{\gamma}}_{n},\boldsymbol{S}_{n} \right)= [t_{jl}(\widehat{\boldsymbol{\gamma}}_{n},\boldsymbol{S}_{n}) : j =0,\ldots,J; l = 0, \ldots,L]$ is a continuous function and $\boldsymbol{\varphi}(\boldsymbol{t}\left(\boldsymbol{\gamma},\boldsymbol{\varsigma} \right);\boldsymbol{\gamma},\boldsymbol{\varsigma})=\boldsymbol{t}\left(\boldsymbol{\gamma},\boldsymbol{\varsigma} \right)$, then
\begin{equation*}
\lim_{n\rightarrow \infty} \boldsymbol{\pi}_n=\boldsymbol{t}\left(\boldsymbol{\gamma},\boldsymbol{\varsigma} \right) \quad \text{and} \quad \lim_{n\rightarrow \infty} \pi_n=E_{\boldsymbol{Z} }[\boldsymbol{t}\left(\boldsymbol{\gamma},\boldsymbol{\varsigma} \right)]=\sum_{j=0}^J\sum_{l=0}^Lt_{jl}(\boldsymbol{\gamma},\boldsymbol{\varsigma}) p_{jl}\quad a.s.
\end{equation*}
\end{thm}
\begin{proof}
See Appendix \ref{A3}.
\end{proof}
\begin{ex}\label{exRDBCD}
The Reinforced Doubly-adaptive Biased Coin Design (RDBCD) is a class of CARA procedures recently introduced by \citet{Baz12} in the case of categorical covariates, intended to target any desired allocation proportion
$$\boldsymbol{\pi}^{\ast}(\boldsymbol{\gamma})=[\pi^{\ast }(j,l): j=0,\ldots, J; l=0,\ldots, L]:\Omega\rightarrow (0;1)^{(J+1)\times(L+1)},$$
which is a
\\begin{ex}\\label{exRDBCD}
The Reinforced Doubly-adaptive Biased Coin Design (RDBCD) is a class of CARA procedures recently introduced by \\citet{Baz12} in the case of categorical covariates intended to target any desired allocation proportion
$$\\boldsymbol{\\pi}^{\\ast}(\\boldsymbol{\\gamma})=[\\pi^{\\ast }(j,l): j=0,\\ldots, J; l=0,\\ldots, L]:\\Omega\\rightarrow (0,1)^{(J+1)\\times(L+1)},$$
which is a continuous function of the unknown model parameters. Starting with a pilot stage performed to
derive an initial parameter estimation, at each step $n\\geq2m$ let $\\widehat{\\pi}_{n}^{\\ast }(j,l)$ be the estimate of the target within stratum $(t_j,w_l)$ obtained using all the collected data up to that step and $\\widehat{p}_{jln}=n^{-1}N_n(j,l)$ the estimate of $p_{jl}$; when the next patient
with covariate $\\boldsymbol{Z}_{n+1}=(t_j,w_l)$ is ready to be randomized, the RDBCD assigns him\/her
to $A$ with probability
\\begin{equation*}
\\Pr(\\delta_{n+1}=1\\mid \\Im_n, \\boldsymbol{Z}_{n+1}=(t_j,w_l))=\\varphi_{jl}\\left(\\pi_{n}(j,l);\\widehat{\\pi}_{n}^{\\ast }(j,l),\\widehat{p}_{jln}\\right),
\\end{equation*}
where the function $\\varphi_{jl}(x;y,z):(0,1)^3 \\rightarrow [0,1]$ satisfies the following conditions:
\\begin{itemize}
\\item[i)] $\\varphi_{jl}$ is decreasing in $x$ and increasing in $y$, for any $ z \\in (0,1)$;
\\item[ii)] $\\varphi_{jl}(x;x,z)=x$ for any $z \\in (0,1)$;
\\item[iii)] $\\varphi_{jl}$ is decreasing in $z$ if $x<y$;
\\item[iv)] $\\varphi_{jl}(x;y,z)=1-\\varphi_{jl}(1-x;1-y,z)$ for any $ z \\in (0,1)$.
\\end{itemize}
Firstly observe that for the RDBCD (\\ref{stratrand}) holds and thus, from i) and ii), at each stratum $(t_j,w_l)$ the only generalized downcrossing of $\\varphi_{jl}$ is simply given by $\\widehat{\\pi}_{n}^{\\ast }(j,l)$. Therefore, by Theorem \\ref{thm4}, $\\lim_{n\\rightarrow \\infty} \\pi_n(j,l) ={\\pi}^*(j,l)$ $a.s.$ for any $j=0,\\ldots, J$ and $l=0,\\ldots, L$, due to the continuity of the target, i.e. $\\lim_{n\\rightarrow \\infty} \\boldsymbol{\\pi}_n=\\boldsymbol{\\pi}^*(\\boldsymbol{\\gamma})$ $a.s.$
\\end{ex}

\\subsection{Covariate-Adaptive designs with categorical covariates}
Theorem \\ref{thm4} can be naturally applied to CA procedures in the case of categorical covariates by assuming, instead of (\\ref{randcaracat}), the following class of allocation rules:
\\begin{equation}\\label{randcaracat2}
\\Pr(\\delta_{n+1}=1\\mid \\Im_n, \\boldsymbol{Z}_{n+1}=\\boldsymbol{z}_{n+1})=\\varphi_{jl}\\left(\\boldsymbol{\\pi}_n\\,;\\boldsymbol{S}_{n} \\right),
\\end{equation}
where now $\\Im_n=\\sigma(\\delta_1,\\ldots,\\delta_n;\\boldsymbol{Z_1},\\ldots,\\boldsymbol{Z_{n}})$.
Moreover, from now on we let $\\boldsymbol{t}^{B}= [1\/2 : j =0,\\ldots,J; l = 0, \\ldots,L]$.
\\begin{ex}\\label{exCABCD}
The Covariate-Adaptive Biased Coin Design (C-ABCD) \\cite{Baz11} is a class of stratified randomization procedures intended to achieve joint balance. For any stratum $(t_j,w_l)$, let $F_{jl}(\\cdot): \\mathds{R}\\rightarrow [0,1]$ be a non-increasing and symmetric function with $F_{jl}(-x)=1-F_{jl}(x)$; the C-ABCD assigns the $(n+1)$st patient with profile $\\boldsymbol{Z}_{n+1}=(t_{j},w_{l})$ to $A$ with probability
\\begin{equation}\\label{CABCD}
\\Pr \\left(\\delta _{n+1}=1\\mid\\Im_n,
\\boldsymbol{Z}_{n+1}=(t_j,w_l)\\right) =F_{jl}[D_n(j,l)],
\\end{equation}
where $D_n(j,l)=N_n(j,l)\\left[2\\pi_n(j,l)-1\\right]$ is the imbalance between the two groups after $n$ steps within stratum $(t_j,w_l)$.
As shown in Remark \\ref{remABCD} and Example \\ref{exABCD} in the case of AA procedures, Theorem \\ref{thm4}
still holds even if we assume different randomization functions at each step,
provided that the unique vectorial generalized downcrossing is the same for any $n$.
Indeed, it is trivial to see that rule (\\ref{CABCD}) corresponds to
\\begin{equation*}
\\varphi_{jln}\\left(\\boldsymbol{\\pi}_n\\,;\\boldsymbol{S}_{n} \\right)=\\varphi_{jln}\\left(\\pi_n(j,l)\\,;\\boldsymbol{S}_{n} \\right)=F_{jl}\\left\\{n \\left[2\\pi_n(j,l)-1\\right]\\widehat{p}_{jln} \\right\\},
\\end{equation*}
and, from the properties of $F_{jl}$, the $\\varphi_{jln}$'s have $1\/2$ as unique downcrossing for any $n$; thus $\\lim_{n\\rightarrow \\infty} \\boldsymbol{\\pi}_n=\\boldsymbol{t}^{B}$, which clearly implies marginal balance.

Moreover, when the covariate distribution is known, \\citet{Baz11} suggested the following class of randomization rules:
\\begin{equation*}
F_{jl}^q(x)= \\{x^{q(p_{jl})}+1\\}^{-1}, \\qquad x\\geq 1,
\\end{equation*}
where $q(\\cdot)$ is a decreasing function with $\\lim_{t\\rightarrow 0^+} q(t)=\\infty$. Clearly, the above mentioned arguments and Theorem \\ref{thm4} guarantee the convergence to balance even if the covariate distribution is unknown, by replacing at each step $p_{jl}$ with its current estimate.
\\end{ex}
Examples \\ref{exRDBCD} and \\ref{exCABCD} deal with procedures such that, at every step $n$, the allocation rule $\\varphi_{jl}$ depends only on the current allocation proportion $\\pi_n(j,l)$, i.e. satisfying (\\ref{stratrand}). We now present additional examples where $\\varphi_{jl}$ is a function of the whole vectorial allocation proportion $\\boldsymbol{\\pi}_n$.

\\begin{ex}
Minimization methods \\cite{Poc75,Tav74}
are stratified randomization procedures intended to achieve the so-called marginal balance among covariates. In general,
they depend on the definition of a measure of overall imbalance among the
assignments which summarizes the imbalances between the treatment groups for each level of every factor. Under the well-known variance method proposed by Pocock and Simon \\cite{Poc75}, the $(n+1)$st subject with covariate profile $\\boldsymbol{Z}_{n+1}=(t_{j},w_{l})$ is assigned to treatment $A$ with probability
\\begin{equation}
\\Pr (\\delta _{n+1}=1\\mid \\Im _{n},\\boldsymbol{Z}_{n+1}=(t_{j},w_{l}))=
\\begin{cases}
p & D_{n}(t_{j}) + D_{n}(w_{l})<0 \\\\
\\frac{1}{2} & D_{n}(t_{j}) + D_{n}(w_{l})=0, \\\\
1-p & D_{n}(t_{j}) + D_{n}(w_{l})>0 \\\\
\\end{cases}
\\label{peS1}
\\end{equation}
where $p\\in \\lbrack 1\/2;1]$, $D_n(t_j)$ is the imbalance between the two arms within the level $t_j$
of $T$ and, similarly, $D_n(w_l)$ represents the imbalance at the category $w_l$ of $W$.
At each step $n$, note that $\\sgn\\{ D_{n}(t_{j})\\}=\\sgn\\{n^{-1} D_{n}(t_{j})\\}$ where
\\begin{equation}\\label{joint}
n^{-1}D_{n}(t_{j})=\\sum_{l=0}^L \\left[2 \\pi_n(j,l) -1 \\right]\\hat{p}_{jln}, \\quad \\text{for any } j=0,\\ldots,J
\\end{equation}
and analogously for $D_{n}(w_{l})$.
Thus, allocation rule (\\ref{peS1}) corresponds to
\\begin{equation*}\\label{peS2}
\\varphi^{PS}_{jl}\\left(\\boldsymbol{\\pi}_n\\,;\\boldsymbol{S}_{n} \\right)=
\\begin{cases}
p & \\sum_{s=0}^{L} \\left[ \\pi_n(j,s) - \\frac{1}{2} \\right]\\hat{p}_{jsn} + \\sum_{r=0}^{J} \\left[ \\pi_n(r,l) - \\frac{1}{2} \\right] \\hat{p}_{rln} <0 \\\\
\\frac{1}{2} & \\sum_{s=0}^{L} \\left[ \\pi_n(j,s) - \\frac{1}{2} \\right] \\hat{p}_{jsn} + \\sum_{r=0}^{J} \\left[ \\pi_n(r,l) - \\frac{1}{2} \\right] \\hat{p}_{rln} =0, \\\\
1-p & \\sum_{s=0}^{L} \\left[ \\pi_n(j,s) - \\frac{1}{2} \\right] \\hat{p}_{jsn} + \\sum_{r=0}^{J} \\left[ \\pi_n(r,l) - \\frac{1}{2} \\right] \\hat{p}_{rln} >0 \\\\
\\end{cases}
\\end{equation*}
(here $s$ and $r$ denote the summation indices over the levels of $W$ and $T$, respectively, to avoid any clash with the stratum $(t_j,w_l)$ of the incoming subject) and therefore the problem consists in finding the vectorial generalized downcrossing of
$\\boldsymbol{\\varphi}^{PS} (\\boldsymbol{\\pi}_n;\\boldsymbol{S}_{n})=[\\varphi^{PS}_{jl}(\\boldsymbol{\\pi}_n; \\boldsymbol{S}_{n}): j =0,\\ldots,J; l = 0, \\ldots,L]$. Since at each step $n$, $\\varphi^{PS}_{jl}(\\boldsymbol{\\pi}_n; \\boldsymbol{S}_{n})$ is decreasing in $\\pi_{n}(j,l)$ for any $j =0,\\ldots,J$ and
$l = 0, \\ldots,L$, then the vectorial generalized downcrossing is unique. It is
straightforward to see that
$\\boldsymbol{\\varphi}^{PS}(\\boldsymbol{t}^{B};\\boldsymbol{\\varsigma})=\\boldsymbol{t}^{B}$ for every $n$ and thus
$\\lim_{n\\rightarrow \\infty} \\boldsymbol{\\pi}_n=\\boldsymbol{t}^{B}$ a.s.; a small simulation sketch of this minimization rule is given below.
\\end{ex}
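To make the mechanics of rule (\\ref{peS1}) concrete, the following minimal sketch (in Python; the two binary covariates, the uniform covariate distribution and the choice $p=0.8$ are merely illustrative and not part of the original formulation) simulates Pocock and Simon's variance method and reports the marginal imbalances:
\\begin{verbatim}
import random

J, L, p = 2, 2, 0.8            # levels of T and W, and the bias p
D_T = [0] * J                  # marginal imbalances D_n(t_j)
D_W = [0] * L                  # marginal imbalances D_n(w_l)

for n in range(100000):
    j, l = random.randrange(J), random.randrange(L)   # covariate profile
    total = D_T[j] + D_W[l]
    prob_A = p if total < 0 else (0.5 if total == 0 else 1 - p)
    sign = 1 if random.random() < prob_A else -1      # +1: A, -1: B
    D_T[j] += sign
    D_W[l] += sign

print(D_T, D_W)   # both margins remain of much smaller order than n
\\end{verbatim}
The marginal imbalances stay negligible relative to the number of assignments, the finite-sample counterpart of the asymptotic marginal balance established above.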
\\begin{ex}
In order to include minimization methods and stratified randomization procedures in a unique framework, Hu and Hu (2012)
have recently suggested assigning subject $(n+1)$ belonging to the stratum $(t_{j},w_{l})$ to $A$ with probability
\\begin{equation}\\label{huhuproc}
\\Pr (\\delta _{n+1}=1\\mid \\Im _{n},\\mathbf{Z}_{n+1}=(t_{j},w_{l}))=
\\begin{cases}
p & \\bar{D}_{n}(j,l)<0 \\\\
\\frac{1}{2} & \\bar{D}_{n}(j,l)=0\\,, \\\\
1-p & \\bar{D}_{n}(j,l)>0 \\\\
\\end{cases}
\\end{equation}
where the overall measure of imbalance
\\begin{equation*}
\\bar{D}_{n}(j,l)= \\omega _{g}D_{n}+\\omega _{T}D_{n}(t_{j})+\\omega
_{W}D_{n}(w_{l})+\\omega _{s}D_{n}(j,l)
\\end{equation*}
is a weighted average of the three types of imbalances actually observed (global, marginal and within-stratum), with non-negative weights $\\omega
_{g}$ (global), $\\omega _{T}$ and $\\omega _{W}$ (covariate marginal) and $\\omega _{s}$ (stratum) chosen such that $\\omega _{g}+\\omega _{T}+\\omega
_{W}+\\omega _{s}=1$.

By choosing the weights $\\omega_g$, $\\omega_T$, $\\omega_W$ such that
\\begin{equation}\\label{condcin2}
(JL+J+L)\\omega_g+J\\omega_W+L\\omega_T<1\/2,
\\end{equation}
the authors proved that the probabilistic structure of the within stratum imbalance is that of a positive recurrent Markov chain and this implies that procedure (\\ref{huhuproc}) is asymptotically balanced, both marginally and jointly. However, as stated by the authors, only strictly positive choices of the stratum weight $\\omega _{s}$ satisfy (\\ref{condcin2}), and thus their result cannot be applied to Pocock and Simon's minimization method.

The asymptotic behaviour of Hu and Hu's design can be illustrated in a different way by applying Theorem \\ref{thm4}. Since $\\sgn \\{\\bar{D}_{n}(j,l) \\}=\\sgn \\{n^{-1}\\bar{D}_{n}(j,l) \\}$ and
\\begin{equation}\\label{marginal}
n^{-1}D_n=2\\pi_n-1=\\sum_{j=0}^J\\sum_{l=0}^L \\left[2\\pi_n(j,l)-1\\right] \\hat{p}_{jln},
\\end{equation}
from (\\ref{joint}) it follows that
\\begin{equation*}
\\begin{split}
 & \\sgn \\{n^{-1}\\bar{D}_{n}(j,l) \\}= \\sgn \\{\\omega _{g}\\sum_{r=0}^J\\sum_{s=0}^L \\left[\\pi_n(r,s)- \\frac{1}{2}\\right] \\hat{p}_{rsn}+ \\\\
& \\omega _{T}\\sum_{s=0}^L \\left[\\pi_n(j,s)- \\frac{1}{2}\\right] \\hat{p}_{jsn} +
\\omega_{W} \\sum_{r=0}^J \\left[\\pi_n(r,l)- \\frac{1}{2}\\right] \\hat{p}_{rln}+
\\omega _{s}\\left[\\pi_n(j,l)- \\frac{1}{2}\\right] \\hat{p}_{jln} \\}.
\\end{split}
\\end{equation*}
Thus, at each step $n$ procedure (\\ref{huhuproc}) corresponds to an allocation rule $\\varphi^{HH}_{jl}\\left(\\boldsymbol{\\pi}_n\\,;\\boldsymbol{S}_{n} \\right)$
which is decreasing in $\\pi_{n}(j,l)$ for any $j =0,\\ldots,J$ and
$l = 0, \\ldots,L$. Since $\\boldsymbol{\\varphi}^{HH}(\\boldsymbol{t}^{B};\\boldsymbol{\\varsigma})=\\boldsymbol{t}^{B}$,
then the unique vectorial generalized downcrossing is $\\boldsymbol{t}^{B}$ for any $n$ and therefore $\\lim_{n\\rightarrow \\infty} \\boldsymbol{\\pi}_n=\\boldsymbol{t}^{B}$ a.s.
\\end{ex}
By the same arguments, the convergence to balance of several extensions of minimization methods (see e.g.
\\cite{Her05,Signor93}) can easily be proved, since at each step $n$ every type of imbalance (global, marginal and within-stratum) is a linear combination of the allocation proportions $\\pi_n(j,l)$'s.

\\begin{ex}\\label{exatk}
Assume the linear homoscedastic model without treatment\/covariate interaction in the form
\\begin{equation}\\label{linearsenza}
E(Y_{i}) =\\delta _{i}\\ \\mu _{A}+(1-\\delta _{i})\\ \\mu _{B}+ \\widetilde{f}(\\boldsymbol{z}_{i})^{t} \\boldsymbol{\\beta}, \\qquad i\\geq1,
\\end{equation}
where $\\widetilde{f}(\\cdot)$ is a known vector function and $\\boldsymbol{\\beta}$ is a vector of common regression parameters.
Setting $\\mathcal{F}_n=\\left[\\widetilde{f}(\\boldsymbol{z}_{i})^{t}\\right]$ and $\\mathds{F}_n=[\\mathbf{1}_n\\,:\\, \\mathcal{F}_n]$,
Atkinson \\cite{Atk82} introduced his biased coin design by assigning the $(n+1)$st patient to $A$ with probability
\\begin{equation}\\label{atkinsondes}
\\begin{split}
& \\Pr(\\delta_{n+1}=1\\mid \\Im_n, \\boldsymbol{Z}_{n+1})=
\\\\&
 \\frac{\\{1-(1; \\widetilde{f} (\\boldsymbol{z}_{n+1})^t )
(\\mathds{F}_n^t \\mathds{F}_n)^{-1} \\boldsymbol{b}_n \\}^2}{\\{1-(1; \\widetilde{f} (\\boldsymbol{z}_{n+1})^t )
(\\mathds{F}_n^t \\mathds{F}_n)^{-1} \\boldsymbol{b}_n \\}^2+\\{1+(1; \\widetilde{f} (\\boldsymbol{z}_{n+1})^t )
( \\mathds{F}_n^t \\mathds{F}_n)^{-1} \\boldsymbol{b}_n \\}^2},
\\end{split}
\\end{equation}
where $\\boldsymbol{b}_n^t=(2\\delta_1-1, \\ldots,2\\delta_n-1)\\mathds{F}_n$ is usually called the imbalance vector.

As shown in \\cite{Baz11}, in the presence of all interactions among covariates we obtain
$$\\boldsymbol{b}_n^t=(D_n,D_n(t_1),\\ldots,D_n(t_J), D_n(w_1),\\ldots,D_n(w_L),D_n(1,1),\\ldots,D_n(J,L))$$
and Atkinson's procedure (\\ref{atkinsondes}) becomes a stratified randomization rule with
\\begin{equation}\\label{DABCD}
\\Pr(\\delta_{n+1}=1\\mid \\Im_n,\\boldsymbol{Z}_{n+1}=(t_j,w_l))=\\frac{ \\left( 1- \\frac{D_n(j,l)}{N_n(j,l)}\\right)^2 }{\\left( 1- \\frac{D_n(j,l)}{N_n(j,l)}\\right)^2+\\left( 1+ \\frac{D_n(j,l)}{N_n(j,l)}\\right)^2}\\,.
\\end{equation}
Clearly, procedure (\\ref{DABCD}) corresponds
to
\\begin{equation*}
\\varphi_{jl}\\left(\\boldsymbol{\\pi}_n\\,;\\boldsymbol{S}_{n} \\right)=\\frac{ \\left[ 1- \\pi_n(j,l)\\right]^2 }{\\left[ 1- \\pi_n(j,l)\\right]^2+\\pi_n(j,l)^2},
\\end{equation*}
so (\\ref{stratrand}) holds; thus, by Theorem \\ref{thm4}, $\\lim_{n\\rightarrow \\infty} \\boldsymbol{\\pi}_n=\\boldsymbol{t}^{B}$.

\\noindent
When the model is not full, then $\\boldsymbol{b}_n$ contains
all the imbalance terms corresponding to the included
interactions. Thus, from (\\ref{joint}) and (\\ref{marginal}), $(1;\\widetilde{f} (\\boldsymbol{z}_{n+1})^t )
(\\mathds{F}_n^t \\mathds{F}_n)^{-1} \\boldsymbol{b}_n$ is a linear function of the allocation proportion $\\boldsymbol{\\pi}_n$, so that Theorem \\ref{thm4} can be applied by the previous arguments.
\\end{ex}

","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}

One of the most important problems in the theory of vertex operator
algebra is to classify the rational vertex operator algebras. It is
not realistic to achieve this goal at this stage due to the limited
knowledge of the structure theory and representation theory.
If a vertex operator algebra is rational then the
central charge $c$ and effective central charge $\\tilde{c}$ are
rational (cf. \\cite{AM}, \\cite{DLM2}). While the central
charge can be negative, the effective central charge is always
nonnegative \\cite{DM2}. In this paper we classify the rational
vertex operator algebras with $c=\\tilde{c}<1$, although we cannot write down
the results explicitly.

It is well known that one can construct vertex operator algebras
associated to highest weight modules for the Virasoro algebra \\cite{FZ}.
In particular, each irreducible highest weight module $L(c,0)$ for any complex
number $c$ is a simple vertex operator algebra. Moreover, $L(c,0)$ is rational
if and only if $c=c_{p,q}=1-6(p-q)^2\/pq$ for coprime positive integers
$p,q$ with $1<q<p$, and $L(c,0)$ is unitary if and only if $c=c_{p,p+1}$ for some $p>1$ or $c\\geq 1$ (see \\cite{FQS} and \\cite{GKO}).
Our classification result says that any simple, rational and $C_2$-cofinite
vertex operator algebra with $c={\\tilde{c}}<1$ is an extension of
the Virasoro vertex operator algebra $L(c_{p,q},0)$ for some $p,q.$
That is, such a vertex operator algebra is a finite direct sum of irreducible
$L(c_{p,q},0)$-modules.

The main idea is to use the modular invariance of the $q$-characters of the
modules (see \\cite{Z} and \\cite{DLM2}) to control the growth of the graded
dimensions of the vertex operator algebra. The same idea has been used to
classify the holomorphic vertex
operator algebras with small central charges \\cite{DM1}, to prove the
nonnegative property of the effective central charges \\cite{DM2},
and to obtain some uniqueness results on the moonshine vertex operator algebra
$V^{\\natural}$ \\cite{DGL}. The modular invariance property of
the $q$-characters of the modules is also the reason why we use the effective
central charges instead of central charges (see Lemma \\ref{l2.1} below).
We should point out that we do not assume that the vertex operator
algebra is a unitary representation for the Virasoro algebra, nor that
$\\sum_{i}|\\chi_i(q)|^2$ is a modular function over the full modular group,
where $\\chi_i(q)$ are the $q$-characters of the irreducible modules of
the vertex operator algebra (see Section 2).
\n\nAs a corollary of the main result we prove that for any simple, rational
and $C_2$-cofinite vertex operator algebra with $c={\\tilde{c}}$, the vertex operator subalgebra
generated by the Virasoro vector is simple. That is, the vertex operator
subalgebra is an irreducible highest weight module for the Virasoro algebra.

It is worth mentioning that we do not have an explicit list of such
vertex operator algebras. An eventual classification
requires constructing all extensions of $L(c_{p,q},0)$ for all $p,q.$
In the case that
$c=c_{p,p+1}$, the extensions of $L(c_{p,p+1},0)$ have been classified
in the theory of conformal nets (an analytical approach to conformal
field theory) \\cite{KL} (also see \\cite{X}). Although it is believed that such
a classification result is valid in the theory of vertex operator
algebra, most such extensions have not been constructed in the context of
vertex operator algebra except for a few examples from the code vertex operator
algebras and lattice vertex operator algebras.

\\section{Rational vertex operator algebras}
\\setcounter{equation}{0}

In this section, we review some basic facts on the $q$-characters
of modules for a rational vertex operator algebra. The main
feature of these functions is the modular invariance property
\\cite{Z}, and its connection with the vector-valued modular forms
\\cite{KM}. This connection is the key for us to estimate the growth of
the graded dimensions of the vertex operator algebra and its modules.
We will also discuss the effective central charge ${\\tilde{c}}.$

We assume that the vertex operator algebra $V$ is simple and
is of CFT type. That is,
\\begin{equation}\\label{2.1}
V = \\bigoplus_{n=0}^{\\infty}V_n
\\end{equation}
and moreover $V_0$ is spanned by the vacuum vector ${\\bf 1}$. Following \\cite{DLM1}, $V$ is called rational if the admissible module category is semisimple. $V$ is
called $C_2$-cofinite if $V\/C_2(V)$ is finite dimensional \\cite{Z},
where $C_2(V)=\\<u_{-2}v\\,|\\,u,v\\in V\\>.$

A rational vertex operator algebra $V$ has only
finitely many irreducible modules $M^{1}=V, M^{2}, \\ldots, M^{r}$ up to
isomorphism such that
$$M^i=\\oplus_{n\\geq 0}M^i_{\\lambda_i+n}$$
where $\\lambda_i$ is a rational number and $M^i_{\\lambda_i}\\ne 0$ (see \\cite{DLM1},
\\cite{DLM2}). Moreover each homogeneous subspace
$M^i_{\\lambda_i+n}$ is finite dimensional.
\nLet $\\lambda_{min}$ be the minimum among the $\\lambda_i.$ The
effective central charge $\\tilde{c}$, which appeared
in the physics literature \\cite{GN}, is defined by ${\\tilde{c}}=c - 24 \\lambda_{min}.$ One of the main results in \\cite{DM2} is that
$\\tilde{c}$ is nonnegative, and $\\tilde{c}=0$ if and only if $V=\\Bbb C$ is trivial.

For each $i$ we define the $q$-character of $M^i$ as
$$\\chi_i(q)=ch_qM^i={\\rm tr}_{M^i}q^{L(0)-c\/24}=\\sum_{n\\geq 0}(\\dim M^i_{n+\\lambda_i})q^{n-c\/24}.$$ It is proved in \\cite{Z} (also see \\cite{DLM2}) that if $V$ is
rational and $C_2$-cofinite then each $\\chi_i(q)$ is
a holomorphic function on the upper half plane ${\\Bbb H}$
where $q=e^{2\\pi i\\tau}$ and the span
of these functions affords a representation of the modular group $SL(2,\\Bbb Z).$
For short we also write $\\chi_i(\\tau)$ for $\\chi_i(q).$
Then there exists a group homomorphism $\\rho$ from $SL(2,\\Bbb Z)$ to $GL(r,\\Bbb C)$
such that for any $\\gamma\\in SL(2,\\Bbb Z),$
$$\\chi_i(\\gamma\\tau)=\\sum_j\\gamma_{ij}\\chi_j(\\tau)$$
where $\\rho(\\gamma)=(\\gamma_{ij}).$
This exactly says that $\\chi(\\tau)=(\\chi_1(\\tau),\\cdots,\\chi_r(\\tau))$
is a (meromorphic) vector-valued modular function \\cite{KM}.

Recall the Dedekind eta function
$$\\eta(\\tau) = q^{1\/24}\\phi(q) = q^{1\/24}\\prod_{n=1}^{\\infty}(1 - q^n)$$
 and the expansion
$$\\frac{1}{\\prod_{n\\geq 1}(1-q^n)} =\\sum_{n=0}^{\\infty}p(n) q^n$$
where $p(n)$ is the usual unrestricted partition function. An asymptotic
expression for $p(n)$ is given by
$$p(n)\\sim \\frac{e^{\\pi\\sqrt{2n\/3}}}{4n\\sqrt{3}}$$
as $n\\to \\infty$ (cf. \\cite{A}).
It is clear that $p(n)$
grows faster than $n^{\\alpha}$ for any fixed real number $\\alpha.$

The function $\\eta(\\tau)$ is a modular form of weight $1\/2.$ Since
$\\eta(\\tau)^{\\tilde{c}}\\chi_i(\\tau)$ is
holomorphic at $\\tau=i\\infty,$
 $$\\eta(\\tau)^{\\tilde{c}}\\chi(\\tau)=(\\eta(\\tau)^{\\tilde{c}}\\chi_1(\\tau),\\cdots,\\eta(\\tau)^{\\tilde{c}}\\chi_r(\\tau))$$
is a holomorphic vector-valued modular form of weight $\\tilde{c}\/2.$
From \\cite{KM} the Fourier
coefficients $a_n$ of a component of a
holomorphic vector-valued modular form
satisfy the growth condition $a_n = O(n^{\\alpha})$
for a constant $\\alpha$ independent of $n$. As a result, we see that
\\begin{lem}\\label{l2.1} The Fourier coefficients of each component of
$\\eta(\\tau)^{{\\tilde{c}}}\\chi(\\tau)$ satisfy a polynomial growth condition
$a_n = O(n^{\\alpha})$.
\\end{lem}
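The tension that drives the argument of the next section is already visible numerically: the partition numbers $p(n)$ outgrow any polynomial, while Lemma \\ref{l2.1} forces polynomial growth. The following minimal sketch (in Python; the cutoffs are arbitrary) computes the coefficients of $\\prod_{n\\geq 1}(1-q^n)^{-1}$ by the standard recursive sieve and compares them with a sample polynomial bound:
\\begin{verbatim}
N = 2000
# coefficients of 1/prod_{n>=1}(1-q^n): p[m] = number of partitions of m
p = [1] + [0] * N
for n in range(1, N + 1):          # multiply by 1/(1-q^n)
    for m in range(n, N + 1):
        p[m] += p[m - n]

for m in (100, 500, 1000, 2000):
    print(m, p[m], m ** 10)        # p(m) overtakes m^10 around m = 1000
\\end{verbatim}
Any fixed power $m^{\\alpha}$ is eventually dwarfed in the same way, in agreement with the asymptotic formula for $p(n)$ above.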
\\section{Vertex operator algebras with $c<1$}
\\setcounter{equation}{0}

In this section we will prove the main result. We assume that
$V$ is a rational and $C_2$-cofinite vertex operator algebra with
$c=\\tilde{c}<1.$ It is proved in \\cite{DM2} that ${\\tilde{c}}$ is always
nonnegative. If $c=0$ then $V=\\Bbb C.$ So we assume that $c>0.$

We first need information on the vertex
operator algebras associated to the highest weight modules for the Virasoro
algebra (see \\cite{FF}, \\cite{FZ}, \\cite{W}).
We use the standard basis $\\{L_n,C|n\\in\\Bbb Z\\}$ for the Virasoro algebra.
For any two complex numbers $c,h$ we denote the Verma module with
central charge $c$ and highest weight $h$ by $V(c,h),$ as usual.
Let $\\bar{V}(c,0)$ be the quotient of $V(c,0)$ modulo the submodule
generated by $L_{-1}v$, where $v$ is a nonzero highest weight vector
of $V(c,0)$ with highest weight $0.$ We use $L(c,h)$ to denote the irreducible
quotient of $V(c,h).$

We have already defined the $q$-character ${\\rm ch}_qM$ if $M$ is a module for any
vertex operator algebra. We now extend this definition to any module
for the Virasoro algebra with finite dimensional homogeneous subspaces
and central charge $c.$ In general the $q$-character is just a formal power
series in $q.$ Note that ${\\rm ch}_q \\bar{V}(c, 0)=\\frac{q^{-c\/24}}{\\prod_{n>1}(1-q^n)}$ and its coefficients grow faster than $n^{\\alpha}$ for any fixed
real number $\\alpha.$

\\begin{lem}\\label{la} For any
$\\mu>0$ the coefficients of $\\frac{1}{\\prod_{n>1}(1-q^n)^{\\mu}}$ grow faster
than any polynomial $n^\\alpha.$
\\end{lem}

\\noindent {\\bf Proof: \\,} Observe that the coefficient ${-\\mu\\choose i}(-1)^i$ of $q^i$ in the expansion of $(1-q)^{-\\mu}$ is always positive for any $\\mu >0.$ Assume that the coefficients of
$$\\frac{1}{\\prod_{n>1}(1-q^n)^{\\mu}}=\\sum_{n\\geq 0}a_nq^n$$
satisfy the polynomial growth condition. Then
the coefficients of $\\frac{1}{\\prod_{n>1}(1-q^n)^{k\\mu}}$ satisfy the
polynomial growth condition for any positive integer $k.$ But if $k$ is large
enough then $k\\mu>1$ and the coefficients of
 $\\frac{1}{\\prod_{n>1}(1-q^n)^{k\\mu}}$ grow faster than
$n^{\\alpha}$ for any real number $\\alpha.$
 This is a contradiction. \\mbox{ $\\square$}
\\bigskip

Here are some basic facts about these modules
\\cite{FF}, \\cite{FQS}, \\cite{GKO}, \\cite{FZ}, \\cite{W}.

\\begin{prop}\\label{vir} Let $c$ be a complex number. Then the following hold:

(i) $\\bar{V}(c,0)$ is a vertex operator algebra and $L(c,0)$ is a simple
vertex operator algebra.

(ii) The following are equivalent: (a) $\\bar{V}(c, 0) = L(c, 0),$
(b) $c\\ne c_{p,q}=1-6(p-q)^2\/pq$ for all coprime
positive integers $p,q$ with $1<q<p.$ In this case ${\\rm ch}_qL(c,0)=\\frac{q^{-c\/24}}{\\prod_{n>1}(1-q^n)}$ and its
coefficients grow faster than any polynomial in $n.$

(iii) The following are equivalent: (a) $\\bar{V}(c, 0) \\ne L(c, 0),$
(b) $c=c_{p,q}$ for some $p,q,$ (c) $L(c,0)$ is rational.
\\end{prop}

We now return to the vertex operator algebra $V.$
Let $U=\\<\\omega\\>$ be the vertex operator subalgebra of $V$ generated by the Virasoro element. Then there are
two possibilities: either $U$ is isomorphic to $\\bar{V}(c,0)$ or to $L(c,0)$,
by the structure theory for these modules \\cite{FF}.

The following is the key lemma.
\\begin{lem}\\label{kl} Assume that $c<1.$ Then the $q$-character ${\\rm ch}_qU$ of $U$ is different
from $\\frac{q^{-c\/24}}{\\prod_{n>1}(1-q^n)}.$
\\end{lem}

\\noindent {\\bf Proof: \\,} We prove by contradiction.
Suppose that
${\\rm ch}_qU$ is equal to $\\frac{q^{-c\/24}}{\\prod_{n>1}(1-q^n)}.$ Then
$$ \\eta(q)^c {\\rm ch}_qU =\\frac{(1-q)^c}{\\prod_{n>1}(1-q^n)^{1-c}}.$$
By Lemma \\ref{la} the coefficients of $\\frac{1}{\\prod_{n>1}(1-q^n)^{1-c}}$ grow faster than
any polynomial in $n.$ Set
$$\\eta(q)^c{\\rm ch}_qV=\\sum_{n\\geq 0}b_nq^n.$$ By Lemma \\ref{l2.1}, the
coefficients $b_n$ satisfy a polynomial growth condition.

Let
$f(q)=(1-q)^{-1}\\eta(q)^c{\\rm ch}_qV=\\sum_{n\\geq 0}c_nq^n.$ Then
$$c_n=\\sum_{i=0}^nb_i$$ for all $n\\geq 0.$ Since the $b_n$ satisfy a
polynomial growth condition, there exist positive constants $C$ and $\\alpha$
such that $|b_n|\\leq Cn^{\\alpha}$ for all $n.$ As a result,
$|c_n|\\leq C(n+1)n^{\\alpha}\\leq 2Cn^{\\alpha+1}$ for $n>0.$
That is, the coefficients
$c_n$ also satisfy the polynomial growth condition.

Since $U$ is a subspace of $V$ we see that $\\dim U_n\\leq \\dim V_n$ for all
$n.$ This implies that ${\\rm ch}_qU\\leq {\\rm ch}_qV$ and
$\\eta(q)^c {\\rm ch}_qU\\leq \\eta(q)^c{\\rm ch}_qV$ as real numbers
for any $q\\in (0,1).$ Thus $\\frac{(1-q)^c}{\\prod_{n>0}(1-q^n)^{1-c}}\\leq f(q)$
for all $q\\in (0,1).$ But this is impossible as the coefficients
of $\\frac{1}{\\prod_{n>1}(1-q^n)^{1-c}}$ grow faster than any polynomial
in $n.$
The proof is complete. \\mbox{ $\\square$}

\\begin{coro}\\label{c} Let $V$ be a simple, rational and $C_2$-cofinite
vertex operator
algebra with central charge $c=\\tilde{c}<1.$ Then the vertex operator
algebra $U$ generated by the Virasoro element $\\omega$ is simple and
$c=c_{p,q}$ for some coprime $p,q$ such that $1<q<p.$
\\end{coro}

\\noindent {\\bf Proof: \\,} If $U$ is not simple, then $U$ is isomorphic to $\\bar{V}(c,0)$ and
$${\\rm ch}_qU=\\frac{q^{-c\/24}}{\\prod_{n>1}(1-q^n)}$$
which contradicts Lemma \\ref{kl}.

So we can assume that $c=c_{p,q}$ for coprime $p,q$ such that $1<q<p.$

","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"Let $\\bm{A}$ be the adjacency matrix of the graph, $\\bm{D}$ the diagonal matrix of vertex degrees $d_i$, and $\\bm{L}=\\bm{I}-\\bm{D}^{-1\/2}\\bm{A}\\bm{D}^{-1\/2}$ the normalised graph Laplacian; a natural analogue of the squared exponential kernel is then the \\emph{diffusion kernel}
\\begin{equation}\\label{eqn:diffkernel}
\\bm{C} = \\exp\\left(-\\frac{\\sigma^{2}}{2}\\bm{L}\\right),\\qquad \\sigma > 0,
\\end{equation}
where $\\sigma$ sets the length-scale of the kernel. Unlike in continuous spaces, the exponential in the diffusion kernel is costly to calculate. To avoid this, \\citet{Smola2003} proposed as a cheaper approximation the \\emph{random walk kernel}
\\begin{equation}\\label{eqn:randomwalkkernel}
\\bm{C} = \\left(\\bm{I} - a^{-1}\\bm{L}\\right)^{p} = \\left( (1-a^{-1})\\bm{I} +
a^{-1}\\bm{D}^{-1\/2}\\bm{A}\\bm{D}^{-1\/2}\\right)^{p},\\quad a>2,\\quad p\\in\\mathbb{N}.
\\end{equation}
This gives back the diffusion kernel in the limit $a,p\\to\\infty$ whilst keeping $p\/a=\\sigma^{2}\/2$ fixed. The random walk kernel derives its name from its use of random walks to express correlations between vertices. Explicitly, a binomial expansion of Equation~\\eqref{eqn:randomwalkkernel} gives
\\begin{equation}\\label{eqn:binomrandomwalk}
\\begin{split}
\\bm{C} &= \\sum_{q=0}^{p}\\binom{p}{q}(1-a^{-1})^{p-q}(a^{-1})^q(\\bm{D}^{-1\/2}\\bm{A}\\bm{D}^{-1\/2})^{q} \\\\&=
\\bm{D}^{-1\/2}\\sum_{q=0}^{p}\\binom{p}{q}(1-a^{-1})^{p-q}(a^{-1})^q(\\bm{A}\\bm{D}^{-1})^{q}\\bm{D}^{1\/2}.
\\end{split}
\\end{equation}
The matrix $\\bm{A}\\bm{D}^{-1}$ is a random walk transition matrix: $(\\bm{A}\\bm{D}^{-1})_{ij}$ is the probability of being at vertex $i$ after one random walk step starting from vertex $j$. Apart from the pre- and post-multiplication by $\\bm{D}^{-1\/2}$ and $\\bm{D}^{1\/2}$, the kernel $\\bm{C}$ is therefore a $q$-step random walk transition matrix, averaged over the number of steps $q$ distributed as
$q\\sim\\textrm{Binomial}(p,a^{-1})$; a minimal numerical sketch of this construction is given below.
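The following sketch (in Python with NumPy; the example graph and parameter values are arbitrary) builds the random walk kernel of \\eqref{eqn:randomwalkkernel} directly from the adjacency matrix:
\\begin{verbatim}
import numpy as np

# adjacency matrix of a small illustrative graph (a 4-cycle plus a chord)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)

a, p = 2.0, 10                       # kernel parameters
d = A.sum(axis=1)                    # vertex degrees
D_inv_sqrt = np.diag(d ** -0.5)
L = np.eye(len(d)) - D_inv_sqrt @ A @ D_inv_sqrt   # normalised Laplacian

C = np.linalg.matrix_power(np.eye(len(d)) - L / a, p)  # random walk kernel
print(np.diag(C))   # local prior variances before any normalisation
\\end{verbatim}
Even for this small graph the diagonal entries differ across vertices, anticipating the normalisation issues discussed in \\Sref{sec:kernelnorm} below.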
Equivalently one can interpret the random walk kernel as a $p$-step lazy random walk, where at each step the walker stays at the current vertex with probability $(1-a^{-1})$ and moves to a randomly chosen neighbouring vertex with probability $a^{-1}$.

Using either interpretation, one sees that $p\/a$ is the lengthscale over which the random walk can diffuse along the graph, and hence the lengthscale describing the typical maximum range of the random walk kernel. In the limit of large $p$, where this lengthscale diverges, the kernel should represent full correlation across all vertices. One can see that this is the case by observing that for large $p$, a random walk on a graph will approach its stationary distribution, $\\bm{p}_{\\infty}\\propto \\bm{D}\\bm{e}$, $\\bm{e}=(1,\\dots,1)^{T}$. The $q$-step transition matrix for large $q$ is therefore $(\\bm{A}\\bm{D}^{-1})^q \\approx \\bm{p}_{\\infty}\\bm{e}^{T} \\propto \\bm{D}\\bm{e}\\bm{e}^{T}$, representing the fact that the random walk becomes stationary independently of the starting vertex. This gives, for $p\\to\\infty$, the kernel $\\bm{C}\\propto \\bm{D}^{1\/2}\\bm{e}\\bm{e}^{T}\\bm{D}^{1\/2}$, that is, $C_{ij} \\propto d_i^{1\/2}d_{j}^{1\/2}$. This corresponds to full correlation across vertices as expected; explicitly, if $\\bm{f}$ is a Gaussian process on the graph with covariance matrix $\\bm{D}^{1\/2}\\bm{e}\\bm{e}^{T}\\bm{D}^{1\/2}$, then $\\bm{f}=v\\bm{D}^{1\/2}\\bm{e}$ with $v$ a single Gaussian degree of freedom.

We next consider how random walk kernels on graphs approach the fully correlated case, and show that even for `simple' graphs the convergence to this limit is non-trivial. Before we do so, we note an additional peculiarity of random walk kernels compared to their Euclidean counterparts: in addition to the maximum range lengthscale $p\/a$ discussed so far, they have a diffusive lengthscale $\\sigma=(2p\/a)^{1\/2}$, which is suggested for large $p$ and $a$ by the lengthscale of the corresponding diffusion kernel \\eqref{eqn:diffkernel}. This diffusive lengthscale will appear in our analysis of learning curves in the large $p$-limit (\\Sref{sec:scaling}).

\\subsection{The $d$-Regular Tree: A Concrete Example} \\label{sec:dregtree}
To begin our discussion of the dependence of the random walk kernel on the lengthscale $p\/a$, we first look at how this kernel behaves on a $d$-regular graph sampled uniformly from the set of all $d$-regular graphs. Here $d$-regular means that all vertices have degree $d_i=d$.
For a large enough number of vertices $V$, typical cycles in such a $d$-regular graph are also large, of length $O(\\log V)$, and can be neglected for calculation of the kernel when $V\\to\\infty$. We therefore begin by assuming the graph is an infinite tree, and assess later how the cycles that do exist on random $d$-regular graphs cause departures from this picture.

A $d$-regular tree is a graph where each vertex has degree $d$ with \\emph{no cycles}; it is unique up to permutations of the vertices. Since all vertices on the tree are equivalent, the random walk kernel $C_{ij}$ can only depend on the distance between vertices $i$ and $j$, that is, the smallest number of steps on the graph required to get from one vertex to the other.
Denoting the value of a $p$-step lazy random walk kernel for vertices a distance $l$ apart by $C_{l,p}$, we can determine these values by recursion over $p$ as follows:\n\\begin{equation}\\label{eqn:shelljump}\n\\begin{split}\nC_{l,p=0} &= \\delta_{l,0}, \\qquad \\gamma_{p+1}C_{0,p+1} = \\left(1-\\frac{1}{a}\\right)C_{0,p} + \\frac{1}{a}C_{1,p},\\\\\n\\gamma_{p+1}C_{l,p+1} &= \\frac{1}{ad}C_{l-1,p} + \\left(1-\\frac{1}{a}\\right)C_{l,p} + \\frac{d-1}{ad}C_{l+1,p}\\quad l\\geq1.\n\\end{split}\n\\end{equation}\nHere $\\gamma_{p}$ is chosen to achieve the desired normalisation of the prior variance for every $p$. We will normalise so that $C_{0,p}=1$.\n\nFigure \\ref{fig:clp} (left) shows the results obtained by iterating Equation~\\eqref{eqn:shelljump} numerically for a 3-regular tree with $a=2$. As expected the kernel becomes longer-ranged initially as $p$ is increased, but seems to approach a non-trivial limiting form. This can be calculated analytically and is given by (see Appendix \\ref{app:clplc})\n\\begin{equation}\\label{eqn:clpinfinity}\nC_{l,p\\to\\infty} = \\left[ 1+\\frac{l(d-2)}{d}\\right]\\frac{1}{(d-1)^{l\/2}}.\n\\end{equation}\nEquation \\eqref{eqn:clpinfinity} can be derived by taking the $\\sigma^2\\to\\infty$ limit of the integral expression for the diffusion kernel from \\citet{Chung1999} whilst preserving normalisation of the kernel (see Appendix \\ref{app:chunglimit} for further details). Alternatively the result (\\ref{eqn:clpinfinity}) can be obtained by rewriting the random walk in terms of shells, that is, grouping vertices according to distance $l$ from a chosen central vertex. The number of vertices in the $l$-th shell, or shell volume, is $v_l = d(d-1)^{l-1}$ for $l\\geq 1$ and $v_0=1$. Introducing $R_{l,p} = C_{l,p}\\sqrt{v_l}$, Equation~\\eqref{eqn:shelljump} can be written in the form\n\\begin{equation}\\label{eqn:Rshelljump}\n\\begin{split}\nR_{l,p=0} &= \\delta_{l,0}, \\qquad \\gamma_{p+1}R_{0,p+1} = \\left(1-\\frac{1}{a}\\right)R_{0,p} + \\frac{1}{a\\sqrt{d}}R_{1,p},\\\\\n\\gamma_{p+1}R_{l,p+1} &= \\frac{\\sqrt{d-1}}{ad}R_{l-1,p} + \\left(1-\\frac{1}{a}\\right)R_{l,p} + \\frac{\\sqrt{d-1}}{ad}R_{l+1,p}\\quad l\\geq1.\n\\end{split}\n\\end{equation}\nThis is just the un-normalised diffusion equation for a biased random walk on a one dimensional lattice with a reflective boundary at 0. This has been solved in \\citet{Monthus1996}, and mapping this solution back to $C_{l,p}$ gives \\eqref{eqn:clpinfinity} (see Appendix \\ref{app:lcscale} for further details).\n\nTo summarise thus far, the analysis on a $d$-regular tree shows that, for large $p$, the random walk kernel does not approach the expected fully correlated limit: because all vertices have the same degree this limit would correspond to $C_{l,p\\to\\infty}=1$. On the other hand, on a $d$-regular graph with any finite number $V$ of vertices, the fully correlated limit must necessarily be approached as $p\\to\\infty$. As a large regular graph is locally treelike, the difference must arise from the existence of long cycles in a regular graph.\n\nTo estimate when the existence of cycles will start to affect the kernel, consider first a $d$-regular tree truncated at depth $l$. This will have\n$V=1+\\sum_{i=1}^{l}d(d-1)^{i-1} = O(d(d-1)^{l-1})$ vertices. On a $d$-regular graph with the same number of vertices, we therefore expect to encounter cycles after a number of steps, taken along the graph, of order $l$. 
In the random walk kernel the typical number of steps is $p\/a$, so effects of cycles should appear once $p\/a$ becomes larger than\n\\begin{equation}\\label{eqn:p_over_a_limit}\n \\frac{p}{a} \\approx \\frac{\\log(V)}{\\log(d-1)}.\n\\end{equation}\nFigure \\ref{fig:treegraph} (right) shows a comparison between $C_{1,p}$ as calculated from Equation~\\eqref{eqn:shelljump} for a $3$-regular tree and its analogue on random $3$-regular graphs of finite size, which we call $K_{1,p}$. We define this analogue as the average of $C_{ij}\/\\sqrt{C_{ii}C_{jj}}$ over all pairs of neighbouring vertices on a fixed graph, averaged further over a number of randomly generated regular graphs. The square root accounts for the fact that local kernel values $C_{ii}$ can vary slightly on a regular graph because of cycles, while they are the same for all vertices of a regular tree. Looking at Figure \\ref{fig:treegraph} (right) one sees that, as expected from the arguments above, the nearest neighbour kernel value for the $3$-regular graph, $K_{1,p}$, coincides with its analogue $C_{1,p}$ on the $3$-regular tree for small $p$. When $p\/a$ crosses the threshold \\eqref{eqn:p_over_a_limit}, cycles in the regular graph become important and the two curves separate. For larger $p$, the kernel value for neighbouring vertices approaches the fully correlated limit $K_{1,p}\\to 1$ on a regular graph, while on a regular tree one has the non-trivial limit $C_{1,p}\\to 2\\sqrt{d-1}\/d$ from \\eqref{eqn:clpinfinity}.\n\\begin{figure}\n\\input{figs\/gnuplot\/clp.tex}\n\\caption{(Left) Random walk kernel $C_{l,p}$ on a 3-regular tree plotted against distance $l$ for increasing number of steps $p$ and $a=2$. (Right) Comparison between numerical results for the average nearest neighbour kernel $K_{1,p}$ on random 3-regular graphs with the result $C_{1,p}$ on a 3-regular tree, calculated numerically by iteration of \\protect\\eqref{eqn:shelljump}.\n}\\label{fig:clp}\\label{fig:treegraph}\n\\end{figure}\n\nIn conclusion of our analysis of random walk kernels, we have seen that these kernels have an unusual dependence on their lengthscale $p\/a$. In particular, kernel values for vertices a short distance apart can remain significantly below the fully correlated limit, even if $p\/a$ is large. That limit is approached only once $p\/a$ becomes larger than the graph size-dependent threshold \\eqref{eqn:p_over_a_limit}, at which point cycles become important. We have focused here on random regular graphs, but the same qualitative behaviour should be observed also on graphs with a non-trivial distribution of vertex degrees $d_i$.\n\n\\section{Learning Curves for Gaussian Process Regression}\\label{sec:lc}\nHaving reached a better understanding of the random walk kernel we now study its application in machine learning. In particular we focus on the use of the random walk kernel for regression with Gaussian processes. We will begin, for completeness, with an introduction to GPs for regression. For a more comprehensive discussion of GPs for machine learning we direct the reader to \\citet{Rasmussen2005}.\n\n\\subsection{Gaussian Process Regression: Kernels as Covariance Functions}\n\nGaussian process regression is a Bayesian inference technique that constructs a posterior distribution over a function space, $P(f|\\x,\\y)$, given training input locations $\\x=(x_{1},\\ldots,x_{N})\\T$ and corresponding function value outputs $\\y=(y_{1},\\ldots,y_{N})\\T$. 
The posterior is constructed from a prior distribution $P(f)$ over the function space and the likelihood $P(\\y|f,\\x)$ to generate the observed output values from function $f$ by using Bayes' theorem\n\\begin{equation}\nP(f|\\x,\\y) = \\frac{P(\\y|f,\\x)P(f)}{\\int \\rmd f' P(\\y|f',\\x)P(f') }.\n\\end{equation}\nIn the GP setting the prior is chosen to be a Gaussian process, where any finite number of function values has a joint Gaussian distribution, with a covariance matrix with entries given by a \\emph{covariance function} or kernel $C(x,x')$ and with a mean vector with entries given by a \\emph{mean function} $\\mu(x)$. For simplicity we will focus on zero mean GPs\\footnote{In the discussion and analysis that follows, generalisation to non-zero mean GPs is straightforward.} and a Gaussian likelihood, which amounts to assuming that training outputs are corrupted by independent and identically distributed Gaussian noise. Under these assumptions all distributions are Gaussian and can be calculated explicitly. If we assume we are given training data $\\{(x_{\\mu},y_{\\mu})|\\mu=1,\\ldots,N\\}$ where $y_\\mu$ is the value of the target or `teacher' function at input location $x_{\\mu}$, corrupted by additive Gaussian noise with variance $\\sigma^{2}$, the posterior distribution is then given by another Gaussian process with mean and covariance functions\n\\begin{align}\n \\bar{f}(x) &=\\bm{k}(x)\\T\\bm{K}^{-1}\\bm{y},\\label{eqn:GPmean}\\\\\n \\CoVar(x,x') &=\nC(x,x')-\\bm{k}(x)\\T\\bm{K}^{-1}\\bm{k}(x'),\\label{eqn:GPcovariance}\n\\end{align}\nwhere $\\bm{k}(x) = (C(x_{1},x),\\ldots,C(x_{N},x))\\T$ and $K_{\\mu\\nu} = C(x_{\\mu},x_{\\nu})+ \\delta_{\\mu\\nu}\\sigma^{2}$. With the posterior in the form of a Gaussian process, predictions are simple. Assuming a squared loss function, the optimal prediction of the outputs is given by $\\bar{f}(x)$ and a measure of uncertainty in the prediction is provided by $\\CoVar(x,x)^{1\/2}$.\n\nEquations \\eqref{eqn:GPmean} and \\eqref{eqn:GPcovariance} illustrate that, in the setting of GP regression, kernels are used to change the type of function preferred by the Gaussian process prior, and correspondingly the posterior. The kernel can encode prior beliefs about smoothness properties, lengthscale and expected amplitude of the function we are trying to predict. Of particular importance for the discussion below, $C(x,x)$ gives the prior variance of the function $f$ at input $x$, so that $C(x,x)^{1\/2}$ sets the typical function \\emph{amplitude} or \\emph{scale}.\n\n\\subsection{Kernel Normalisation}\\label{sec:kernelnorm}\nConventionally one fixes the desired scale of the kernel using a \\emph{global normalisation}: denoting the unnormalised kernel by $\\hat{C}(x,x')$ one scales $C(x,x')= \\hat{C}(x,x')\/\\kappa$ to achieve a desired average of $C(x,x)$ across input locations $x$. In Euclidean spaces one typically uses translationally invariant kernels like the squared exponential kernel. For these, $C(x,x)$ is the same for all input locations $x$ and so global normalisation is sufficient to fix a spatially uniform scale for the prior amplitude. In the case of kernels on graphs, on the other hand, the local connectivity structure around each vertex can be different. Since information about correlations `propagates' only along graph edges, graph kernels are not generally translation invariant. In particular, there can be large variation among the prior variances at different vertices. 
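This variation is easy to exhibit numerically. The sketch below (Python with NumPy; the Erd\\H{o}s-R\\'enyi parameters are illustrative only) computes the globally normalised random walk kernel on a small random graph and prints the range of local prior variances $C_{ii}$:
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
V, mean_deg, a, p = 200, 3.0, 2.0, 10

# Erdos-Renyi graph: each edge present independently with prob mean_deg/V
A = (rng.random((V, V)) < mean_deg / V).astype(float)
A = np.triu(A, 1)
A = A + A.T

d = A.sum(axis=1)
d[d == 0] = 1.0                  # isolated vertices: degree factor irrelevant
D_inv_sqrt = np.diag(d ** -0.5)
M = (1 - 1/a) * np.eye(V) + (1/a) * D_inv_sqrt @ A @ D_inv_sqrt
C = np.linalg.matrix_power(M, p)          # unnormalised random walk kernel

C = C / np.mean(np.diag(C))               # global normalisation: mean C_ii = 1
print(np.diag(C).min(), np.diag(C).max())  # wide spread of prior variances
\\end{verbatim}
Even at this small size the spread is substantial.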
This is usually undesirable in a probabilistic model, unless one has strong prior knowledge to justify such variation. For the random walk kernel, the local prior variances are the diagonal entries of Equation~\\eqref{eqn:binomrandomwalk}. These are directly related to the probability of return of a lazy random walk on a graph, which depends sensitively on the local graph structure. This dependence is in general non-trivial, and not just expressible through, for example, the degree of the local vertex. It seems difficult to imagine a scenario where such a link between prior variances and local graph structures could be justified by prior knowledge.\n\nTo emphasise the issue, Figure \\ref{fig:poissonvariance} shows examples of distributions of local prior variances $C_{ii}$ for random walk kernels globally normalised to an average prior variance of unity.\\footnote{We use $C_{ii}$ again here, instead of $C(i,i)$ as in our general discussion of GPs; the subscript notation is more intuitive because the covariance function on a graph is just a $V\\times V$ matrix.} The distributions are peaked around the desired value of unity but contain many `outliers' from vertices with abnormally low or high prior variance. Figure \\ref{fig:poissonvariance} (left) shows the distribution of $C_{ii}$ for a large single instance of an Erd\\H{o}s-R\\'enyi random graph \\citep{Erdos1959}. In such graphs, each edge is present independently of all others with some fixed probability, giving a Poisson distribution of degrees $p_{\\lambda}(d) = \\lambda^{d}\\exp(-\\lambda)\/d!$; for the figure we chose average degree $\\lambda=3$. Figure \\ref{fig:powerlawvariance} (right) shows analogous results for a generalised random graph with power law mixing distribution \\citep{Britton2006}. Generalised random graphs are an extension of Erd\\H{o}s-R\\'enyi random graphs where different edges are assigned different probabilities of being present. By appropriate choice of these probabilities \\citep{Britton2006}, one can generate a degree distribution that is a superposition of Poisson distributions, $p(d)=\\int \\rmd \\lambda\\, p_{\\lambda}(d)p(\\lambda)$. We have taken a shifted Pareto distribution, $p(\\lambda)=\\alpha\\lambda^{\\alpha}_{m}\/\\lambda^{\\alpha+1}$ with exponent $\\alpha=2.5$ and lower cutoff $\\lambda_m=2$ for the distribution of the means.\n\nLooking first at Figure \\ref{fig:poissonvariance} (left), we know that large Erd\\H{o}s-R\\'enyi graphs are locally tree-like and hence one might expect that this would lead to relatively uniform local prior variances. As shown in the figure, however, even for such tree-like graphs large variations can exist in the local prior variances. To give some specific examples, the large spike near 0 is caused by single disconnected vertices and the smaller spike at around 6.8 arises from two-vertex (single edge) disconnected subgraphs. Single vertex subgraphs have an atypically small prior variance since, for a single disconnected vertex $i$, before normalisation $C_{ii} = (1-a^{-1})^{p}$ which is the $q=0$ contribution from Equation~\\eqref{eqn:binomrandomwalk}. Other vertices in the graph will get additional contributions from $q\\geq 1$ and so have a larger prior variance. 
This effect will become more pronounced as $p$ is increased and the binomial weights assign less weight to the $q=0$ term.\n\nSomewhat surprisingly at first sight, the opposite effect is seen for two-vertex disconnected subgraphs as shown by the spike around $C_{ii}=6.8$ in Figure \\ref{fig:poissonvariance} (left). For vertices on such subgraphs, $C_{ii} = \\sum_{q=0}^{\\lfloor p\/2\\rfloor} \\binom{p}{2q}a^{-2q}(1-a^{-1})^{p-2q}$, which is an atypically large return probability: after any even number of steps, the walker must always return to its starting vertex. A similar situation would occur on vertices at the centre of a star. This illustrates that local properties of a vertex alone, like its degree, do not constrain the prior variance. In a two-vertex disconnected subgraph both vertices have degree 1. But there will generically be other vertices of degree 1 that are dangling ends of a large connected graph component, and these will not have similarly elevated return probabilities. Thus, local graph structure is intertwined in a complex manner with local prior variance.\n\nThe black line in Figure \\ref{fig:poissonvariance} (left) shows theoretical predictions (see Section \\ref{sec:cavityvariance}) for the prior variance distribution in the large graph limit. There is significant fine structure in the various peaks, on which theory and simulations agree well where the latter give reliable statistics. The decay from the mean is roughly exponential (see linear-log plot in inset), emphasizing that the distribution of local prior variances is not only rather broad but can also have large tails.\n\nFor the power law random graph, Figure \\ref{fig:poissonvariance} (right), the broad features of the distribution of local prior variances $C_{ii}$ are similar: a peak at the desired value of unity, overlaid by spikes which again come from single and two-vertex disconnected subgraphs. The inset shows that the tail beyond the mean is roughly exponential again, but with a slower decay; this is to be expected since power law graphs exhibit many more different local structures with a significantly larger probability than is the case for Erd\\H{o}s-R\\'enyi graphs. Accordingly, the distribution of the $C_{ii}$ also has a larger standard deviation than for the Erd\\H{o}s-R\\'enyi case. The maximum values of $C_{ii}$ that we see in these two specific graph instances follow the same trend, with $\\max_i C_{ii}\\approx 40$ for the power law graph and $\\max_i C_{ii}\\approx 15$ for the Erd\\H{o}s-R\\'enyi graph. Such large values would constitute rather unrealistic prior assumptions about the scaling of the target function at these vertices.\n\nTo summarise, Figure \\ref{fig:priorvariance} shows that after global normalisation a random walk kernel can retain a large spread in the local prior variances, with the latter depending on the graph structure in a complicated manner. We propose that to overcome this one should use a \\emph{local normalisation}. For a desired prior variance $c$ this means normalising according to $C_{ij} = c\\hat{C}_{ij}\/(\\kappa_{i}\\kappa_{j})^{1\/2}$ with local normalisation constants $\\kappa_i = \\hat{C}_{ii}$; here $\\hat{C}_{ij}$ is the unnormalised kernel matrix as before. This guarantees that all vertices have exactly equal prior variance as in the Euclidean case, that is, all vertices have a prior variance of $c$. 
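Continuing the numerical sketch above, local normalisation is a one-line change (again illustrative Python, with \\texttt{C} the unnormalised kernel matrix computed as in the previous sketch, before its global rescaling):
\\begin{verbatim}
import numpy as np
# C: unnormalised random walk kernel from the previous sketch

kappa = np.diag(C).copy()                      # local constants kappa_i = C_ii
C_local = C / np.sqrt(np.outer(kappa, kappa))  # local normalisation with c = 1
print(np.diag(C_local))                        # exactly 1 at every vertex

C_global = C / np.mean(kappa)                  # global normalisation, contrast
print(np.diag(C_global).std())                 # residual spread across vertices
\\end{verbatim}
Here the desired prior variance is $c=1$; any other value simply multiplies \\texttt{C\\_local} by $c$.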
No uncontrolled local variation in the scaling of the function prior then remains, and the computational overhead of local over global normalisation is negligible. Graphically, if we were to normalise the kernel to unity according to the local prescription, a plot of prior variances like the one in Figure \\ref{fig:priorvariance} would be a delta peak centred at 1.\n\nThe effect of this normalisation on the behaviour of GP regression is a key question for the remainder of this paper; numerical simulation results are shown in \\Sref{sec:predictinglc} below, while our theoretical analysis is described in \\Sref{sec:cavitypred}.\n\n\\begin{figure}\n \\input{figs\/gnuplot\/prior.tex}\n\\caption{(Left) Grey: histogram of prior variances for the globally normalised random walk kernel with $a=2$, $p=10$ on\na single instance of an Erd\\H{o}s-R\\'enyi graph with mean degree $\\lambda=3$ and $V=10000$ vertices. Black: prediction\nfor this distribution in the large graph limit (see Section \\protect\\ref{sec:cavityvariance}). Inset: Linear-log plot of the\ntail of the distribution. (Right) As (left) but for a power law generalised random graph with exponent 2.5 and cutoff 2.\\label{fig:poissonvariance}\\label{fig:powerlawvariance}\\label{fig:priorvariance}}\n\\end{figure}\n\n\\subsection{Predicting the Learning Curve}\\label{sec:predictinglc}\nThe performance of non-parametric methods such as GPs can be characterised by studying the \\emph{learning curve},\n\\begin{equation}\n\\epsilon(N) = \\Bigg\\langle \\Bigg\\langle\\Bigg\\langle\\Bigg\\langle\\frac{1}{V}\\sum_{i=1}^{V}\\left(g_i - \\langle\nf_i\\rangle_{\\f|\\x,\\y}\\right)^{2}\n \\Bigg\\rangle_{\\y|\\g,\\x}\\Bigg\\rangle_{\\g}\\Bigg\\rangle_{\\x}\\Bigg\\rangle_{\\mathcal{G}},\n\\end{equation}\ndefined as the average squared error between the student and teacher's predictions $\\f = (f_1,\\ldots,f_V)\\T$ and $\\g=(g_1,\\ldots,g_V)\\T$ respectively, averaged over the student's posterior distribution given the data $\\f|\\x,\\y$, the outputs given the teacher $\\y|\\g,\\x$, the teacher functions $\\g$, and the input locations $\\x$. This gives the average generalisation error as a function of the number of training examples. For simplicity we will assume that the input distribution is uniform across the vertices of the graph.\n\nBecause we are analysing GP regression on graphs, after the averages discussed so far the generalisation error will still depend on the structure of the specific graph considered. We therefore include an additional average, over all graphs in a random graph ensemble $\\mathcal{G}$. We consider graph ensembles defined by the distribution of degrees $d_i$: we specify a degree sequence $\\{d_1, \\ldots, d_V\\}$, or, for large $V$, equivalently a degree distribution $p(d)$, and pick uniformly at random any one of the graphs that has this degree distribution. The actual shape of the degree distribution is left arbitrary, as long as it has finite mean. Our analysis therefore has broad applicability, including in particular the graph types already mentioned above ($d$-regular graphs, where $p(d')=\\delta_{dd'}$, Erd\\H{o}s-R\\'enyi graphs, power law generalised random graphs).\n\nFor this paper, as is typical for learning curve studies, we will assume that teacher and student have the same prior distribution over functions, and likewise that the assumed Gaussian noise of variance $\\sigma^2$ reflects the actual noise process corrupting the training data. 
This is known as the \\emph{matched case}.\\footnote{The case of mismatch has been considered in \\citet{Malzahn2005} for fixed teacher functions, and for prior and noise level mismatch in \\citet{Sollich2002a,Sollich2005}.} Under this assumption the generalisation error becomes the Bayes error, which given that we are considering squared error simplifies to the posterior variance of the student averaged over data sets and graphs \\citep{Rasmussen2005}. Since we only need the posterior variance we shift $\\f$ so that the posterior mean is $\\bm{0}$; $f_i$ is then just the deviation of the function value at vertex $i$ from the posterior mean. The Bayes error can now be written as
\\begin{equation}\\label{eqn:bayeserror}
 \\epsilon(N) = \\Bigg\\langle\\Bigg\\langle\\Bigg\\langle
\\frac{1}{V}\\sum_{i=1}^{V}f_i^{2}\\Bigg\\rangle_{\\f|\\x}\\Bigg\\rangle_{\\x}\\Bigg\\rangle_{\\mathcal{G}}.
\\end{equation}
Note that by shifting the posterior distribution to zero mean, we have eliminated the dependence on $\\y$ in the above equation. That this should be so can also be seen from \\eqref{eqn:GPcovariance} for the posterior (co-)variance, which only depends on training inputs $\\x$ but not the corresponding outputs $\\y$.

The averages in Equation \\eqref{eqn:bayeserror} are in general difficult to calculate analytically, because the training input locations $\\x$ enter in a highly nonlinear manner, see~\\eqref{eqn:GPcovariance}; only for very specific situations can exact results be obtained \\citep{Malzahn2005,Rasmussen2005}.
Approximate learning curve predictions have been derived, for Euclidean input spaces, with some degree of success \\citep{Sollich1999a, Sollich1999b,
Opper1999, Williams2000, Malzahn2003, Sollich2002a,
Sollich2002b, Sollich2005}. We will show that in the case of GP regression for functions defined on graphs, learning curves can be predicted exactly in the limit of large random graphs. This prediction is broadly applicable because the degree distribution that specifies the graph ensemble is essentially arbitrary.

It is instructive to begin our analysis by extending a previous approximation seen in \\citet{Sollich1999a} and \\citet{Malzahn2005} to our discrete graph case. In so doing we will see explicitly how one may improve this approximation to fully exploit the structure of random graphs, using belief propagation or equivalently the \\emph{cavity method} \\citep{Mezard2003}. We will sketch the derivation of the existing approximation following the method of \\citet{Malzahn2005}; the result given by \\citet{Sollich1999a} is included in this as a somewhat more restricted approximation. Both the approximate treatment and our cavity method take a statistical mechanics approach, so we begin by rewriting Equation~\\eqref{eqn:bayeserror} in terms of a \\emph{generating} or \\emph{partition function} $Z$
\\begin{equation}\\label{eqn:epgZ}
 \\epsilon(N) = \\left\\langle\\frac{1}{V}\\sum_{i}\\int \\rmd\\f P(\\f|\\x) f_i^{2}\\right\\rangle_{\\x,\\mathcal{G}} = -\\lim_{\\lambda\\to0}\\frac{2}{V}\\frac{\\partial}{\\partial\\lambda}\\left\\langle \\log(Z)\\right\\rangle_{\\x,\\mathcal{G}},
\\end{equation}
with
\\begin{equation}
 Z = \\int \\rmd\\f \\exp\\left(-\\frac{1}{2}\\f\\T \\C^{-1}\\f - \\frac{1}{2\\sigma^{2}}\\sum_{\\mu=1}^Nf_{x_{\\mu}}^2-\\frac{\\lambda}{2}\\sum_{i}f_{i}^{2}\\right).
\\end{equation}
In this representation the inputs $\\x$ only enter $Z$ through the sum over $\\mu$.
We introduce $\\ni[i]$ to count the number of examples at vertex $i$ so that $Z$ becomes\n\\begin{equation}\\label{eqn:Z}\n Z = \\int \\rmd\\f \\exp\\left(-\\frac{1}{2}\\f\\T\\C^{-1}\\f - \\frac{1}{2}\\f\\T\\textrm{diag}\\left(\\frac{\\ni[i]}{\\sigma^{2}}+\\lambda\\right)\\f\\right).\n\\end{equation}\nThe average in Equation~\\eqref{eqn:epgZ} of the logarithm of this partition function can still not be carried out in closed form. The approximation given by \\citet{Malzahn2005} and our present cavity approach diverge\nat this point. Section \\ref{sec:evalpred} discusses the existing approximation for the learning curve, applied to the case of regression on\na graph. Section \\ref{sec:cavitypred} then improves on this using the cavity method to fully exploit the graph structure.\n\n\\subsection{Kernel Eigenvalue Approximation}\\label{sec:evalpred}\n\nThe approach of \\citet{Malzahn2005} is to average $\\log(Z)$ from \\eqref{eqn:Z} using the replica trick \\citep{Mezard1987}.\nOne writes $\\langle\\log Z\\rangle_{\\x} = \\lim_{n\\to0}\\frac{1}{n}\\log\\langle Z^{n}\\rangle_{\\x}$, performing the average $\\langle Z^{n}\\rangle_{\\x}$ for integer $n$ and assuming that a continuation to $n\\to0$ is possible. The required $n$-th power of Equation~\\eqref{eqn:Z} is given by\n\\begin{equation}\n\\langle Z^n\\rangle_{\\x} = \\int \\prod_{a=1}^n\n\\rmd\\f^a\\left\\langle\\exp\\left(-\\frac{1}{2}\\sum_a (\\f^a)\\T\\C^{-1}\\f^a\n-\\frac{1}{2\\sigma^{2}}\\sum_{i,a}\\ni[i] (f_{i}^a)^{2}\n-\\frac{\\lambda}{2}\\sum_{i,a}(f_{i}^{a})^{2}\\right)\\right\\rangle_{\\x},\n\\end{equation}\nwhere the replica index $a$ runs from $1$ to $n$.\nAssuming as before that examples are generated independently and uniformly from $\\mathcal{V}$, the data set average over $\\x$ will, for large $V$, become equivalent to independent Poisson averages over $\\ni[i]$ with mean $\\nu=N\/V$. Explicitly performing these averages gives\n\\begin{equation}\n\\langle Z^n\\rangle_{\\x} = \\int \\prod_{a=1}^n\n\\rmd\\f^a\\exp\\left(-\\frac{1}{2}\\sum_a (\\f^a)\\T\\C^{-1}\\f^a\n+\\nu \\sum_i \\left(e^{-\\sum_{a}(f_{i}^a)^{2}\/2\\sigma^{2}}-1\\right)\n-\\frac{\\lambda}{2}\\sum_{i,a}(f_{i}^{a})^{2}\\right).\n\\label{eqn:Opper2}\n\\end{equation}\nIn order to evaluate \\eqref{eqn:Opper2} one has to find a way to deal with the exponential term in the exponent. \\citet{Malzahn2005} do this using a variational approximation for the distribution of the $\\bm{f}^a$, of Gaussian form. Eventually this leads to the following eigenvalue learning curve approximation (see also \\citealp{Sollich1999a}):\n\\begin{equation}\\label{eqn:mercerapprox}\n \\epsilon(N) = g\\left(\\frac{N}{\\epsilon(N)+\\sigma^{2}}\\right),\\qquad g(h)=\\sum_{\\alpha=1}^{V}\\left(\\lambda_{\\alpha}^{-1}+h\\right)^{-1}.\n\\end{equation}\nThe eigenvalues $\\lambda_\\alpha$ of the kernel are defined here from the eigenvalue equation\\footnote{Here and below we consider the case of a uniform distribution of inputs across vertices, though the results can be generalised to the non-uniform case.} $(1\/V) \\sum_j C_{ij} \\phi_j = \\lambda \\phi_i$. The Gaussian variational approach is evidently justified for large $\\sigma^2$, where a Taylor expansion of the exponential term in \\eqref{eqn:Opper2} can be truncated after the quadratic term. For small noise levels, on the other hand, the Gaussian variational approach will in general not capture all the details of the fluctuations in the numbers of examples $\\ni[i]$. 
This shortcoming of the Gaussian variational approach is expected to be most prominent for values of $\\nu$ of order unity, where fluctuations in the number of examples are most relevant because some vertices will not have seen examples locally or nearby and will have posterior variance close to the prior variance, whereas those vertices with examples will have small posterior variance, of order $\\sigma^2$. This effect disappears again for large $\\nu$, where the $O(\\sqrt{\\nu})$ fluctuations in the number of examples at each vertex become relatively small. Mathematically this can be seen from the term proportional to $\\nu$ in \\eqref{eqn:Opper2}, which for large $\\nu$ ensures that only values of $f_i^a$ with $\\exp(-\\sum_{a}(f_{i}^a)^{2}\/2\\sigma^{2})$ close to 1 will contribute. A quadratic approximation is then justified even if $\\sigma^2$ is not large.\n\nLearning curve predictions from Equation~\\eqref{eqn:mercerapprox} using numerically computed eigenvalues for the globally normalised random walk kernel are shown in Figure \\ref{fig:globallc} as dotted lines for random regular (left), Erd\\H{o}s-R\\'enyi (centre) and power law generalised random graphs (right). The predictions are compared to numerically simulated learning curves shown as solid lines, for a range of noise levels.\nConsistent with the discussion above, the predictions of the eigenvalue approximation are accurate where the Gaussian variational approach is justified, that is, for small and large $\\nu$. Figure \\ref{fig:globallc} also shows that the accuracy of the approximation improves as the noise level $\\sigma^2$ becomes larger, again as expected from the nature of the Gaussian approximation.\n\n\\begin{figure}\n\\begin{center}\n \\input{figs\/gnuplot\/global.tex}\n \\end{center}\n \\caption{(Left) Learning curves for GP regression with globally normalised kernels with $p=10$, $a=2$ on 3-regular random graphs for a range of noise levels $\\sigma^2$. Dotted lines: eigenvalue predictions (see Section \\protect\\ref{sec:evalpred}), solid lines: numerically simulated learning\ncurves for graphs of size $V=500$, dashed lines: cavity predictions (see Section \\ref{sec:cavityglobal}); note these are mostly visually indistinguishable from the simulation results. (Centre) As (left)\nfor Erd\\H{o}s-R\\'enyi random graphs with mean degree 3. (Right) As (left) for power law generalised random graphs with exponent 2.5 and cutoff 2.\\label{fig:erdos} \\label{fig:powerlaw}\n\\label{fig:regular} \\label{fig:globallc}}\n\\end{figure}\n\n\\subsubsection{Learning Curves for Large $p$}\\label{sec:scaling}\n\nBefore moving on to the more accurate cavity prediction of the learning curves, we now look at how the learning curves for GP regression on graphs depend on the kernel lengthscale $p\/a$. We focus for this discussion on random regular graphs, where the distinction between global and local normalisation is not important.\nIn \\Sref{sec:dregtree}, we saw that on a large regular graph the random walk kernel approaches a non-trivial limiting form for large $p$, as long as one stays below the threshold \\eqref{eqn:p_over_a_limit} for $p$ where cycles become important. One might be tempted to conclude from this that the learning curves also have a limiting form for large $p$. This is too naive, however, as one can see by considering, for example, the effect of the first example on the Bayes error. If the example is at vertex $i$, the posterior variance at vertex $j$ is, from \\eqref{eqn:GPcovariance}, $C_{jj} - C_{ij}^2\/(C_{ii}+\\sigma^2)$.
As the prior variances $C_{jj}$ are all equal, to unity for our chosen normalisation, this is $1-C_{ij}^2\/(1+\\sigma^2)$. The reduction in the Bayes error is therefore $\\epsilon(0)-\\epsilon(1) = (1\/V)\\sum_{j} C_{ij}^2\/(1+\\sigma^2)$. As long as cycles are unimportant this is independent of the location of the example vertex $i$, and in the notation of \\Sref{sec:dregtree} can be written as\n\\begin{equation}\n\\epsilon(0)-\\epsilon(1) = \\frac{1}{1+\\sigma^2} \\sum_{l=0}^p v_l C_{l,p}^2,\n\\label{eqn:initial_error_decay}\n\\end{equation}\nwhere $v_l$ is, as before, the number of vertices a distance $l$ away from vertex $i$, that is, $v_0=1$, $v_l=d(d-1)^{l-1}$ for $l\\geq 1$. To evaluate \\eqref{eqn:initial_error_decay} for large $p$, one cannot directly plug in the limiting kernel form \\eqref{eqn:clpinfinity}: the `shell volume' $v_l$ just balances the $l$-dependence of the factor $(d-1)^{-l\/2}$ from $C_{l,p}$, so that one gets contributions from all distances $l$, proportional to $l^2$ for large $l$. Naively summing up to $l=p$ would give an initial decrease of the Bayes error growing as $p^3$. This is not correct; the reason is that while $C_{l,p}$ approaches the large $p$-limit \\eqref{eqn:clpinfinity} for any fixed $l$, it does so more and more slowly as $l$ increases. A more detailed analysis, sketched in Appendix \\ref{app:lcscale}, shows that for large $l$ and $p$, $C_{l,p}$ is proportional to the large $p$-limit $l(d-1)^{-l\/2}$ up to a characteristic cutoff distance of order $p^{1\/2}$, and decays quickly beyond this. Summing in \\eqref{eqn:initial_error_decay} the contributions of order $l^2$ up to this distance finally predicts that the initial error decay should scale, non-trivially, as $p^{3\/2}$.\n\nWe next show that this large $p$-scaling with $p^{3\/2}$ is also predicted, for the entire learning curve, by the eigenvalue approximation \\eqref{eqn:mercerapprox}. As before we consider $d$-regular random graphs. The required spectrum of kernel eigenvalues $\\lambda_\\alpha$ becomes identical, for large $V$, to that on a $d$-regular tree \\citep{McKay1981}. Explicitly, if $\\lambda_\\alpha^L$ are the eigenvalues of the normalised graph Laplacian on a tree, then the kernel eigenvalues are $\\lambda_\\alpha=\\kappa^{-1} V^{-1} (1-\\lambda_\\alpha^L\/a)^p$. Here the factor $V^{-1}$ comes from the same factor in the kernel eigenvalue definition after \\eqref{eqn:mercerapprox}, and $\\kappa$ is the overall normalisation constant which enforces $\\sum_\\alpha \\lambda_\\alpha = V^{-1}\\sum_j C_{jj} = 1$. The spectrum of the tree Laplacian is known \\citep[see][]{McKay1981,Chung1996} and is given by\n\\begin{equation}\n \\rho(\\lambda^L) = \\begin{cases}\n \\frac{\\sqrt{\\frac{4(d-1)}{d^{2}}-(\\lambda^L-1)^{2}}}{(2\\pi \/d) \\lambda^L(2-\\lambda^L)} & \\lambda_{-}\\leq \\lambda^L \\leq \\lambda_{+}, \\\\\n 0 & {\\rm otherwise},\n \\end{cases}\n\\end{equation}\nwhere $\\lambda_{\\pm} = 1 \\pm \\frac{2}{d}(d-1)^{1\/2}$. (There are also two isolated eigenvalues at 0 and 2, which do not contribute for large $V$.)\n\nWe can now write down the function $g$ from \\eqref{eqn:mercerapprox}, converting the sum over kernel eigenvalues to $V$ times an integral over Laplacian eigenvalues for large $V$.
Dropping the $L$ superscript, the result is\n\\begin{equation}\\label{eqn:ghregtree}\n g(h) = \\int_{\\lambda_{-}}^{\\lambda_{+}}\\rmd\\lambda\\, \\rho(\\lambda)[\\kappa(1-\\lambda\/a)^{-p}+hV^{-1}]^{-1}.\n\\end{equation}\nThe dependence on $hV^{-1}$ here shows that in the approximate learning curve \\eqref{eqn:mercerapprox}, the Bayes error will depend only on $\\nu=N\/V$ as might have been expected. The condition for the normalisation factor $\\kappa$ becomes simply $g(0)=1$, or $\\kappa^{-1}=\\int \\rmd\\lambda\\,\\rho(\\lambda)(1-\\lambda\/a)^{p}$.\n\nSo far we have written down how one would evaluate the eigenvalue approximation to the learning curve on large $d$-regular random graphs, for arbitrary kernel parameters $p$ and $a$. Now we want to consider the large $p$-limit. We show that there is then a \\emph{master curve} for the Bayes error against $\\nu p^{3\/2}$. This is entirely consistent with the $p^{3\/2}$ scaling found above for the initial error decay. The intuition for the large $p$ analysis is that the factor $(1-\\lambda\/a)^p$ decays quickly as the Laplacian eigenvalue $\\lambda$ increases beyond $\\lambda_-$, so that only values of $\\lambda$ near $\\lambda_-$ contribute. One can then approximate\n\\begin{equation}\n \\left(1-\\frac{\\lambda}{a}\\right)^{p}\\approx\n \\left(1-\\frac{\\lambda_{-}}{a}\\right)^{p}\\exp\\left(-\\frac{p(\\lambda-\\lambda_{-})}{a-\\lambda_{-}}\\right).\n\\end{equation}\nSimilarly one can replace $\\rho(\\lambda)$ by its leading square root behaviour near $\\lambda_{-}$,\n\\begin{equation}\n \\rho(\\lambda) = (\\lambda-\\lambda_{-})^{1\/2}\\frac{(d-1)^{1\/4}d^{5\/2}}{\\pi(d-2)^{2}}.\n\\end{equation}\nSubstituting these approximations into \\eqref{eqn:ghregtree} and introducing the rescaled integration variable $y=p(\\lambda-\\lambda_{-})\/(a-\\lambda_{-})$ gives\n\\begin{equation}\n g(h) = r\\kappa^{-1}(1-\\lambda_{-}\/a)^{p}\\left(\\frac{a-\\lambda_{-}}{p}\\right)^{3\/2}F(h \\kappa^{-1} V^{-1}(1-\\lambda_{-}\/a)^{p}),\n\\end{equation}\nwhere $r = (d-1)^{1\/4}d^{5\/2}\/(\\pi(d-2)^{2})$ and $F(z)=\\int_{0}^{\\infty}\\rmd y\\, y^{1\/2}(\\exp(y)+z)^{-1}$. Since $g(0)=1$, the prefactor must equal $1\/F(0)=2\/\\sqrt{\\pi}$. This fixes the normalisation constant $\\kappa$, and we can simplify to\n\\begin{equation}\n g(h) = \\frac{F(hV^{-1}c^{-1})}{F(0)},\\qquad c=rF(0)\\left(\\frac{a-\\lambda_{-}}{p}\\right)^{3\/2}.\n\\end{equation}\nThe learning curves for large $p$ are then predicted from \\eqref{eqn:mercerapprox} by solving\n\\begin{equation}\n\\epsilon = F(\\nu c^{-1}\/(\\epsilon+\\sigma^{2}))\/F(0),\n\\label{eqn:mastercurve}\n\\end{equation}\nand clearly depend only on the combination $\\nu c^{-1}$. Because $c$ is proportional to $p^{-3\/2}$, this shows that learning curves for different $p$ should collapse onto a master curve when plotted against $\\nu p^{3\/2}$.
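As an aside, the master curve itself is cheap to evaluate numerically: $F$ is a one dimensional integral and \\eqref{eqn:mastercurve} can be solved by the same kind of damped fixed-point iteration as before. The sketch below is our own illustration, not code from the original study, and uses SciPy's \\texttt{quad} routine.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\ndef F(z):\n    # F(z) = int_0^infinity dy y^(1\/2) \/ (exp(y) + z)\n    return quad(lambda y: np.sqrt(y) \/ (np.exp(y) + z), 0.0, np.inf)[0]\n\ndef master_curve(nu_over_c, sigma2, iters=300):\n    # Solve eps = F(nu c^(-1) \/ (eps + sigma^2)) \/ F(0) by damped iteration;\n    # eps = 1 at nu = 0 because of the normalisation g(0) = 1.\n    F0 = F(0.0)    # equals sqrt(pi)\/2\n    eps = 1.0\n    for _ in range(iters):\n        eps = 0.5 * eps + 0.5 * F(nu_over_c \/ (eps + sigma2)) \/ F0\n    return eps\n\\end{verbatim}\nPlotting the result against $\\nu c^{-1}$ gives the master curve to which the rescaled finite-$p$ predictions should converge.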
The collapse of the eigenvalue learning curve approximations onto the master curve is shown in Figure \\ref{fig:taillcscale} (left). As can be seen, large values of $p$ are required in order to get a good collapse in the tail of the learning curve prediction, whereas in the initial part the $p^{3\/2}$ scaling is accurate already for relatively small $p$.\n\nFinally, Figure \\ref{fig:taillcscale} (right) shows that the predicted $p^{3\/2}$-scaling of the learning curves is present not only within the eigenvalue approximation, but also in the actual learning curves. Figure \\ref{fig:cavlcscale} (right) displays numerically simulated learning curves for $p=5,10,15$ and $20$, against the rescaled number of examples $\\nu p^{3\/2}$ as before. Even for these comparatively small values of $p$ one sees that the rescaled learning curves approach a master curve.\n\n\\begin{figure}\n \\input{figs\/gnuplot\/lcscale.tex}\n \\caption{(Left) Eigenvalue approximation for learning curves on a random 3-regular graph, using a random walk kernel with $a=2$, $\\sigma^{2}=0.1$ and increasing values of $p$ as shown. Plotting against $\\nu p^{3\/2}$ shows that for large $p$ these rescaled curves approach the master curve predicted from \\protect\\eqref{eqn:mastercurve}, though this approach is slower in the tail of the curves. (Right) As (left), but for numerically simulated learning curves on graphs of size $V=500$.\\label{fig:taillcscale}\\label{fig:cavlcscale}}\n\\end{figure}\n\n\\section{Exact Learning Curves: Cavity Method}\\label{sec:cavitypred}\n\nSo far we have discussed the eigenvalue approximation of GP learning curves, and how it deviates from numerically exact simulated learning curves.\nAs discussed in \\Sref{sec:evalpred}, the deficiencies of the eigenvalue approximation can be traced back to the fact that the fluctuations in the number of training examples seen at each vertex of the graph cannot be accounted for in detail. If in the average over data sets these fluctuations could be treated exactly, one would hope to obtain exact, or at least very accurate, learning curve predictions.\nIn this section we show that this is indeed possible in the case of a random walk kernel, for both global and local normalisations. We derive our prediction using belief propagation or, equivalently, the cavity method \\citep{Mezard2003}. The approach relies on the fact that the local structure of the graph on which we are learning is tree-like. This local tree-like structure always occurs in large random graphs sampled uniformly from an ensemble specified by an arbitrary but fixed degree distribution, which is the scenario we consider here. We will see that already for moderate graph sizes of $V=500$, our predictions are nearly indistinguishable from numerical simulations.\n\nIn order to apply the cavity method to the problem of predicting learning curves we must first rewrite the partition function \\eqref{eqn:Z} in the form of a graphical model. This means that the function being integrated over to obtain $Z$ must consist of factors relating only to individual vertices, or to pairs of neighbouring vertices. The inverse of the covariance matrix in \\eqref{eqn:Z} creates factors linking vertices at arbitrary distances along the graph, and so must be eliminated before the cavity method can be applied. We begin by assuming a general form for the normalisation of $\\hat{C}$ that encompasses both local and global normalisation and set $\\bm{C}=\\bm{\\mathcal{K}}^{-1\/2} [(1-a^{-1})\\bm{I} + a^{-1}\\bm{D}^{-1\/2}\\bm{A}\\bm{D}^{-1\/2}]^{p}\\bm{\\mathcal{K}}^{-1\/2}$ with $\\mathcal{K}_{ij} = \\kappa_i\\delta_{ij}$.
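In concrete terms, this general normalised kernel is straightforward to construct for a given graph. The sketch below is our own illustration (assuming a dense adjacency matrix with no isolated vertices; isolated vertices would require the $\\delta$-regularisation of $\\bm{D}$ discussed later), with the flag \\texttt{local} switching between the two normalisations.\n\\begin{verbatim}\nimport numpy as np\n\ndef normalised_random_walk_kernel(A, p, a, local=True):\n    # C = K^(-1\/2) [(1 - 1\/a) I + (1\/a) D^(-1\/2) A D^(-1\/2)]^p K^(-1\/2)\n    V = A.shape[0]\n    d = A.sum(axis=1)\n    Dmh = np.diag(1.0 \/ np.sqrt(d))    # D^(-1\/2); assumes all degrees > 0\n    Chat = np.linalg.matrix_power(\n        (1.0 - 1.0 \/ a) * np.eye(V) + (1.0 \/ a) * (Dmh @ A @ Dmh), p)\n    if local:\n        kappa = np.diag(Chat).copy()   # kappa_i = local prior variance\n    else:                              # global: one kappa, the mean prior variance\n        kappa = np.full(V, np.diag(Chat).mean())\n    Kmh = np.diag(1.0 \/ np.sqrt(kappa))\n    return Kmh @ Chat @ Kmh\n\\end{verbatim}\nWith \\texttt{local=True} every vertex has unit prior variance; with \\texttt{local=False} only the average prior variance is normalised to unity, as in the globally normalised scenario.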
To eliminate interactions across the entire graph we first Fourier transform the prior term $\\exp(-\\frac{1}{2}\\f\\T\\C^{-1}\\f)$ in \\eqref{eqn:Z}, introduce Fourier variables $\\bm{h}$, and then integrate out $\\f$ against the remaining terms to give\n\\begin{equation}\\label{eqn:Zfourier}\n Z \\propto \\prod_{i}\\left(\\frac{\\ni[i]}{\\sigma^{2}}+\\lambda\\right)^{-1\/2}\\int \\rmd\\bm{h}\\exp\\left(-\\frac{1}{2}\\bm{h}\\T\\bm{C}\\bm{h} -\\frac{1}{2}\\bm{h}\\T\\textrm{diag}\\left(\\left(\\frac{\\ni[i]}{\\sigma^{2}}+\\lambda\\right)^{-1}\\right)\\bm{h}\\right).\n\\end{equation}\nThe coupling between different vertices in \\eqref{eqn:Zfourier} is now through $\\bm{C}$ and so still links vertices up to distance $p$. To reduce these remaining interactions to ones among nearest neighbours only, one exploits the binomial expansion of the random walk kernel given in \\eqref{eqn:binomrandomwalk}. Defining $p$ additional variables at each vertex as $\\bm{h}^{q} =\n\\bm{\\mathcal{K}}^{1\/2}(\\bm{D}^{-1\/2}\\bm{A}\\bm{D}^{-1\/2})^q\\bm{\\mathcal{K}}^{-1\/2}\\bm{h}$, $q=1,\\ldots,p$, and abbreviating\n$c_q = \\binom{p}{q}(1-a^{-1})^{p-q}(a^{-1})^q$, the interaction term $\\bm{h}\\T\\bm{C}\\bm{h}$ turns into a local term $\\sum_{q=0}^{p}c_{q}(\\bm{h}^0)\\T\\bm{\\mathcal{K}}^{-1}\\bm{h}^q$.\n(Here we have, for the sake of uniformity, written $\\bm{h}^0$ instead of $\\bm{h}$.)\nOf course the interactions have only been `hidden' in the $\\bm{h}^q$, but the key point is that the definition of these additional variables can be enforced recursively, via $\\bm{h}^{q} = \\bm{\\mathcal{K}}^{1\/2}\\bm{D}^{-1\/2}\\bm{A}\\bm{D}^{-1\/2}\\bm{\\mathcal{K}}^{-1\/2}\\bm{h}^{q-1}$. We represent this definition via a Dirac delta function (for each $q=1,\\ldots,p$) and then Fourier transform the latter, with conjugate variables $\\bm{\\hat{h}}^{q}$, to get\n\\begin{multline}\\label{eqn:Zglobalbinom}\n Z \\propto \\prod_{i}\\left(\\frac{\\ni[i]}{\\sigma^{2}}+\\lambda\\right)^{-1\/2}\\int \\prod_{q=0}^{p}\\rmd\\bm{h}^{q}\\prod_{q=1}^{p}\\rmd\\bm{\\hat{h}}^{q}\\exp\\Bigg(-\\frac{1}{2}(\\bm{h}^0)\\T\\textrm{diag}\\left(\\left(\\frac{\\ni[i]}{\\sigma^{2}}+\\lambda\\right)^{-1}\\right)\\bm{h}^0\\\\\n -\\frac{1}{2}\\sum_{q=0}^{p}c_{q}(\\bm{h}^0)\\T\\bm{\\mathcal{K}}^{-1}\\bm{h}^q +\\rmi\\sum_{q=1}^p(\\bm{\\hat{h}}^{q})\\T\\left(\\bm{h}^{q} - \\bm{\\mathcal{K}}^{1\/2}\\bm{D}^{-1\/2}\\bm{A}\\bm{D}^{-1\/2}\\bm{\\mathcal{K}}^{-1\/2}\\bm{h}^{q-1}\\right)\\Bigg).\n\\end{multline}\nBecause the graph adjacency matrix $\\bm{A}$ now appears at most linearly in the exponent, all interactions are between nearest neighbours only. We have thus expressed our $Z$ as the partition function of a (complex-valued) graphical model.\n\n\\subsection{Global Normalisation}\\label{sec:cavityglobal}\nWe can now apply belief propagation to the calculation of marginals for the above graphical model. We focus first on the simpler case of a globally normalised kernel where $\\kappa_i = \\kappa$ for all $i$.
Rescaling each $h_{i}^{q}$ to $d_{i}^{1\/2}\\kappa^{1\/2}h_{i}^{q}$ and $\\hat{h}_{i}^{q}$ to $d_{i}^{1\/2}\\hat{h}_{i}^{q}\/\\kappa^{1\/2}$ we are left with\n\\begin{multline}\\label{eqn:Zglobalsite}\n Z\\propto \\prod_{i}\\left(\\frac{\\ni[i]}{\\sigma^{2}}+\\lambda\\right)^{-1\/2}\\int \\prod_{q=0}^{p}\\rmd\\bm{h}^{q}\\prod_{q=1}^{p}\\rmd\\bm{\\hat{h}}^{q}\\prod_{i}\\exp\\left(-\\frac{1}{2}\\sum_{q=0}^{p}c_{q}h_{i}^{0}h_{i}^{q}d_{i} - \\frac{1}{2}\\frac{(h_{i}^{0})^{2}\\kappa d_{i}}{\\ni[i]\/\\sigma^{2}+\\lambda} +\\rmi\\sum_{q=1}^{p}d_{i}\\hat{h}^q_{i}h_{i}^{q}\\right)\\\\\n \\prod_{(i,j)}\\exp\\left(-\\rmi\\sum_{q=1}^{p}\\left(\\hat{h}_{i}^{q}h_{j}^{q-1} + \\hat{h}_{j}^{q}h_{i}^{q-1}\\right)\\right),\n\\end{multline}\nwhere the interaction terms coming from the adjacency matrix, $\\bm{A}$, have been written explicitly as a product over distinct graph edges $(i,j)$.\n\nTo see how the Bayes error \\eqref{eqn:bayeserror} can be obtained from this partition function, we differentiate $\\log(Z)$ with respect to $\\lambda$ as prescribed by \\eqref{eqn:epgZ} to get\n\\begin{equation}\\label{eqn:globalepg}\n \\epsilon(\\nu) =\n\\lim_{\\lambda\\to0}\\frac{1}{V}\\sum_{i}\\frac{1}{\\ni[i]\/\\sigma^{2}+\\lambda}\n\\left(1-\\frac{d_i\\kappa\\langle(h_{i}^{0})^{2}\\rangle}{\\ni[i]\/\\sigma^{2}+\\lambda}\\right).\n\\end{equation}\nIn order to calculate the Bayes error we therefore require specifically the marginal distributions of $h_i^0$. These can be calculated using the cavity method: for a large random graph with arbitrary fixed degree sequence the graph is locally tree-like, so that if\nvertex $i$ were eliminated the corresponding subgraphs (locally trees) rooted at the neighbours $j\\in\\mathcal{N}(i)$ of\n$i$ would become approximately independent. The resulting cavity marginals created by removing $i$, which we denote $P^{(i)}_{j}(\\bm{h}_j,\\bm{\\hat{h}}_{j}|\\x)$, can then be calculated\niteratively within these subgraphs using the update equations\n\\begin{multline}\\label{eqn:globalcavity}\n P^{(i)}_{j}(\\bm{h}_j,\\bm{\\hat{h}}_{j}|\\x) \\propto \\exp\\left(-\\frac{1}{2}\\sum_{q=0}^{p}c_{q}d_{j}h^{0}_{j}h_{j}^{q}\n-\\frac{1}{2}\\frac{d_{j}\\kappa(h_{j}^{0})^{2}}{\\ni[j]\/\\sigma^{2}+\\lambda}\n+ \\rmi\\sum_{q=1}^{p}\nd_j\\hat{h}^{q}_{j}h_{j}^q\\right)\\\\\n\\int\\prod_{k\\in\\mathcal{N}(j)\\backslash i} \\rmd\\bm{h}_{k}\\rmd\\bm{\\hat{h}}_{k}\\,\n\\exp\\left(-\\rmi\\sum_{q=1}^{p}(\\hat{h}_{j}^{q}h_{k}^{q-1} +\n\\hat{h}_{k}^{q}h_{j}^{q-1})\\right)P_{k}^{(j)}(\\bm{h}_k,\\bm{\\hat{h}}_k|\\x),\n\\end{multline}\nwhere $\\bm{h}_j = (h_{j}^0,\\ldots,h_{j}^p)\\T$ and $\\bm{\\hat{h}}_j = (\\hat{h}_{j}^1,\\ldots,\\hat{h}_{j}^p)\\T$. In terms of the sum-product formulation of belief propagation, the cavity marginal on the left is the message that vertex $j$ sends to the factor in $Z$ for edge $(i,j)$ \\citep{Bishop2007}.\n\nOne sees that the cavity update equations~\\eqref{eqn:globalcavity} are solved self-consistently by complex-valued Gaussian distributions with mean zero and\ncovariance matrices $\\bm{V}_{j}^{(i)}$. This Gaussian character of the solution was of course to be expected because in \\eqref{eqn:Zglobalsite} we have a Gaussian graphical model.
By performing the Gaussian integrals in the cavity update equations explicitly, one finds for the corresponding updates of the covariance matrices the rather simple form\n\\begin{equation}\n\\bm{V}_{j}^{(i)}= (\\bm{O}_{j}\n-\\sum_{k\\in\\mathcal{N}(j)\\backslash i}\\bm{X}\\bm{V}_{k}^{(j)}\\bm{X})^{-1},\n\\label{eqn:globalvariance}\n\\end{equation}\nwhere we have defined the $(2p+1)\\times(2p+1)$ matrices\n\\begin{equation}\n\\setlength{\\arraycolsep}{1mm}\n \\bm{O}_j = d_{j}\\left(\\begin{array}{cccc|ccc}\n c_0 \\!+\\!\\frac{\\kappa}{\\ni[j]\/\\sigma^{2} +\\lambda} &\n\\frac{c_1}{2} & \\dots & \\frac{c_p}{2} &\n0 & \\dots & 0 \\\\\n\\frac{c_1}{2}& & & &\n-\\rmi & & \\\\\n\\vdots & & & &\n & \\ddots & \\\\\n\\frac{c_p}{2}& & & &\n & & -\\rmi\\\\[0.5mm]\n\\hline\n0 & -\\rmi & & &\n & & \\\\\n\\vdots & & \\ddots & &\n & \\bm{0}_{p,p} & \\\\\n0 & & & -\\rmi &\n & &\n\\end{array}\\right), \\quad\n\\bm{X} = \\left(\\begin{array}{cccc|ccc}\n & & & & \\rmi & & \\\\\n & \\multicolumn{2}{c}{\\bm{0}_{p+1,p+1}} & & & \\ddots & \\\\\n & & & & & & \\rmi\\\\\n & & & & 0 & \\dots & 0\\\\\n\\hline\n\\rmi & & & 0 & & & \\\\\n & \\ddots & & \\vdots & & \\bm{0}_{p,p}\\\\\n & & \\rmi & 0 & & &\n\\end{array}\\right).\\label{eqn:globalOX}\n\\end{equation}\n\nAt first glance \\eqref{eqn:globalvariance} becomes singular for $\\ni[j] = 0$; however, this is easily avoided. We write $\\bm{O}_j-\\sum_{k=1}^{d-1}\\bm{X}\\bm{V}^{(j)}_k\\bm{X}=\\bm{M}_{j}+ [d_j\\kappa\/(\\ni[j]\/\\sigma^{2}+\\lambda)]\\bm{e}_{0}\\bm{e}_{0}^{T}$ with $\\bm{e}_0^{T}=(1,0,\\ldots,0)$ so that $\\bm{M}_j$ contains all the non-singular terms. We may then apply the Woodbury identity \\citep{Hager1989} to write the matrix inverse in a form where the $\\lambda\\to 0$ limit can be taken without difficulties:\n\\begin{equation}\n\\left(\\bm{O}_j-\\sum_{k=1}^{d-1}\\bm{X}\\bm{V}^{(j)}_k\\bm{X}\\right)^{-1} = \\bm{M}_{j}^{-1} - \\frac{\\bm{M}_{j}^{-1}\\bm{e}_{0}\\bm{e}_{0}^{T}\\bm{M}_{j}^{-1}}{(\\ni[j]\/\\sigma^{2}+\\lambda)\/(d_j\\kappa) +\\bm{e}_{0}^{T}\\bm{M}_{j}^{-1}\\bm{e}_{0}}.\n\\end{equation}\n\nIn our derivation so far we have assumed a fixed graph; we therefore need to translate these equations to the setting we ultimately want to study, that is, an ensemble of large random graphs. This ensemble is characterised by\nthe distribution $p(d)$ of the degrees $d_i$, so that every graph that has the desired degree distribution is assigned\nequal probability. Instead of individual cavity covariance matrices $\\bm{V}_{j}^{(i)}$, one must then consider their\nprobability distribution $W(\\bm{V})$ across all edges of the graph. Picking at random an edge $(i,j)$ of a graph, the\nprobability that vertex $j$ will have degree $d_j$ is then $p(d_j)d_j\/\\bar{d}$, because such a vertex has $d_j$\n`chances' of being picked. (The normalisation factor is the average degree $\\bar{d}=\\sum_{d} p(d)\\,d$.) Using again the locally\ntree-like structure, the incoming (to vertex $j$) cavity covariances $\\bm{V}_k^{(j)}$ will be independent and identically distributed samples from\n$W(\\bm{V})$.
Thus a fixed point of the cavity update equations corresponds to a fixed point of an update equation for\n$W(\\bm{V})$:\n\\begin{equation}\\label{eqn:globalensembleupdate}\nW(\\bm{V}) = \\sum_{d}\\frac{p(d)d}{\\bar{d}}\\left\\langle\\int\n\\prod_{k=1}^{d-1} \\rmd\\bm{V}_k\\, W(\\bm{V}_k)\\\n\\delta\\left(\\bm{V} -\\left(\\bm{O} - \\sum_{k=1}^{d-1}\\bm{X}\\bm{V}_{k}\\bm{X}\\right)^{-1}\\right)\\right\\rangle_{\\ni}.\n\\end{equation}\nSince the vertex label is now arbitrary, we have omitted the index $j$. The average in \\eqref{eqn:globalensembleupdate} is over the distribution of the number of examples $\\ni\\equiv\n\\ni[j]$ at vertex $j$. As before we assume for simplicity that examples are drawn with uniform input probability\nacross all vertices, so that the distribution of $\\ni$ is simply \\mbox{$\\ni\\sim\\textrm{Poisson}(\\nu)$} in the limit of large $N$ and $V$ at fixed $\\nu=N\/V$.\n\nIn general Equation~\\eqref{eqn:globalensembleupdate}---which can also be formally derived using the replica approach\n\\citep[see][]{Urry2012}---cannot be solved analytically, but we can tackle it numerically using\npopulation dynamics \\citep{Mezard2001}. This is an iterative technique where one creates a population of\ncovariance matrices and for each iteration updates a random element of\nthe population according to the delta function in\n\\eqref{eqn:globalensembleupdate}. The update is calculated by sampling a degree $d$ from the edge-biased distribution $p(d)d\/\\bar{d}$, the local number of examples $\\ni$ from its Poisson distribution with mean $\\nu$, and the $d-1$ `incoming'\ncovariance matrices from the distribution $W(\\bm{V}_k)$, the latter being approximated\nby uniform sampling from the current population.\n\nOnce we have $W(\\bm{V})$, the Bayes error can be found from the graph ensemble version of Equation~\\eqref{eqn:globalepg}. This is obtained by inserting the explicit expression for\n$\\langle (h_i^0)^2\\rangle$ in terms of the cavity marginals of the neighbouring vertices, and replacing the average over vertices with an average over degrees $d$:\n\\begin{equation}\n\\label{eqn:epgglobalcavity}\n\\epsilon(\\nu) = \\lim_{\\lambda\\to 0}\n\\sum_{d}p(d)\\left\\langle\n\\frac{1}{\\ni\/\\sigma^{2}+\\lambda}\n\\left(1-\\frac{d\\kappa}{\\ni\/\\sigma^{2}+\\lambda}\n\\int \\prod_{k=1}^{d} \\rmd\\bm{V}_k\\, W(\\bm{V}_k)\\\n(\\bm{O} - \\sum_{k=1}^{d}\\bm{X}\\bm{V}_{k}\\bm{X})^{-1}_{00}\n\\right)\\right\\rangle_{\\ni}.\n\\end{equation}\nThe number of examples at the vertex $\\ni$ is once more to be averaged over $\\ni\\sim\\textrm{Poisson}(\\nu)$. The subscript `00'\nindicates the top left element of the matrix, which determines the variance of $h^0$.\n\nTo be able to use Equation~\\eqref{eqn:epgglobalcavity}, we again need to rewrite it into a form that remains explicitly\nnon-singular when $\\ni=0$ and $\\lambda\\to 0$. We separate the $\\ni$-dependence of the matrix inverse again and write, in slightly modified notation as appropriate for the graph ensemble case,\n$\\bm{O}-\\sum_{k=1}^d\\bm{X}\\bm{V}_k\\bm{X}=\\bm{M}_d+ [d\\kappa\/(\\ni\/\\sigma^{2}+\\lambda)]\\bm{e}_{0}\\bm{e}_{0}^{T}$, where\n$\\bm{e}_0^{T}=(1,0,\\ldots,0)$.
The $00$ element of the matrix inverse appearing above can then be expressed using the Woodbury formula \\citep{Hager1989} as\n\\begin{equation}\\label{eqn:wood}\n\\e0\\T\\left(\\bm{O}-\\sum_{k=1}^d\\bm{X}\\bm{V}_k\\bm{X}\\right)^{-1}\\!\\!\\!\\!\\!\\!\\e0 = \\e0\\T\\bm{M}_d^{-1}\\e0 - \\frac{\\e0\\T\\bm{M}_d^{-1}\\bm{e}_{0}\\bm{e}_{0}^{T}\\bm{M}_d^{-1}\\e0}{(\\ni\/\\sigma^{2}+\\lambda)\/(d\\kappa) +\\bm{e}_{0}^{T}\\bm{M}_d^{-1}\\bm{e}_{0}}.\n\\end{equation}\nThe $\\lambda\\to0$ limit can now be taken, with the result\n\\begin{equation}\\label{eqn:epgglobalwood}\n\\epsilon(\\nu) = \\left\\langle\n\\sum_{d}p(d)\n\\int \\prod_{k=1}^{d} \\rmd\\bm{V}_k\\, W(\\bm{V}_k)\\\n\\frac{1}{\\ni\/\\sigma^2 + d\\kappa(\\bm{M}_d^{-1})_{00}}\\right\\rangle_{\\ni}.\n\\end{equation}\nThis has a simple interpretation: the cavity marginals of the neighbours provide an effective Gaussian prior for each\nvertex, whose inverse variance is $d\\kappa(\\bm{M}_d^{-1})_{00}$.\n\nThe self-consistency Equation~\\eqref{eqn:globalensembleupdate} for $W(\\bm{V})$ and the expression\n\\eqref{eqn:epgglobalwood} for the resulting Bayes error allow us to predict learning curves as a function of the number\nof examples per vertex, $\\nu$, for \\emph{arbitrary degree distributions} $p(d)$ of our random graph ensemble. For large graphs the predictions should become exact. It is worth stressing that such exact learning curve predictions have previously only been available in very specific, noise-free, GP regression scenarios, while our result for GP regression on graphs is applicable to a broad range of random graph ensembles, with arbitrary noise levels and kernel parameters.\n\nWe note briefly that for graphs with isolated vertices ($d=0$), one has to be slightly careful: already in the\ndefinition of the covariance function \\eqref{eqn:randomwalkkernel} one should replace $\\bm{D}\\to \\bm{D}+\\delta \\bm{I}$\nto avoid division by zero, taking $\\delta\\to0$ at the end. For $d=0$ one then finds in the expression\n\\eqref{eqn:epgglobalwood} that $(\\bm{M}^{-1})_{00}=1\/(c_{0}\\delta)$, where $c_0$ is defined before \\eqref{eqn:Zglobalbinom}.\nAs a consequence, $\\kappa(\\delta+d)\n(\\bm{M}^{-1})_{00}=\\kappa\\delta (\\bm{M}^{-1})_{00}=\\kappa\/c_0$. This is to be expected since isolated vertices each have a separate Gaussian prior with variance $c_0\/\\kappa$.\n\nEquations \\eqref{eqn:globalensembleupdate} and \\eqref{eqn:epgglobalwood} still require the normalisation constant, $\\kappa$. The simplest way to calculate this is to run the population dynamics once for $\\kappa=1$ and $\\nu=0$, that is, an unnormalised kernel and no training data. The result for $\\epsilon$ then just gives the average (over vertices) prior variance. With $\\kappa$ set to this value, one can then run the population dynamics for any $\\nu$ to obtain the Bayes error prediction for GP regression with a globally normalised kernel.
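To make this procedure concrete, the following is a minimal population dynamics sketch; it is our own illustration, not code from the original study. The helper \\texttt{O\\_of} is a hypothetical routine that builds the matrix $\\bm{O}$ of \\eqref{eqn:globalOX} for a given degree and local example count; for simplicity it is assumed to keep a small but finite $\\lambda$, so that a plain matrix inverse can be used instead of the Woodbury-stabilised form.\n\\begin{verbatim}\nimport numpy as np\n\ndef population_dynamics(p_of_d, nu, O_of, X, pop_size=1000, sweeps=100):\n    # Population dynamics for the ensemble update of W(V).\n    # p_of_d: dict mapping degree d to probability p(d).\n    rng = np.random.default_rng()\n    degs = np.array(list(p_of_d.keys()))\n    prob = np.array(list(p_of_d.values()))\n    edge_prob = prob * degs \/ np.dot(prob, degs)   # size-biased: p(d) d \/ dbar\n    dim = X.shape[0]\n    pop = [np.eye(dim, dtype=complex) for _ in range(pop_size)]\n    for _ in range(sweeps * pop_size):\n        d = rng.choice(degs, p=edge_prob)\n        n = rng.poisson(nu)                        # local number of examples\n        incoming = (pop[rng.integers(pop_size)] for _ in range(d - 1))\n        M = O_of(d, n) - sum(X @ V @ X for V in incoming)\n        pop[rng.integers(pop_size)] = np.linalg.inv(M)\n    return pop\n\\end{verbatim}\nThe Bayes error is then estimated from \\eqref{eqn:epgglobalwood} by sampling $d$ from $p(d)$ itself (not the size-biased form), $\\ni$ from the same Poisson distribution, and $d$ members of the final population.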
Comparisons between the cavity prediction for the learning curves, numerically exact simulated learning curves and the results of the eigenvalue approximation are shown in Figure\n\\ref{fig:globallc} (left, centre and right), for regular, Erd\\H{o}s-R\\'enyi and generalised random graphs with power law degree distributions respectively. As can be\nseen, the cavity predictions greatly outperform the eigenvalue approximation and are accurate along the whole length of the curve. This confirms our expectation that the cavity approach will become exact on large graphs, although it is remarkable that the agreement is quantitatively so good already for graphs with only five hundred vertices.\n\n\\subsection{Predicting Prior Variances}\\label{sec:cavityvariance}\nAs a by-product of the cavity analysis for globally normalised kernels we note that in the cavity form\nof the Bayes error in Equation~\\eqref{eqn:epgglobalwood}, the fraction\n$(\\ni\/\\sigma^2 + d\\kappa(\\bm{M}_d^{-1})_{00})^{-1}$ is the local Bayes error, that is, the local posterior variance. By keeping track of individual samples for this quantity from the population dynamics approach, we can thus predict the distribution of local posterior variances. If we set $\\nu=0$, then this becomes the distribution of prior variances. The cavity approach therefore gives us, without additional effort, a prediction for this distribution.\n\nWe can now go back to \\Sref{sec:kernelnorm} and compare the cavity predictions to numerically simulated distributions of prior variances. The cavity predictions for these distributions are shown by the black lines in Figure \\ref{fig:priorvariance}. The cavity approach provides, in particular, detailed information about the tail of the distributions as shown in the insets. There is good agreement between the predictions and the numerical simulations, both regarding the general shape of the variance distributions and the fine structure with a number of non-trivial peaks and troughs. The residual small shifts between the predictions and the numerical results\nfor a single instance of a large graph are most likely due to finite-size effects: in a finite graph, the assumption\nof a tree-like local structure is not exact because there can be rare short cycles; moreover, the long cycles, which the cavity method ignores because their typical length diverges logarithmically with $V$, will still have an effect when $V$ is finite.\n\n\\subsection{Local Normalisation}\\label{sec:cavitylocal}\n\nWe now extend the cavity analysis for the learning curves to the case of locally normalised random walk kernels, which, as argued above, provide more plausible probabilistic models. In this case the diagonal entries of the normalisation matrix $\\bm{\\mathcal{K}}$ are defined as\n\\begin{equation}\n \\kappa_i = \\int \\rmd\\bm{f}f_{i}^{2}P(\\bm{f}),\n\\end{equation}\nwhere $P(\\f)$ is the GP prior with the unnormalised kernel $\\bm{\\hat{C}}$. This makes clear why the locally normalised kernel case is more challenging technically: we cannot calculate the normalisation constants once and for all for a given random graph ensemble and set of kernel parameters $p$ and $a$ as we did for $\\kappa$ in the globally normalised scenario. Instead we have to account for the dependence of the $\\kappa_i$ on the specific graph instance.\n\nOn a single graph instance, this stumbling block can be overcome as follows. One iterates the cavity updates \\eqref{eqn:globalvariance} for the unnormalised kernel and without the training data (i.e., setting $\\kappa=1$ and $\\ni[i]=0$). The local Bayes error at vertex $i$, given by the $i$-th term in the sum from \\eqref{eqn:globalepg}, then gives us $\\kappa_i$.
Because $\\ni[i]=0$, one has to use the Woodbury trick to get well-behaved expressions in the limit where the auxiliary parameter $\\lambda\\to 0$, as explained after \\eqref{eqn:epgglobalcavity}.\n\nOnce the $\\kappa_i$ have been determined in this way, one can use them for predicting the Bayes error for the scenario we really want to study, that is, using a locally normalised kernel and incorporating the training data.\nThe relevant partition function is the analogue of \\eqref{eqn:Zglobalbinom} for local normalisation. Dropping the prefactors, the resulting $Z$ can be written as\n\\begin{multline}\\label{eqn:Zlocalfinal}\nZ \\propto \\int \\prod_{q=0}^p\n\\rmd\\bm{h}^{q}\\prod_{q=1}^p \\rmd\\bm{\\hat{h}}^{q}\\prod_{i}\\exp\\left(-\\frac{1}{2}\\sum_{q=0}^{p}c_{q}d_{i}h^{0}_{i}h_{i}^{q}\n-\\frac{1}{2}\\frac{d_{i}\\kappa_{i}(h_{i}^{0})^{2}}{\\ni[i]\/\\sigma^{2}+\\lambda}\n+ \\rmi\\sum_{q=1}^pd_i\\hat{h}^{q}_{i}h_{i}^q\\right)\\\\\n\\prod_{(i,j)}\\exp\\left(-\\rmi\\sum^p_{q=1}(\\hat{h}_{i}^{q}h_{j}^{q-1} +\n\\hat{h}_{j}^{q}h_{i}^{q-1})\\right),\n\\end{multline}\nwhere we have rescaled $h_{i}^{q}$ to $d_{i}^{1\/2}\\kappa_{i}^{1\/2}h_{i}^{q}$ and $\\hat{h}_{i}^{q}$ to\n$d_{i}^{1\/2}\\kappa_{i}^{-1\/2}\\hat{h}_{i}^{q}$. Given that the $\\kappa_i$ have already been determined, this is a graphical model for which marginals can be calculated by iterating to a fixed point the equations for the cavity marginals:\n\\begin{multline}\\label{eqn:localcavity}\n P_{\\textrm{loc},j}^{(i)}(\\bm{h}_j,\\bm{\\hat{h}}_{j}|\\x) \\propto \\exp\\left(-\\frac{1}{2}\\sum_{q=0}^{p}c_{q}d_{j}h^{0}_{j}h_{j}^{q}\n -\\frac{1}{2}\\frac{d_{j}\\kappa_{j}(h_{j}^{0})^{2}}{\\ni[j]\/\\sigma^{2}+\\lambda}\n+ \\rmi\\sum_{q=1}^{p}\nd_j\\hat{h}^{q}_{j}h_{j}^q\\right)\\\\\n\\int\\prod_{k\\in\\mathcal{N}(j)\\backslash i} \\rmd\\bm{h}_{k}\\rmd\\bm{\\hat{h}}_{k}\\,\n\\exp\\left(-\\rmi\\sum_{q=1}^{p}(\\hat{h}_{j}^{q}h_{k}^{q-1} +\n\\hat{h}_{k}^{q}h_{j}^{q-1})\\right)P_{\\textrm{loc},k}^{(j)}(\\bm{h}_k,\\bm{\\hat{h}}_k|\\x).\n\\end{multline}\nAs in Section \\ref{sec:cavityglobal} these update equations are solved by cavity marginals of complex Gaussian form, and so we can simplify them to updates for the covariance matrices:\n\\begin{equation}\n \\bm{V}_{\\textrm{loc},j}^{(i)}= \\left(\\bm{O}_{\\textrm{loc},j}\n -\\sum_{k\\in\\mathcal{N}(j)\\backslash i}\\bm{X}\\bm{V}_{k,\\textrm{loc}}^{(j)}\\bm{X}\\right)^{-1}.\n\\label{eqn:localvariance}\n\\end{equation}\nHere $\\bm{X}$ is defined as in Equation~\\eqref{eqn:globalOX} and $\\bm{O}_{\\textrm{loc},j}$ is the obvious analogue of\n$\\bm{O}_j$ also defined in Equation~\\eqref{eqn:globalOX}; specifically, $\\kappa$ is replaced by $\\kappa_j$. Once the update equations have converged, one can calculate the Bayes error from a similarly adapted version of \\eqref{eqn:globalepg}.\n\nThe above procedure for a single fixed graph now has to be extended to the case of an ensemble of large random graphs characterised by some degree distribution $p(d)$. The outcome of the first round of cavity updates, for the unnormalised kernel without training data, is then represented by a distribution of cavity covariances $\\bm{V}$, while the second one gives a distribution of cavity covariances $\\bm{V}_{\\textrm{loc}}$ for the locally normalised kernel, with training data included. 
Importantly, these message distributions are coupled to each other via the graph structure, so we need to look at the joint distribution\n$W(\\bm{V}_{\\textrm{loc}},\\bm{V})$.\n\nDetailed analysis using the replica method \\citep{Urry2012} shows that the correct fixed point equation updates the $\\bm{V}$-messages as in the globally normalised case with $\\ni=0$. The second set of local covariances, $\\bm{V}_{{\\rm loc}}$, are then updated according to \\eqref{eqn:localvariance}, with a normaliser calculated using the marginals from the $d-1$ $\\bm{V}$-covariances and an additional `counterflow' covariance generated from $W(\\bm{V}) = \\int d\\bm{V}_{\\textrm{loc}}W(\\bm{V}_{\\textrm{loc}},\\bm{V})$, subject to the constraint that the local marginals of the neighbours are consistent. We find in practice that the consistency constraint can be dropped and the fixed point equation for the distribution of the two sets of messages can be approximated by\n\\begin{multline}\\label{eqn:localensembleupdate}\n W(\\bm{V}_{\\textrm{loc}},\\bm{V}) =\\left\\langle\\sum_{d}\\frac{p(d)d}{\\bar{d}}\\int\n \\prod_{k=1}^{d-1} \\rmd\\bm{V}_k\\rmd\\bm{V}_{\\textrm{loc},k}\\rmd\\bm{V}_{d}\\, \\prod_{k=1}^{d-1}W(\\bm{V}_{\\textrm{loc},k},\\bm{V}_{k})W(\\bm{V}_{d})\\right.\\\\\n \\left.\\delta\\left(\\bm{V}_{\\textrm{loc}} -\\left(\\bm{O}_{\\textrm{loc},j}\n -\\sum_{k=1}^{d-1}\\bm{X}\\bm{V}_{k,\\textrm{loc}}^{(j)}\\bm{X}\\right)^{-1}\\right)\n\\delta\\left(\\bm{V} -\\left(\\bm{O} - \\sum_{k=1}^{d-1}\\bm{X}\\bm{V}_{k}\\bm{X}\\right)^{-1}\\right)\\right\\rangle_{\\ni}.\n\\end{multline}\nOne sees that if one marginalises over $\\bm{V}_{\\textrm{loc}}$, then one obtains exactly the same condition on $W(\\bm{V})$ as before in the globally normalised kernel case (but with $\\kappa=1$ and $\\nu=0$), see \\eqref{eqn:globalensembleupdate}. This reflects the fact that the cavity updates for the first set of messages on a single graph do not rely on any information about the second set.\nThe first delta function in \\eqref{eqn:localensembleupdate} corresponds to the fixed point condition for this second set of cavity updates. This condition depends, via the value of the local $\\kappa$, on the $\\bm{V}$-cavity covariances:\n\\begin{equation}\\label{eqn:kappa_i}\n\\kappa = \\frac{1}{d(\\bm{M}_d^{-1})_{00}}.\n\\end{equation}\nIt may seem unusual that $d$ copies of $\\bm{V}$ enter here; $\\bm{V}_d$ represents the cavity covariance from the first set that is {\\em received} from the vertex to which the new message $\\bm{V}_{\\textrm{loc}}$ is being {\\em sent}. While this counterflow appears to run against the basic construction of the cavity or belief propagation method, it makes sense here because the first set of cavity messages (or equivalently the distribution $W(\\bm{V})$) reaches a fixed point that is independent of the second set, so the counterflow of information is only apparent. The reason why knowledge about $\\bm{V}_d$ is needed in the update is that $\\kappa$ is the variance of a full marginal rather than a cavity marginal.\n\nSimilarly to the case of global normalisation, \\eqref{eqn:localensembleupdate} can be solved by looking for a fixed point of $W(\\bm{V}_{\\textrm{loc}},\\bm{V})$ using population dynamics. 
Updates are made by first updating $\\bm{V}$ using the covariance update \\eqref{eqn:globalvariance} (with $\\kappa=1$ and $\\ni=0$) and then updating $\\bm{V}_{\\textrm{loc}}$ using \\eqref{eqn:localvariance}, with $\\kappa_j$ there computed from \\eqref{eqn:kappa_i}.\n\nOnce a fixed point has been calculated for the covariance distribution we apply the Woodbury formula to \\eqref{eqn:globalepg} in a similar manner to Section \\ref{sec:cavityglobal} to give the prediction for the learning curve for GP regression with a locally normalised kernel. The result for the Bayes error becomes\n\\begin{equation}\\label{eqn:epglocalwood}\n\\epsilon = \\left\\langle\n\\sum_{d}p(d)\n\\int \\prod_{k=1}^{d} \\rmd\\bm{V}_{\\textrm{loc},k}\\rmd\\bm{V}_k\\, \\prod_{k=1}^{d}W(\\bm{V}_{\\textrm{loc},k},\\bm{V}_k)\\\n\\frac{1}{\\ni\/\\sigma^2 + (\\bm{M}_{d,\\textrm{loc}}^{-1})_{00}\/(\\bm{M}_d^{-1})_{00}}\\right\\rangle_{\\ni}.\n\\end{equation}\n\nLearning curve predictions for GPs with locally normalised kernels as they result from the cavity approach described above are shown in Figure\n\\ref{fig:locallc}. The figure shows numerically simulated learning curves and the cavity prediction, both for Erd\\H{o}s-R\\'enyi random graphs (left) and power law generalised random graphs (centre) of size $V=500$. As for the globally normalised case one sees that the cavity predictions\nare quantitatively very accurate even with the simplified update Equation~\\eqref{eqn:localensembleupdate}. They capture all aspects of the learning curves, both qualitatively and quantitatively, including, for example, the shoulder in the curves from disconnected single vertices, a feature discussed in more detail below.\n\\begin{figure}\n \\input{figs\/gnuplot\/local.tex}\n \\caption{(Left) Learning curves for GP regression with locally normalised kernels with $p=10$, $a=2$ on Erd\\H{o}s-R\\'enyi random graphs with mean degree 3, for a range of noise levels $\\sigma^2$. Solid lines: numerically simulated learning\ncurves for graphs of size $V=500$, dashed lines: cavity predictions (see Section \\protect\\ref{sec:cavitylocal}); note these are mostly visually indistinguishable from the simulation results.\n(Left inset) Dotted lines\nshow single vertex contributions to the learning curve (solid line). (Centre) As (left) for power law generalised random graphs with exponent 2.5 and cutoff 2.\n(Right top) Comparison between learning curves for locally (dashed line) and globally (solid line) normalised kernels for\nErd\\H{o}s-R\\'enyi random graphs. (Right bottom) As (right top) for power law random graphs. \\label{fig:locallc}\n\\label{fig:localpoisson} \\label{fig:localpowerlaw} \\label{fig:globallocaloverlay}}\n\\end{figure}\n\nThe fact that the cavity predictions of the learning curve for a locally normalised kernel are indistinguishable from the numerically simulated learning curves in Figure \\ref{fig:locallc} leads us to believe that the simplification made by dropping the consistency requirement in \\eqref{eqn:localensembleupdate} is in fact exact. This is further substantiated by looking not just at the average of the posterior variance over vertices, which is the Bayes error, but its distribution across vertices. As shown in Figure \\ref{fig:posteriorvariance}, the cavity predictions for this distribution are in very good agreement with the results of numerical simulations.
This holds not only for the two values of $\\nu$ shown, but along the entire learning curve.\n\n\\begin{figure}\n \\input{figs\/posterior_var_plots\/posteriorvar.tex}\n\\caption{(Left) Grey: histogram of posterior variances at $\\nu=1.172$ for the locally normalised random walk kernel with $a=2$, $p=10$, averaged over ten samples each\nof teacher functions, data and Erd\\H{o}s-R\\'enyi graphs with mean degree 3 and $V=1000$ vertices. Black: cavity prediction\nfor this distribution in the large graph limit. (Right) As (left) but for $\\nu=6.210$. \\label{fig:posteriorvariance}}\n\\end{figure}\n\n\n\\section{A Qualitative Comparison of Learning with Locally and Globally Normalised Kernels}\\label{sec:qualcompare}\n\nThe cavity approach we have\ndeveloped gives very accurate predictions for learning curves for GP\nregression on graphs using random walk kernels. This is true for both\nglobal and local normalisations of the kernel. We argued in\n\\Sref{sec:kernelnorm} that the local normalisation is much more\nplausible as a probabilistic model, because it avoids variability in\nthe local prior variances that is non-trivially related to the local\ngraph structure and so difficult to justify from prior knowledge. We\nnow compare the qualitative effects of the two\ndifferent normalisations on GP learning.\n\nIt is not a simple matter to say which kernel is\n`better', the locally or globally normalised one. Since we have dealt with the matched case, where for each kernel the target functions are sampled from a GP prior with that kernel as covariance function, it would not make sense to say the better kernel is the one that gives the lower Bayes error for a given number of examples, as the Bayes error reflects both the complexity of the target function and the success in learning it. A more definite answer could be obtained only empirically, by running GP regression with local and global kernel normalisation on the same data sets and comparing the prediction errors and also the marginal data likelihood. The same approach could also be tried with synthetic data sets generated from GP priors that are mismatched to both priors we have considered, defined by the globally and locally normalised kernel, though justifying what is a reasonable choice for the prior of the target function would not be easy.\n\nWhile a detailed study along the lines above is outside the scope of this paper, we can nevertheless at least qualitatively study the effect of the kernel normalisation, to understand to what extent the corresponding priors define significantly different probabilistic models.\nFigure \\ref{fig:globallocaloverlay} (right top and bottom) overlays the learning curves for global and local kernel normalisations, for an Erd\\H{o}s-R\\'enyi and a power law generalised random graph respectively. There are qualitative differences in the shapes of the learning curves, with the ones for the locally normalised kernel exhibiting a shoulder around $\\nu=2$. This shoulder is due to the proper normalisation of isolated vertices to unit prior variance; by contrast, as shown earlier in Figure \\ref{fig:poissonvariance} (left), global normalisation gives too small a prior variance to such vertices. The inset in Figure \\ref{fig:localpoisson} (left) shows the expected learning curve contributions from all locally normalised isolated vertices (single vertex subgraphs) as dotted lines.
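These single vertex contributions have a simple closed form; the short derivation here is ours, under the model assumptions already stated. An isolated vertex has unit prior variance under local normalisation, and after receiving $n$ noisy examples of its own function value its posterior variance is $\\sigma^{2}\/(\\sigma^{2}+n)$. Averaging over $n\\sim\\textrm{Poisson}(\\nu)$ and weighting by the fraction $p(0)$ of isolated vertices gives\n\\begin{equation*}\n\\epsilon_{\\textrm{iso}}(\\nu) = p(0)\\sum_{n=0}^{\\infty}\\frac{e^{-\\nu}\\nu^{n}}{n!}\\,\\frac{\\sigma^{2}}{\\sigma^{2}+n},\n\\end{equation*}\nwhich stays close to $p(0)$ until $\\nu$ is large enough for a typical isolated vertex to have seen an example, consistent with the location of the shoulder in the curves.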
After the GP learns the rest of the graph to sufficient accuracy, the single vertex error dominates the learning curve until these vertices have typically seen at least one example. Once this point has been passed, the dominant error comes once more from the giant connected component of the graph, and the GP learns in a similar manner to the globally normalised case. A similar effect, although not plotted, is seen for the generalised random graph case.\n\nWe can extend the scope of this qualitative comparison by examining how a student GP whose kernel has one normalisation performs when learning from a teacher whose kernel has the other normalisation. This is a case of \\emph{model mismatch}; our theory so far does not extend to this scenario, but we can obtain learning curves by numerical simulation.\nFigure \\ref{fig:stglobalpoissonmismatch} (left) shows the case of GP students with a globally normalised kernel learning from a teacher with a locally normalised kernel on an Erd\\H{o}s-R\\'enyi graph. The learning curves for the mismatched scenario are very different from those for the matched case (Figures \\ref{fig:globallc} and \\ref{fig:locallc}), showing an increase in error as $\\nu$ approaches unity. The resulting maximum in the learning curve again emphasises that the two choices of normalisation produce distinctly different probabilistic models. Similar behaviour can be observed for the case of power law generalised random graphs, shown in Figure \\ref{fig:stglobalpowerlawmismatch} (right) and for the case of GP students with a locally normalised kernel learning from a teacher with a globally normalised kernel, shown in Figure \\ref{fig:stlocalpoissonmismatch}. In all cases close inspection (see Appendix \\ref{app:mismatchbump}) shows that the error maximum is caused by `dangling edges' of the graph, that is, chains of vertices (with degree two) extending away from the giant graph component and terminating in a vertex of degree one.\n\nAs a final qualitative comparison between globally and locally normalised kernels, Figure \\ref{fig:poissonlocalvariances} shows the variance of local posterior variances. This measures how much the local Bayes error typically varies from vertex to vertex, as a function of the data set size. Plausibly one would expect that this error variance is low initially when prediction on all vertices is equally uncertain. For large data sets the same should be true because errors on all vertices are then low. In an intermediate regime the error variance should become larger because examples will have been received on or near some vertices but not others. As Figure \\ref{fig:poissonlocalvariances} shows, for kernels with local normalisation we find exactly this scenario, both for Erd\\H{o}s-R\\'enyi and power law random graphs. The error variance is low for small $\\nu=N\/V$, increasing to a peak at $\\nu\\approx 0.2$ and finally decreasing again.\n\nThese results can now be contrasted with those for globally normalised kernels, also displayed in Figure \\ref{fig:poissonlocalvariances}. Here the error variance is largest at $\\nu=0$ and decays from there. This means that the initial variance in the local prior variances is so large that any effects from the uneven distribution of example locations in any given data set remain sub-dominant throughout.
We regard this as another indication of the probabilistically implausible character of the large spread of prior variances caused by global normalisation.\n\n\\begin{figure}\n\\input{figs\/gnuplot\/stglobal_mismatch.tex}\n\\caption{(Left) Numerically simulated learning curves for a GP with a globally normalised kernel with $p=10$ and $a=2$, on Erd\\H{o}s-R\\'enyi random graphs with mean degree 3 for a range of noise levels. The teacher GP has a locally normalised kernel with the same parameters. (Right) As (left) but for power law generalised random graphs with exponent 2.5 and cutoff 2. \\label{fig:stglobalpowerlawmismatch}\n\\label{fig:stglobalpoissonmismatch}}\n\\end{figure}\n\n\\begin{figure}\n\\input{figs\/gnuplot\/stlocal_mismatch.tex}\n\\caption{(Left) Numerically simulated learning curves for a GP with a locally normalised kernel with $p=10$ and $a=2$, on Erd\\H{o}s-R\\'enyi random graphs with mean degree 3 for a range of noise levels. The teacher GP has a globally normalised kernel with the same parameters. (Right) As (left) but for power law generalised random graphs with exponent 2.5 and cutoff 2. \\label{fig:stlocalpowerlawmismatch}\n\\label{fig:stlocalpoissonmismatch}}\n\\end{figure}\n\n\\begin{figure}\n\\input{figs\/gnuplot\/variancevariance.tex}\n\\caption{(Left) Error variance for GP regression with locally (stars) and globally (circles) normalised kernels against $\\nu$, on Erd\\H{o}s-R\\'enyi random graphs, and for matched learning with $p=10$, $a=2$, $\\sigma^{2}=0.1$. (Right) As (left) but for power law generalised random graphs with exponent 2.5 and cutoff 2.\\label{fig:poissonlocalvariances} \\label{fig:powerlawlocalvariances}}\n\\end{figure}\n\n\\section{Conclusions and Further Work}\\label{sec:conclusions}\n\nIn this paper we studied random walk kernels and their application to GP regression. We began, in Section \\ref{sec:randomwalk}, by studying the random walk kernel, with a focus on applying this to $d$-regular trees and graphs. We showed that the kernel exhibits a rather subtle approach to the fully correlated limit; this limit is reached only beyond a graph-size dependent threshold for the kernel range $p\/a$, where cycles become important. If $p\/a$ is large but below this threshold, the kernel reaches a non-trivial limiting shape.\n\nIn Section \\ref{sec:lc} we moved on to the application of random walk kernels to GP regression. We showed, in Section \\ref{sec:kernelnorm}, that the more typical approach to normalisation, that is, scaling the kernel globally to a desired average prior variance, results in a large spread of local prior variances that is related in a complicated manner to the graph structure; this is undesirable in a prior. As a simple remedy we suggested local normalisation, where the raw kernel is normalised by its local prior variance so that the prior variance becomes the same at every vertex.\n\nIn order to get a deeper understanding of the performance of GPs with random walk kernels we then studied the learning curves, that is, the mean Bayes error as a function of the number of examples. We began in Section \\ref{sec:evalpred} by applying a previous approximation due to \\citet{Sollich1999a} and \\citet{Malzahn2005} to the case of discrete inputs that are vertices on a graph. We demonstrated numerically that this approximation is accurate only for small and large numbers of training examples per vertex, $\\nu=N\/V$, while it fails in the crossover between these two regimes.
The outline derivation of this approximation suggested how one might improve it: one has to exploit fully the structure of random graphs, using the cavity method, thus avoiding the approximation of the average over data sets. In Section \\ref{sec:cavitypred} we implemented this programme, beginning in Section \\ref{sec:cavityglobal} with the case of global normalisation. We showed that by Fourier transforming the prior and introducing $2p$ additional variables at each vertex one can rewrite the partition function in terms of a complex-valued Gaussian graphical model, where the marginals that are required to calculate the Bayes error can be found using the cavity method, or equivalently belief propagation. In Section \\ref{sec:cavitylocal} we tackled the more difficult scenario of a locally normalised kernel. This required two sets of cavity equations. The first serves to calculate the local normalisation factors. The second one then combines these with the information about the data set to find the local marginal distributions. One might be tempted to consider applying our methods to a lattice so that one could make an estimate of the learning curves for the continuous limit, that is, regression with a squared exponential kernel for inputs in $\\mathbb{R}^2$ or similar. Sadly, however, since the cavity method requires graphs to be tree-like, this is not possible.\n\nFinally in Section \\ref{sec:qualcompare} we qualitatively compared GPs with kernels that are normalised globally and locally. We showed that learning curves are indeed qualitatively different. In particular, local normalisation leads to a shoulder in the learning curves owing to the correct normalisation of the single vertex disconnected graph components. We also considered numerically calculated mismatch curves. The mismatch caused a maximum to appear in the learning curves, as a result of the large differences in the teacher and student priors. Lastly we looked at the variance among the local Bayes errors, for GPs with both globally and locally normalised kernels. Plausibly, locally normalised kernels lead to this error variance being maximal for intermediate data set sizes. This reflects the variation in number of examples seen at or near individual vertices. For globally normalised kernels, the error variance inherited from the spread of local prior variances is always dominant, obscuring any signatures from the changing `coverage' of the graph by examples.\n\nIn further work we intend to extend the cavity approximation of the learning curves to the case of mismatch, where teacher and student have different kernel hyperparameters. It would also be interesting to apply the cavity method to the learning curves of GPs with random walk kernels on more general random graphs, like those considered in \\citet{Rogers2010} and \\citet{Kuhn2011}. This would enable us to consider graphs exhibiting some community structure. Looking further ahead, preliminary work has shown that it should be possible to extend the cavity learning curve approximation to the problem of graph mismatch, where the student has incomplete information about the graph structure of the teacher.\n\n\\section*{Introduction}\\label{intro}\n\\subsection*{Background and motivation}\n\nLet $L \\subset S^3$ be a framed link and $M_L$ the closed orientable $3$-manifold obtained from $S^3$ by surgery along $L$.
By a theorem of Lickorish and Wallace, any closed connected orientable 3-manifold arises in this way \\cite{wallace1960,lickorish1962}. Moreover, the $3$-manifolds $M_L$ and $M_{L^{\\prime}}$ are homeomorphic if and only if the framed links $L$ and $L^{\\prime}$ are related by a finite sequence of Kirby moves \\cite{kirby1978calculus}. These results are the starting point for a knot theoretic approach to problems and constructions in $3$-manifold topology. For example, it follows from the previous two results that an isotopy invariant of framed links which is also invariant under Kirby moves defines an invariant of $3$-manifolds, thereby emphasizing the topological importance of link invariants.\n\nReshetikhin and Turaev constructed a large class of link invariants using the theory of ribbon categories \\cite{reshetikhin1990ribbon}. Associated to each ribbon category $\\mathcal{D}$ is a ribbon functor $F_{\\mathcal{D}} : \\mathbf{Rib}_{\\mathcal{D}} \\rightarrow \\mathcal{D}$ with domain the category of $\\mathcal{D}$-colored ribbon graphs. Interpreting an isotopy class of a $\\mathcal{D}$-colored framed link $L$ as a $(0,0)$-tangle, and so an endomorphism of the unit object $\\mathbb{I} \\in \\mathbf{Rib}_{\\mathcal{D}}$, produces an invariant $F_{\\mathcal{D}}(L) \\in \\textnormal{End}_{\\mathcal{D}}(\\mathbb{I})$ of $L$, the \\emph{Reshetikhin--Turaev invariant}. The invariant $F_{\\mathcal{D}}(L)$ is computed as follows. Choose a regular diagram $D$ for $L$. Decompose $D$ into elementary pieces consisting of cups, caps, simple crossings and twists and assign to these pieces the corresponding coevaluations, evaluations, braidings and twists, respectively, of $\\mathcal{D}$. The composition of these morphisms in $\\mathcal{D}$ is $F_{\\mathcal{D}}(L)$. 
\\cref{fig:RTHopf} illustrates this procedure for the Hopf link.\n\n\\begin{figure}\n\\centering\n\\begin{equation*}\n \\begin{aligned}\n \\begin{tikzpicture}[anchorbase]\n \n \\draw [very thick] (-.25,0) to [out=270,in=130] (-.05,-.55);\n \\draw [very thick, postaction={decorate}, decoration={markings, mark=at position .55 with {\\arrow{<}}}] (.1,-.7) to [out=310,in=180] (.75,-1) to [out=0,in=270] (1.75,0) to [out=90,in=0] (.75,1) to [out=180,in=90] (-.25,0);\n \n \\draw [very thick, , postaction={decorate}, decoration={markings, mark=at position .7 with {\\arrow{<}}}] (.25,0) to [out=270,in=0] (-.75,-1) to [out=180,in=270] (-1.75,0) to [out=90,in=180] (-.75,1) to [out=0,in=130](-.1,.7);\n \\draw [very thick ] (.05,.55) to [out=310,in=90] (.25,0);\n \\node at (-2,0) {$V$};\n \\node at (2,0) {$W$};\n \\end{tikzpicture}\n \\rightsquigarrow \n \\begin{tikzpicture}[anchorbase]\n \n \\draw [->,very thick] (-.25,.1) to [out=90,in=270] (.25,.6) to (.25,.7);\n \\draw [very thick] (.25,.1) to [out=90,in=330] (.08,.31);\n \\draw [->,very thick] (-.08,.385) to [out=150,in=270] (-.25,.6) to (-.25,.7);\n \n \n \\draw [<-,very thick] (.25,-.1) to (.25,-.2) to [out=270,in=90] (-.25,-.7);\n \\draw [very thick] (.25,-.7) to [out=90,in=330] (.08,-.49);\n \\draw [->,very thick] (-.08,-.415) to [out=150,in=270] (-.25,-.2) to (-.25,-.1);\n \n \\draw [<-,very thick] (-1,.8) to [out=90,in=180] (-.625,1.2) to [out=0,in=90] (-.25,.8);\n \\draw [<-,very thick] (1,.8) to [out=90,in=0] (.625,1.2) to [out=180,in=90] (.25,.8);\n \n \\draw [->,very thick] (-1,-.8) to [out=270,in=180] (-.625,-1.2) to [out=0,in=270] (-.25,-.8);\n \\draw [->,very thick] (1,-.8) to [out=270,in=0] (.625,-1.2) to [out=180,in=270] (.25,-.8);\n \n \\draw [->,very thick] (-1,.7) to (-1,.1);\n \\draw [<-,very thick] (-1,-.7) to (-1,-.1);\n \\draw [->,very thick] (1,.7) to (1,.1);\n \\draw [<-,very thick] (1,-.7) to (1,-.1);\n \n \\node at (-1.25,.5) {$V$};\n \\node at (1.3,.5) {$W$};\n \\end{tikzpicture}\n \\xmapsto{F_\\mathcal{D}}\n \\begin{tikzpicture} [anchorbase]\n \n \n \n \n \\node at (0,1.1) {$\\mathrm{ev}_V \\otimes \\widehat{\\mathrm{ev}}_W$};\n \\node at (0,.4) {$\\mathrm{id}_{V^\\vee} \\otimes c_{W,V} \\otimes \\mathrm{id}_{W^\\vee}$};\n \\node at (0,-.4) {$\\mathrm{id}_{V^\\vee} \\otimes c_{V,W} \\otimes \\mathrm{id}_{W^\\vee}$};\n \\node at (0,-1.1) {$\\widehat{\\mathrm{coev}}_V \\otimes \\mathrm{coev}_W$};\n \\node at (0,-.8) {$\\circ$};\n \\node at (0,0) {$\\circ$};\n \\node at (0,.73) {$\\circ$};\n \\end{tikzpicture}\n \\end{aligned}\n\\end{equation*}\n\\caption{The Reshetikhin--Turaev invariant of a Hopf link colored by objects $V$ and $W$ of $\\mathcal{D}$. Composition in $\\mathcal{D}$ is read from bottom to top.}\n\\label{fig:RTHopf}\n\\end{figure}\n\t\nThe Reshetikhin--Turaev construction highlights the topological significance of ribbon categories. Classical representation theory produces many examples of symmetric monoidal categories: representations of groups and Lie algebras and, more generally, cocommutative Hopf algebras. Unfortunately, Reshetikhin--Turaev invariants associated to a symmetric monoidal category are uninteresting since they retain information only about the number of components of a link. On the other hand, categories of representations of quantum groups and, more generally, quasi-triangular Hopf algebras famously give rise to (non-symmetric) ribbon categories \\cite{jimbo1985aq,drinfeld1986quantum,drinfeld1990hopf,chari1994}. 
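Concretely, if $R \\in H \\otimes H$ is the universal $R$-matrix of a quasi-triangular Hopf algebra $H$, then the category of $H$-modules is braided, with braiding\n\\[\nc_{V,W}: V \\otimes W \\rightarrow W \\otimes V, \\qquad v \\otimes w \\mapsto \\tau(R(v \\otimes w)),\n\\]\nwhere $\\tau$ is the flip map. The braiding constructed in \\cref{sec:braiding} below is of exactly this form, except that there $R$ is defined directly as an operator on each tensor product of weight modules rather than as an element of the Hopf algebra itself.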
The resulting quantum invariants of links, which include the Jones and HOMFLYPT polynomials, are at the foundation of quantum topology \\cite{jones1987,freyd1985,przytycki1988,reshetikhin1990ribbon,turaev2016quantum}.\n\n\t\nMany ribbon categories arising in representation theory have the following properties:\n\\begin{enumerate}[label=(P\\arabic*)]\n \\item \\label{ite:vanQD} The category has simple objects with vanishing quantum dimension.\n\t\\item \\label{ite:nonss} The category is non-semisimple, that is, not every short exact sequence splits.\n\t\\item \\label{ite:infSimp} The category has infinitely many non-isomorphic simple objects.\n\\end{enumerate}\nFor example, the category $U_q(\\mathfrak{g}) {\\operatorname{-mod}}$ of finite dimensional representations of the quantum group $U_q(\\mathfrak{g})$ associated to a complex simple Lie (super)algebra $\\mathfrak{g}$ at a root of unity has Properties \\ref{ite:vanQD}-\\ref{ite:infSimp}. It is well-known that the Reshetikhin--Turaev invariant of a link colored by a simple object of vanishing quantum dimension is zero. For this reason, the Reshetikhin--Turaev construction is not well-suited to extracting the full topological content of categories having Property \\ref{ite:vanQD}. Properties \\ref{ite:nonss} and \\ref{ite:infSimp} do not cause problems for Reshetikhin--Turaev invariants of links but are serious obstructions to extending these invariants to $3$-manifolds. For example, these properties obstruct the definition of the Kirby color, a weighted sum of isomorphism classes of simple objects, which is crucial to the construction of $3$-manifold invariants in \\cite{reshetikhin1991invariants}.\n\nA standard approach to simultaneously eliminating Properties \\ref{ite:vanQD}-\\ref{ite:infSimp} for the category $U_q(\\mathfrak{g}) {\\operatorname{-mod}}$, with $\\mathfrak{g}$ a simple Lie algebra, is semisimplification \\cite{andersen1992}, whereby simple objects of vanishing quantum dimension are formally set to zero. The semisimplified categories are, for particular roots of unity, modular tensor categories. The resulting $3$-manifold invariants comprise the top level of a three dimensional topological quantum field theory which is a mathematical model for Chern--Simons theory with gauge group the simply connected compact Lie group associated to $\\mathfrak{g}$ \\cite{witten1989,reshetikhin1991invariants}. On the other hand, for the category $U_q(\\mathfrak{g}) {\\operatorname{-mod}}$, with $\\mathfrak{g}$ a type I Lie superalgebra, typical representations have vanishing quantum dimension and semisimplification eliminates most of the interesting content of the category.\n\t\nRibbon categories with Properties \\ref{ite:vanQD}-\\ref{ite:infSimp} also arise in quantum field theory.
For example, such categories arise as line operators in Chern--Simons theories with non-compact gauge groups \\cite{witten1991,barnatan1991,rozansky1994,mikhaylov2015} and topological twists of supersymmetric quantum field theories \\cite{kapustin2009b,creutzig2021} and as modules for vertex operator algebras in non-rational (or logarithmic) conformal field theories \\cite{rozansky1993,creutzig2013,creutzig2013b}.\n\nEarly examples of knot invariants constructed from ribbon categories with Properties \\ref{ite:vanQD}-\\ref{ite:infSimp} include the work of Akutsu, Deguchi and Ohtsuki \\cite{akutsu1992} and Murakami and Murakami \\cite{murakami2001}, who defined (framed) link invariants from typical representations of the unrolled quantum group $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ at an even root of unity. A systematic program to define and study quantum invariants from ribbon categories with Properties \\ref{ite:vanQD}-\\ref{ite:infSimp}, called \\emph{renormalized Reshetikhin--Turaev theory}, was developed by Blanchet, Costantino, Geer, Patureau-Mirand and Turaev \\cite{BCGP,geer_2009,costantino2014quantum}. In the setting of links, these renormalized invariants provide non-trivial invariants of links colored by objects with vanishing quantum dimension. The goal of this paper is to give a self-contained introduction to the theory of renormalized Reshetikhin--Turaev invariants of links in the simplest case of the category of modules over $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$, following \\cite{geer_2009,costantino2015}. While this paper contains no new results, we do offer a number of new proofs of known results and give complete details where they are often not available in the literature. Some familiarity with the representation theory of $U_q(\\mathfrak{sl}_2(\\mathbb{C}))$, at the level of \\cite{jantzen}, and its associated Reshetikhin--Turaev invariants would be beneficial, but is not strictly necessary. We assume basic knowledge of Hopf algebras and monoidal categories.\n\n\\subsection*{Contents of this paper}\n\nFix an integer $r \\geq 2$ and set $q= e^{\\frac{\\pi \\sqrt{-1}}{r}}$. The De Concini--Kac quantum group $U_q(\\mathfrak{sl}_2(\\mathbb{C}))$ has generators $K^{\\pm 1}$, $E$ and $F$ with relations $KK^{-1}=1=K^{-1}K$ and\n\\begin{equation}\n\\label{eq:DCKReln}\nKE = q^2EK, \\qquad KF = q^{-2}FK, \\qquad EF - FE = \\frac{K - K^{-1}}{q - q^{-1}}.\n\\end{equation}\nThe \\emph{unrolled quantum group} $U_q^H(\\mathfrak{sl}_2(\\mathbb{C}))$, as introduced in \\cite{geer_2009}, is defined similarly to the De Concini--Kac quantum group but with an additional generator $H$, thought of as a logarithm of $K$, which commutes with $K$ and satisfies the classical limit of the first two relations \\eqref{eq:DCKReln}:\n\\[\n[H,E] = 2E, \\qquad [H,F]=-2F.\n\\]\nThe algebra of primary interest in this paper is the \\emph{restricted unrolled quantum group} $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$, defined to be the quotient of $U^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ by the relations $E^r = F^r = 0$. A $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-module $V$ is called a \\emph{weight module} if it is a direct sum of $H$-eigenspaces and $K=q^H$ as operators on $V$. The category $\\mathcal{C}$ of finite dimensional weight modules over $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ is the central algebraic object of this paper.\n\t\n\\cref{sunrolled} is devoted to a detailed study of $\\mathcal{C}$. 
A natural Hopf algebra structure on $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ gives $\\mathcal{C}$ the structure of a rigid monoidal abelian category. We use Verma modules, which are finite dimensional due to the relations $E^r=F^r=0$, to classify simple objects of $\\mathcal{C}$ in \\cref{simplemodules}. The result is that there is a discrete family of simple modules $S^{lr}_n$ of highest weight $n + lr$ and dimension $n+1$, $l \\in \\mathbb{Z}$, $0 \\leq n \\leq r-2$, and a continuous family of simple Verma modules $V_{\\alpha}$ of highest weight $\\alpha + r -1$, $\\alpha \\in \\mathbb{C} \\setminus \\mathbb{Z} \\cup r \\mathbb{Z}$, and dimension $r$.\n \nTracking $H$-weights modulo $2 \\mathbb{Z}$ defines a $\\mathbb{C} \\slash 2 \\mathbb{Z}$-grading $\\mathcal{C} = \\bigoplus_{\\overline{\\alpha} \\in \\mathbb{C} \\slash 2 \\mathbb{Z}} \\mathcal{C}_{\\overline{\\alpha}}$ which is compatible with the rigid monoidal structure. While the category $\\mathcal{C}$ is not semisimple, it is \\emph{generically semisimple} in the sense that most homogeneous subcategories $\\mathcal{C}_{\\overline{\\alpha}} \\subset \\mathcal{C}$ are semisimple. More precisely, we prove in \\cref{Cisgss} that $\\mathcal{C}_{\\overline{\\alpha}}$ is semisimple unless $\\overline{\\alpha} \\in \\mathbb{Z} \\slash 2 \\mathbb{Z}$. \\cref{ribbon} states that $\\mathcal{C}$ is braided. A complete proof of this statement does not seem to be in the literature. The proof we present is elementary and self-contained. The form of the braiding is motivated by the well-known universal $R$-matrix of the $\\hbar$-adic quantum group of $\\mathfrak{sl}_2(\\mathbb{C})$ \\cite{drinfeld1986quantum,ohtsuki2002quantum}. In \\cref{ribbon theorem} we prove that $\\mathcal{C}$ is ribbon. The candidate ribbon structure is based on the twist associated to the rigid monoidal structure, namely the right partial trace of the braiding. We use generic semisimplicity of $\\mathcal{C}$ to prove that this twist is compatible with duality by checking that this is so generically and concluding, via a general result of \\cite{geer_2017}, that this extends to the entirety of $\\mathcal{C}$. The results of \\cref{sunrolled} can be summarized as follows.\n\n \n\t\n\\begin{introtheorem}\n The category $\\mathcal{C}$ is a $\\mathbb{C}\/2\\mathbb{Z}$-graded generically semisimple ribbon category.\n\\end{introtheorem}\n\t\nIn \\cref{rti} we recall standard material related to the Reshetikhin--Turaev functor $F_{\\mathcal{D}}: \\mathbf{Rib}_\\mathcal{D} \\rightarrow \\mathcal{D}$ associated to a ribbon category $\\mathcal{D}$. Central to the renormalized theory is the well-known statement, proved in this paper as \\cref{cutting}, that if $V \\in \\mathcal{D}$ is a simple object of a $\\mathbb{C}$-linear ribbon category, $L$ is a $\\mathcal{D}$-colored link and $T$ is a $(1,1)$-tangle with closure $L$ and open strand colored by $V$, then\n\\begin{equation*}\n\tF_\\mathcal{D}(L) = \\textnormal{qdim}_{\\mathcal{D}}(V) F_\\mathcal{D}(T).\n\\end{equation*}\nHere both sides of the equation are identified with the scalar by which they act. In particular, if $\\textnormal{qdim}_{\\mathcal{D}}(V)=0$, then $F_\\mathcal{D}(L)$ vanishes, while $F_\\mathcal{D}(T)$ need not. We prove in \\cref{modified_invariant} that, if $L$ is a knot, then $F_\\mathcal{D}(T)$ is an invariant of $L$.\n\t\nIn \\cref{mqi} we extend the invariant $L \\mapsto F_{\\mathcal{D}}(T)$ from framed knots to framed links.
To clarify the exposition, we restrict attention to $\\mathcal{D}=\\mathcal{C}$, the category of weight $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-modules. The main obstacle in this extension is that cutting a link $L$ with multiple components produces a $(1,1)$-tangle whose isotopy type depends on the component which is cut. Ambidextrous modules are the key to overcoming this obstacle. A simple $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-module $V$ is called \\emph{ambidextrous} if the equality\n\\begin{equation*}\n\t\\begin{aligned}\n\t\tF_\\mathcal{C}\\left(\n\t \\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\\draw [<-, very thick] (-1.25,0) to [out=90, in=180] (-.75,1) to [out=0,in = 90] (-.25,.25);\n\t\t\t\\draw [very thick] (-1.25,0) to [out=270, in=180] (-.75,-1) to [out=0,in=270] (-.25,-.25);\n\t\t\t\\draw [->, very thick] (.25,.25) to [out=90, in=230] (.5,1);\n\t\t\t\\draw [very thick] (.25,-.25) to [out=270, in=130] (.5,-1);\n\t\t\t\\node at (-1.7,0) {$V$};\n\t\t\t\\node at (.8,-.9) {$V$};\n\t\t\t\\node at (0,0) {$T$};\n\t\t\\end{tikzpicture}\n\t\t\\right)\n\t\t&=F_\\mathcal{C}\\left(\n\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\\draw [->, very thick] (-.25,.25) to [out=90, in=310] (-.5,1);\n\t\t\t\\draw [very thick] (-.25,-.25) to [out=270, in=50] (-.5,-1);\n\t\t\t\\draw [<-, very thick] (1.25,0) to [out=90, in=0] (.75,1) to [out=180,in = 90] (.25,.25);\n\t\t\t\\draw [very thick] (1.25,0) to [out=270, in=00] (.75,-1) to [out=180,in=270] (.25,-.25);\n\t\t\t\\node at (1.7,0) {$V$};\n\t\t\t\\node at (-.8,-.9) {$V$};\n\t\t\t\\node at (0,0) {$T$};\n\t\t\\end{tikzpicture}\n\t\t\\right)\n\t\\end{aligned}\n\\end{equation*}\nof endomorphisms of $V$ holds for all $(2,2)$-tangles $T$ whose open strands are colored by $V$. We prove in \\cref{manyAmbi} that all simple $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-modules are ambidextrous.\n\t\nDefine a function $S': \\mathbb{C} \\times \\mathbb{C} \\rightarrow \\mathbb{C}$ by\n\\begin{equation*}\n\t\\begin{aligned}\n\t S'(\\beta, \\alpha) = \\,\tF_\\mathcal{C}\\left(\n\t \\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-.1,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (0,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (.1,-.5);\n\t\t\t\\draw [very thick] (0,-1) to (0,.4);\n\t\t\t\\draw [->, very thick] (0,.6) to (0,1);\n\t\t\t\\node at (1.2,0) {$V_\\beta$};\n\t\t\t\\node at (.45,-.9) {$V_\\alpha$};\n\t\t\\end{tikzpicture}\n\t\t\\right) \\in \\textnormal{End}_{\\mathcal{C}}(V_{\\alpha}) \\simeq \\mathbb{C}.\n\t\\end{aligned}\n\\end{equation*}\nFor a fixed ambidextrous module $V_\\eta$, the \\emph{modified quantum dimension} of a simple Verma module $V_{\\alpha}$ is defined to be $\\mathbf{d}_\\eta(\\alpha) = \\frac{S'(\\alpha, \\eta)}{S'(\\eta, \\alpha)}$. The main result of this paper can be stated as follows.\n\t\n\\begin{introtheorem}\\textup{(\\cref{invaraiant})}\n Let $V_\\eta \\in \\mathcal{C}$ be an ambidextrous module and $L$ a framed link with at least one strand colored by $V_\\alpha$ for some $\\alpha \\in \\mathbb{C} \\setminus \\mathbb{Z} \\cup r \\mathbb{Z}$.
Then the assignment\n\t\t\\begin{equation*}\n\t\t\tL \\mapsto F_\\eta'(L) := \\mathbf{d}_\\eta(\\alpha) F_\\mathcal{C}(T),\n\t\t\\end{equation*}\n\twhere $T$ is a $(1,1)$-tangle whose closure is $L$ and whose open strand is colored by $V_{\\alpha}$, is a well-defined isotopy invariant of framed colored links.\n\\end{introtheorem}\n\nIn \\cref{sec:examples} we discuss some basic properties of the renormalized invariant $F'_\\eta$, such as its behavior under connected sum and its associated skein relations, and compute some basic examples. We also show that renormalizations with respect to different ambidextrous modules $V_{\\eta}$ lead to invariants which differ by a global scalar.\n\t\nFinally, in \\cref{sec:furtherReading} we present a brief guide to further mathematical and physical applications of renormalized Reshetikhin--Turaev invariants of links.\n \n\\subsection*{Conventions} \nThe ground field is $\\mathbb{C}$. Write $\\otimes$ for $\\otimes_{\\mathbb{C}}$. All modules are left modules and finite dimensional over $\\mathbb{C}$. All categorical notions regarding monoidal categories follow \\cite{etingof2016tensor}. Given a scalar endomorphism $e$ of a vector space $V$, define $\\langle e \\rangle \\in \\mathbb{C}$ by $e = \\langle e \\rangle \\cdot \\mathrm{id}_V$.\n\t\t\n\\subsection*{Acknowledgements}\nN.G.\\ is partially supported by NSF grants DMS-1664387 and DMS-2104497. M.B.Y. is partially supported by a Simons Foundation Collaboration Grant for Mathematicians (Award ID 853541).\n\n\\addtocontents{toc}{\\protect\\setcounter{tocdepth}{2}}\n\n\\section{The unrolled quantum group $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ and its weight modules} \\label{sunrolled}\n\nFix an integer $r \\geq 2$. Set $q = e^{\\frac{\\pi \\sqrt{-1}}{r}}$.\nFor $z \\in \\mathbb{C}$, define\n\\[\nq^z = e^{\\frac{\\pi \\sqrt{-1} z}{r}}, \\qquad \\{z\\} = q^z - q^{-z}, \\qquad [z] = \\frac{\\{z\\}}{\\{1\\}}.\n\\]\nSet $\\{0\\}!=1$ and $\\{n\\}! = \\prod_{i=1}^n \\{i\\}$ for $n \\in \\mathbb{Z}_{> 0}$, and similarly for $[n]!$. For $0 \\leq k \\leq l$, set $\\genfrac{[}{]}{0pt}{}{l}{k} = \\frac{[l]!}{[k]![l-k]!}$. Note that $q$ is a primitive $2r$\\textsuperscript{th} root of unity, so that $\\{n\\} \\neq 0$ for $0 < n < r$ while $\\{r\\} = 0$; in particular, $\\{n\\}! \\neq 0$ for $0 \\leq n \\leq r-1$ and $[r] = 0$.\n\n\\subsection{The unrolled quantum group of $\\mathfrak{sl}_2(\\mathbb{C})$}\n\nWe recall the definition of the unrolled quantum group of $\\mathfrak{sl}_2(\\mathbb{C})$, as introduced in \\cite{geer_2009,costantino2015}. Precursors of the unrolled quantum group appear in work of Ohtsuki \\cite{ohtsuki2002quantum}.\n\t\n\\begin{definition} \\label{unrolled} \nThe \\emph{unrolled quantum group of $\\mathfrak{sl}_2(\\mathbb{C})$} is the unital associative algebra $U^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ generated by $K$, $K^{-1}$, $H$, $E$ and $F$ with relations\n\\[\nKK^{-1} = K^{-1}K = 1, \\qquad HK = KH,\n\\]\n\\[\nHE - EH = 2E,\n\\qquad\nHF - FH = -2F,\n\\]\n\\[\nKE = q^2EK,\n\\qquad\nKF = q^{-2}FK,\n\\]\n\\[\nEF - FE = \\frac{K - K^{-1}}{q - q^{-1}}.\n\\]\nThe \\emph{restricted unrolled quantum group} $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ is the quotient of $U^H_q (\\mathfrak{sl}_2(\\mathbb{C}))$ by the relations $E^r = F^r = 0$.\n\\end{definition}\n\t\nInformally, the generator $H$ should be viewed as a logarithm of $K$. While this constraint is not imposed at the level of algebras, it is imposed on the modules of interest in this paper.
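On such modules, the remaining relations are automatically compatible with this constraint: if $Hv = \\lambda v$ and $Kv = q^{\\lambda}v$, then $[H,E] = 2E$ gives $HEv = (\\lambda + 2)Ev$, and accordingly\n\\[\nKEv = q^{\\lambda + 2}Ev = q^2EKv,\n\\]\nrecovering the relation $KE = q^2EK$; the verification for $F$ is identical, with $q^{-2}$ in place of $q^{2}$.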
See \\cref{sec:weightMod} below.\n\t\t\nBoth $U^H_q (\\mathfrak{sl}_2(\\mathbb{C}))$ and $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ are Hopf algebras with coproduct, counit and antipode defined by\n\\begin{center}\n\t\\setlength{\\tabcolsep}{25pt}\n\t\\begin{tabular}{l l l}\n\t\t$\\Delta(E) = 1\\otimes E + E\\otimes K$, & $\\varepsilon(E) = 0$, & $S(E) = -EK^{-1}$,\\\\\n\t\t$\\Delta(F) = F\\otimes 1 + K^{-1}\\otimes F$, & $\\varepsilon(F) = 0$, & $S(F) = -KF$, \\\\\n\t\t$\\Delta(K) = K\\otimes K$, & $\\varepsilon(K) = 1$, & $S(K) = K^{-1}$, \\\\\n\t\t$\\Delta(H) = H\\otimes 1 + 1\\otimes H$, & $\\varepsilon(H) = 0$, & $S(H) = -H$.\n\t\\end{tabular}\n\\end{center}\n\t\nThe De Concini--Kac quantum group $U_q(\\mathfrak{sl}_2(\\mathbb{C}))$ is isomorphic to the Hopf subalgebra of $U^H_q (\\mathfrak{sl}_2(\\mathbb{C}))$ generated by $E,F$ and $K^{\\pm 1}$. Similarly, the restricted quantum group $\\overline{U}_q(\\mathfrak{sl}_2(\\mathbb{C}))$ is isomorphic to the Hopf subalgebra of $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ generated by $E$, $F$ and $K^{\\pm 1}$. The algebra $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ shares many properties with $\\overline{U}_q(\\mathfrak{sl}_2(\\mathbb{C}))$. For example, $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ has a Poincar\\'{e}--Birkhoff--Witt basis\n\\[\n\\{F^a H^b K^c E^d \\mid 0 \\leq a, d \\leq r-1, \\; b \\in \\mathbb{Z}_{\\geq 0}, \\; c \\in \\mathbb{Z} \\}\n\\]\nand admits a triangular decomposition\n\\[\n\\overline{U}^{H,-}_q(\\mathfrak{sl}_2(\\mathbb{C})) \\otimes \\overline{U}^{H,0}_q(\\mathfrak{sl}_2(\\mathbb{C})) \\otimes \\overline{U}^{H,+}_q(\\mathfrak{sl}_2(\\mathbb{C})) \\xrightarrow[]{\\sim} \\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))\n\\]\nwhere $\\overline{U}^{H,-}_q(\\mathfrak{sl}_2(\\mathbb{C}))$, $\\overline{U}^{H,0}_q(\\mathfrak{sl}_2(\\mathbb{C}))$ and $\\overline{U}^{H,+}_q(\\mathfrak{sl}_2(\\mathbb{C}))$ are the subalgebras of $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ generated by $F$, by $H$ and $K^{\\pm 1}$, and by $E$, respectively. For later use, let $\\overline{U}^{H}_q(\\mathfrak{b})$ be the Hopf subalgebra of $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ generated by $E$, $K^{\\pm 1}$ and $H$. \n\n\\subsection{Weight modules}\\label{sec:weightMod}\n\nRecall that all modules are assumed to be finite dimensional.\n\\begin{definition}\\label{def:weightModule}\nLet $V$ be a $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-module.\n \\begin{enumerate}\n \\item A \\emph{weight vector of weight $\\lambda \\in \\mathbb{C}$} is a non-zero vector $v \\in V$ which satisfies $Hv = \\lambda v$. If, moreover, $Ev=0$, then $v$ is called a \\emph{highest weight vector}. The subspace $V[\\lambda] = \\{v \\in V \\mid Hv = \\lambda v\\}$ is called the \\emph{weight space of weight $\\lambda$}.\n \\item \\label{ite:KasH} The module $V$ is called a \\emph{weight module} if it is the direct sum of its weight spaces, $V = \\bigoplus_{\\lambda \\in \\mathbb{C}} V[\\lambda]$, and $Kv = q^{\\lambda}v$ for all $v \\in V[\\lambda]$.\n \\item The module $V$ is called a \\emph{highest weight module} if it is generated by a highest weight vector. \\qedhere\n \\end{enumerate}\n\\end{definition}\n\t\nAll $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-modules considered in this paper are assumed to be weight modules. The second condition in \\cref{def:weightModule}(\\ref{ite:KasH}) can be written as the equality $K = q^H$ of operators on $V$.
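For instance, on a one dimensional module the relations $KE = q^2EK$ and $KF = q^{-2}FK$ force $E$ and $F$ to act by zero, and the relation $EF - FE = \\frac{K - K^{-1}}{q - q^{-1}}$ then forces $K = K^{-1}$; a one dimensional weight module of weight $\\lambda$ therefore exists if and only if\n\\[\nq^{2\\lambda} = e^{\\frac{2 \\pi \\sqrt{-1} \\lambda}{r}} = 1,\n\\]\nthat is, if and only if $\\lambda \\in r\\mathbb{Z}$. In every case, the action of $K$ on a weight module is determined by that of $H$.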
In view of this, when speaking of weight modules we often give the action of $H$ and omit that of $K$. Finally, note that a highest weight module is necessarily a weight module. \n\t\nLet $\\mathcal{C}$ be the category of weight $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-modules and their $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-linear maps. The category $\\mathcal{C}$ is $\\mathbb{C}$-linear, locally finite and abelian. The bialgebra structure of $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ makes $\\mathcal{C}$ into a monoidal category with unit object the one dimensional module $\\mathbb{C}$ on which $H$, $E$ and $F$ act by zero. The associators and unitors are as for the category of complex vector spaces and are henceforth suppressed from the notation.\n\t\nLet $V \\in \\mathcal{C}$. Denote by $V^{\\vee} \\in \\mathcal{C}$ the dual vector space $\\textnormal{Hom}_\\mathbb{C}(V, \\mathbb{C})$ with $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-module structure given by\n\\[\n(x \\cdot f)(v) = f(S(x)v), \\qquad x \\in \\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C})), \\qquad f \\in V^{\\vee}, \\qquad v \\in V.\n\\]\nGiven a basis $\\{v_i\\}_{i=1}^n$ of $V$ with dual basis $\\{v_i^\\vee\\}_{i=1}^n$ of $V^\\vee$, define\n\\[\n\\widehat{\\mathrm{ev}}_V:V \\otimes V^\\vee \\rightarrow \\mathbb{C}, \\qquad\tv\\otimes f \\mapsto f(K^{1-r}v)\n\\]\nand \n\\[\n\\widehat{\\mathrm{coev}}_V:\\mathbb{C}\\rightarrow V^\\vee\\otimes V, \\qquad\t1 \\mapsto \\sum_{i=1}^n v_i^\\vee\\otimes K^{r-1}v_i.\n\\]\nNote that $\\widehat{\\mathrm{coev}}_V$ is independent of the choice of basis. A direct check shows that $\\widehat{\\mathrm{ev}}_V$ and $\\widehat{\\mathrm{coev}}_V$ are $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-linear and satisfy the snake relations, namely, that the compositions\n\\[\nV \\xrightarrow[]{\\mathrm{id}_V \\otimes \\widehat{\\mathrm{coev}}_V} V \\otimes V^{\\vee} \\otimes V \\xrightarrow{\\widehat{\\mathrm{ev}}_V \\otimes \\mathrm{id}_V} V\n\\]\nand\n\\begin{equation}\n\\label{eq:snakeRel}\nV^{\\vee} \\xrightarrow[]{\\widehat{\\mathrm{coev}}_V \\otimes \\mathrm{id}_{V^{\\vee}}} V^{\\vee} \\otimes V \\otimes V^{\\vee} \\xrightarrow{\\mathrm{id}_{V^{\\vee}} \\otimes \\widehat{\\mathrm{ev}}_V} V^{\\vee}\n\\end{equation}\nare the respective identities. It follows that $\\widehat{\\mathrm{ev}}_V$ and $\\widehat{\\mathrm{coev}}_V$ are right duality morphisms. Define also\n\\[\n\\mathrm{ev}_V:V^\\vee \\otimes V \\rightarrow \\mathbb{C}, \\qquad f\\otimes v \\mapsto f(v)\n\\]\nand\n\\[\n\\mathrm{coev}_V:\\mathbb{C}\\rightarrow V\\otimes V^\\vee, \\qquad 1 \\mapsto \\sum_{i=1}^n{v_i\\otimes v_i^\\vee}.\n\\]\nThese are the usual left duality morphisms in the category of finite dimensional vector spaces and are easily verified to be $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-linear. It follows that the category $\\mathcal{C}$ is rigid.
Hence, $\\mathcal{C}$ is tensor in the sense of \\cite[Definition 4.1.1]{etingof2016tensor}.\n\nGiven a finite dimensional vector space $V$, write $V \\rightarrow V^{\\vee \\vee}$, $v \\mapsto \\spr{-}{v}$, for the canonical evaluation isomorphism.\n\n\\begin{lemma}\n\tThe maps $\\{p_V: V \\rightarrow V^{\\vee \\vee}\\}_{V \\in \\mathcal{C}}$ given by $p_V(v)= K^{1-r} \\spr{-}{v}$ define a pivotal structure on $\\mathcal{C}$.\n\\end{lemma}\n\\begin{proof}\n We need to verify that $\\{p_V\\}_{V \\in \\mathcal{C}}$ are the components of a monoidal natural isomorphism $p: \\mathrm{id}_{\\mathcal{C}} \\Rightarrow (-)^{\\vee} \\circ (-)^{\\vee}$. Naturality is immediate and a direct check shows that $p_V$ is $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-linear. The relation $\\Delta(K^{1-r}) = K^{1-r} \\otimes K^{1-r}$ implies the equality $p_{V \\otimes W} = p_V \\otimes p_W$, $V,W \\in \\mathcal{C}$, which is the required monoidality.\n\\end{proof}\n\nOne can readily see that the right and left duality structures defined above are compatible with the above pivotal structure, in the sense that the equalities $(\\mathrm{id}_{V^{\\vee}} \\otimes p_V) \\circ \\widehat{\\mathrm{coev}}_V = \\mathrm{coev}_{V^{\\vee}}$ and $\\widehat{\\mathrm{ev}}_V = \\mathrm{ev}_{V^{\\vee}} \\circ (p_V \\otimes \\mathrm{id}_{V^{\\vee}})$ hold for each $V \\in \\mathcal{C}$.\n \n\\subsection{Simple modules}\\label{sec:simpObj}\n\nA non-zero module $V \\in \\mathcal{C}$ is called \\emph{simple} (or \\emph{irreducible}) if it has no non-zero proper submodules. In this section, we classify simple objects of $\\mathcal{C}$. The results of this section are contained in \\cite[\\S 5]{costantino2015}, although we give different proofs.\n\t\n\\begin{lemma}\\label{highestweight}\n\tEvery simple object of $\\mathcal{C}$ is a highest weight module. \n\\end{lemma}\n\\begin{proof}\n Let $V \\in \\mathcal{C}$ be simple and $v \\in V$ a weight vector. Since $E^r=0$, there exists a minimal integer $l > 0$ such that $E^l v=0$. Then $E^{l-1} v$ is a highest weight vector and $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C})) \\cdot E^{l-1} v \\subset V$ is a non-zero submodule which, by simplicity, is equal to $V$.\n\\end{proof}\n\nLet $\\alpha \\in \\mathbb{C}$. Denote by $\\mathbb{C}_{\\alpha + r -1}$ the one dimensional weight $\\overline{U}^{H}_q(\\mathfrak{b})$-module of $H$-weight $\\alpha + r -1$ on which $E$ acts by zero.\n\t\n\\begin{definition}\n\tThe \\emph{Verma module of highest weight $\\alpha+r-1$} is the $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-module\n\t$V_\\alpha = \\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C})) \\otimes_{\\overline{U}^{H}_q(\\mathfrak{b})} \\mathbb{C}_{\\alpha + r - 1}.$\n\\end{definition}\n\t\nWrite $v_i$ for the vector $F^i \\otimes 1 \\in V_\\alpha$. The Poincar\\'{e}--Birkhoff--Witt basis for $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ shows that $\\{v_0, \\ldots, v_{r-1}\\}$ \\label{basis} is a weight basis of $V_\\alpha$; in particular, $V_{\\alpha} \\in \\mathcal{C}$. Direct calculations show that the $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-action on $V_\\alpha$ is given by\n\\[\nH v_i = (\\alpha + r - 1 - 2i) v_i, \\quad Ev_i = \\frac{\\{i\\}\\{i -\\alpha\\}}{\\{1\\}^2}v_{i-1}, \\quad Fv_i = v_{i+1},\n\\]\nwhere by convention $v_{-1}=v_{r}=0$. In particular, $V_\\alpha$ is a highest weight module generated by $v_0$.
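As a concrete illustration, take $r = 2$, so that $q = \\sqrt{-1}$ and $V_\\alpha$ has basis $\\{v_0, v_1\\}$. In this basis,\n\\[\nH = \\begin{pmatrix} \\alpha + 1 & 0 \\\\ 0 & \\alpha - 1 \\end{pmatrix}, \\qquad E = \\begin{pmatrix} 0 & [1 - \\alpha] \\\\ 0 & 0 \\end{pmatrix}, \\qquad F = \\begin{pmatrix} 0 & 0 \\\\ 1 & 0 \\end{pmatrix},\n\\]\nand the relation $EF - FE = \\frac{K - K^{-1}}{q - q^{-1}}$ reduces to the identity $[1 - \\alpha] = [\\alpha + 1]$, which holds because $[2] = q + q^{-1} = 0$ when $r = 2$.\n\nWe also note, anticipating the renormalization of \\cref{mqi}, that every Verma module has vanishing quantum dimension: since $v_i$ has weight $\\alpha + r - 1 - 2i$,\n\\[\n\\textnormal{qdim}_{\\mathcal{C}}(V_\\alpha) = \\langle \\widehat{\\mathrm{ev}}_{V_\\alpha} \\circ \\mathrm{coev}_{V_\\alpha} \\rangle = \\sum_{i=0}^{r-1} q^{(1-r)(\\alpha + r - 1 - 2i)} = q^{(1-r)(\\alpha + r - 1)} \\sum_{i=0}^{r-1} q^{2i(r-1)} = 0,\n\\]\nbecause $q^{2(r-1)}$ is a primitive $r$\\textsuperscript{th} root of unity. This instantiates Property \\ref{ite:vanQD}.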
The structure of $V_{\\alpha}$ is summarized by the diagram\n\\[\n\\begin{tikzcd}[column sep=4em]\n\t0 & v_{r-1} \\ar[loop above, \"H=\\alpha - r + 1\"] \\ar[r, bend left, \"E\"] \\ar[l, bend left, \"F\"below] & v_{r-2} \\ar[loop above, \"H=\\alpha -r + 3\"] \\ar[l, bend left, \"F\"below] \\ar[r, bend left, \"E\"above] & \\ldots \\ar[r, bend left, \"E\"above right] \\ar[l, bend left, \"F\"below] & v_{1} \\ar[loop above, \"H=\\alpha + r - 3\"] \\ar[r, bend left, \"E\"] \\ar[l, bend left, \"F\"] & v_0 \\ar[loop above, \"H=\\alpha + r -1\"] \\ar[r, bend left, \"E\"] \\ar[l, bend left, \"F\"] & 0. \\\\\n\\end{tikzcd}\n\\]\n\n\\begin{lemma}\\label{verma}\n If $V$ is a highest weight module of highest weight $\\alpha + r - 1$, then there exists a surjection $V_\\alpha \\twoheadrightarrow V$.\n\\end{lemma}\n\\begin{proof}\nBy adjunction, there is an isomorphism\n\\[\n\\textnormal{Hom}_{\\mathcal{C}}(V_{\\alpha}, V) \\simeq \\textnormal{Hom}_{\\overline{U}^{H}_q(\\mathfrak{b})}(\\mathbb{C}_{\\alpha + r - 1},V_{\\big\\vert \\overline{U}^{H}_q(\\mathfrak{b})}).\n\\]\nIt follows that $\\textnormal{Hom}_{\\mathcal{C}}(V_{\\alpha}, V)$ is isomorphic to the subspace of highest weight vectors of weight $\\alpha+r-1$ in $V$. In particular, if $v \\in V$ is a generating highest weight vector of weight $\\alpha+r-1$, then the assignment $v_0 \\mapsto v$ extends to a surjective morphism $V_{\\alpha} \\rightarrow V$ in $\\mathcal{C}$.\n\\end{proof}\n\nUsing \\cref{verma}, it is straightforward to verify that the map $v_{\\alpha,r-1}^{\\vee} \\mapsto v_{-\\alpha,0}$, where $v_{\\alpha,i}$ denotes the basis vector $v_i \\in V_{\\alpha}$, extends to a $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-module isomorphism\n\\begin{equation}\n\\label{ValphaDual}\nV_{\\alpha}^{\\vee} \\xrightarrow[]{\\sim} V_{-\\alpha}.\n\\end{equation}\n\t\nIt follows from \\cref{highestweight,verma} that any simple object of $\\mathcal{C}$ is a quotient of a unique Verma module $V_\\alpha$. In particular, a simple module has dimension at most $r$.\n\t\n\\begin{proposition} \\label{simplemodules}\nLet $\\alpha \\in \\mathbb{C}$.\n \\begin{enumerate}\n \\item \\label{eins} If $\\alpha \\notin \\mathbb{Z} \\setminus r \\mathbb{Z}$, then $V_{\\alpha}$ is simple.\n \\item \\label{zwei} If $\\alpha \\in \\mathbb{Z} \\setminus r\\mathbb{Z}$ is written in its unique form as $\\alpha = (l-1)r + n + 1$ with $0 \\leq n \\leq r-2$ and $l \\in \\mathbb{Z}$, then there exists a non-split short exact sequence $$ 0 \\rightarrow S_{r-n-2}^{(l-1)r} \\rightarrow V_\\alpha \\rightarrow S_{n}^{lr} \\rightarrow 0$$ which yields a Jordan--H\\\"{o}lder filtration of $V_{\\alpha}$.\n \\item \\label{drei} Any simple object of $\\mathcal{C}$ is isomorphic to a unique module of the form $V_\\alpha$, $\\alpha \\in \\mathbb{C} \\setminus \\mathbb{Z} \\cup r \\mathbb{Z}$, or $S_n^{lr}$, $l \\in \\mathbb{Z}$, $0 \\leq n \\leq r-2$.\n \\end{enumerate} \n\\end{proposition}\n\t\n\\begin{proof}\n\tIf $\\alpha \\notin \\mathbb{Z} \\setminus r \\mathbb{Z}$, then $\\frac{\\{i\\}\\{i - \\alpha\\}}{\\{1\\}^2} \\neq 0$ for $i=1, \\dots, r-1$, as follows from the assumption that $q$ is a primitive $2r$\\textsuperscript{th} root of unity. It follows from the explicit form of the action of $E$ on $V_{\\alpha}$ that $Ev_i \\neq 0$ for $i=1, \\dots, r-1$, whence $V_{\\alpha}$ is simple. \n\t\t\n\tIf instead $\\alpha \\in \\mathbb{Z} \\setminus r\\mathbb{Z}$, then $V_\\alpha$ has exactly one non-zero proper submodule. Indeed, write $\\alpha = (l-1)r + n + 1$ as in the statement of the proposition, so that $V_\\alpha$ is of highest weight $lr + n$.
Examining the action of $E$ on $V_\\alpha$ shows that $Ev_{n+1} = 0$ and $Ev_i \\neq 0$ if $i\\neq 0, n+1$. Hence, $S := \\text{span}\\{v_{n+1}, \\ldots, v_{r-1}\\}$ is the unique non-zero proper submodule of $V_\\alpha$. The module $S$ has dimension $r - n - 1$ and its quotient $S_n^{lr}:=V_\\alpha \/ S$ is a simple highest weight module of highest weight $lr + n$ and dimension $n+1$. By \\cref{verma}, there exists a surjection $V_{(l-1)r-n-1} \\rightarrow S$ which, by the argument of this paragraph, descends to an isomorphism $S_{r - n - 2}^{(l-1)r} \\xrightarrow[]{\\sim}S$. Finally, the uniqueness of $S$ implies that the sequence $$ 0 \\rightarrow S_{r-n-2}^{(l-1)r} \\rightarrow V_\\alpha \\rightarrow S_{n}^{lr} \\rightarrow 0$$ is non-split.\n\t\t\n\tBy \\cref{highestweight,verma}, any simple module is a quotient of a Verma module. Thus, the third statement of the proposition follows from the first two.\n\\end{proof}\n\t\n\\begin{remark}\n Since $\\overline{U}_q(\\mathfrak{sl}_2(\\mathbb{C}))$ is a Hopf subalgebra of $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$, there is a monoidal forgetful functor $\\mathcal{C} \\rightarrow \\overline{U}_q(\\mathfrak{sl}_2(\\mathbb{C})) {\\operatorname{-mod}}$. In the notation of \\cite[\\S 2.11]{jantzen}, this functor sends the simple objects $S^{lr}_n$ and $V_{\\alpha}$ of $\\mathcal{C}$ to $L(n,(-1)^l)$ and $Z_0(q^{\\alpha+r-1})$, respectively.\n\\end{remark}\n\n\\cref{simplemodules} implies that a simple object is determined up to isomorphism by its highest weight and that the simple objects $S^{lr}_n$ are neither injective nor projective.\n\t\n\\begin{proposition}\\label{valphaproj}\n If $\\alpha \\in \\mathbb{C} \\setminus \\mathbb{Z} \\cup r \\mathbb{Z}$, then $V_{\\alpha} \\in \\mathcal{C}$ is projective and injective.\n\\end{proposition}\n\\begin{proof}\nLet $f: V \\twoheadrightarrow W$ be a surjection in $\\mathcal{C}$ and $\\phi: V_{\\alpha} \\rightarrow W$ a non-zero morphism. By the proof of \\cref{verma}, the map $\\phi$ is determined by a highest weight vector $\\phi(v_0)=w \\in W$ of weight $\\alpha+r-1$. Surjectivity of $f$ implies that $w$ has a preimage under $f$, say $v$, which can be chosen of weight $\\alpha+r-1$ and which satisfies $Ev \\in \\ker f$ since $f(Ev) = Ew = 0$.\n\nLet $\\xi = Ev$, which is of weight $\\alpha+r+1$ and satisfies $E^{r-1} \\xi=0$. For any $a_0, \\dots, a_{r-2} \\in \\mathbb{C}$, the vector\n\\[\nv^{\\prime} = v + \\sum_{i=0}^{r-2} a_i F^{i+1}E^i \\xi\n\\]\nis of weight $\\alpha+r-1$ and satisfies $f(v^{\\prime})=w$. Using \\cite[\\S 1.3]{jantzen}, we compute\n\\[\nE v^{\\prime}\n=\n\\xi + \\sum_{i=0}^{r-2} a_i(F^{i+1}E^{i+1} \\xi + [i+1] [\\alpha+r+1+i] F^i E^i \\xi).\n\\]\nThen $Ev^{\\prime}=0$ if and only if the recursive equations\n\\[\na_i = - [i+2][\\alpha+r+2+i] a_{i+1} \\qquad i=-1, \\dots, r-3,\n\\]\nhold, with $a_{-1}=1$. This recursive system determines $\\{a_i\\}_i$ if and only if $\\alpha \\in \\mathbb{C} \\setminus \\mathbb{Z} \\cup r \\mathbb{Z}$, as otherwise the coefficient of some $a_{i+1}$ vanishes. Arguing as in the start of the proof, the assignment $v_0 \\mapsto v^{\\prime}$ determines a $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-linear map $\\tilde{\\phi}:V_{\\alpha} \\rightarrow V$ which satisfies $f \\circ \\tilde{\\phi} = \\phi$. This establishes the projectivity of $V_{\\alpha}$.\n\nIn view of the isomorphism \\eqref{ValphaDual} and the previous paragraph, the module $V_\\alpha^\\vee$ is projective.
Standard adjunction isomorphisms (see \\cite[Proposition 2.10.8]{etingof2016tensor}) give a natural isomorphism of contravariant functors\n\\[\n\\textnormal{Hom}_{\\mathcal{C}}(-,V_{\\alpha}) \\simeq \\textnormal{Hom}_{\\mathcal{C}}(V_{\\alpha}^{\\vee}, -) \\circ (-)^{\\vee}.\n\\]\nBecause $V_\\alpha^\\vee$ is projective, $\\textnormal{Hom}_\\mathcal{C}(V_{\\alpha}^{\\vee}, -)$ is an exact functor. Because $(-)^\\vee$ is an exact functor at the level of complex vector spaces, it is also exact on $\\mathcal{C}$. Hence, the functor $\\textnormal{Hom}_{\\mathcal{C}}(-,V_{\\alpha})$ is exact and $V_\\alpha$ is injective.\n\\end{proof}\n\n\\subsection{Generic semisimplicity}\n\t\nRecall that an abelian category is called \\emph{semisimple} if every object is a direct sum of simple objects. In view of \\cref{simplemodules}(\\ref{zwei}), the category $\\mathcal{C}$ is not semisimple. However, $\\mathcal{C}$ fails to be semisimple in a controlled manner. The goal of this section is to make this statement precise. To do so, we begin with some general definitions from \\cite{geer_2017}.\n\nLet $G$ be an additive abelian group.\n\t\n\\begin{definition}\n\tA \\emph{$G$-grading} on a rigid monoidal category $\\mathcal{D}$ is the data of non-empty full subcategories $\\mathcal{D}_g \\subset \\mathcal{D}$, $g \\in G$, such that $\\mathcal{D} = \\bigoplus_{g \\in G} \\mathcal{D}_g$ and $V^\\vee \\in \\mathcal{D}_{-g}$ and $V \\otimes V' \\in \\mathcal{D}_{g+g'}$ whenever $V \\in \\mathcal{D}_g$ and $V' \\in \\mathcal{D}_{g'}$.\n\\end{definition}\n\t\n\\begin{definition}\n\tA subset $X \\subset G$ is called \\emph{symmetric} if $-X = X$ and \\emph{small} if $G \\neq \\bigcup_{i=1}^n (g_i+ X)$ for all $g_1, \\ldots, g_n \\in G$.\n\\end{definition}\n\n\\begin{definition}\n\tA $G$-graded category $\\mathcal{D}$ is called \\emph{generically semisimple with small symmetric subset $X \\subset G$} if $\\mathcal{D}_g$ is semisimple whenever $g \\in G\\setminus X$. In this case, a simple object $V \\in \\mathcal{D}_g$ in degree $g \\in G \\setminus X$ is called \\emph{generic simple}.\n\\end{definition}\n\nConsider again the category of weight modules over $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$. Let $G$ be the additive group $\\mathbb{C}\/2\\mathbb{Z}$. For each $\\overline{\\alpha} \\in \\mathbb{C} \\slash 2\\mathbb{Z}$, let $\\mathcal{C}_{\\overline{\\alpha}}$ be the full subcategory of $\\mathcal{C}$ consisting of modules whose weights are in the class $\\overline{\\alpha}$. The Hopf algebra structure of $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ shows that $\\mathcal{C} = \\bigoplus_{\\overline{\\alpha} \\in \\mathbb{C} \\slash 2 \\mathbb{Z}} \\mathcal{C}_{\\overline{\\alpha}}$ is a $\\mathbb{C} \\slash 2 \\mathbb{Z}$-grading.\n\n\\begin{theorem} \\label{Cisgss}\n\tThe $\\mathbb{C}\/2\\mathbb{Z}$-graded category $\\mathcal{C}$ is generically semisimple with small symmetric subset $\\mathbb{Z} \\slash 2 \\mathbb{Z} \\subset \\mathbb{C} \\slash 2\\mathbb{Z}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $\\overline{\\alpha} \\in (\\mathbb{C}\/2\\mathbb{Z}) \\setminus (\\mathbb{Z}\/2\\mathbb{Z})$ and $V \\in \\mathcal{C}_{\\overline{\\alpha}}$ a non-zero object. Then $V$ contains a highest weight vector $v$ of weight $\\alpha \\in \\mathbb{C}$, where $\\alpha$ is in the class $\\overline{\\alpha}$. The assumption on $\\overline{\\alpha}$ implies that the submodule generated by $v$ is isomorphic to $V_{\\alpha - r + 1}$; see the proof of \\cref{simplemodules}.
The module $V_{\\alpha - r + 1}$ is injective by \\cref{valphaproj}, whence there is a splitting $V \\simeq V_{\\alpha - r + 1} \\oplus V^{\\prime}$ for some $V^{\\prime} \\in \\mathcal{C}_{\\overline{\\alpha}}$ of dimension strictly less than that of $V$. An induction argument on the dimension of $V$ then completes the proof.\n\\end{proof}\n\nIn view of \\cref{simplemodules}, the generic simple objects of $\\mathcal{C}$ are the Verma modules $V_\\alpha$ with $\\alpha \\in \\mathbb{C} \\setminus \\mathbb{Z}$.\n\n\\subsection{Braiding}\\label{sec:braiding}\n\t\nIn this section, we construct a braiding on $\\mathcal{C}$. The form of the braiding is motivated by the universal $R$-matrix for the $\\hbar$-adic quantum group $U_{\\hbar}(\\mathfrak{sl}_2(\\mathbb{C}))$, as described in \\cite[\\S 10]{drinfeld1986quantum}, \\cite[\\S\\S 4.5 and A.2]{ohtsuki2002quantum}.\n\t\n\\begin{definition}\n\tThe \\emph{$r$-truncated q-exponential map} is $\\exp_q^<(x) = \\sum_{l=0}^{r-1} \\frac{q^{l (l-1) \/ 2}}{[l]!} x^l$.\n\\end{definition}\n\t\nLet $V, W \\in \\mathcal{C}$ with weight bases $\\{v_i\\}_i$ and $\\{w_j\\}_j$ of weights $\\{\\lambda_i^v\\}_i$ and $\\{\\lambda_j^w\\}_j$, respectively. Define $q^{H\\otimes H\/2} \\in \\textnormal{End}_{\\mathbb{C}}(V \\otimes W)$ by \n\\[\nq^{H\\otimes H\/2}(v_i\\otimes w_j) = q^{\\lambda_i^v \\lambda_j^w\/2}v_i\\otimes w_j\n\\]\nand $R \\in \\textnormal{End}_{\\mathbb{C}}(V \\otimes W)$ as\n\\[\nR = q^{H \\otimes H \/ 2} \\circ \\exp_q^<(\\{1\\} E \\otimes F) = q^{H\\otimes H\/2} \\circ \\sum_{l=0}^{r-1}\\frac{\\{1\\}^{2l}}{\\{l\\}!}q^{l(l-1)\/2}E^l\\otimes F^l,\n\\]\nwhere $\\exp_q^<(\\{1\\} E \\otimes F)$ is viewed as a $\\mathbb{C}$-linear map via left multiplication. Finally, define $c_{V,W} \\in \\textnormal{Hom}_{\\mathbb{C}}(V \\otimes W, W \\otimes V)$ as\n\\[\nc_{V,W}(v\\otimes w) = \\tau R(v\\otimes w),\n\\]\nwhere $\\tau$ is the swap map $V \\otimes W \\rightarrow W \\otimes V$, $v \\otimes w \\mapsto w \\otimes v$.\n\n\\begin{lemma}\\label{homomorphism}\n\tThe map $c_{V,W}$ is $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-linear.\n\\end{lemma}\n\\begin{proof}\n\tIt suffices to check linearity of $c_{V,W}$ on the generators $H,F,E$. Let $v \\in V$ and $w \\in W$ of weight $\\lambda^v$ and $\\lambda^w$, respectively. We have\n\t\\begin{align*}\n\t\tH \\cdot c_{V,W}(v \\otimes w) &= \\tau q^{H\\otimes H\/2}\\sum_{l=0}^{r-1}\\frac{\\{1\\}^{2l}}{\\{l\\}!}q^{l (l-1) \/ 2}(E^l\\otimes F^l) ((H+2l)v \\otimes w + v \\otimes (H-2l)w)\\\\\n\t\t&= (\\lambda^v + \\lambda^w) c_{V,W}(v \\otimes w)\\\\\n\t\t&= c_{V,W}(H \\cdot (v \\otimes w)).\n\t\\end{align*}\n\tWe prove $E$-linearity. We have an equality\n\t\\[\n\t(K \\otimes E) \\circ q^{H \\otimes H \/ 2} = q^{H \\otimes H \/ 2} \\circ (1 \\otimes E)\n\t\\]\n\tin $\\textnormal{End}_{\\mathbb{C}}(V \\otimes W)$. Indeed, we compute\n\t\\[\n\t(K \\otimes E) \\circ q^{H \\otimes H \/ 2} (v_i \\otimes w_j) = q^{\\lambda^v_i}q^{\\lambda^v_i \\lambda^w_j \/ 2} v_i \\otimes E w_{j} = q^{\\lambda^v_i (\\lambda^w_j +2) \/ 2} v_i \\otimes E w_{j}\n\t\\]\n\tand\n\t\\[\n\tq^{H \\otimes H \/ 2} \\circ (1 \\otimes E) (v_i \\otimes w_j) = q^{H \\otimes H \/ 2} v_i \\otimes E w_{j} = q^{\\lambda^v_i (\\lambda^w_j +2) \/ 2} v_i \\otimes E w_{j}.\n\t\\]\n\tA similar calculation shows that $(E \\otimes 1) \\circ q^{H \\otimes H \/ 2} = q^{H \\otimes H \/ 2} \\circ (E \\otimes K^{-1})$.
Using these two equalities, $E$-linearity of $c_{V,W}$ reduces to the equality\n\t\\[\n\t(E \\otimes K^{-1} + 1 \\otimes E) \\exp_q^<(\\{1\\} E \\otimes F) = \\exp_q^<(\\{1\\} E \\otimes F) (E \\otimes K + 1 \\otimes E)\n\t\\]\n\tin $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C})) \\otimes \\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$, which is proved in \\cite[Equation A.10]{ohtsuki2002quantum}. Linearity of $F$ is proved similarly.\n\\end{proof}\n\t\n\\begin{lemma}\\label{invertible}\n\tThe map $c_{V,W}$ is invertible.\n\\end{lemma}\n\\begin{proof}\n\tClearly $\\tau$ is invertible. We claim that the inverse of $R$ is\n\t\\[\n\tR^{-1} = \\exp_{q^{-1}}^< (-\\{1\\} E \\otimes F) q^{-H \\otimes H \/2}.\n\t\\]\n Compare with \\cite[\\S A.2]{ohtsuki2002quantum}.\n\tWe have $q^{H \\otimes H \/2} \\circ q^{-H \\otimes H \/2}= 1$.\n\tBy definition,\n\t\\begin{multline*}\n\t\t\\exp_q^<(\\{1\\} E \\otimes F) \\cdot \\exp_{q^{-1}}^<(-\\{1\\} E \\otimes F) \\\\ = \\sum_{l=0}^{r-1} \\sum_{k=0}^{r-1} \\frac{q^{l (l-1) \/ 2} q^{-k (k-1) \/ 2}}{[l]![k]!} (-1)^k (\\{1\\} E \\otimes F)^{l+k}.\n\t\\end{multline*}\n\tSince $(E\\otimes F)^r=0$, the double sum is\n\t\\[\n \\sum_{i=0}^{r-1} \\frac{q^{-i (i-1) \/ 2}}{[i]!} (-\\{1\\} E \\otimes F)^{i} \\sum_{l=0}^{i} (-1)^l \\genfrac{[}{]}{0pt}{}{i}{l} q^{l (i-1)} .\n\t\\]\n\tThe sum $\\sum_{l=0}^{i} (-1)^l \\genfrac{[}{]}{0pt}{}{i}{l} q^{l (i-1)}$ is $0$ for $i > 0$ and $1$ if $i=0$; see \\cite[\\S 0.2]{jantzen}. The inverse of $R$ is thus as stated.\n\\end{proof}\n\t\n\\begin{proposition}\\label{ribbon}\n\tThe maps $\\{c_{V,W}:V\\otimes W \\rightarrow W\\otimes V\\}_{V,W}$ define a braiding on $\\mathcal{C}$.\n\\end{proposition}\n\\begin{proof}\n\t\\cref{homomorphism,invertible} yield that the maps $c_{V,W}$ give a family of isomorphisms in $\\mathcal{C}$. Naturality of $c_{V,W}$ follows from the fact that the endomorphism $q^{H \\otimes H \/2}$ commutes with $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-linear maps. \n\tIt remains to verify the hexagon axioms \\cite[Definition 8.1.1]{etingof2016tensor}. Let $V,W,U \\in \\mathcal{C}$. We prove that\n\t\\begin{equation}\n\t\tc_{V, W \\otimes U} = (\\mathrm{id}_W \\otimes c_{V,U})\\circ (c_{V,W} \\otimes \\mathrm{id}_U)\n\t\\end{equation}\n\tand leave the verification of the equality $c_{V \\otimes W, U} = (c_{V,U} \\otimes \\mathrm{id}_W)\\circ (\\mathrm{id}_V \\otimes c_{W,U})$ to the reader. Let $v \\in V, w \\in W, u \\in U$ be weight vectors of weights $\\lambda^v,\\lambda^w$, $\\lambda^u$, respectively.\n\tWe compute\n\t\\[ \n\tc_{V, W \\otimes U}(v\\otimes w \\otimes u) = q^{H \\otimes H \/ 2} \\sum_{l=0}^{r-1} \\frac{\\{1\\}^{2l}}{\\{l\\}!}q^{l(l-1)\/2} \\Delta(F^l) (w \\otimes u) \\otimes E^lv. \n\t\\]\n\tA straightforward induction argument shows that\n\t\\[\n\t\\Delta(F^l) = \\sum_{i=0}^l q^{i(l-i)} \\genfrac{[}{]}{0pt}{}{l}{i} F^i K^{-(l-i)} \\otimes F^{l-i}.\n\t\\]\n\tCompare with \\cite[\\S 3.1]{jantzen}, where slightly different conventions are used. 
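As a sanity check, at $l = 1$ the formula recovers the defining coproduct:\n\t\\[\n\t\\Delta(F) = q^{0}\\genfrac{[}{]}{0pt}{}{1}{0} K^{-1} \\otimes F + q^{0}\\genfrac{[}{]}{0pt}{}{1}{1} F \\otimes 1 = F \\otimes 1 + K^{-1} \\otimes F.\n\t\\]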
Using this, we find\n\t\\[\n\tc_{V, W \\otimes U}(v\\otimes w \\otimes u) = q^{H \\otimes H \/ 2} \\sum_{l=0}^{r-1} \\frac{\\{1\\}^{2l}}{\\{l\\}!}q^{l(l-1)\/2} \\sum_{i=0}^l q^{(i- \\lambda^w)(l-i)}\\genfrac{[}{]}{0pt}{}{l}{i} F^i \\otimes F^{l-i} \\otimes E^l (w \\otimes u \\otimes v)\n\t\\]\n which evaluates to\n\t\\[\n\t\\sum_{l=0}^{r-1} \\frac{\\{1\\}^{2l}}{\\{l\\}!}q^{l(l-1)\/2} \\sum_{i=0}^l q^{(\\lambda^w + \\lambda^u -2l) (\\lambda^v + 2l)\/2} q^{(i- \\lambda^w)(l-i)}\\genfrac{[}{]}{0pt}{}{l}{i} F^i \\otimes F^{l-i} \\otimes E^l (w \\otimes u \\otimes v).\n\t\\]\n\tOn the other hand, we compute\n\t\\begin{align*}\n\t\t(c_{V,W} \\otimes \\mathrm{id}_U) (v\\otimes w \\otimes u) &= q^{H \\otimes H \/2} \\sum_{l=0}^{r-1} \\frac{\\{1\\}^{2l}}{\\{l\\}!}q^{l(l-1)\/2} F^l w \\otimes E^l v\\otimes u \\\\\n\t\t&= \\sum_{l=0}^{r-1} \\frac{\\{1\\}^{2l}}{\\{l\\}!}q^{l(l-1)\/2} q^{(\\lambda^w - 2l)(\\lambda^v + 2l)\/2} F^l w \\otimes E^l v\\otimes u.\n\t\\end{align*}\n\tApplying $\\mathrm{id}_W \\otimes c_{V,U}$ then gives\n\t\\begin{multline*}\n\t\t(\\mathrm{id}_W \\otimes c_{V,U}) \\Big(\\sum_{l=0}^{r-1} \\frac{\\{1\\}^{2l}}{\\{l\\}!}q^{l(l-1)\/2} q^{(\\lambda^w - 2l)(\\lambda^v + 2l)\/2} F^l w \\otimes E^l v\\otimes u\\Big) \\\\\n\t =\\sum_{l=0}^{r-1} \\sum_{k=0}^{r-1} \\frac{\\{1\\}^{2l +2k}}{\\{l\\}!\\{k\\}!} q^{l(l-1)\/2}q^{k(k-1)\/2} q^{(\\lambda^w - 2l)(\\lambda^v + 2l)\/2} \\cdot \\\\ \\cdot q^{(\\lambda^u - 2k)(\\lambda^v + 2l + 2k)\/2} \\cdot (F^l w \\otimes F^k u \\otimes E^k E^l v).\n\t\\end{multline*}\n\tTo check the equality $c_{V, W \\otimes U}(v\\otimes w \\otimes u) = (\\mathrm{id}_W \\otimes c_{V,U}) \\circ (c_{V,W} \\otimes \\mathrm{id}_U) (v\\otimes w \\otimes u)$, we compare the coefficients of $F^a w \\otimes F^b u \\otimes E^{a+b} v$. The coefficients on the left and right-hand sides of the desired equality are\n\t\\[\n\t\\frac{\\{1\\}^{2(a+b)}}{\\{a+b\\}!}q^{(a+b)((a+b)-1)\/2} q^{(\\lambda^w + \\lambda^u -2(a+b)) (\\lambda^v + 2(a+b))\/2} q^{(a- \\lambda^w)((a+b)-a)}\\genfrac{[}{]}{0pt}{}{a+b}{a}\n\t\\]\n\tand\n\t\\[\n\t\\frac{\\{1\\}^{2a +2b}}{\\{a\\}!\\{b\\}!} q^{a(a-1)\/2} q^{b(b-1)\/2} q^{(\\lambda^w - 2a)(\\lambda^v + 2a)\/2} q^{(\\lambda^u - 2b)(\\lambda^v + 2a + 2b)\/2},\n\t\\]\n\trespectively, which are equal by direct verification. \\qedhere\n\\end{proof}\n\n\\subsection{Ribbon structure}\n\t\nIn this section, we construct a ribbon structure on $\\mathcal{C}$.
Having already established that $\\mathcal{C}$ is braided (\\cref{ribbon}), we need only equip it with a \\emph{twist}, that is, a natural automorphism of the identity functor $\\theta: \\mathrm{id}_{\\mathcal{C}} \\Rightarrow \\mathrm{id}_{\\mathcal{C}}$ which satisfies the \\emph{balancing condition}\n$$\\theta_{V \\otimes W} = (\\theta_V \\otimes \\theta_W) \\circ c_{W,V} \\circ c_{V,W}$$ and the \\emph{ribbon condition} \t\n\\begin{equation}\\label{ribCond}\n (\\theta_V)^\\vee = \\theta_{V^\\vee}\n\\end{equation}\nfor all $V,W \\in \\mathcal{C}$.\n\t\nRecall that the right partial trace of $f \\in \\textnormal{End}_{\\mathcal{C}}(V \\otimes W)$ is the endomorphism $\\mathrm{ptr}_R(f) \\in \\textnormal{End}_{\\mathcal{C}}(V)$ defined by\n\\[\nV \\xrightarrow{\\mathrm{id}_V \\otimes \\mathrm{coev}_W} V \\otimes W \\otimes W^\\vee \\xrightarrow{f \\otimes \\mathrm{id}_{W^\\vee}} V \\otimes W \\otimes W^\\vee \\xrightarrow{\\mathrm{id}_V \\otimes \\widehat{\\mathrm{ev}}_W} V.\n\\]\nDefine a natural automorphism $\\theta: \\mathrm{id}_{\\mathcal{C}} \\Rightarrow \\mathrm{id}_{\\mathcal{C}}$ by\n\\begin{equation} \\label{eq:defTheta}\n\t\\theta_V := \\mathrm{ptr}_R(c_{V,V}), \\qquad V \\in \\mathcal{C},\n\\end{equation}\nwhere $c$ is the braiding of $\\mathcal{C}$. The hexagon axioms of the braiding ensure that $\\theta$ satisfies the balancing condition. To verify that $\\theta$ also satisfies the ribbon condition, we use the following generic extension result.\n\t\n\\begin{theorem}[{\\cite[Theorem 9]{geer_2017}}] \\label{extendTwist}\n\tLet $\\mathcal{D}$ be a generically semisimple pivotal braided category. Define a natural automorphism $\\theta: \\mathrm{id}_{\\mathcal{D}} \\Rightarrow \\mathrm{id}_{\\mathcal{D}}$ so that its components are given by Equation \\eqref{eq:defTheta}. If $\\theta_V^{\\vee} = \\theta_{V^{\\vee}}$ for every generic simple object $V \\in \\mathcal{D}$, then $\\theta$ is a twist on $\\mathcal{D}$.\n\\end{theorem}\n\nWe can now prove the main result of this section.\n\n\\begin{theorem}\\label{ribbon theorem}\n\tThe natural transformations $c$ and $\\theta$ equip $\\mathcal{C}$ with the structure of a ribbon category.\n\\end{theorem}\n\t\n\\begin{proof}\n\tRecall that the generic simple objects of $\\mathcal{C}$ are the Verma modules $V_{\\alpha}$ with $\\alpha \\in \\mathbb{C} \\setminus \\mathbb{Z}$. For any $\\alpha \\in \\mathbb{C}$ the right partial trace of $c_{V_\\alpha, V_\\alpha}$ is\n \\[\n\tV_\\alpha \\xrightarrow{\\mathrm{id}_{V_\\alpha} \\otimes \\mathrm{coev}_{V_\\alpha}} V_\\alpha \\otimes V_\\alpha \\otimes V_\\alpha^\\vee \\xrightarrow{c_{V_\\alpha, V_\\alpha} \\otimes \\mathrm{id}_{V_\\alpha^\\vee}} V_\\alpha \\otimes V_\\alpha \\otimes V_\\alpha^\\vee \\xrightarrow{\\mathrm{id}_{V_{\\alpha}} \\otimes \\widehat{\\mathrm{ev}}_{V_\\alpha}} V_\\alpha.\n\t\\]\n\tSince $\\textnormal{End}_{\\mathcal{C}}(V_{\\alpha}) \\simeq \\mathbb{C}$ (see the proof of \\cref{verma}), it suffices to compute the image of the highest weight vector $v_{0} \\in V_{\\alpha}$ under this composition.
We have\n\t\\begin{multline*}\n\t v_{0} \\mapsto \\sum_{i=0}^{r-1} v_{0} \\otimes v_i \\otimes v_i^{\\vee} \\mapsto \\sum_{i=0}^{r-1} q^{(\\alpha + r - 1) (\\alpha + r - 1 - 2i) \/2} v_i \\otimes v_0 \\otimes v_i^{\\vee} \\\\\n\t \\mapsto q^{(\\alpha + r - 1) (\\alpha + r - 1) \/2} q^{(\\alpha + r - 1) (1-r)} v_0\n\t = q^{(\\alpha + r - 1) (\\alpha - r + 1) \/2} v_0.\n\t\\end{multline*}\n\tSince $V^\\vee_\\alpha \\simeq V_{-\\alpha}$ and the scalar $q^{(\\alpha + r - 1) (\\alpha - r + 1) \/2}$ is unchanged under the substitution $\\alpha \\mapsto - \\alpha$, it follows that $\\theta_{V_{\\alpha}^{\\vee}} = \\theta_{V_{\\alpha}}^{\\vee}$ for all $\\alpha \\in \\mathbb{C}$. \\cref{extendTwist} therefore applies in the present setting, allowing the conclusion that the maps $\\{\\theta_V\\}_{V \\in \\mathcal{C}}$ define a twist on $\\mathcal{C}$.\n\\end{proof}\n \n\\section{Reshetikhin--Turaev invariants} \\label{rti}\n\t\nWe recall basic background material on Reshetikhin--Turaev invariants of links \\cite{reshetikhin1990ribbon}. For a detailed introduction to the theory, the reader is referred to \\cite{turaev2016quantum}. We end this section by modifying the Reshetikhin--Turaev construction to produce a non-zero invariant for knots colored by simple objects of vanishing quantum dimension. Readers who are well-versed in Reshetikhin--Turaev theory could remind themselves of Lemma \\ref{cutting} and proceed to \\cref{mqi}.\n\t\n\\subsection{Reshetikhin--Turaev invariants of links}\n\t\nLet $\\mathcal{D}$ be a ribbon category. Associated to $\\mathcal{D}$ is the ribbon category of $\\mathcal{D}$-colored ribbon graphs $\\mathbf{Rib}_\\mathcal{D}$ \\cite[\\S I.I.2]{turaev2016quantum}. Objects of $\\mathbf{Rib}_\\mathcal{D}$ are finite sequences of pairs $(V, \\epsilon)$, where $V \\in \\mathcal{D}$ and $\\epsilon \\in \\{\\pm \\}$. Morphisms in $\\mathbf{Rib}_\\mathcal{D}$ are isotopy classes of $\\mathcal{D}$-colored ribbon graphs bordering two such sequences of objects. The colorings of the ribbon graphs are required to be compatible with the domain and codomain objects in the obvious sense. Composition of morphisms is defined by concatenation of ribbon graphs. The monoidal structure of $\\mathbf{Rib}_\\mathcal{D}$ is defined on objects by concatenation of sequences and on morphisms by disjoint union.\n\t\n\\begin{theorem}[{\\cite[Theorem 2.5]{turaev2016quantum}}]\\label{reshturfunctor}\n There exists a unique ribbon functor $F_{\\mathcal{D}}: \\mathbf{Rib}_\\mathcal{D} \\rightarrow \\mathcal{D}$ such that $F_{\\mathcal{D}}(V,+)=V$ and $F_{\\mathcal{D}}(V,-) =V^\\vee$ for all $V \\in \\mathcal{D}$. \n\\end{theorem}\n\t\nThe functor $F_{\\mathcal{D}}$ is called the \\emph{Reshetikhin--Turaev functor}. 
The precise definition of the ribbon structure of $\\mathbf{Rib}_\\mathcal{D}$ and the fact that $F_{\\mathcal{D}}$ is ribbon implies that $F_{\\mathcal{D}}$ takes the following values on morphisms in $\\mathbf{Rib}_\\mathcal{D}$:\n\\begin{equation*} \\label{eq:7}\n\tF_{\\mathcal{D}} \\left(\n\t\\begin{tikzpicture}[anchorbase]\n\t\t\\draw[->,very thick] (0,0) -- node[left] {$V$} (0,1);\n\t\\end{tikzpicture} \\right )\n\t\\;\\;\\; =\n\t\\begin{tikzpicture}[anchorbase]\n\t\t\\draw[->] (0,0) node[below] {$V$} -- node[left] {$\\mathrm{id}_V$} (0,1) node[above] {$V$};\n\t\\end{tikzpicture}\n\t\\qquad \\qquad \\qquad\n\tF_{\\mathcal{D}}\\left(\n\t\\begin{tikzpicture}[anchorbase]\n\t\t\\draw [->, very thick] (.8,0) to [out=270,in=0] (.6,-.3) to [out=180,in=320] (.15,.1) to [out=140,in=270] (0,.8);\n\t\t\\draw [very thick,cross line] (0,-.7) to [out=90,in=180] (.6,.3) to [out=0,in=90] (.8,0);\n\t\t\\node at (.3,-.7) {$V$};\n\t\\end{tikzpicture}\n\t\\right) =\n\t\\begin{tikzpicture}[anchorbase]\n\t\t\\draw[->] (0,0) node[below] {$V$} -- node[left] {$\\theta_V$} (0,1) node[above] {$V$};\n\t\\end{tikzpicture}\n\\end{equation*}\n\\begin{equation*}\n\tF_{\\mathcal{D}}\\left(\n\t\\begin{tikzpicture}[anchorbase]\n \t\\draw[->,very thick] (1,0) -- node[right,near start] {$W$} (0,1);\n\t\t\\draw[->,very thick,cross line] (0,0) -- node[left,near start] {$V$} (1,1);\n\t\\end{tikzpicture}\n\t\\right) =\n\t\\begin{tikzpicture}[anchorbase]\n \\draw[->] (0,0) node[below] {$V \\otimes W$} -- node[left] {$c_{V,W}$} (0,1) node[above] {$W \\otimes V$};\n\t\\end{tikzpicture}\n\t\\qquad \\qquad\n\tF_{\\mathcal{D}}\\left(\n\t\\begin{tikzpicture}[anchorbase]\n\t\t\\draw[->,very thick] (0,0) -- node[left,near start] {$W$} (1,1);\n\t\t\\draw[->,very thick,cross line] (1,0) -- node[right,near start] {$V$} (0,1);\n\t\\end{tikzpicture}\n\t\\right) =\n\t\\begin{tikzpicture}[anchorbase]\n\t\t\\draw[->] (0,0) node[below] {$W \\otimes V$} -- node[left] {$c^{-1}_{V,W}$} (0,1) node[above] {$V \\otimes W$};\n\t\\end{tikzpicture}\n\\end{equation*}\n\t\n\\begin{equation*}\n\tF_{\\mathcal{D}}\\left(\n\t\\begin{tikzpicture}[anchorbase]\n\t\t\\draw[->,very thick] (0,0) arc (0:180:0.5 and 0.75);\n\t\t\\node at (-.3,.1) {$V$};\n\t\\end{tikzpicture}\n\t\\right) =\n\t\\begin{tikzpicture}[anchorbase]\n\t\t\\draw[->] (0,0) node[below] {$V^\\vee \\otimes V$} -- node[left] {$\\mathrm{ev}_V$} (0,1) node[above] {$\\mathbb{C}$};\n\t\\end{tikzpicture}\n\t\\qquad\\qquad\n\tF_\\mathcal{D}\\left(\n\t\\begin{tikzpicture}[anchorbase]\n\t\t\\draw[<-,very thick] (0,0) arc (180:360:0.5 and 0.75);\n\t\t\\node at (.3,0) {$V$};\n\t\\end{tikzpicture}\n\t\\right) =\n \\begin{tikzpicture}[anchorbase]\n\t\t\\draw[->] (0,0) node[below] {$\\mathbb{C}$} -- node[left] {$\\mathrm{coev}_V$} (0,1) node[above] {$V \\otimes V^\\vee$};\n\t\\end{tikzpicture}\n \\end{equation*}\n \\begin{equation*}\n\tF_{\\mathcal{D}}\\left(\n\t\\begin{tikzpicture}[anchorbase]\n\t\t\\draw[<-,very thick] (0,0) arc (0:180:0.5 and 0.75);\n\t\t\\node at (-.7,.1) {$V$};\n\t\\end{tikzpicture}\n\t\\right) =\n\t\\begin{tikzpicture}[anchorbase]\n\t\t\\draw[->] (0,0) node[below] {$V \\otimes V^\\vee$} -- node[left] {$\\widehat{\\mathrm{ev}}_V$} (0,1) node[above] {$\\mathbb{C}$};\n\t\\end{tikzpicture}\n\t\\qquad\\qquad\n\tF_\\mathcal{D}\\left(\n\t\\begin{tikzpicture}[anchorbase]\n\t\t\\draw[->,very thick] (0,0) arc (180:360:0.5 and 0.75);\n\t\t\\node at (.7,0) {$V$};\n\t\\end{tikzpicture}\n\t\\right) =\n\t\\begin{tikzpicture}[anchorbase]\n\t\t\\draw[->] (0,0) node[below] {$\\mathbb{C}$} -- node[left] 
{$\\widehat{\\mathrm{coev}}_V$} (0,1) node[above] {$V^\\vee \\otimes V$};\n\t\\end{tikzpicture}\n\t.\n\\end{equation*}\nThe above eight morphisms in $\\mathbf{Rib}_\\mathcal{D}$ generate all morphisms of $\\mathbf{Rib}_\\mathcal{D}$ \\cite[\\S I.3-4]{turaev2016quantum}. In particular, the value of $F_{\\mathcal{D}}$ on any morphism of $\\mathbf{Rib}_\\mathcal{D}$ can be computed as an iterated composition of (co)evaluations, (inverse) braidings and (inverse) twists in $\\mathcal{D}$. Colored framed links are particular examples of morphisms in $\\mathbf{Rib}_\\mathcal{D}$: they are endomorphisms of the empty sequence. Thus, the assignment $L \\mapsto \\langle F_{\\mathcal{D}} (L) \\rangle$ is an isotopy invariant of colored framed links.\n\t\nWe record the following result, which will be used below.\n\t\n\\begin{lemma} \\label{rotate}\n\tFor any $V, W \\in \\mathcal{D}$, the following equality of morphisms in $\\mathcal{D}$ holds:\n\t\\begin{equation}\\label{rotate_loop}\n\t\t\\begin{aligned}\n\t\t\tF_\\mathcal{D} \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-.1,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (0,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (.1,-.5);\n\t\t\t\t\\draw [very thick] (0,-1) to (0,.4);\n\t\t\t\t\\draw [->, very thick] (0,.6) to (0,1);\n\t\t\t\t\\node at (1.1,0) {$W$};\n\t\t\t\t\\node at (.25,-.9) {$V$};\n\t\t\t\\end{tikzpicture} \\right )\n\t\t\t\\,=\\, F_\\mathcal{D} \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.65 with {\\arrow{<}}}] (.1,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (0,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (-.1,.5);\n\t\t\t\t\\draw [very thick] (0,-1) to (0,-.6);\n\t\t\t\t\\draw [->, very thick] (0,-.4) to (0,1);\n\t\t\t\t\\node at (1.1,0) {$W$};\n\t\t\t\t\\node at (.25,-.9) {$V$};\n\t\t\t\\end{tikzpicture} \\right ).\n\t\t\\end{aligned}\n\t\\end{equation}\n Moreover, if $V$ is simple, then the following equality of scalars holds:\n \\begin{equation} \\label{rotate_diagram}\n\t\t\\begin{aligned}\n\t\t\t\\left \\langle F_\\mathcal{D} \\left (\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-.1,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (0,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (.1,-.5);\n\t\t\t\t\\draw [very thick] (0,-1) to (0,.4);\n\t\t\t\t\\draw [->, very thick] (0,.6) to (0,1);\n\t\t\t\t\\node at (1.1,0) {$W$};\n\t\t\t\t\\node at (.25,-.9) {$V$};\n\t\t\t\\end{tikzpicture}\n\t\t\t\\right ) \\right \\rangle\n\t\t\t\\,=\\, \\left \\langle F_\\mathcal{D} \\left ( \\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (.1,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (0,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (-.1,.5);\n\t\t\t\t\\draw [<-,very thick] (0,-1) to (0,-.6);\n\t\t\t\t\\draw [very thick] (0,-.4) to (0,1);\n\t\t\t\t\\node at (1.1,0) {$W$};\n\t\t\t\t\\node at (.4,-.9) {$V$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle.\n\t\t\\end{aligned}\n\t\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\n\t\\cref{rotate_loop} holds by the following indicated combination of framed Reidemeister moves:\n\t\\begin{multline*}\n\t\t\tF_\\mathcal{D} \\left 
(\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.65 with {\\arrow{<}}}] (.1,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (0,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (-.1,.5);\n\t\t\t\t\\draw [very thick] (0,-1) to (0,-.6);\n\t\t\t\t\\draw [->, very thick] (0,-.4) to (0,1);\n\t\t\t\t\\node at (1.1,0) {$W$};\n\t\t\t\t\\node at (.25,-.9) {$V$};\n\t\t\t\\end{tikzpicture} \\right )\n\t\t\t\\overset{\\text{RI}}{=}\n\t\t\tF_\\mathcal{D} \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\n\t\t\t\t\\draw [very thick] (.1,.5) to [out=0,in=90] (.8,0) to (.8,-.4);\n\t\t\t\t\\draw [very thick] (-.1,.5) to [out=180,in=90] (-.8,0) to (-.8,-.4);\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-.8,-.6) to [out=270,in=0] (-1,-.8) to [out=180,in=270] (-1.2,-.6) to [out=90,in=180] (-1,-.5) to (1,-.5) to [out=0,in=90] (1.2,-.6) to [out=270,in=0] (1,-.8) to [out=180,in=270] (.8,-.6);\n\t\t\t\n\t\t\t\t\\draw [very thick] (0,-1) to (0,-.6);\n\t\t\t\t\\draw [->, very thick] (0,-.4) to (0,1);\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (1.1,0) {$W$};\n\t\t\t\t\\node at (.25,-.9) {$V$};\n\t\t\t\\end{tikzpicture} \\right )\n\t\t\t=\n\t\t\tF_\\mathcal{D} \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\n\t\t\t\t\\draw [very thick] (-.1,.5) to [out=180,in=90] (-.8,0) to (-.8,-.4);\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-.8,-.6) to [out=270,in=0] (-1,-.8) to [out=180,in=270] (-1.2,-.6) to [out=90,in=180] (-1,-.5) to (.2,-.5) to [out=0,in=270] (.5,.7) to [out=90,in=180] (.7,.9) to [out=0,in=90] (.9,.7) to [out=270,in=0] (.6,.5);\n\t\t\t\t\\draw [very thick] (.1,.5) to (.4, .5);\n\t\t\t\n\t\t\t\t\\draw [very thick] (0,-1) to (0,-.6);\n\t\t\t\t\\draw [->, very thick] (0,-.4) to (0,1);\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (.86,0) {$W$};\n\t\t\t\t\\node at (.25,-.9) {$V$};\n\t\t\t\\end{tikzpicture} \\right )\n\t\t\t \\\\\n\t\t\t\\overset{\\text{RIII}}{=} F_\\mathcal{D} \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\n\t\t\t\t\\draw [very thick] (-.8,-.4) to [out=90,in=180] (-.6,0);\n\t\t\t\t\\draw [very thick] (-.4,0) to (-.1,0);\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.62 with {\\arrow{>}}}] (-.8,-.6) to [out=270,in=0] (-1,-.8) to [out=180,in=270] (-1.2,-.6) to [out=90,in=180] (-1,-.5) to (-.7,-.5) to [out=0,in=270] (-.5,.3) to [out=90,in=180](-.4,.4) to (.4,.4) to [out=0,in=90] (.6,.2) to [out=270,in=0] (.4,0) to (.1,0);\n\t\t\t\n\t\t\t\t\\draw [very thick] (0,-1) to (0,.3);\n\t\t\t\t\\draw [->, very thick] (0,.5) to (0,1);\n\t\t\t\n\t\t\t\t\\node at (.9,0) {$W$};\n\t\t\t\t\\node at (.25,-.9) {$V$};\n\t\t\t\\end{tikzpicture} \\right )\n\t\t\t\\overset{\\text{RII}}{=}\n\t\t\tF_\\mathcal{D} \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-.1,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (0,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (.1,-.5);\n\t\t\t\t\\draw [very thick] (0,-1) to (0,.4);\n\t\t\t\t\\draw [->,very thick] (0,.6) to (0,1);\n\t\t\t\t\\node at (1.1,0) {$W$};\n\t\t\t\t\\node at (.25,-.9) {$V$};\n\t\t\t\\end{tikzpicture} \\right ).\n\t\\end{multline*}\n If $V$ is simple then \\cref{rotate_diagram} holds by 
planar isotopy:\n\t\\begin{equation*}\n\t\t\\begin{aligned}\n\t\t\\left \\langle\tF_\\mathcal{D} \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-.1,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (0,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (.1,-.5);\n\t\t\t\t\\draw [very thick] (0,-1) to (0,.4);\n\t\t\t\t\\draw [->, very thick] (0,.6) to (0,1);\n\t\t\t\t\\node at (1.1,0) {$W$};\n\t\t\t\t\\node at (.25,-.9) {$V$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t\\;&=\\;\n\t\t\t\\left \\langle F_\\mathcal{D} \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-.1,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (0,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (.1,-.5);\n\t\t\t\t\\draw [very thick] (-2.4,-1) to (-2.4,.8) to [out=90,in=180] (-1.8,1) to [out=0,in=90] (-1.2,.8) to (-1.2,-.8) to [out=270,in=180] (-.6,-1) to [out=0,in=270] (0,-.8) to (0,.4);\n\t\t\t\t\\draw [->, very thick] (0,.6) to (0,1);\n\t\t\t\t\\node at (1.1,0) {$W$};\n\t\t\t\t\\node at (.25,-.9) {$V$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t\\\\\\;&=\\;\n\t\t\t\\left \\langle F_\\mathcal{D} \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (.1,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (0,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (-.1,.5);\n\t\t\t\t\\draw [->,very thick] (0,-.6) to (0,-.8) to [out=270,in=180] (.6,-1) to [out=0,in=270] (1.2,-.8) to (1.2,1);\n\t\t\t\t\\draw [very thick] (0,-.4) to (0,.8) to [out=90,in=0] (-.6,1) to [out=180,in=90] (-1.2,.8) to (-1.2,-1);\n\t\t\t\t\\node at (.5,.75) {$W$};\n\t\t\t\t\\node at (-.9,-.9) {$V$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n \\\\\\;&=\\;\n \\left \\langle F_\\mathcal{D} \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (.1,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (0,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (-.1,.5);\n\t\t\t\t\\draw [<-,very thick] (0,-1) to (0,-.6);\n\t\t\t\t\\draw [very thick] (0,-.4) to (0,1);\n\t\t\t\t\\node at (1.1,0) {$W$};\n\t\t\t\t\\node at (.4,-.9) {$V$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n \\left \\langle F_\\mathcal{D} \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [->,very thick] (0,-.6) to (0,-.8) to [out=270,in=180] (.6,-1) to [out=0,in=270] (1.2,-.8) to (1.2,1);\n\t\t\t\t\\draw [very thick] (0,-.8) to (0,.8) to [out=90,in=0] (-.6,1) to [out=180,in=90] (-1.2,.8) to (-1.2,-1);\n\t\t\t\t\\node at (-.9,-.9) {$V$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle.\n\t\t\\end{aligned}\n\t\\end{equation*}\nThe snake relation \\eqref{eq:snakeRel} implies that the second scalar in the final line is $1$.\n\\end{proof}\n\t\n\\subsection{Reshetikhin--Turaev invariants and quantum dimension}\n\t\n Let $K$ be the unknot. Color $K$ by an object $V$ of a $\\mathbb{C}$-linear ribbon category $\\mathcal{D}$. 
The scalar $\\langle F_\\mathcal{D}(K) \\rangle$ associated to the map $F_\\mathcal{D}(K): \\mathbb{C} \\rightarrow \\mathbb{C}$ is called the \\emph{quantum dimension of $V$} and is denoted by $\\textnormal{qdim}_\\mathcal{D}(V)$. Explicitly, we have $\\textnormal{qdim}_\\mathcal{D}(V) = \\langle \\widehat{\\mathrm{ev}}_V \\circ \\mathrm{coev}_V \\rangle$.\n\t\n\\begin{example}\\label{lemmasf}\n\tConsider again the category $\\mathcal{C}$ of weight $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-modules. Let $K$ be the unknot colored by $V_\\alpha$, $\\alpha \\in \\mathbb{C}$. Let $\\{v_i \\mid 0 \\leq i \\leq r-1\\}$ be the weight basis of $V_\\alpha$ described in \\cref{sec:simpObj} with $\\{v_i^\\vee \\mid 0 \\leq i \\leq r-1\\}$ its dual basis. Then $F_\\mathcal{C}(K)$ is the composition\n\t\\[\n\t1\n\t\\xmapsto{\\mathrm{coev}_{V_{\\alpha}}} \\sum_{i=0}^{r-1} v_i \\otimes v_i^\\vee\n\t\\xmapsto{\\widehat{\\mathrm{ev}}_{V_{\\alpha}}}\n\t\\sum_{i=0}^{r-1} q^{(\\alpha + r - 1 -2i)(1-r)}\n\t=\n\tq^{(\\alpha + r - 1)(1-r)} \\sum_{i=0}^{r-1} q^{-2i + 2ir}.\n\t\\]\n\tAs $q$ is a primitive $2r$\\textsuperscript{th} root of unity, we have $\\sum_{i=0}^{r-1} q^{-2i + 2ir} = \\sum_{i=0}^{r-1} q^{-2i}=0$. Hence, $F_\\mathcal{C}(K) = 0$ and $\\textnormal{qdim}_\\mathcal{C}(V_\\alpha)=0$. If instead $K$ is colored by the simple module $S^{lr}_n$, $0 \\leq n \\leq r-2$ and $l \\in \\mathbb{Z}$, then\n\t\\[\n\t\\langle F_{\\mathcal{C}}(K) \\rangle\n\t=\n\t\\sum_{j=0}^n v_j^{\\vee}(K^{1-r} v_j)\n\t=\n\tq^{(1-r)(lr+n)} \\sum_{j=0}^n q^{-2j}\n\t=\n\t(-1)^{n + l + lr} [n+1],\n\t\\]\n\twhence $\\textnormal{qdim}_{\\mathcal{C}}(S^{lr}_n) \\neq 0$. \n\\end{example}\n\n\\begin{lemma}\\label{cutting}\n Let $\\mathcal{D}$ be a $\\mathbb{C}$-linear ribbon category, $V \\in \\mathcal{D}$ a simple object, $L$ a $\\mathcal{D}$-colored link and $T$ a $(1,1)$-tangle whose closure is $L$ and whose open strand is colored by $V$. Then\n \\begin{equation}\\label{formula}\n\t\\langle F_\\mathcal{D}(L) \\rangle = \\textnormal{qdim}_{\\mathcal{D}}(V) \\langle F_\\mathcal{D}(T) \\rangle.\n \\end{equation} \n\\end{lemma}\n\\begin{proof}\nUsing isotopy invariance we can draw a diagram of $L$ of the form\n\\begin{equation*}\n\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\\draw [<-, very thick] (1.25,0) to [out=90, in=0] (.75,1) to [out=180,in = 90] (0,.25);\n\t\t\\draw [very thick] (1.25,0) to [out=270, in=00] (.75,-1) to [out=180,in=270] (0,-.25);\n\t\t\\node at (1.7,0) {\\small $V$};\n\t\t\\node at (0,0) {\\small $T$};\n\t\\end{tikzpicture}.\n\\end{equation*}\nSince $V$ is simple, the endomorphism $F_\\mathcal{D}(T)$ is a scalar and \\cref{formula} follows.\n\\end{proof}\n\t\n\nThus, whenever a knot is colored by a simple object of vanishing quantum dimension, the Reshetikhin--Turaev invariant is trivial. In particular, in view of \\cref{lemmasf}, the Reshetikhin--Turaev invariants of $\\mathcal{C}$-colored links with at least one component colored by a simple Verma module are zero.\n\n\\subsection{Knot invariants via cutting}\n\t\n\\cref{formula} is the starting point of the theory of renormalized quantum invariants of \\cite{geer_2009}. The main idea is that even though $\\textnormal{qdim}_{\\mathcal{D}}(V)$, and hence $F_{\\mathcal{D}}(K)$, vanish, $F_{\\mathcal{D}}(T)$ need not and may provide an interesting invariant of $K$. 
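As a simple illustration, let $K$ be the $0$-framed unknot colored by a simple object $V$ with $\\textnormal{qdim}_{\\mathcal{D}}(V) = 0$. Cutting $K$ produces the identity $(1,1)$-tangle $T$, so that\n\\[\n\\langle F_{\\mathcal{D}}(T) \\rangle = \\langle \\mathrm{id}_V \\rangle = 1,\n\\]\neven though $\\langle F_{\\mathcal{D}}(K) \\rangle = \\textnormal{qdim}_{\\mathcal{D}}(V) = 0$.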
In graphical language, to get a non-trivial invariant of a knot $K$ we cut it to obtain a $(1,1)$-tangle $T$ and apply the standard Reshetikhin--Turaev functor to $T$.\n\t\n\\begin{figure}\n \\begin{equation*} \\label{trefoil}\n\tT = \n\t\\begin{tikzpicture}[anchorbase, scale=0.5]\n\t\t\\draw [very thick] (-1,0.3) to [out=90, in=225] (-1,1) to [out=45, in=315] (0,2) to [out=135, in=320] (-.32,2.42);\n\t\t\\draw [very thick] (-.65, 2.6) to [out=150, in=225] (-1,3) to [out=45, in=180] (0,4) to [out=0, in=180] (.5,4) to [out=0, in=90] (1.5, 2.5) to [out=270, in=0] (.5,1) to [out=180, in=0] (0,1) to [out=170, in=320] (-.25,1.32);\n\t\t\\draw [very thick] (-.5,1.5) to [out=135, in=225] (-1,2) to [out=45, in=225] (0,3) to [out=45, in=320] (-.48,3.5);\n\t\t\\draw [->,very thick] (-.75,3.6) to [out=135, in=270] (-1,4) to (-1,4.6);\n\t\t\\node at (2.2,1.5) {$V_\\alpha$};\n \\end{tikzpicture}\n\t\\end{equation*}\n \\caption{A $(1,1)$-tangle $T$ whose closure is the right-handed trefoil.}\n \\label{fig:trefoiltangle}\n\\end{figure}\n\n\n\\begin{example}\n\\label{ex:cutTrefoil}\n\tLet $K$ be the right-handed trefoil knot colored by a Verma module $V_{\\alpha} \\in \\mathcal{C}$. \\cref{lemmasf} shows that $\\textnormal{qdim}_{\\mathcal{C}}(V_{\\alpha})=0$. It follows from \\cref{cutting} that $F_{\\mathcal{C}}(K) = 0$.\n\t\n\tLet $T$ be the $(1,1)$-tangle pictured in \\cref{fig:trefoiltangle}. The closure of $T$ is $K$. The endomorphism $F_{\\mathcal{C}}(T) \\in \\textnormal{End}_{\\mathcal{C}}(V_{\\alpha})$ is the composition\n\t\\[\n\t\tV_{\\alpha} \\xrightarrow{\\mathrm{id}_{V_{\\alpha}} \\otimes \\mathrm{coev}_{V_{\\alpha}}} V_{\\alpha} \\otimes V_{\\alpha} \\otimes V_{\\alpha}^{\\vee} \\xrightarrow[]{(c_{V_{\\alpha},V_{\\alpha}} \\otimes \\mathrm{id}_{V_{\\alpha}^{\\vee}})^{\\circ 3}} V_{\\alpha} \\otimes V_{\\alpha} \\otimes V_{\\alpha}^{\\vee} \\xrightarrow[]{\\mathrm{id}_{V_{\\alpha}} \\otimes \\widehat{\\mathrm{ev}}_{V_{\\alpha}}} V_{\\alpha}.\n\t\\]\n\tSince $V_{\\alpha}$ is highest weight, $F_{\\mathcal{C}}(T)$ is determined by its value on a highest weight vector $v_0 \\in V_{\\alpha}$. 
Using the explicit form of the braiding, we compute\n\t\\[\n \\langle F_{\\mathcal{C}}(T) \\rangle\n =\n q^{\\frac{3}{2}(\\alpha+r-1)^2+(\\alpha+r-1)(1-r)} \\sum_{i=0}^{r-1} q^{i(-3\\alpha -r+1)}\\prod_{j=0}^{i-1} \\{i-j-\\alpha\\}.\n \\]\n For example, when $r=2$ and $\\alpha=2$, this specializes to $\\langle F_{\\mathcal{C}}(T) \\rangle = - 3 e^{\\frac{\\pi \\sqrt{-1}}{4}} \\neq 0$.\n\\end{example}\n\t\n\\begin{proposition} \\label{modified_invariant}\n The assignment $K \\mapsto \\langle F_\\mathcal{D}(T) \\rangle$, where $K$ is a colored framed knot and $T$ is a $(1,1)$-tangle whose closure is $K$, is a well-defined invariant of colored framed knots.\n\\end{proposition}\n\n\n\\begin{proof}\nThis follows from \\cref{reshturfunctor} and the standard fact that two connected $(1,1)$-tangles $T$ and $T^{\\prime}$ are isotopic if and only if their closures are framed isotopic knots.\n\\end{proof}\n\t\n\\section{Renormalized Reshetikhin--Turaev invariants of $\\mathcal{C}$}\\label{mqi}\n\t\nWe henceforth restrict attention to the ribbon category $\\mathcal{C}$ of weight $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$-modules and write $F$ for the Reshetikhin--Turaev functor $F_{\\mathcal{C}}$.\n\t\n\\subsection{Ambidextrous modules} \n\t\nThe idea of constructing a non-zero invariant from a knot by cutting to obtain a $(1,1)$-tangle does not immediately extend to links, as the following example shows.\n \n\\begin{example}\\label{hopfLink}\n Let $\\alpha,\\beta \\in \\mathbb{C}$ and let $L$ be the Hopf link with components colored by $V_{\\alpha}$ and $V_{\\beta}$. Up to isotopy, there are two choices of how to cut $L$. Cutting the strand colored by $V_\\alpha$ gives\n \\begin{equation*}\n\t\tF\\left(\n\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-.1,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (0,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (.1,-.5);\n\t\t\t\\draw [very thick] (0,-1) to (0,.4);\n\t\t\t\\draw [->, very thick] (0,.6) to (0,1);\n\t\t\t\\node at (1.2,0) {$V_\\beta$};\n\t\t\t\\node at (.45,-.9) {$V_\\alpha$};\n\t\t\\end{tikzpicture}\n\t\t\\right) = \\begin{cases} q^{\\beta\\alpha} \\frac{\\{\\alpha r\\}}{\\{\\alpha\\}} \\cdot \\mathrm{id}_{V_\\alpha} & \\mbox{if } \\alpha \\in \\mathbb{C}\\setminus r\\mathbb{Z}, \\\\ q^{\\beta rz} \\cdot (-1)^{(r + 1)z} r \\cdot \\mathrm{id}_{V_\\alpha}& \\mbox{if } \\alpha = rz \\in r\\mathbb{Z}. \\end{cases}\n\t\\end{equation*} \n\tIndeed, the map defined by the above tangle is the composition\n\t\\begin{equation*}\n\t\tV_\\alpha \\xrightarrow{\\mathrm{id} \\otimes \\mathrm{coev}_{V_{\\beta}}} V_\\alpha \\otimes V_\\beta \\otimes V_\\beta^\\vee \\xrightarrow{c_{V_{\\alpha},V_{\\beta}} \\otimes \\mathrm{id}} V_\\beta \\otimes V_\\alpha \\otimes V_\\beta^\\vee \\xrightarrow{c_{V_{\\beta},V_{\\alpha}} \\otimes \\mathrm{id}} V_\\alpha \\otimes V_\\beta \\otimes V_\\beta^\\vee \\xrightarrow{\\mathrm{id} \\otimes \\widehat{\\mathrm{ev}}_{V_{\\beta}}} V_\\alpha .\n\t\\end{equation*}\n\tAs in the proof of \\cref{ribbon theorem}, it suffices to compute the image under this map of a highest weight vector $v_0 \\in V_\\alpha$. Let $\\{w_i \\mid 0 \\leq i \\leq r-1\\}$ be a weight basis of $V_\\beta$ with dual basis $\\{w_i^\\vee \\mid 0 \\leq i \\leq r-1\\}$. 
Then we have under the above composition\n\t\\begin{multline*}\n\t\tv_0 \\xmapsto{\\mathrm{id} \\otimes \\mathrm{coev}_{V_\\beta}} \\sum_{i=0}^{r-1} v_0 \\otimes w_i \\otimes w_i^\\vee\n\t\t\\xmapsto{c_{V_{\\alpha},V_{\\beta}} \\otimes \\mathrm{id}} \\sum_{i=0}^{r-1} q^{(\\alpha + r - 1)(\\beta + r - 1 - 2i)\/2} w_i \\otimes v_0 \\otimes w_i^\\vee\n\t\t\\xmapsto{c_{V_{\\beta},V_{\\alpha}} \\otimes \\mathrm{id}} \\\\ \\sum_{i=0}^{r-1} q^{(\\alpha + r - 1)(\\beta + r - 1 - 2i)}v_0 \\otimes w_i \\otimes w_i^\\vee + \\cdots\n\t\t\\xmapsto{\\mathrm{id} \\otimes \\widehat{\\mathrm{ev}}_{V_\\beta}} \\sum_{i=0}^{r-1} q^{(\\alpha + r - 1)(\\beta + r - 1 - 2i)} q^{(\\beta + r - 1 - 2i)(1 - r)}v_0,\n\t\\end{multline*}\n\twhere the omitted quantity $\\cdots$ is a linear combination of terms of the form $F^j v_0 \\otimes w_{i-j} \\otimes w_i^{\\vee}$, $j >0$, and so is in the kernel of $\\mathrm{id}_{V_{\\alpha}} \\otimes \\widehat{\\mathrm{ev}}_{V_{\\beta}}$. We have\n\t\\[\n\t\t\\sum_{i=0}^{r-1} q^{(\\alpha + r - 1)(\\beta + r - 1 - 2i)} q^{(\\beta + r - 1 - 2i)(1 - r)}\n =q^{\\alpha(\\beta + r - 1)}\\sum_{i=0}^{r-1}q^{-2\\alpha i}.\n\t\\]\n\tIf $\\alpha \\notin r\\mathbb{Z}$, then $q^{-2\\alpha} \\neq 1$ and the previous line evaluates to \n\t\\begin{equation*}\n\t\tq^{\\alpha(\\beta + r - 1)} \\frac{1 - q^{-2\\alpha r}}{1 - q^{-2\\alpha}} = q^{\\alpha\\beta} \\frac{q^{\\alpha r} - q^{-\\alpha r}}{q^{\\alpha} - q^{-\\alpha}} =q^{\\alpha\\beta} \\frac{\\{\\alpha r\\}}{\\{\\alpha\\}}.\n\t\\end{equation*}\n\tIf instead $\\alpha = rz \\in r \\mathbb{Z}$, then\n \\begin{equation*}\n\t\tq^{rz(\\beta + r - 1)}\\sum_{i=0}^{r-1}q^{-2rzi} = q^{rz(\\beta + r - 1)} r = (-1)^{rz + z} q^{\\beta rz} r.\n \\end{equation*}\n\t\n\tIn particular, taking $r=2$ with $\\alpha=0$ and $\\beta=2$, we obtain\n\t\\[\n\tF\\left(\n\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n \t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-.1,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (0,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (.1,-.5);\n \t\\draw [very thick] (0,-1) to (0,.4);\n \t\\draw [->, very thick] (0,.6) to (0,1);\n \t\\node at (1.2,0) {$V_2$};\n \t\\node at (.35,-.9) {$V_0$};\n \\end{tikzpicture}\n \\right)\n\t= 2 \\mathrm{id}_{V_0},\n\t\\qquad\n\tF\\left(\n\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-.1,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (0,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (.1,-.5);\n\t\t\\draw [very thick] (0,-1) to (0,.4);\n\t\t\\draw [->, very thick] (0,.6) to (0,1);\n\t\t\\node at (1.2,0) {$V_0$};\n\t\t\\node at (.35,-.9) {$V_2$};\n\t\\end{tikzpicture}\n\t\\right)\n\t= -2 \\mathrm{id}_{V_2}.\n\t\\]\n\tIn view of \\cref{formula}, we want to attach to the Hopf link colored by $V_0$ and $V_2$ a scalar given by cutting the Hopf link open to a $(1,1)$-tangle. 
However, we see that the scalar depends non-trivially on which strand we choose to cut.\n\\end{example}\n\t\nThe following notion is the key to resolving the cutting ambiguity illustrated by the previous example.\n\n\\begin{definition}[{\\cite[\\S 3]{geer_2009}}] A module $V \\in \\mathcal{C}$ is called \\emph{ambidextrous} if the equality\n\t\\begin{equation}\\label{ambiequation}\n\t\t\\begin{aligned}\n\t\t\tF\\left(\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\t\\draw [<-, very thick] (-1.25,0) to [out=90, in=180] (-.75,1) to [out=0,in = 90] (-.25,.25);\n\t\t\t\t\\draw [very thick] (-1.25,0) to [out=270, in=180] (-.75,-1) to [out=0,in=270] (-.25,-.25);\n\t\t\t\t\\draw [->, very thick] (.25,.25) to [out=90, in=230] (.5,1);\n\t\t\t\t\\draw [very thick] (.25,-.25) to [out=270, in=130] (.5,-1);\n\t\t\t\t\\node at (-1.7,0) {$V$};\n\t\t\t\t\\node at (.8,-.9) {$V$};\n\t\t\t\t\\node at (0,0) {$T$};\n\t\t\t\\end{tikzpicture}\n\t\t\t\\right)\n\t\t\t&=F\\left(\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\t\\draw [->, very thick] (-.25,.25) to [out=90, in=310] (-.5,1);\n\t\t\t\t\\draw [very thick] (-.25,-.25) to [out=270, in=50] (-.5,-1);\n\t\t\t\t\\draw [<-, very thick] (1.25,0) to [out=90, in=0] (.75,1) to [out=180,in = 90] (.25,.25);\n\t\t\t\t\\draw [very thick] (1.25,0) to [out=270, in=00] (.75,-1) to [out=180,in=270] (.25,-.25);\n\t\t\t\t\\node at (1.7,0) {$V$};\n\t\t\t\t\\node at (-.8,-.9) {$V$};\n\t\t\t\t\\node at (0,0) {$T$};\n\t\t\t\\end{tikzpicture}\n\t\t\t\\right)\n\t\t\\end{aligned}\n \\end{equation}\n holds for all (2,2)-tangles $T$ whose open strands are colored by $V$.\n\\end{definition}\n\t\n\\begin{lemma}\\label{ambiqd}\n\tIf $V \\in \\mathcal{C}$ is simple with non-vanishing quantum dimension, then $V$ is ambidextrous.\n\\end{lemma}\n\\begin{proof}\nLet $T$ be a $(2,2)$-tangle whose open strands are colored by $V$ and let $L$ be the diagram\n \\begin{equation*}\n\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\\draw [<-, very thick] (-1.25,0) to [out=90, in=180] (-.75,1) to [out=0,in = 90] (-.25,.25);\n\t\t\t\\draw [very thick] (-1.25,0) to [out=270, in=180] (-.75,-1) to [out=0,in=270] (-.25,-.25);\n\t\t\t\\draw [<-, very thick] (1.25,0) to [out=90, in=0] (.75,1) to [out=180,in = 90] (.25,.25);\n\t\t\t\\draw [very thick] (1.25,0) to [out=270, in=00] (.75,-1) to [out=180,in=270] (.25,-.25);\n\t\t\t\\node at (-1.7,0) {$V$};\n\t\t\t\\node at (1.7,0) {$V$};\n\t\t\t\\node at (0,0) {$T$};\n\t\t\\end{tikzpicture}.\n\t\\end{equation*}\n\tSince $L$ is obtained by taking the right and left partial traces of the $(1,1)$-tangles appearing in \\cref{ambiequation}, we find that both sides of this equation are equal to $ \\frac{\\langle F(L) \\rangle}{\\textnormal{qdim}_\\mathcal{C}(V)} \\mathrm{id}_V$.\n\\end{proof}\n\nWhen $V$ has vanishing quantum dimension, we need to investigate further.\n\n\\begin{lemma} \\label{endocommutative}\n\tLet $V, W \\in \\mathcal{C}$ be simple objects such that $V \\otimes W$ is semisimple and multiplicity free. 
Then the algebra $\\textnormal{End}_{\\mathcal{C}}(V \\otimes W)$ is isomorphic to a direct sum of copies of $\\mathbb{C}$.\n\\end{lemma}\n\\begin{proof}\n Let $U_1, \\ldots, U_n$ be pairwise non-isomorphic simples such that $V \\otimes W \\simeq U_1 \\oplus \\cdots \\oplus U_n$. Schur's Lemma implies algebra isomorphisms $\n\t\\textnormal{End}_{\\mathcal{C}}(V \\otimes W)\n\t\\simeq \n\t\\bigoplus_{i=1}^n \\textnormal{End}_{\\mathcal{C}}(U_i)\n\t\\simeq\n\t\\mathbb{C}^n$.\n\\end{proof}\n\t\n\\begin{lemma} \\label{multiplicityfree}\n\tLet $\\eta \\in \\mathbb{C} \\setminus \\frac{1}{2}\\mathbb{Z}$. Then $V_\\eta \\otimes V_\\eta \\in \\mathcal{C}$ is semisimple and multiplicity free.\n\\end{lemma}\n\\begin{proof}\nSince $2\\eta \\notin \\mathbb{Z}$, the object $V_\\eta \\otimes V_\\eta \\in \\mathcal{C}_{\\overline{2\\eta}}$ is semisimple by \\cref{Cisgss}.\n\tHence, there exist unique integers $m_{2\\eta -r +2i+1} \\in \\mathbb{Z}_{\\geq 0}$ such that\n\t\\[\n\tV_\\eta \\otimes V_\\eta \\simeq \\bigoplus_{i=0}^{r-1} V_{2 \\eta -r +2i+1}^{\\oplus m_{2\\eta -r +2i+1}}.\n\t\\]\n\t\n\tConsider $\\mathbb{Q}[\\mathbb{C}]$, the group algebra of $\\mathbb{C}$, with basis $\\{x^{\\lambda}\\}_{\\lambda \\in \\mathbb{C}}$. The \\emph{character} of $V \\in \\mathcal{C}$ is $\\text{ch}(V) = \\sum_{\\lambda \\in \\mathbb{C}} \\dim_{\\mathbb{C}}(V[\\lambda]) x^\\lambda$. The explicit description of $V_\\alpha$ gives $\\text{ch}(V_\\alpha) = x^{\\alpha}[r]_x$, where $[r]_x = \\sum_{i=0}^{r-1} x^{r-1-2i}$. We claim that the set\n\t\\[\n \\mathcal{S} = \\{\\text{ch}(V_{2\\eta - r + 2i + 1}) \\mid 0 \\leq i \\leq r-1\\} \\subset \\mathbb{Q}[\\mathbb{C}]\n\t\\]\n\tis linearly independent. Suppose that $\\sum_{i=0}^{r-1} a_i \\text{ch}(V_{2\\eta - r + 2i + 1}) = 0$ for some $a_i \\in \\mathbb{Q}$. Since all powers of $x$ which appear in this equation lie on the same affine real line in $\\mathbb{C}$, they are naturally ordered. The largest such power is $x^{2\\eta + 2(r-1)}$ with coefficient $a_{r-1}$, resulting from $\\text{ch}(V_{2\\eta - r + 2(r-1) + 1})$. Hence, $a_{r-1}=0$. Continuing in this way shows that $a_{r-2}=\\cdots = a_0=0$ and $\\mathcal{S}$ is linearly independent.\n\t\t\n\tThe character of $V_\\eta \\otimes V_\\eta$ is $\\text{ch}(V_\\eta)^2 = x^{2\\eta}[r]_x^2$. On the other hand,\n\t\\[\n\t\\text{ch}(V_\\eta)^2 = \\big( \\sum_{i=0}^{r-1} m_{2\\eta-r+2i+1}x^{2 \\eta-r+2i+1} \\big) [r]_x.\n\t\\]\n\tSetting each $m_{2\\eta-r+2i+1}=1$, the right-hand side of the previous equation becomes\n\t\\[\n\t\\left ( \\sum_{i=0}^{r-1} x^{2 \\eta-r+2i+1} \\right ) [r]_x\n\t=\n\tx^{2\\eta}[r]^2_x\n\t=\n\t\\text{ch}(V_\\eta)^2.\n\t\\]\n\tIn view of the linear independence of $\\mathcal{S}$, this completes the proof.\n\\end{proof}\n\t\n\\begin{theorem} \\label{ambid}\nLet $\\eta \\in \\mathbb{C}\\setminus\\frac{1}{2}\\mathbb{Z}$. Then $V_\\eta$ is ambidextrous.\n\\end{theorem}\n\\begin{proof}\n\tBy \\cref{endocommutative,multiplicityfree}, the algebra $\\textnormal{End}_{\\mathcal{C}}(V_\\eta \\otimes V_\\eta)$ is commutative. Let $T$ be a $(2,2)$-tangle whose open strands are colored by $V_{\\eta}$. 
Then we have the following sequence of equalities, where we implicitly apply $F$ to each tangle and the coupons are colored by $T$:\n\t\\begin{multline*}\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\n\t\t\t\t\\draw [<-, very thick] (-1.25,0) to [out=90, in=180] (-.75,1) to [out=0,in = 90] (-.25,.25);\n\t\t\t\t\\draw [very thick] (-1.25,0) to [out=270, in=180] (-.75,-1) to [out=0,in=270] (-.25,-.25);\n\t\t\t\n\t\t\t\t\\draw [->, very thick] (.25,.25) to [out=90, in=230] (.5,1);\n\t\t\t\t\\draw [very thick] (.25,-.25) to [out=270, in=130] (.5,-1);\n\t\t\t\n\t\t\t\t\\node at (-1.5,-.8) {$V_\\eta$};\n\t\t\t\t\\node at (.8,-.8) {$V_\\eta$};\n\t\t\t\\end{tikzpicture}\n\t\t\t\\;\\overset{\\text{RII}}{=}\\;\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\n\t\t\t\t\\draw [<-, very thick] (-1.25,0) to [out=90, in=180] (-.75,1) to [out=0,in = 90] (-.25,.25);\n\t\t\t\t\\draw [very thick] (-1.25,0) to [out=270, in=180] (-.75,-1) to [out=0,in=270] (.75,-.75);\n\t\t\t\t\\draw [very thick] (-.25,-.25) to [out=270, in=90] (.75,-.75);\n\t\t\t\n\t\t\t\t\\draw [->, very thick] (.25,.25) to [out=90, in=230] (.5,1);\n\t\t\t\t\\draw [very thick] (.25,-.25) to [out=270, in=90] (.25,-.43);\n\t\t\t\t\\draw [very thick] (.25,-.57) to [out=270, in=90] (.25,-1.03);\n\t\t\t\t\\draw [very thick] (.25,-1.17) to [out=270, in=160] (.5,-1.30);\n\t\t\t\n\t\t\t\t\\node at (-1.35,-1) {$V_\\eta$};\n\t\t\t\t\\node at (.85,.8) {$V_\\eta$};\n\t\t\t\\end{tikzpicture}\n\t\t\t\\;=\\;\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\n\t\t\t\t\\draw [<-, very thick] (-1.25,0) to [out=90, in=180] (-.5,1) to [out=0,in = 90] (.25,.25);\n\t\t\t\t\\draw [very thick] (-1.25,0) to [out=270, in=180] (-.5,-1) to [out=0,in=270] (.25,-.25);\n\t\t\t\n\t\t\t\t\\draw [very thick] (-.25,.25) to [out=90, in=235] (.05,.65);\n\t\t\t\t\\draw [->, very thick] (.15,.75) to [out=45, in=230] (.5,1);\n\t\t\t\t\\draw [very thick] (-.25,-.25) to [out=270, in=135] (.05,-.65);\n\t\t\t\t\\draw [very thick] (.15,-.75) to [out=315, in=130] (.5,-1);\n\t\t\t\n\t\t\t\t\\node at (-1.3,-.8) {$V_\\eta$};\n\t\t\t\t\\node at (.7,-.8) {$V_\\eta$};\n\t\t\t\\end{tikzpicture}\n\t\t\t\\;\\overset{\\text{RI}}{=}\\; \n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\n\t\t\t\t\\draw [<-, very thick] (-1.25,0) to [out=90,in=270] (-1.25,1) to [out=90,in=0] (-1.5,1.25) to [out=180,in=90] (-1.75,1) to [out=270,in=180] (-1.33,.75);\n\t\t\t\t\\draw [very thick] (-1.17,.75) to [out=0,in=180] (-.5,1) to [out=0,in=90] (.25,.25);\n\t\t\t\n\t\t\t\t\\draw [very thick] (-1.25,0) to [out=270,in=90] (-1.25,-1.25) to [out=270,in=0] (-1.5,-1.5) to [out=180,in=270] (-1.75,-1.25) to [out=90,in=180] (-1.33,-1);\n\t\t\t\t\\draw [very thick] (-1.17,-1) to [out=0, in=180] (-.5,-1) to [out=0,in=270] (.25,-.25);\n\t\t\t\n\t\t\t\t\\draw [very thick] (-.25,.25) to [out=90, in=235] (.05,.65);\n\t\t\t\t\\draw [->, very thick] (.15,.75) to [out=45, in=230] (.5,1);\n\t\t\t\t\\draw [very thick] (-.25,-.25) to [out=270, in=135] (.05,-.65);\n\t\t\t\t\\draw [very thick] (.15,-.75) to [out=315, in=130] (.5,-1);\n\t\t\t\n\t\t\t\t\\node at (-1.6,-.7) {$V_\\eta$};\n\t\t\t\t\\node at (.8,-1) 
{$V_\\eta$};\n\t\t\t\\end{tikzpicture}\n\t\t\t\\;\\overset{\\text{RII}}{=}\\;\n\t\t\t\\\\\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\n\t\t\t\t\\draw [<-, very thick] (-1.25,0) to [out=90,in=180] (-1,.41) to (-.2,.41) to [out=0,in=270] (0,.53) to [out=90,in=0] (-.2,.65) to (-1,.65) to [out=180,in=270] (-1.25,.75) to [out=90,in=270] (-1.25,1) to [out=90,in=0] (-1.5,1.25) to [out=180,in=90] (-1.75,1) to [out=270,in=180] (-1.33,.75);\n\t\t\t\t\\draw [very thick] (-1.17,.75) to [out=0,in=180] (-.5,1) to [out=0,in=90] (.25,.25);\n\t\t\t\n\t\t\t\t\\draw [very thick] (-1.25,0) to [out=270,in=90] (-1.25,-1.25) to [out=270,in=0] (-1.5,-1.5) to [out=180,in=270] (-1.75,-1.25) to [out=90,in=180] (-1.33,-1);\n\t\t\t\t\\draw [very thick] (-1.17,-1) to [out=0, in=180] (-.5,-1) to [out=0,in=270] (.25,-.25);\n\t\t\t\n\t\t\t\t\\draw [very thick] (-.25,.25) to [out=90, in=270] (-.25,.35);\n\t\t\t\t\\draw [very thick] (-.25,.45) to [out=90, in=270] (-.25,.6);\n\t\t\t\t\\draw [very thick] (-.25,.7) to [out=90, in=270] (-.25,.9);\n\t\t\t\t\\draw [->, very thick] (-.25,1) to [out=90, in=270] (-.25,1.3);\n\t\t\t\t\\draw [very thick] (-.25,-.25) to [out=270, in=90] (-.25,-.9);\n\t\t\t\t\\draw [very thick] (-.25,-1.03) to [out=270, in=90] (-.25,-1.55);\n\t\t\t\n\t\t\t\t\\node at (-1.6,-.7) {$V_\\eta$};\n\t\t\t\t\\node at (.2,-1.3) {$V_\\eta$};\n\t\t\t\\end{tikzpicture}\n\t\t\t\\;=\\;\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\n\t\t\t\t\\draw [<-, very thick] (-1.25,0) to [out=90,in=180] (-1,.41) to (.5,.41) to [out=0,in=270] (.75,.65) to [out=90,in=0] (.5,.89) to [out=180,in=90] (.25,.65) to (.25,.48);\n\t\t\t\t\\draw [very thick] (.25,.25) to (.25,.35);\n\t\t\t\n\t\t\t\t\\draw [very thick] (-1.25,0) to [out=270,in=90] (-1.25,-1.25) to [out=270,in=0] (-1.5,-1.5) to [out=180,in=270] (-1.75,-1.25) to [out=90,in=180] (-1.33,-1);\n\t\t\t\t\\draw [very thick] (-1.17,-1) to [out=0, in=180] (-.5,-1) to [out=0,in=270] (.25,-.25);\n\t\t\t\n\t\t\t\t\\draw [very thick] (-.25,.25) to [out=90, in=270] (-.25,.35);\n\t\t\t\t\\draw [->, very thick] (-.25,.45) to [out=90, in=270] (-.25,1.3);\n\t\t\t\t\\draw [very thick] (-.25,-.25) to [out=270, in=90] (-.25,-.9);\n\t\t\t\t\\draw [very thick] (-.25,-1.03) to [out=270, in=90] (-.25,-1.55);\n\t\t\t\n\t\t\t\t\\node at (-1.6,-.7) {$V_\\eta$};\n\t\t\t\t\\node at (.2,-1.3) {$V_\\eta$};\n\t\t\t\\end{tikzpicture}\n\t\t\t\\;=\\;\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\n\t\t\t\t\\draw [<-, very thick] (-1.25,-.75) to [out=90,in=180] (-.5,-.5) to (.5,-.5) to [out=0,in=270] (.75,0) to [out=90,in=0] (.5,.5) to [out=180,in=90] (.25,.25);\n\t\t\t\n\t\t\t\t\\draw [very thick] (-1.25,-.75) to [out=270,in=90] (-1.25,-1.25) to [out=270,in=0] (-1.5,-1.5) to [out=180,in=270] (-1.75,-1.25) to [out=90,in=180] (-1.33,-1);\n\t\t\t\t\\draw [very thick] (-1.17,-1) to [out=0, in=180] (-.5,-1) to [out=0,in=270] (.25,-.55);\n\t\t\t\t\\draw [very thick] (.25,-.25) to (.25,-.45);\n\t\t\t\n\t\t\t\t\\draw [->, very thick] (-.25,.25) to [out=90, in=270] (-.25,1.55);\n\t\t\t\t\\draw [very thick] (-.25,-.25) to [out=270, in=90] (-.25,-.45);\n\t\t\t\t\\draw [very thick] (-.25,-.55) to [out=270, in=90] (-.25,-.9);\n\t\t\t\t\\draw [very thick] (-.25,-1.03) to [out=270, in=90] 
(-.25,-1.55);\n\t\t\t\n\t\t\t\t\\node at (-1.7,-.7) {$V_\\eta$};\n\t\t\t\t\\node at (-.6,1.1) {$V_\\eta$};\n\t\t\t\\end{tikzpicture}\n\t\t\t\\;=\\;\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.6 with {\\arrow{<}}}] (0,-.75) to [out=90,in=180] (.1,-.5) to (.5,-.5) to [out=0,in=270] (.75,0) to [out=90,in=0] (.5,.5) to [out=180,in=90] (.25,.25);\n\t\t\t\n\t\t\t\t\\draw [very thick] (0,-.75) to [out=270,in=90] (.25,-1) to [out=270,in=0] (.125,-1.15) to [out=180,in=270] (0,-1) to [out=90,in=250] (.06,-.92);\n\t\t\t\t\\draw [very thick] (.17,-.84) to [out=60,in=270] (.25,-.75) to (.25,-.55);\n\t\t\t\t\\draw [very thick] (.25,-.25) to (.25,-.45);\n\t\t\t\n\t\t\t\t\\draw [->, very thick] (-.25,.25) to [out=90, in=270] (-.25,1.25);\n\t\t\t\t\\draw [very thick] (-.25,-.25) to [out=270, in=90] (-.25,-1.25);\n\t\t\t\n\t\t\t\t\\node at (-.6,.8) {$V_\\eta$};\n\t\t\t\t\\node at (.8,-1) {$V_\\eta$};\n\t\t\t\\end{tikzpicture}\n\t\t\t\\;\\overset{\\text{RII}}{=}\\;\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\n\t\t\t\t\\draw [<-, very thick] (1.25,0) to [out=90, in=0] (.75,1) to [out=180,in = 90] (.25,.25);\n\t\t\t\t\\draw [very thick] (1.25,0) to [out=270, in=00] (.75,-1) to [out=180,in=270] (.25,-.25);\n\t\t\t\n\t\t\t\t\\draw [->, very thick] (-.25,.25) to [out=90, in=310] (-.5,1);\n\t\t\t\t\\draw [very thick] (-.25,-.25) to [out=270, in=50] (-.5,-1);\n\t\t\t\t\\node at (-.85,-.8) {$V_\\eta$};\n\t\t\t\t\\node at (1.6,-.8) {$V_\\eta$};\n\t\t\t\\end{tikzpicture}.\n\t\t\\end{multline*}\n\tThe second equality is implied by the commutativity of $\\textnormal{End}_{\\mathcal{C}}(V_\\eta \\otimes V_\\eta)$. The fifth and seventh equalities are each a combination of framed Reidemeister moves $\\text{RII}$ and $\\text{RIII}$. The sixth equality holds by a combination of framed Reidemeister moves that depends on $T$. The other equalities hold by the indicated framed Reidemeister moves.\n\\end{proof}\n\t\n\\subsection{Modified quantum dimensions}\n\nDefine a function $S': \\mathbb{C} \\times \\mathbb{C} \\rightarrow \\mathbb{C}$ by\n\\begin{equation*}\n\tS'(\\beta, \\alpha) = \\, \\left \\langle F \\left(\n\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-.1,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (0,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (.1,-.5);\n\t\t\\draw [very thick] (0,-1) to (0,.4);\n\t\t\\draw [->, very thick] (0,.6) to (0,1);\n\t\t\\node at (1.2,0) {$V_\\beta$};\n\t\t\\node at (.35,-.9) {$V_\\alpha$};\n\t\\end{tikzpicture}\\right) \\right \\rangle.\n\\end{equation*}\n\t\n\\begin{proposition}\\label{sHopf} \n\tThe equality\n\t\\begin{equation*}\n\t\tS'(\\beta, \\alpha) = \\begin{cases} q^{\\beta\\alpha} \\frac{\\{\\alpha r\\}}{\\{\\alpha\\}} & \\mbox{if } \\alpha \\in \\mathbb{C}\\setminus r\\mathbb{Z}, \\\\ q^{\\beta rz} \\cdot (-1)^{(r + 1)z} r & \\mbox{if } \\alpha = rz \\in r\\mathbb{Z} \\end{cases}\n\t\\end{equation*}\n \tholds. 
In particular, $S'(\\beta,\\alpha)$ is nonzero for all $\\beta \\in \\mathbb{C}$ and all $\\alpha \\in (\\mathbb{C} \\setminus \\mathbb{Z}) \\cup r\\mathbb{Z}$.\n\\end{proposition}\n\\begin{proof}\n\tThis was computed in \\cref{hopfLink}.\n\\end{proof}\n\t\n\\begin{definition}\\label{modifiedqd}\n\tLet $\\eta \\in \\mathbb{C}$. The \\emph{modified quantum dimension with respect to $\\eta$} is the function $\\mathbf{d}_\\eta : (\\mathbb{C} \\setminus \\mathbb{Z}) \\cup r\\mathbb{Z} \\rightarrow \\mathbb{C}$ given by $\\mathbf{d}_\\eta(\\alpha) = \\frac{S'(\\alpha,\\eta)}{S'(\\eta,\\alpha)}$.\n\\end{definition}\n\nBy \\cref{sHopf}, for $\\eta, \\alpha \\in \\mathbb{C} \\setminus r\\mathbb{Z}$ we have $\\mathbf{d}_\\eta(\\alpha) = \\frac{\\{\\eta r\\}\\{\\alpha\\}}{\\{\\eta\\}\\{\\alpha r\\}}$, and the modified quantum dimension $\\mathbf{d}_{\\eta}$ is nowhere zero whenever $\\eta \\in (\\mathbb{C} \\setminus \\mathbb{Z}) \\cup r\\mathbb{Z}$. Modified quantum dimensions associated to different parameters $\\eta$ are related as follows.\n\n\\begin{proposition}\\label{etaetaprime}\n\tFor $\\eta, \\eta' \\in \\mathbb{C} \\setminus \\mathbb{Z}$ and $\\alpha \\in (\\mathbb{C} \\setminus \\mathbb{Z}) \\cup r\\mathbb{Z}$, the following equality holds:\n\t\\[\n\t\\mathbf{d}_\\eta(\\alpha) = \\frac{\\sin(\\eta \\pi)\\sin(\\pi \\frac{\\eta'}{r})}{\\sin(\\pi \\frac{\\eta}{r})\\sin(\\eta' \\pi)}\\mathbf{d}_{\\eta'}(\\alpha).\n\t\\]\n\\end{proposition}\n\\begin{proof}\nThis follows immediately from \\cref{sHopf} and the definition of $\\mathbf{d}_{(-)}$, using that $\\{x\\} = 2\\sqrt{-1}\\sin(\\frac{\\pi x}{r})$.\n\\end{proof}\n\t\n\\begin{theorem}[{\\cite[Lemma 2]{geer_2009}}]\\label{invariant}\n Let $\\eta \\in \\mathbb{C}$ be such that $V_\\eta$ is ambidextrous, and let $\\alpha, \\beta \\in (\\mathbb{C} \\setminus \\mathbb{Z}) \\cup r\\mathbb{Z}$. Then for all $(2,2)$-tangles $T$, the following equality holds:\n\t\\begin{equation*}\n\t\t\\begin{aligned}\n\t\t\t\\mathbf{d}_\\eta(\\beta)\\; \\left \\langle F \\left(\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\t\\draw [<-, very thick] (-1.25,0) to [out=90, in=180] (-.75,1) to [out=0,in = 90] (-.25,.25);\n\t\t\t\t\\draw [very thick] (-1.25,0) to [out=270, in=180] (-.75,-1) to [out=0,in=270] (-.25,-.25);\n\t\t\t\t\\draw [->, very thick] (.25,.25) to [out=90, in=230] (.5,1);\n\t\t\t\t\\draw [very thick] (.25,-.25) to [out=270, in=130] (.5,-1);\n\t\t\t\t\\node at (-1.7,0) {$V_\\alpha$};\n\t\t\t\t\\node at (.9,-.9) {$V_\\beta$};\n\t\t\t\t\\node at (0,0) {$T$};\n\t\t\t\\end{tikzpicture}\n\t\t\t\\right) \\right \\rangle\n\t\t\t&= \\mathbf{d}_\\eta(\\alpha) \\; \\left \\langle F \\left(\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\t\\draw [->, very thick] (-.25,.25) to [out=90, in=310] (-.5,1);\n\t\t\t\t\\draw [very thick] (-.25,-.25) to [out=270, in=50] (-.5,-1);\n\t\t\t\t\\draw [<-, very thick] (1.25,0) to [out=90, in=0] (.75,1) to [out=180,in = 90] (.25,.25);\n\t\t\t\t\\draw [very thick] (1.25,0) to [out=270, in=00] (.75,-1) to [out=180,in=270] (.25,-.25);\n\t\t\t\t\\node at (1.7,0) {$V_\\beta$};\n\t\t\t\t\\node at (-.9,-.9) {$V_\\alpha$};\n\t\t\t\t\\node at (0,0) {$T$};\n\t\t\t\\end{tikzpicture}\\right) \\right \\rangle.\n\t\t\\end{aligned}\n\t\\end{equation*}\n\\end{theorem}\n\\begin{proof}\n\tBecause $V_\\eta$ is ambidextrous, there is an equality\n\t\\begin{equation}\\label{weird}\n\t\t\\begin{aligned}\n\t\t\t\\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.7 with {\\arrow{<}}}] (-1.25,.3) to [out=90, 
in=180] (-.75,1) to [out=0,in = 90] (-.25,.25);\n\t\t\t\t\\draw [very thick] (-1.25,.1) to (-1.25,-.25) to [out=270, in=180] (-.75,-1) to [out=0,in=270] (-.25,-.25);\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.8 with {\\arrow{<}}}] (1.15,-.7) to [out=60,in=270] (1.25,0) to [out=90, in=0] (.75,1) to [out=180,in = 90] (.25,.25);\n\t\t\t\t\\draw [very thick] (1.05,-.9) to [out=240, in=0] (.75,-1) to [out=180,in=270] (.25,-.25);\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-1.15,-.2) to [out=0, in=270] (-.75,0) to [out=90, in=0] (-1.25,.2) to [out=180, in=90] (-1.75,0) to [out=270, in=180] (-1.35,-.2);\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.5 with {\\arrow{>}}}] (1.4,-1) to [out=90,in=270] (.9,-.6) to (.9, .6) to [out=90,in=45] (.95,.7);\n\t\t\t\t\\draw [very thick] (1.2,.85) to [in=270](1.4, 1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (-2.1,0) {$V_\\eta$};\n\t\t\t\t\\node at (-1.5,.9) {$V_\\alpha$};\n\t\t\t\t\\node at (1.6,0) {$V_\\beta$};\n\t\t\t\t\\node at (1.8,-.9) {$V_\\eta$};\n\t\t\t\t\\node at (0,0) {$T$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t&= \\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.7 with {\\arrow{<}}}] (1.25,-.1) to (1.25,.25) to [out=90, in=0] (.75,1) to [out=180,in = 90] (.25,.25);\n\t\t\t\t\\draw [very thick] (1.25,-.3) to [out=270, in=0] (.75,-1) to [out=180,in=270] (.25,-.25);\n\t\t\t\t\\draw [very thick] (-1.15,.7) to [out=240,in=90] (-1.25,0) to [out=270, in=180] (-.75,-1) to [out=0,in=270] (-.25,-.25);\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.48 with {\\arrow{<}}}] (-1.05,.89) to [out=60, in=180] (-.75,1) to [out=0,in=90] (-.25,.25);\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.65 with {\\arrow{<}}}] (1.15,.2) to [out=180, in=90] (.75,0) to [out=270, in=180] (1.25,-.2) to [out=0, in=270] (1.75,0) to [out=90, in=0] (1.35,.2);\n\t\t\t\t\\draw [very thick] (-1.4,-1) to [out=90,in=215] (-1.2,-.85);\n\t\t\t\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.5 with {\\arrow{>}}}] (-1.02,-.75) to [out=45,in=270] (-.9,-.6) to (-.9, .6) to [out=90,in=270] (-1.4, 1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (2.1,0) {$V_\\eta$};\n\t\t\t\t\\node at (1.5,.9) {$V_\\beta$};\n\t\t\t\t\\node at (-1.6,0) {$V_\\alpha$};\n\t\t\t\t\\node at (-1.8,-.9) {$V_\\eta$};\n\t\t\t\t\\node at (0,0) {$T$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle.\n\t\t\\end{aligned}\n\t\\end{equation}\n\tWe expand both sides of this equality. 
The left-hand side becomes\n\t\\begin{equation*}\n\t\t\\begin{aligned} &\\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.7 with {\\arrow{<}}}] (-1.25,.3) to [out=90, in=180] (-.75,1) to [out=0,in = 90] (-.25,.25);\n\t\t\t\t\\draw [very thick] (-1.25,.1) to (-1.25,-.25) to [out=270, in=180] (-.75,-1) to [out=0,in=270] (-.25,-.25);\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.8 with {\\arrow{<}}}] (1.15,-.7) to [out=60,in=270] (1.25,0) to [out=90, in=0] (.75,1) to [out=180,in = 90] (.25,.25);\n\t\t\t\t\\draw [very thick] (1.05,-.9) to [out=240, in=0] (.75,-1) to [out=180,in=270] (.25,-.25);\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-1.15,-.2) to [out=0, in=270] (-.75,0) to [out=90, in=0] (-1.25,.2) to [out=180, in=90] (-1.75,0) to [out=270, in=180] (-1.35,-.2);\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.5 with {\\arrow{>}}}] (1.4,-1) to [out=90,in=270] (.9,-.6) to (.9, .6) to [out=90,in=45] (.95,.7);\n\t\t\t\t\\draw [very thick] (1.2,.85) to [in=270](1.4, 1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (-2.1,0) {$V_\\eta$};\n\t\t\t\t\\node at (-1.5,.9) {$V_\\alpha$};\n\t\t\t\t\\node at (1.6,0) {$V_\\beta$};\n\t\t\t\t\\node at (1.8,-.9) {$V_\\eta$};\n\t\t\t\t\\node at (0,0) {$T$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t= \\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{<}}}] (-.1,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (0,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (.1,-.5);\n\t\t\t\t\\draw [<-,very thick] (0,-1) to (0,.4);\n\t\t\t\t\\draw [very thick] (0,.6) to (0,1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (1.2,0) {$V_\\eta$};\n\t\t\t\t\\node at (.5,-.9) {$V_\\alpha$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t\\;\\;\\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\t\\draw [<-, very thick] (-1.25,0) to [out=90, in=180] (-.75,1) to [out=0,in = 90] (-.25,.25);\n\t\t\t\t\\draw [very thick] (-1.25,0) to [out=270, in=180] (-.75,-1) to [out=0,in=270] (-.25,-.25);\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.8 with {\\arrow{<}}}] (1.15,-.7) to [out=60,in=270] (1.25,0) to [out=90, in=0] (.75,1) to [out=180,in = 90] (.25,.25);\n\t\t\t\t\\draw [very thick] (1.05,-.9) to [out=240, in=0] (.75,-1) to [out=180,in=270] (.25,-.25);\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.5 with {\\arrow{>}}}] (1.4,-1) to [out=90,in=270] (.9,-.6) to (.9, .6) to [out=90,in=45] (.95,.7);\n\t\t\t\t\\draw [very thick] (1.2,.85) to [in=270](1.4, 1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (-1.7,0) {$V_\\alpha$};\n\t\t\t\t\\node at (1.6,0) {$V_\\beta$};\n\t\t\t\t\\node at (1.8,-.9) {$V_\\eta$};\n\t\t\t\t\\node at (0,0) {$T$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t\\\\\n\t\t\t&=\\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 
with {\\arrow{<}}}] (.1,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (0,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (-.1,.5);\n\t\t\t\t\\draw [very thick] (0,-1) to (0,-.6);\n\t\t\t\t\\draw [->, very thick] (0,-.4) to (0,1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (1.2,0) {$V_\\eta$};\n\t\t\t\t\\node at (.35,-.9) {$V_\\alpha$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t\\;\n\t\t\t\\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\t\\draw [<-, very thick] (-1.25,0) to [out=90, in=180] (-.75,1) to [out=0,in = 90] (-.25,.25);\n\t\t\t\t\\draw [very thick] (-1.25,0) to [out=270, in=180] (-.75,-1) to [out=0,in=270] (-.25,-.25);\n\t\t\t\t\\draw [->, very thick] (.25,.25) to [out=90, in=230] (.5,1);\n\t\t\t\t\\draw [very thick] (.25,-.25) to [out=270, in=130] (.5,-1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (-1.7,0) {$V_\\alpha$};\n\t\t\t\t\\node at (.9,-.9) {$V_\\beta$};\n\t\t\t\t\\node at (0,0) {$T$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t\\;\n\t\t\t\\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-.1,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (0,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (.1,-.5);\n\t\t\t\t\\draw [very thick] (0,-1) to (0,.4);\n\t\t\t\t\\draw [->, very thick] (0,.6) to (0,1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (1.2,0) {$V_\\beta$};\n\t\t\t\t\\node at (.35,-.9) {$V_\\eta$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t\\\\\n\t\t\t&=\\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-.1,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (0,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (.1,-.5);\n\t\t\t\t\\draw [very thick] (0,-1) to (0,.4);\n\t\t\t\t\\draw [->, very thick] (0,.6) to (0,1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (1.2,0) {$V_\\eta$};\n\t\t\t\t\\node at (.35,-.9) {$V_\\alpha$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t\\;\n\t\t\t\\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\t\\draw [<-, very thick] (-1.25,0) to [out=90, in=180] (-.75,1) to [out=0,in = 90] (-.25,.25);\n\t\t\t\t\\draw [very thick] (-1.25,0) to [out=270, in=180] (-.75,-1) to [out=0,in=270] (-.25,-.25);\n\t\t\t\t\\draw [->, very thick] (.25,.25) to [out=90, in=230] (.5,1);\n\t\t\t\t\\draw [very thick] (.25,-.25) to [out=270, in=130] (.5,-1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (-1.7,0) {$V_\\alpha$};\n\t\t\t\t\\node at (.9,-.9) {$V_\\beta$};\n\t\t\t\t\\node at (0,0) {$T$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t\\;\n\t\t\t\\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-.1,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (0,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (.1,-.5);\n\t\t\t\t\\draw [very thick] (0,-1) to (0,.4);\n\t\t\t\t\\draw [->, very thick] (0,.6) to (0,1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (1.2,0) {$V_\\beta$};\n\t\t\t\t\\node at (.35,-.9) {$V_\\eta$};\n\t\t\t\\end{tikzpicture} \\right ) \\right 
\\rangle\n\t\t\t\\\\\n\t\t\t&= S'(\\eta,\\alpha)\\;\n\t\t\t\\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\t\\draw [<-, very thick] (-1.25,0) to [out=90, in=180] (-.75,1) to [out=0,in = 90] (-.25,.25);\n\t\t\t\t\\draw [very thick] (-1.25,0) to [out=270, in=180] (-.75,-1) to [out=0,in=270] (-.25,-.25);\n\t\t\t\t\\draw [->, very thick] (.25,.25) to [out=90, in=230] (.5,1);\n\t\t\t\t\\draw [very thick] (.25,-.25) to [out=270, in=130] (.5,-1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (-1.7,0) {$V_\\alpha$};\n\t\t\t\t\\node at (.9,-.9) {$V_\\beta$};\n\t\t\t\t\\node at (0,0) {$T$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\\; S'(\\beta,\\eta),\n\t\t\\end{aligned}\n\t\\end{equation*}\n\twhere we have repeatedly applied \\cref{rotate}. The right-hand side becomes\n\t\\begin{equation*}\n\t\t\\begin{aligned}\n\t\t &\\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.7 with {\\arrow{<}}}] (1.25,-.1) to (1.25,.25) to [out=90, in=0] (.75,1) to [out=180,in = 90] (.25,.25);\n\t\t\t\t\\draw [very thick] (1.25,-.3) to [out=270, in=0] (.75,-1) to [out=180,in=270] (.25,-.25);\n\t\t\t\t\\draw [very thick] (-1.15,.7) to [out=240,in=90] (-1.25,0) to [out=270, in=180] (-.75,-1) to [out=0,in=270] (-.25,-.25);\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.48 with {\\arrow{<}}}] (-1.05,.89) to [out=60, in=180] (-.75,1) to [out=0,in=90] (-.25,.25);\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.65 with {\\arrow{<}}}] (1.15,.2) to [out=180, in=90] (.75,0) to [out=270, in=180] (1.25,-.2) to [out=0, in=270] (1.75,0) to [out=90, in=0] (1.35,.2);\n\t\t\t\t\\draw [very thick] (-1.4,-1) to [out=90,in=215] (-1.2,-.85);\n\t\t\t\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.5 with {\\arrow{>}}}] (-1.02,-.75) to [out=45,in=270] (-.9,-.6) to (-.9, .6) to [out=90,in=270] (-1.4, 1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (2.1,0) {$V_\\eta$};\n\t\t\t\t\\node at (1.5,.9) {$V_\\beta$};\n\t\t\t\t\\node at (-1.6,0) {$V_\\alpha$};\n\t\t\t\t\\node at (-1.8,-.9) {$V_\\eta$};\n\t\t\t\t\\node at (0,0) {$T$};\n\t\t\t\\end{tikzpicture}\n\t\t\t\\right ) \\right \\rangle\n\t\t\t= \\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\t\\draw [<-, very thick] (1.25,0) to [out=90, in=0] (.75,1) to [out=180,in = 90] (.25,.25);\n\t\t\t\t\\draw [very thick] (1.25,0) to [out=270, in=00] (.75,-1) to [out=180,in=270] (.25,-.25);\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.48 with {\\arrow{<}}}] (-1.05,.89) to [out=60, in=180] (-.75,1) to [out=0,in=90] (-.25,.25);\n\t\t\t\t\\draw [very thick] (-1.15,.7) to [out=240,in=90] (-1.25,0) to [out=270, in=180] (-.75,-1) to [out=0,in=270] (-.25,-.25);\n\t\t\t\n\t\t\t\t\\draw [very thick] (-1.4,-1) to [out=90,in=215] (-1.2,-.85);\n\t\t\t\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.5 with {\\arrow{>}}}] (-1.02,-.75) to [out=45,in=270] (-.9,-.6) to (-.9, .6) to [out=90,in=270] (-1.4, 1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (1.7,0) 
{$V_\\beta$};\n\t\t\t\t\\node at (-1.6,0) {$V_\\alpha$};\n\t\t\t\t\\node at (-1.8,-.9) {$V_\\eta$};\n\t\t\t\t\\node at (0,0) {$T$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t\\;\\;\n\t\t\t\\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (.1,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (0,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (-.1,.5);\n\t\t\t\t\\draw [<-,very thick] (0,-1) to (0,-.6);\n\t\t\t\t\\draw [very thick] (0,-.4) to (0,1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (1.2,0) {$V_\\eta$};\n\t\t\t\t\\node at (.5,-.9) {$V_\\beta$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t\\\\\n\t\t\t&= \\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.6 with {\\arrow{<}}}] (.1,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (0,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (-.1,.5);\n\t\t\t\t\\draw [very thick] (0,-1) to (0,-.6);\n\t\t\t\t\\draw [->, very thick] (0,-.4) to (0,1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (1.2,0) {$V_\\alpha$};\n\t\t\t\t\\node at (.35,-.9) {$V_\\eta$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t\\; \\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\t\\draw [->, very thick] (-.25,.25) to [out=90, in=310] (-.5,1);\n\t\t\t\t\\draw [very thick] (-.25,-.25) to [out=270, in=50] (-.5,-1);\n\t\t\t\t\\draw [<-, very thick] (1.25,0) to [out=90, in=0] (.75,1) to [out=180,in = 90] (.25,.25);\n\t\t\t\t\\draw [very thick] (1.25,0) to [out=270, in=00] (.75,-1) to [out=180,in=270] (.25,-.25);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (1.7,0) {$V_\\beta$};\n\t\t\t\t\\node at (-.9,-.9) {$V_\\alpha$};\n\t\t\t\t\\node at (0,0) {$T$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t\\; \\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-.1,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (0,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (.1,-.5);\n\t\t\t\t\\draw [very thick] (0,-1) to (0,.4);\n\t\t\t\t\\draw [->, very thick] (0,.6) to (0,1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (1.2,0) {$V_\\eta$};\n\t\t\t\t\\node at (.35,-.9) {$V_\\beta$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t\\\\\n\t\t\t&= \\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-.1,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (0,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (.1,-.5);\n\t\t\t\t\\draw [very thick] (0,-1) to (0,.4);\n\t\t\t\t\\draw [->, very thick] (0,.6) to (0,1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (1.2,0) {$V_\\alpha$};\n\t\t\t\t\\node at (.35,-.9) {$V_\\eta$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t\\; \\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\t\\draw [->, very thick] (-.25,.25) to [out=90, in=310] (-.5,1);\n\t\t\t\t\\draw [very thick] (-.25,-.25) to [out=270, in=50] (-.5,-1);\n\t\t\t\t\\draw [<-, very thick] 
(1.25,0) to [out=90, in=0] (.75,1) to [out=180,in = 90] (.25,.25);\n\t\t\t\t\\draw [very thick] (1.25,0) to [out=270, in=00] (.75,-1) to [out=180,in=270] (.25,-.25);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (1.7,0) {$V_\\beta$};\n\t\t\t\t\\node at (-.9,-.9) {$V_\\alpha$};\n\t\t\t\t\\node at (0,0) {$T$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t\\; \\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick, postaction={decorate}, decoration={markings, mark=at position 0.4 with {\\arrow{>}}}] (-.1,-.5) to [out=180,in=270] (-.8,0) to [out=90,in=180] (0,.5) to [out=0,in=90] (.8,0) to [out=270,in=0] (.1,-.5);\n\t\t\t\t\\draw [very thick] (0,-1) to (0,.4);\n\t\t\t\t\\draw [->, very thick] (0,.6) to (0,1);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (1.2,0) {$V_\\eta$};\n\t\t\t\t\\node at (.35,-.9) {$V_\\beta$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\n\t\t\t\\\\\n\t\t\t&=S'(\\alpha,\\eta)\\; \\left \\langle F \\left (\n\t\t\t\\begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\t\\draw [very thick] (-.5,-.25) -- (.5,-.25) -- (.5,.25) -- (-.5,.25) -- cycle;\n\t\t\t\t\\draw [->, very thick] (-.25,.25) to [out=90, in=310] (-.5,1);\n\t\t\t\t\\draw [very thick] (-.25,-.25) to [out=270, in=50] (-.5,-1);\n\t\t\t\t\\draw [<-, very thick] (1.25,0) to [out=90, in=0] (.75,1) to [out=180,in = 90] (.25,.25);\n\t\t\t\t\\draw [very thick] (1.25,0) to [out=270, in=00] (.75,-1) to [out=180,in=270] (.25,-.25);\n\t\t\t\n\t\t\t\n\t\t\t\t\\node at (1.7,0) {$V_\\beta$};\n\t\t\t\t\\node at (-.9,-.9) {$V_\\alpha$};\n\t\t\t\t\\node at (0,0) {$T$};\n\t\t\t\\end{tikzpicture} \\right ) \\right \\rangle\\; S'(\\eta,\\beta).\n\t\t\\end{aligned}\n\t\\end{equation*}\n\tBy \\cref{sHopf}, $S'(\\eta,\\alpha)$ and $S'(\\eta,\\beta)$ are non-zero. We can therefore divide both sides of \\cref{weird} by $S'(\\eta,\\alpha)S'(\\eta,\\beta)$ to complete the proof.\n\\end{proof}\n\t\n\\begin{corollary}\\label{manyAmbi}\n\tFor each $\\alpha \\in \\mathbb{C}\\setminus \\mathbb{Z} \\cup r\\mathbb{Z}$, the module $V_\\alpha$ is ambidextrous. In particular, any simple module in $\\mathcal{C}$ is ambidextrous.\n\\end{corollary}\n\\begin{proof}\n Let $\\eta \\in \\mathbb{C} \\setminus \\frac{1}{2} \\mathbb{Z}$. By \\cref{ambid}, the module $V_{\\eta}$ is ambidextrous. By \\cref{sHopf}, the scalar $\\mathbf{d}_{\\eta}(\\alpha)$ is non-zero. The first statement now follows from taking $\\alpha =\\beta$ in \\cref{invariant}. Using the classification of simple objects of $\\mathcal{C}$ given in \\cref{simplemodules}, the second claim follows from the above and \\cref{ambiqd}.\n\\end{proof}\n\t\n\\subsection{Renormalized Reshetikhin--Turaev invariants of links}\n\nDenote by $\\mathfrak{L}$ the set of all framed colored links for which at least one of its colors is of the form $V_\\alpha$ for some $\\alpha \\in \\mathbb{C} \\setminus \\mathbb{Z} \\cup r\\mathbb{Z}$. We view $\\mathfrak{L}$ as a subset of morphisms of $\\mathbf{Rib}_\\mathcal{C}$.\n\n\\begin{theorem}[{\\cite[Theorem 3]{geer_2009}}]\\label{invaraiant}\n Let $\\eta \\in \\mathbb{C} \\setminus \\mathbb{Z} \\cup r \\mathbb{Z}$. 
Then the map $F_\eta': \mathfrak{L} \rightarrow \mathbb{C}$ given by\n\t\\begin{equation*}\n\t\tF_\\eta'(L) = \\mathbf{d}_\\eta(\\alpha) \\langle F(T) \\rangle,\n\t\\end{equation*}\n\twhere $T$ is a $(1,1)$-tangle whose closure is $L$ and whose open strand is colored by $V_{\\alpha}$ for some $\\alpha \\in \\mathbb{C} \\setminus \\mathbb{Z} \\cup r \\mathbb{Z}$, is a well-defined isotopy invariant of links in $\\mathfrak{L}$.\n\\end{theorem}\n\\begin{proof}\n Well-definedness of $F_\\eta'$ is the statement that $F_\\eta'(L)$ is independent of the choice of $(1,1)$-tangle $T$ used in its definition. If $T$ and $T'$ are $(1,1)$-tangles constructed from $L$ by cutting along strands colored by $V_\\alpha$ and $V_\\beta$, respectively, then, in view of \\cref{manyAmbi}, \\cref{invariant} shows that $\\mathbf{d}_\\eta(\\alpha) \\langle F(T) \\rangle = \\mathbf{d}_\\eta(\\beta) \\langle F(T^{\\prime}) \\rangle$. Isotopy invariance of $F_\\eta'$ follows from \\cref{reshturfunctor}.\n\\end{proof}\n\nComparing the definition of $ F_\\eta'(L)$ with \\cref{formula} shows that $\\mathbf{d}_{\\eta}(\\alpha)$ plays the role of $\\textnormal{qdim}_{\\mathcal{C}}(V_{\\alpha})$ in the standard theory. This justifies the term \\emph{modified quantum dimension} for the function $\\mathbf{d}_{\\eta}$.\n \n\n\\subsection{Basic properties and examples}\n\\label{sec:examples}\n\nWe begin by discussing the dependence of the renormalized invariants of \\cref{invaraiant} on the parameter $\\eta$. By \\cref{etaetaprime}, the modified quantum dimension functions associated to two such parameters $\\eta$ and $\\eta^{\\prime}$ differ by an explicit global scalar. In fact, more is true.\n\n\\begin{lemma}\\label{multiples}\n\tLet $D: \\mathbb{C}\\setminus \\mathbb{Z} \\cup r \\mathbb{Z} \\rightarrow \\mathbb{C}$ be a function such that the assignment $L \\mapsto D(\\alpha) \\langle F(T) \\rangle$, where $T$ is any $(1,1)$-tangle whose closure is isotopic to $L$ and whose open strand is colored by $V_{\\alpha}$ for some $\\alpha \\in \\mathbb{C} \\setminus \\mathbb{Z} \\cup r \\mathbb{Z}$, is a well-defined invariant of colored framed links in $\\mathfrak{L}$. Then, for any $\\eta \\in \\mathbb{C} \\setminus \\mathbb{Z}$, there exists a scalar $d \\in \\mathbb{C}$ such that $D = d \\cdot \\mathbf{d}_\\eta$.\n\\end{lemma}\n\n\\begin{proof}\n\tLet $L$ be a Hopf link with strands colored by $V_\\alpha$ and $V_{\\eta}$ for some $\\alpha \\in \\mathbb{C} \\setminus \\mathbb{Z} \\cup r\\mathbb{Z}$ and $\\eta \\in \\mathbb{C} \\setminus \\mathbb{Z}$. The assumption of well-definedness of the invariant implies that $D(\\alpha) S'(\\eta, \\alpha) = D(\\eta) S'(\\alpha, \\eta)$, whence \n\t$D(\\alpha) = D(\\eta) \\mathbf{d}_\\eta(\\alpha)$. Taking $d = D(\\eta)$ proves the lemma. \n\\end{proof}\n\nIt follows from \\cref{multiples} that renormalized invariants arising from a modified quantum dimension $\\mathbf{d}_\\eta$ are effectively the only invariants that incorporate the cutting process described above. When $r=2$, \\cref{multiples} can be strengthened. By \\cref{etaetaprime}, any function $D$ that gives an invariant incorporating the cutting procedure is equal to $\\pm \\mathbf{d}_\\eta$ for some $\\eta \\in \\mathbb{C}$, that is, the scalar $d$ in \\cref{multiples} can be taken to be $\\pm 1$.\n\n\\begin{remark}\n By \\cref{lemmasf}, the simple modules $S^{lr}_n$ have non-zero quantum dimension and so, by \\cref{ambiqd}, are ambidextrous.
In view of this, there is a modification of \cref{invaraiant} in which the standard Reshetikhin--Turaev invariant is renormalized with respect to $S^{lr}_n$ instead of $V_{\eta}$. The resulting invariant $F'_{n,lr}$ is defined on $\mathfrak{L}$, as well as on the set $\tilde{\mathfrak{L}}$ of links colored by at least one simple module of the form $S_{m}^{k r}$. A calculation as in \cref{hopfLink} shows that the $S^{lr}_n$-renormalized quantum dimension $\mathbf{d}_n^{lr}$ vanishes on $V_\alpha$, $\alpha \in \mathbb{C} \setminus \mathbb{Z} \cup r \mathbb{Z}$. In particular, $F'_{n,lr}$ is zero on $\mathfrak{L}$. On the other hand, a direct calculation gives\n \[\n \textnormal{qdim}_\mathcal{C}(S^{kr}_m) = \textnormal{qdim}_\mathcal{C}(S^{lr}_n) \cdot \mathbf{d}_n^{lr}(S^{kr}_m).\n \] \n It follows that the restriction of $F'_{n,lr}$ to $\tilde{\mathfrak{L}}$ is equal to $\textnormal{qdim}_\mathcal{C}(S^{lr}_n)^{-1}$ times the standard Reshetikhin--Turaev invariant $F$. From this perspective, the renormalized theory recovers the standard theory.\n\end{remark}\n\nNext, we describe the behavior of renormalized invariants under connect sum.\n\n\begin{proposition}\n Let $L, L' \in \mathfrak{L}$, each of which has at least one strand colored by $V_\alpha$ for some $\alpha \in \mathbb{C} \setminus \mathbb{Z} \cup r \mathbb{Z}$. Then the $\eta$-renormalized Reshetikhin--Turaev invariant of the connect sum $L\#L'$ along strands colored by $V_\alpha$ satisfies\n $$\mathbf{d}_\eta(\alpha) F^{\prime}_{\eta}(L\#L^{\prime}) = F^{\prime}_{\eta}(L) \cdot F^{\prime}_{\eta}(L^{\prime}) .$$\n\end{proposition}\n\n\begin{proof}\nConsider a knot diagram for $L \# L^{\prime}$ of the form\n\[\n\begin{tikzpicture}[anchorbase,scale=1.0]\n\t\t\t\draw [very thick](-.25,.25) -- (-.25,.75) -- (.25,.75) -- (.25,.25) -- cycle;\n\t\t\t\draw [very thick](-.25,-.25) -- (-.25,-.75) -- (.25,-.75) -- (.25,-.25) -- cycle;\n\t\t\t\draw[very thick,->](-.1,.25) -- (-.1,-.25);\n\t\t\t\draw[very thick,<-](.1,.25) -- (.1,-.25);\n\t\node at (0,.5) {\tiny $U$};\n \node at (0,-.5) {\tiny $U'$};\n \n \node at (-.3,0) {\tiny $\alpha$};\n \node at (.3,0) {\tiny $\alpha$};\n \n\t\end{tikzpicture}\n\t\]\n\twhere the obvious closures of the $(2,0)$-tangle $U$ and $(0,2)$-tangle $U^{\prime}$ are knot diagrams for $L$ and $L^{\prime}$, respectively, and we have written $\alpha$ for the color $V_{\alpha}$. Cutting this diagram along one of the connecting strands gives\n\t\[\n\tF^{\prime}_{\eta}(L \# L^{\prime})\n\t=\n\mathbf{d}_\eta(\alpha)\left \langle F \left (\n \begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\draw [very thick](-.25,.25) -- (-.25,.75) -- (.25,.75) -- (.25,.25) -- cycle;\n\t\t\t\draw [very thick](-.25,-.25) -- (-.25,-.75) -- (.25,-.75) -- (.25,-.25) -- cycle;\n\t\t\t\draw[very thick,->](0,.75) -- (0,1.2);\n\t\t\t\draw[very thick,<-](0,.25) -- (0,-.25);\n\t\t\t\draw[very thick,<-](0,-.75) -- (0,-1.2);\n\t\node at (0,.5) {\tiny $U$};\n \node at (0,-.5) {\tiny $U'$};\n \node at (-.33,1) {\tiny $\alpha$};\n \node at (-.35,0) {\tiny $\alpha$};\n \node at (-.35,-1) {\tiny $\alpha$};\n\t\end{tikzpicture} \right ) \right \rangle\n \]\n where, by a slight abuse of notation, we now denote by $U$ the $(1,1)$-tangle obtained from $U$ by pulling one open strand to the top, and similarly for $U^{\prime}$. 
Since $V_\alpha$ is simple, the right-hand side of this equation is equal to \[\n \mathbf{d}_\eta(\alpha)\n \left \langle F \left (\n \begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\draw [very thick](-.25,.25) -- (-.25,.75) -- (.25,.75) -- (.25,.25) -- cycle;\n\t\t\t\draw[very thick,->](0,.75) -- (0,1.2);\n\t\t\t\draw[very thick,<-](0,.25) -- (0,-.25);\n\t\node at (0,.5) {\tiny $U$};\n \node at (-.35,1) {\tiny $\alpha$};\n \node at (-.35,0) {\tiny $\alpha$};\n\t\end{tikzpicture} \right ) \right \rangle\n\left \langle F \left (\n \begin{tikzpicture}[anchorbase,scale=0.8]\n\t\t\t\draw [very thick](-.25,.25) -- (-.25,.75) -- (.25,.75) -- (.25,.25) -- cycle;\n\t\t\t\draw[very thick,->](0,.75) -- (0,1.2);\n\t\t\t\draw[very thick,<-](0,.25) -- (0,-.25);\n\t\node at (0,.5) {\tiny $U^{\prime}$};\n \node at (-.35,1) {\tiny $\alpha$};\n \node at (-.35,0) {\tiny $\alpha$};\n\t\end{tikzpicture} \right ) \right \rangle\n =\n \mathbf{d}_\eta(\alpha)^{-1} F^{\prime}_{\eta}(L) F^{\prime}_{\eta}(L^{\prime}).\n \]\n This gives the desired expression for $F^{\prime}_{\eta}(L \# L^{\prime})$.\n\end{proof}\n\t\n\begin{example}\n\tLet $r=2$. Set $V=V_{\alpha}$ for some $\alpha \in \mathbb{C}$. Consider $V \otimes V$ with basis $\{v_0 \otimes v_0, v_0 \otimes v_1, v_1 \otimes v_0, v_1 \otimes v_1\}$. A direct computation gives\n \[\n c_{V,V} = \left( \n \begin{smallmatrix}\n q^{\frac{(\alpha+1)^2}{2}} & 0 & 0 & 0 \\\n 0 & 0 & q^{\frac{(\alpha+1)(\alpha-1)}{2}} & 0 \\\n 0 & q^{\frac{(\alpha+1)(\alpha-1)}{2}} & q^{\frac{(\alpha+1)(\alpha-1)}{2}}\{1-\alpha\} & 0 \\\n 0 & 0 & 0 & q^{\frac{(\alpha-1)^2}{2}}\n \end{smallmatrix}\n \right).\n \]\n For example, the $(2,3)$ and $(3,3)$ entries of $c_{V,V}$ are the coefficients of $c_{V,V}(v_1 \otimes v_0)$:\n \begin{eqnarray*}\n c_{V,V}(v_1 \otimes v_0)\n &=&\n \tau (q^{H\otimes H\/2} (1 +\frac{\{1\}^2}{\{1\}}E \otimes F) (v_1 \otimes v_0)) \\\n &=&\n \tau (q^{H\otimes H\/2} (v_1 \otimes v_0 +\frac{\{1\}^2}{\{1\}}Ev_1 \otimes Fv_0)) \\\n &=&\n q^{\frac{(\alpha-1)(\alpha+1)}{2}}v_0 \otimes v_1 +q^{\frac{(\alpha-1)(\alpha+1)}{2}}\{1-\alpha\} v_1 \otimes v_0.\n \end{eqnarray*}\n Using the explicit formula for $c_{V,V}$, we verify the equality\n \begin{equation} \label{alexSkein}\n q^{-\frac{(\alpha+1)(\alpha-1)}{2}}c_{V,V}- q^{\frac{(\alpha+1)(\alpha-1)}{2}}c_{V,V}^{-1} = \{\alpha+1\} \mathrm{id}_{V \otimes V}.\n \end{equation}\n Recall from the proof of \cref{ribbon theorem} that $\theta_V = q^{-\frac{(\alpha+1)(\alpha-1)}{2}} \mathrm{id}_V$. Define $\mathcal{F}^{\prime}(L)=q^{\frac{(\alpha+1)(\alpha-1)}{2} {\operatorname{wr}}(L)}F^{\prime}(L)$, where ${\operatorname{wr}}(L)$ is the writhe of $L$. Then $\mathcal{F}^{\prime}$ is an invariant of oriented links colored by $V_{\alpha}$. This is an instance of the deframing procedure, explained, for example, in \cite[\S 3.3]{jackson2019}. 
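As a quick numerical sanity check of \eqref{alexSkein}, one can build the matrix above for a generic complex $\alpha$ and confirm the relation to machine precision; a minimal Python sketch (assuming the convention $q = e^{i\pi\/r}$, so that $q=i$ here):\n\begin{verbatim}\nimport numpy as np\n\n# Check of (alexSkein) at r = 2, assuming q = exp(i*pi/r) = i\n# and the bracket notation {x} = q**x - q**(-x).\nr = 2\nq = np.exp(1j * np.pi / r)\nbr = lambda x: q**x - q**(-x)\n\nalpha = 0.3 + 0.7j            # generic color\nA = (alpha + 1) * (alpha - 1) / 2\nc = np.array([[q**((alpha + 1)**2 / 2), 0, 0, 0],\n              [0, 0, q**A, 0],\n              [0, q**A, q**A * br(1 - alpha), 0],\n              [0, 0, 0, q**((alpha - 1)**2 / 2)]])\n\nlhs = q**(-A) * c - q**A * np.linalg.inv(c)\nrhs = br(alpha + 1) * np.eye(4)\nprint(np.max(np.abs(lhs - rhs)))   # ~ 1e-16\n\end{verbatim}\n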
The relation \eqref{alexSkein} implies that $\mathcal{F}^{\prime}$ satisfies the Alexander skein relation with $t=q^{\alpha+1}$.\n\end{example}\n\nWe end this section with some calculations of renormalized invariants for knots with few crossings.\n\n\begin{example}\label{ex:renormalizedTrefoil}\nUsing the result of \cref{ex:cutTrefoil}, the renormalized invariant of the right-handed trefoil $K=3_1$ colored by a simple module $V_{\alpha}$ is \n\[\nF^{\prime}_{\eta}(K)\n=\n\frac{\{\eta r\} \{\alpha\}}{\{\eta\}\{\alpha r\}} q^{\frac{3}{2}(\alpha+r-1)^2 + (\alpha+r-1)(1-r)} \sum_{i=0}^{r-1} q^{i(-3\alpha - r +i)} \prod_{j=0}^{i-1} \{i-j-\alpha\}.\qedhere\n\]\n\end{example}\n\n\begin{example}\n Proceeding analogously to \cref{ex:renormalizedTrefoil}, the renormalized invariant of the figure eight knot $K=4_1$ colored by a simple module $V_\alpha$ is\n \begin{multline*}\n F^{\prime}_{\eta}(K)\n =\n \frac{\{\eta r\} \{\alpha\}}{\{\eta\}\{\alpha r\}} \n \sum_{i=0}^{r-1} \sum_{j=0}^{i} \sum_{k=0}^{r-i} q^{(-\alpha + r - 1 -2i)(\alpha - r + 1)\/2} q^{-(-\alpha + r - 1 -2(i-j))(i + r-1-j)} \cdot \\\n q^{-(-\alpha + r - 1 - 2(i-j +k))(\alpha + r - 1 -2(i+k))\/2} q^{j(j-1)\/2} q^{k(k-1)\/2} q^{(i + k - r + 1)(i + k - r)\/2} \cdot \\\n \prod_{x = r-1}^{r-1-j} \frac{\{x\} \{x- \alpha\}}{\{j\}!} \prod_{y = i-j}^{i-j+k} \frac{\{y\} \{y + \alpha\}}{\{k\}!} \prod_{z = i+k}^{r-1} \frac{\{z\} \{z - \alpha\}}{\{l\}!} . \qedhere\n \end{multline*}\n\end{example}\n\n\n\section{Further reading}\label{sec:furtherReading}\n\nIn this final section, we give a brief sample of recent developments in renormalized Reshetikhin--Turaev theory.\n\nThe construction of renormalized Reshetikhin--Turaev invariants of links, given in \cref{invaraiant} for the category of weight modules over $\overline{U}^H_q(\mathfrak{sl}_2(\mathbb{C}))$, applies more generally to certain non-semisimple ribbon categories \cite{geer_2009}. The key new notions in this more general context are \emph{modified traces} (and so modified quantum dimensions) and ambidextrous objects \cite{GKP1,GKP2,GKP3,geer_2017,fontalvo2018,gainutdinov2020,beliakova2021}. Renormalized Reshetikhin--Turaev invariants of links have been studied for categories of weight modules over unrolled quantum groups of complex simple Lie algebras \cite{geer2013} and Lie superalgebras \cite{geer2007b,geer2010}. The renormalized invariants are motivated by and recover previous \emph{ad hoc} renormalized invariants \cite{kauffman1991,akutsu1992,viro2006,geer2010}.\n\nThe extension of the renormalized theory from links to $3$-manifolds was achieved in \cite{costantino2014quantum} for the category of weight modules over $\overline{U}^H_q(\mathfrak{sl}_2(\mathbb{C}))$, where properties \ref{ite:nonss} and \ref{ite:infSimp} are serious obstructions. In general, the additional data and constraints on the input ribbon category required to obtain a $3$-manifold invariant are termed \emph{non-degenerate relative pre-modularity}. The resulting $3$-manifold invariants have novel properties, including the ability to distinguish homotopy classes of lens spaces and connections with the Volume Conjecture, and are related to earlier non-semisimple $3$-manifold invariants, including those of Hennings \cite{hennings1996} and Kerler--Lyubashenko \cite{kerler2001}. See \cite{derenzi2018,derenzi2023}. 
Further examples of $3$-manifold invariants associated to non-degenerate relative pre-modular categories are studied in \\cite{AGP, beliakova2021b, ha2018,bao2022}. For connections between standard and renormalized Reshetikhin--Turaev invariants of links and 3-manifolds, see \\cite{costantino2015b,costantino2021,derenzi2020,mori2022}. \n\nThe further extension of renormalized $3$-manifold invariants to three dimensional topological quantum field theories (TQFTs) was first accomplished in the case of weight modules over $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ \\cite{BCGP}. The TQFTs have interesting features, including the extension of Reidemeister torsion to a TQFT and the possibility of producing representations of mapping class groups that are faithful modulo their centers. A general framework for the construction of (extended) TQFTs from \\emph{relative modular categories} was given by De Renzi \\cite{derenzi2022}, generalizing the so-called universal construction of semisimple TQFTs \\cite{blanchet1995}. See \\cite{blanchet2021} for an overview of this circle of ideas. Further examples of TQFTs from renormalized invariants are constructed and studied in \\cite{derenzi2022b,geerYoung2022}.\n\nAt present, the main source of relative modular categories is the representation theory of unrolled quantum groups, thereby making this class of quantum groups central to non-semisimple topology. Motivated by the success of rational conformal field theoretic techniques in semisimple topology, a number of authors have pursued a conjectural logarithmic variant of the Kazhdan--Lusztig correspondence, which asserts an equivalence between categories of weight modules over unrolled quantum groups and modules over non-rational, or logarithmic, vertex operator algebras. Much progress has been made in the case of $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$, where connections between weight modules and the singlet, triplet and Feigin--Tipunin algebras have been found \\cite{creutzig2018,creutzig2022}. \n\nFinally, there has been exciting progress in connecting non-semisimple mathematical TQFTs to physical quantum field theories. This can be seen as a non-semisimple generalization of the celebrated connection between compact Chern--Simons theory and Reshetikhin--Turaev TQFTs \\cite{witten1989,reshetikhin1991invariants}. The case of TQFTs arising from $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ is studied in \\cite{creutzig2021}, where it is connected to a topological twist of $3$d $\\mathcal{N}=4$ Chern--Simons matter theory with gauge group $SU(2)$. Similarly, TQFTs arising from an unrolled quantization of the Lie superalgebra $\\mathfrak{gl}(1 \\vert 1)$ were shown in \\cite{geerYoung2022} to be related to supergroup Chern--Simons theories with gauge group $\\mathfrak{psl}(1 \\vert 1)$ and $U(1 \\vert 1)$. A key feature in both physical realizations is the presence of global symmetry groups, allowing the quantum field theories to be coupled to background flat connections. Further physical studies of such quantum field theories can be found in \\cite{gukov2021,jagadale2022}. In condensed matter physics, Levin and Wen used unitary spherical fusion categories to give a mathematical foundation of topological order and string-net condensation \\cite{levin2005}. Recently, this construction was extended to the setting of the non-semisimple category of weight modules over $\\overline{U}^H_q(\\mathfrak{sl}_2(\\mathbb{C}))$ \\cite{geer2022,geer2022b}. 
\n\n\n\\newcommand{\\etalchar}[1]{$^{#1}$}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSum rules, constraints on energy-difference weighted combinations of\non- and off-diagonal matrix elements, have been an important theoretical construct in\nquantum mechanics since its earliest days. For example, the\nThomas-Reiche-Kuhn (TRK) \\cite{trk_sum_rule} sum rule was one of the earliest\nquantitative checks on the oscillator strengths of atomic transitions and\nan important confirmation of the applicability of quantum mechanics to\natomic systems. Their use continues today, for example, with QCD sum\nrules used to probe the masses of both light and heavy quarks in\nhadronic systems. For a list of recent applications of sum rules,\nsee Ref.~\\cite{belloni_robinett_sum_rules}.\n\nThe study of sum rules and how such constraints are realized \\cite{belloni_robinett_sum_rules}\ncan provide useful examples of a variety of mathematical techniques utilized for their confirmation.\nIn addition, sum rules can be used to generate new mathematical constraints\n\\cite{mavromatis} on quantities involving infinite sums.\nFor example, in exactly that context, two of us have recently\nshown how it is possible to systematically derive\nnew constraints on the zeros, ($-\\zeta_n$), of the Airy function, $Ai(\\zeta)$,\nof the form $S_p(n) = \\sum_{k \\neq n} (\\zeta_k - \\zeta_n)^{-p}$ for\nnatural values of\n$p \\geq 2$ using quantum-mechanical sum rules applied to the so-called `quantum\nbouncer' problem \\cite{belloni_robinett_airy_zeros}.\nDefined by the potential\n\\begin{equation}\nV(z) =\n\\left\\{\n\\begin{array}{cc}\nFz & \\mbox{for $z>0$} \\\\\n\\infty &\\mbox{for $z<0$}\n\\end{array}\n\\right.\n\\, ,\n\\label{bouncer_potential}\n\\end{equation}\nthis quantum-mechanical problem has been applied to recent novel\nexperiments, ranging from the bound states of neutrons in\nthe Earth's gravitational field \\cite{neutron_bound_states}\nto optical analogs of gravitational wave packets \\cite{photon_bouncing_ball}\nwhich demonstrate the presence of predicted \\cite{bouncing_ball}\nwave packet revivals.\n\n\nIn our earlier study, a number of the most well-known sum rules\ndid not converge\ndue to the lack of continuity of the bouncer potential at the origin.\nIn this note, we extend the results of\nRef.~\\cite{belloni_robinett_airy_zeros} to discuss mathematical\nconstraints which arise from the study of quantum-mechanical sum rules\nin a closely related system, namely the symmetric linear potential,\ndefined by\n\\begin{equation}\nV(z) = F|z|\n\\label{symmetric_linear_potential}\n\\, .\n\\end{equation}\nThis is an example of a \\textit{parity-extended} potential, namely taking one that is\ndefined only for $z>0$, and extending it in a symmetric way over the\nreal axis.\nIn this case, we find new relationships involving \\emph{both} the zeros\nof $Ai(\\zeta)$ \\emph{and} its derivative $Ai'(\\zeta)$, namely $-\\zeta_n$ and\n$-\\eta_n$, respectively. In Sec.~\\ref{sec:constraints} we also explore the \nconvergence properties of sum rules in this system, compared to the quantum\nbouncer, because of the improved mathematical behavior of the\npotential energy function in this case.\n\nFor comparison, we also consider in Sec.~\\ref{sec:half_sho}\na \\textit{parity-restricted} version of the most well-studied system in\nquantum mechanics, namely the harmonic oscillator. 
Specifically, we\nexamine the structure of sum rules in the `half'-SHO potential,\ndefined by\n\begin{equation}\nV(x) =\n\left\{\n\begin{array}{cc}\nm\omega^2 x^2\/2 & \mbox{for $x\geq 0$} \\\n\infty & \mbox{for $x< 0$}\n\end{array}\n\right.\n\label{half_sho}\n\end{equation}\nand find dramatically different behavior for many sum rules.\n\nWe are motivated to study the relation between such parity-related\npotentials since the solution space of the parity-restricted version of\na symmetric potential, $V(x) = V(-x)$, consists solely of the\nodd-parity eigenstates and their energies, with only a trivial change in\nnormalization. Thus, half of the wavefunctions and\nenergies will be closely related in the two systems. Since\nsum rules involve intricate relationships between energy differences and\nmatrix elements derived from the wavefunctions,\nit is instructive to see how identical sum rules are\nrealized in two parity-related systems. We find that\nmany matrix elements can be very different in character between\ntwo such related systems, yielding sum rule identities which are realized\nin novel ways.\n\n\nFor these two systems, we then make contact with semi-classical (WKB-like)\nprobability distributions (in Sec.~\ref{sec:classical_versus_quantum})\nand examine the similarities with the new exact quantum results presented\nhere in the large $n$ limit.\nWe also note that a recent examination of the Stark effect in the symmetric\nlinear potential \cite{robinett_stark}\nhas found interesting constraints on second-order perturbation theory\nsums which are very similar to the energy-weighted sum rules we will\ndiscuss below, and we extend that analysis to the `half'-SHO system\nintroduced here, allowing for insight into the structure of the Stark effect\nin that system.\nWe begin by briefly reviewing the results of\nRef.~\cite{robinett_stark} in the next section, mostly to establish\nnotation.\n\n\n\n\n\n\n\section{Solutions for the symmetric linear potential}\n\label{sec:solutions}\n\nWe begin by reviewing the properties of the solutions for the\nsymmetric linear potential in Eqn.~(\ref{symmetric_linear_potential}),\nusing a slight modification of the notation in\nRef.~\cite{robinett_stark}. We note that the\nonly difference in notation with Ref.~\cite{belloni_robinett_airy_zeros}\nis in the normalization\nof the wavefunctions, corresponding only to a physically irrelevant\ndifference in phase.\n\nUsing the fact that the linear potential admits Airy function solutions,\nwith different forms for positive and negative values of position, we can\nconstruct piecewise-defined solutions of the corresponding Schr\"odinger\nequation. Because of the symmetry properties of the potential,\nwe can also classify the solutions by their parity, with the\nodd functions, which must vanish at the origin, related to the solutions\nof the quantum bouncer problem.
For background on the solutions to\nthe bouncer problem, see also Refs.~\cite{bouncing_ball},\n\cite{vallee} - \cite{goodmanson}.\n\nSpecifically, the odd solutions can be written in the form\n\begin{equation}\n\psi_n^{(-)}(z) =\nN_{n}^{(-)}\n\left\{\n\begin{array}{cc}\nAi(z\/\rho - \beta_n) & \qquad \mbox{for $0 \leq z$} \\\n-Ai(-z\/\rho - \beta_n) & \qquad \mbox{for $z \leq 0$}\n\end{array}\n\right.\n\label{odd_states}\n\end{equation}\nwhere $\rho \equiv (\hbar^2\/2mF)^{1\/3}$.\nThe appropriate boundary condition at the origin is that the wavefunction must vanish there, which requires that\n\begin{equation}\n\psi_n^{(-)}(0) = N_{n}^{(-)}\, Ai(-\beta_n) = 0\n\end{equation}\nso that we identify $\beta_n = \zeta_n$, where the $-\zeta_n$ are the zeros of $Ai(\zeta)$.\nThe corresponding combination for even parity solutions is written as\n\begin{equation}\n\psi_n^{(+)}(z) =\nN_{n}^{(+)}\n\left\{\n\begin{array}{cc}\nAi(z\/\rho - \beta_n) & \qquad \mbox{for $0 \leq z$} \\\nAi(-z\/\rho - \beta_n) & \qquad \mbox{for $z \leq 0$}\n\end{array}\n\right.\n\label{even_states}\n\end{equation}\nwhich at the origin must satisfy $[\psi_n^{(+)}(z\!=\!0)]' = 0$, implying that\n$\beta_n = \eta_n$, where the $-\eta_n$ are the zeros of $Ai'(\zeta)$.\n\nHandbook results \cite{stegun}, \cite{book}\ngive approximations for these zeros, in the large $n$ limit, as\n\begin{equation}\n\zeta_n \sim \left[\frac{3\pi}{4} (2n-1\/2)\right]^{2\/3}\n\qquad\n\quad\n\mbox{and}\n\quad\n\qquad\n\eta_n \sim \left[\frac{3\pi}{4} ((2n-1)-1\/2)\right]^{2\/3}\n\label{large_n_expansion}\n\end{equation}\nwhere the labeling starts with $n=1$ in both cases.\nWe note that WKB quantization gives the same results \cite{robinett_stark},\nvalid for large quantum numbers.\n\n\nThe normalizations required in Eqns.~(\ref{odd_states}) and\n(\ref{even_states}) are obtained by using integrals first derived by Gordon \cite{gordon} and\nAlbright \cite{albright} collected in the Appendix in\nSec.~\ref{sec:indefinite}, specifically Eqn.~(\ref{diagonal_0}),\nand are given by\n\begin{equation}\nN_n^{(-)} = \frac{1}{\sqrt{2\rho} \,Ai'(-\zeta_n)}\n\qquad\n\quad\n\mbox{and}\n\qquad\n\quad\nN_n^{(+)} = \frac{1}{\sqrt{2\rho\eta_n} \, Ai(-\eta_n)}\n\, .\n\label{normalizations}\n\end{equation}\nWe note that the wavefunctions at the origin satisfy\n\begin{equation}\n\psi_n^{(-)}(0) = 0\n\qquad\n\quad\n\mbox{and}\n\quad\n\qquad\n\psi_n^{(+)}(0) = \frac{1}{\sqrt{2\rho \eta_n}}\n\, .\n\label{wavefunctions_at_the_origin}\n\end{equation}\nThe energy eigenvalues are then given directly in terms of the\n$\zeta_n,\eta_n$ by\n\begin{equation}\nE_n^{(-)} = {\cal E}_0 \zeta_n\n\qquad\n\quad\n\mbox{and}\n\quad\n\qquad\nE_n^{(+)} = {\cal E}_0 \eta_n\n\end{equation}\nwhere $ {\cal E}_0 \equiv \rho F$ and the energy spectrum satisfies\n\begin{equation}\nE_{1}^{(+)} < E_{1}^{(-)} < E_{2}^{(+)} < E_{2}^{(-)} < \cdots\n\,\,\, .\n\end{equation}\n\nJust as for the bouncer system \cite{goodmanson}, we have\na power-law potential of the form $V_{(k)}(x) = V_0 |x\/a|^k$,\nand the quantum-mechanical virial theorem,\n$\langle\hat{T}\rangle = \frac{k}{2}\langle\hat{V}\rangle$,\nis satisfied for $k = 1$, which we can confirm by direct evaluation.\nFor example, for odd states we have the matrix elements\n\begin{eqnarray}\n\langle \psi_{n}^{(-)} | V(z) | \psi_{n}^{(-)} \rangle\n& = &\n\int_{-\infty}^{+\infty}\, |\psi_n^{(-)}(z)|^2\,F|z|\,
dz\n\nonumber \\\n& = &\n\frac{2F\rho^2}{2 \rho [Ai'(-\zeta_n)]^2} \int_{0}^{\infty} y\,\n[Ai(y-\zeta_n)]^2\,dy\n\nonumber \\\n& = &\n\frac{2}{3} {\cal E}_0 \zeta_n = \frac{2}{3}E_{n}^{(-)} \, ,\n\label{virial_theorem}\n\end{eqnarray}\nusing the integral in Eqn.~(\ref{diagonal_1}), with a similar result\nfor the expectation value of the potential energy for the even states. \nThe expectation value of the kinetic energy operator,\n\begin{equation}\n\langle \psi_n^{(\pm)} | \hat{T} |\psi_n^{(\pm)} \rangle\n=\n\frac{1}{2m}\langle \psi_n^{(\pm)} | \hat{p}^2 |\psi_n^{(\pm)} \rangle\n= \frac{1}{3}E_n^{(\pm)}\n\, ,\n\label{kinetic_energy_virial}\n\end{equation}\ncan be evaluated by either: (i) using the definition of the Airy differential equation,\n$Ai''(\zeta-\beta) = (\zeta-\beta)Ai(\zeta-\beta)$, to rewrite the integral in terms of\nthe one in Eqn.~(\ref{diagonal_1}), or (ii) using an integration by parts\nto move one derivative onto the first wavefunction, and then using the\nintegral in Eqn.~(\ref{diagonal_derivative}).\n\nThe dipole matrix elements needed for many of the best-known sum rules have\na different form than in the quantum bouncer. Using the symmetry of \nthe potential, we have\n\begin{equation}\n\langle \psi_{n}^{(-)} | z | \psi_{n}^{(-)} \rangle\n=\n\langle \psi_{n}^{(+)} | z | \psi_{n}^{(+)} \rangle\n= 0\n\, ,\n\label{diagonal_dipole_matrix_element}\n\end{equation}\nwhile using Eqn.~(\ref{off_diagonal_1}), we find that\n\begin{equation}\n\langle \psi_{n}^{(-)} | z | \psi_{k}^{(+)} \rangle\n= \frac{-2\rho }{\sqrt{\eta_k}\, (\eta_k - \zeta_n)^3}\n\,.\n\label{off_diagonal_dipole_matrix_element}\n\end{equation}\nAs with all of the matrix element expressions and sum rule identities\nwe present, we have confirmed these numerically using {\it Mathematica}$^{\trade}$\,\,.\n\nIn contrast to the quantum bouncer problem \cite{belloni_robinett_airy_zeros} \nwhere all off-diagonal matrix elements gave terms proportional\nto inverse powers of $(\zeta_n - \zeta_k)$, using sum rules\ninvolving only dipole matrix\nelements for the symmetric linear potential\nwill give constraints on sums of powers of\n$1\/(\eta_k - \zeta_n)$. Matrix elements for even powers of $z$,\nfor example\n\begin{equation}\n\langle \psi_{n}^{(\pm)} | z^2 | \psi_{n}^{(\pm)} \rangle\n\, ,\n\end{equation}\nwill connect even-even and odd-odd states. The corresponding\nodd-odd sum rules\nwill reproduce some of the same identities found in\nRef.~\cite{belloni_robinett_airy_zeros},\nwhile the even-even sum rules will generate new identities,\ninvolving sums over inverse powers of $(\eta_k - \eta_n)$.\n\n\n\section{New constraints from the symmetric\nlinear potential}\n\label{sec:constraints}\n\n\nTo illustrate the range of new constraints which\nthe symmetric linear potential places on the $\zeta_n$ and $\eta_n$,\nwe first consider the most well-known sum rules involving dipole matrix elements.
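\n\nSince the dipole matrix elements in Eqns.~(\ref{diagonal_dipole_matrix_element}) and (\ref{off_diagonal_dipole_matrix_element}) are the basic input for what follows, we note that they are easily verified independently of {\it Mathematica}$^{\trade}$\,\,; a minimal Python sketch (dimensionless units with $\rho = 1$, with SciPy supplying the Airy functions and their zeros):\n\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\nfrom scipy.special import ai_zeros, airy\n\n# Ai(-zeta_n) = 0 and Ai'(-eta_n) = 0, labeled from n = 1\na, ap, _, _ = ai_zeros(10)\nzeta, eta = -a, -ap\n\nAi = lambda x: airy(x)[0]\nAip = lambda x: airy(x)[2]\n\ndef dipole(n, k):   # <psi_n^(-)| z |psi_k^(+)> in units of rho\n    zn, ek = zeta[n-1], eta[k-1]\n    norm = 1.0 / (np.sqrt(ek) * Aip(-zn) * Ai(-ek))\n    val, _ = quad(lambda y: y * Ai(y - zn) * Ai(y - ek), 0.0, 40.0)\n    return norm * val\n\nn, k = 2, 3\nprint(dipole(n, k))                                            # quadrature\nprint(-2.0 / (np.sqrt(eta[k-1]) * (eta[k-1] - zeta[n-1])**3))  # closed form\n\end{verbatim}\nThe two printed values agree to the accuracy of the quadrature.\n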
\nThe famous Thomas-Reiche-Kuhn (TRK) sum rule \cite{trk_sum_rule} is given by\n\begin{equation}\n\sum_{k\neq n} (E_k - E_n)|\langle n |z|k\rangle|^2 =\n\frac{\hbar^2}{2m}\n\label{trk_sum_rule}\n\end{equation}\nwhile a completeness relation for the matrix elements of the momentum\noperator can be written in terms of dipole matrix elements as\n\begin{equation}\n\sum_{k\neq n} (E_k - E_n)^2 |\langle n |z|k \rangle|^2\n=\frac{\hbar^2}{m^2} \langle n |\hat{p}^2|n\rangle\n=\frac{2\hbar^2}{m} \left [ E_n - \langle n|V(z)|n\rangle \right ]\n\label{second_power_momentum_sum_rule}\n\,.\n\end{equation}\nWe note that since the states $n$ and $k$ that contribute\nto the sum rule will be of opposite parity, the\n$k\neq n$ restriction is unnecessary and we can sum over all\nintermediate state $k$ labels.\n\nTwo other sum rules involving dipole matrix elements have been\ndiscussed by Bethe and Jackiw \cite{bethe_intermediate},\n\cite{jackiw_sum_rules}, namely\n\begin{equation}\n\sum_{k}(E_k - E_n)^3 |\langle n |z| k \rangle|^2\n= \frac{\hbar^4}{2m^2} \left\langle n \left|\n\frac{d^2 V(z)}{dz^2} \right| n \right\rangle\n\label{first_potential_sum_rule}\n\end{equation}\nand\n\begin{equation}\n\sum_{k} (E_k - E_n)^4 |\langle n |z| k \rangle|^2\n= \frac{\hbar^4}{m^2} \left\langle n \left|\left(\frac{dV(z)}{dz}\right)^2\n\right| n \right\rangle\n\label{second_potential_sum_rule}\n\end{equation}\nwhich are sometimes called the {\it force times momentum} and\n{\it force-squared} sum rules, respectively.\nWe note that not all such sum rules necessarily lead\nto convergent expressions. In Ref.~\cite{belloni_robinett_airy_zeros}, \nEqns.~(\ref{first_potential_sum_rule}) and (\ref{second_potential_sum_rule})\ncould not be applied in the quantum bouncer system due to the\ndiscontinuous nature of the potential in Eqn.~(\ref{bouncer_potential}).\nIn the case of the symmetric linear potential, the system is defined by a\ncontinuous potential, and even its discontinuous derivative leads to a convergent result in the\ncorresponding sum rule in Eqn.~(\ref{first_potential_sum_rule}).\nThese results regarding convergence are consistent with the\nform of the dipole matrix elements in\nEqn.~(\ref{off_diagonal_dipole_matrix_element}), where their values\nfor fixed $k$ or $n$ clearly decrease more quickly with\nincreasing $n$ or $k$ than their counterparts in the bouncer system,\nindicating better convergence.\n\n\nWhen we fix the state labeled $n$ as being one of the odd solutions and\nsum over $k$ values corresponding to the even states, the\n$1\/\sqrt{\eta_k}$ factors remain inside the summation, giving sums of the form\n\begin{equation}\nT_{p}(n) \equiv \sum_{\textrm{all}\, k} \frac{1}{\eta_k (\eta_k - \zeta_n)^p}\n\,.\n\label{T_definition}\n\end{equation}\nIn contrast, when the even states are fixed (switching the roles of $n$ and\n$k$ in Eqn.~(\ref{off_diagonal_dipole_matrix_element})), the\ncommon $1\/\sqrt{\eta_n}$ factor can be removed from inside\nthe summation and eventually moved to the right-hand side\nof the identity involved.
In those cases, we find expressions of the form\n\begin{equation}\nU_{p}(n) \equiv \sum_{\textrm{all}\, k} \frac{1}{\eta_n(\zeta_k - \eta_n)^p}\n= \left(\frac{1}{\eta_n}\right)\n\sum_{\textrm{all}\, k} \frac{1}{(\zeta_k - \eta_n)^p}\n\equiv \frac{1}{\eta_n} \tilde{U}_p(n)\n\label{new_U}\n\end{equation}\nwhere\n\begin{equation}\n\tilde{U}_p(n)\n\equiv\n\sum_{\textrm{all}\, k} \frac{1}{(\zeta_k - \eta_n)^p}\n\,.\n\end{equation}\nThe related quantity where we sum over $k$ values corresponding to\neven states, namely\n\begin{equation}\n\tilde{T}_p(n)\n\equiv\n\sum_{\textrm{all}\, k} \frac{1}{(\eta_k - \zeta_n)^p}\n\,,\n\end{equation}\ncan be obtained by using the definition in Eqn.~(\ref{T_definition}) to\nwrite\n\begin{equation}\n\tilde{T}_p(n) = \zeta_n\, T_p(n) + T_{p-1}(n)\n\label{new_T}\n\,.\n\end{equation}\nFor completeness, we recall from Ref.~\cite{belloni_robinett_airy_zeros}\nthat for the quantum bouncer system constraints on sums were of the form\n\begin{equation}\nS_{p}(n) \equiv \sum_{k\neq n} \frac{1}{(\zeta_k-\zeta_n)^{p}}\n\, .\n\end{equation}\nWe will focus on evaluation of the $T_p(n)$ and $U_p(n)$, noting that\nit is straightforward to obtain the related $\tilde{T}_p(n),\tilde{U}_p(n)$\nfor comparison to $S_p(n)$, using\nEqns.~(\ref{new_U}) and (\ref{new_T}).\n\nWhile not formally a sum rule,\nwe note that the completeness relation for dipole matrix elements\ncan be written as\n\begin{equation}\n\sum_{k\neq n} |\langle n |z|k \rangle|^2\n+ |\langle n|z|n\rangle|^2 =\n\sum_{\textrm{all}\, k} |\langle n |z|k \rangle|^2 = \langle n |z^2|n \rangle\n\label{x_completeness}\n\end{equation}\nand that because of the parity of the potential the `diagonal' term is in fact zero. Nonetheless, it too will provide novel constraints on\nthe $Ai$ and $Ai'$ zeros.\n\nFor the sum rules in Eqns.~(\ref{trk_sum_rule}) -\n(\ref{second_potential_sum_rule}) and (\ref{x_completeness}),\nwe can now fix a state of definite parity,\neither $\psi_n^{(+)}(z)$ or $\psi_n^{(-)}(z)$, and then sum over\nthe states of opposite parity, namely the $\psi_k^{(-)}(z)$ or\n$\psi_k^{(+)}(z)$, respectively.
In this way, we obtain two different\nconstraints for each quantum-mechanical sum rule.\n\nFor example, starting with the TRK sum rule in Eqn.~(\ref{trk_sum_rule})\nwe find the relations\n\begin{equation}\nT_5(n) \equiv \sum_{\textrm{all} \, k} \frac{1}{\eta_k(\eta_k - \zeta_n)^5} = \frac{1}{4}\n= \left(\frac{1}{\eta_n}\right) \sum_{\textrm{all} \, k} \frac{1}{(\zeta_k - \eta_n)^5}\n\equiv U_5(n)\n\label{trk_result}\n\end{equation}\nby using odd ($\psi_n^{(-)}$) and even ($\psi_n^{(+)}$) states, respectively.\nFor comparison, we note that the constraint arising in this case for the\nquantum bouncer is\n\begin{equation}\nS_3(n) = \sum_{k\neq n} \frac{1}{(\zeta_k-\zeta_n)^{3}} = \frac{1}{4}\n\, .\n\end{equation}\n\n\nFrom the momentum-completeness sum rule in\nEqn.~(\ref{second_power_momentum_sum_rule}), we use the evaluation of\nthe $\hat{p}^2$ operator from Eqn.~(\ref{kinetic_energy_virial})\nand find\n\begin{eqnarray}\nT_{4}(n) & = & \sum_{\textrm{all}\, k} \frac{1}{\eta_k(\eta_k - \zeta_n)^4}\n= \frac{\zeta_n}{3}\n\label{momentum_completeness_1} \\\nU_{4}(n) & = & \left(\frac{1}{\eta_n}\right)\sum_{\textrm{all} \, k} \frac{1}{(\zeta_k - \eta_n)^4}\n= \frac{\eta_n}{3}\n\label{momentum_completeness_2}\n\end{eqnarray}\nwhile the corresponding sum rule constraint for the bouncer is\n\begin{equation}\nS_2(n) = \sum_{k\neq n} \frac{1}{(\zeta_k-\zeta_n)^{2}} = \frac{\zeta_n}{3}\n\, .\n\end{equation}\n\n\nFor the sum rule in Eqn.~(\ref{second_potential_sum_rule}), \nthe fact that $[V'(z)]^2 = (\pm F)^2 = F^2$ for both positive and\nnegative values of $z$ yields\n\begin{equation}\nT_{2}(n) = \sum_{\textrm{all} \, k} \frac{1}{\eta_k(\eta_k - \zeta_n)^2}\n= 1\n=\n\left( \frac{1}{\eta_n}\right)\sum_{\textrm{all} \, k} \frac{1}{(\zeta_k - \eta_n)^2}\n= U_{2}(n)\n\,.\n\label{force_squared_result}\n\end{equation}\nThe convergence of these sums is clear since\nthe large $k$ behavior of the $\zeta_k \sim k^{2\/3}$ implies that each\nterm scales as $k^{-4\/3}$.\nThus, $p=2$ is the smallest power which will\nlead to a convergent sum rule. For the case of the\nquantum bouncer, this sum rule did not converge.\n\n\nFor the sum rule in Eqn.~(\ref{first_potential_sum_rule}) we require the\nvalue of $V''(z)$. Given the cusp in the definition of $V(z)$, we find that\n$V''(z) = 2F\delta(z)$ so that the right-hand side of\nEqn.~(\ref{first_potential_sum_rule}) becomes\n\begin{equation}\n\frac{\hbar^4F}{m^2} |\psi_n^{(\pm)}(0)|^2\n\, .\n\label{force_times_momentum_result}\n\end{equation}\nWe can then use the results of Eqn.~(\ref{wavefunctions_at_the_origin})\nand immediately see that\n\begin{eqnarray}\nT_{3}(n) & = & \sum_{\textrm{all} \, k} \frac{1}{\eta_k(\eta_k - \zeta_n)^3}\n= 0 \\\nU_{3}(n) & = & \left(\frac{1}{\eta_n}\right)\n\sum_{\textrm{all} \, k} \frac{1}{(\zeta_k - \eta_n)^3} = \frac{1}{2\eta_n}\n\, ,\n\end{eqnarray}\nand note that the right-hand side of this sum-rule constraint varies in\nform between the odd and even states. The only other case of which we are aware where the {\it force times\nmomentum} sum rule gives a similar result,\nnamely where the sum rule depends on the wavefunction at the origin,\nis for the Coulomb problem \cite{bethe_intermediate}.
For that system, we have\n$\nabla^2 V({\bf r}) = -e \delta({\bf r})$ and the sum rule gives a non-zero\nresult for $S$-wave ($l=0$ states) only.\n\n\n\n\nTo evaluate the constraint arising from the\n$z$-completeness relation in Eqn.~(\ref{x_completeness}), we require the\nexpectation values of $z^2$ in the even and odd states. These can be\nderived by using the integral in Eqn.~(\ref{diagonal_2}) to obtain\n\begin{equation}\n\langle \psi_{n}^{(-)} | z^2 | \psi_{n}^{(-)} \rangle\n =\n\rho^2 \left(\frac{8 \zeta_n^2}{15} \right)\n\qquad\n\mbox{and}\n\qquad\n\langle \psi_{n}^{(+)} | z^2 | \psi_{n}^{(+)} \rangle\n = \rho^2 \left( \frac{8 \eta_n^2}{15} + \frac{1}{5\eta_n} \right)\n\label{second_moments}\n\end{equation}\nwhere we once again notice the difference in form between the results\nfor the two parities. Using these results, we find the following sum rules\n\begin{eqnarray}\nT_6(n) = \sum_{\textrm{all} \, k} \frac{1}{\eta_k(\eta_k - \zeta_n)^6}\n& = & \frac{2\zeta_n^2}{15} \\\nU_6(n) = \left(\frac{1}{\eta_n}\right)\n\sum_{\textrm{all} \, k} \frac{1}{(\zeta_k - \eta_n)^6}\n& = & \frac{2\eta_n^2}{15} + \frac{1}{20\eta_n}\n\,.\n\end{eqnarray}\nThe corresponding result for the quantum bouncer\n\cite{belloni_robinett_airy_zeros} has a non-zero\n`diagonal' term, as in Eqn.~(\ref{x_completeness}),\nwhich leads to the result\n\begin{equation}\nS_4(n) = \sum_{k\neq n} \frac{1}{(\zeta_k-\zeta_n)^{4}} = \frac{\zeta_n^2}{45}\n\, .\n\end{equation}\n\n\n\nIt has been stressed \cite{belloni_robinett_sum_rules} that the form of the\nstandard second-order perturbation theory result, namely\n\begin{equation}\nE_n^{(2)} =\n\sum_{k \neq n}\n\frac{|\langle n |\overline{V}(x)|k\rangle|^2}\n{(E_n^{(0)} - E_k^{(0)})}\n\label{general_second_order_shift}\n\end{equation}\ncan also be thought of as an energy-weighted sum rule. \nJackiw \cite{jackiw_sum_rules} discussed energy-difference weighted sum rules, \ncontaining factors such as $(E_k-E_n)^p$\nwith negative values of $p$. This concept was\napplied in Ref.~\cite{belloni_robinett_airy_zeros}\nto obtain another independent constraint\non the $S_p(n)$ for the quantum bouncer by considering the perturbing\neffect of an additional constant force (linear field) via the Stark effect.\n\nThe Stark effect for the symmetric linear potential has recently\nbeen discussed \cite{robinett_stark} where a straightforward expansion of the\nexact eigenvalue condition was shown to lead to closed-form expressions for\nthe second-order energy shifts.
If the perturbing potential is defined as\n$\overline{V}(z) = \overline{F}z$, then the results for the Stark shifts for\nthe odd- and even-parity states, respectively, are found to be\n\begin{equation}\nE_n^{(-,2)} = - \frac{7}{9} \left(\frac{\overline{F}}{F}\right)^2\,\nE_n^{(-)}\n\qquad\n\quad\n\mbox{and}\n\qquad\n\quad\nE_n^{(+,2)} = - \frac{5}{9} \left(\frac{\overline{F}}{F}\right)^2\,\nE_n^{(+)}\n\, .\n\label{symmetric_linear_stark_shift}\n\end{equation}\nUsing the expression in Eqn.~(\ref{general_second_order_shift}),\nwe find that the second-order shifts given by perturbation theory are\n\begin{eqnarray}\nE_n^{(-,2)} & = &\n- 4\left(\frac{\overline{F}}{F}\right)^2 {\cal E}_0\n\left[\sum_{\textrm{all} \, k} \frac{1}{\eta_k(\eta_k - \zeta_n)^7}\right]\n\\\nE_n^{(+,2)} & = &\n- 4\left(\frac{\overline{F}}{F}\right)^2 {\cal E}_0\n\left[\sum_{\textrm{all} \, k} \frac{1}{\eta_n(\zeta_k - \eta_n)^7}\right]\n\end{eqnarray}\nwhich gives two new constraints,\n\begin{eqnarray}\nT_7(n) & = & \sum_{\textrm{all} \, k} \frac{1}{\eta_k(\eta_k - \zeta_n)^7} =\n+\frac{7 \zeta_n}{36} \\\nU_7(n) & = &\n\left(\frac{1}{\eta_n}\right)\sum_{\textrm{all} \, k} \frac{1}{(\zeta_k - \eta_n)^7} =\n+\frac{5 \eta_n}{36}\n\end{eqnarray}\nwhich can be compared to the corresponding result for the quantum bouncer,\n\begin{equation}\nS_5(n) = \sum_{k \neq n}\frac{1}{(\zeta_k - \zeta_n)^5} = \frac{\zeta_n}{36}\n\, .\n\end{equation}\n\nOur earlier discussion of sum rules for the quantum bouncer\n\cite{belloni_robinett_airy_zeros} allowed for\na systematically calculable hierarchy of constraints on sums of inverse powers\nof $(\zeta_k-\zeta_n)$. This was made possible by repeated use of the\ncommutation relation $[x^q,\hat{p}] = i q\hbar\nx^{q-1}$ to recursively obtain sums over matrix elements for higher and\nhigher moments. For the symmetric potential, where expectation values of\nodd powers of $z$ vanish, that connection is lost and we know of no \nstrategy to generate all of the $T_p(n)$ and $U_p(n)$ in a systematic\nway. We can, however, make use of the two relations\n\begin{eqnarray}\n\sum_{\textrm{all} \, k} \langle n | x^q| k\rangle \langle k |x| n\rangle\n& = & \langle n |x^{q+1}| n \rangle\n\label{odd_extend_1}\\\n\sum_{\textrm{all} \, k} (E_k - E_n)\langle n | x^q| k\rangle \langle k |x| n\rangle\n& = & q \left(\frac{\hbar^2}{2m}\right)\langle n |x^{q-1}| n \rangle\n\label{odd_extend_2}\n\end{eqnarray}\nfor $q$ odd to generate an infinite number of new constraints on inverse powers\nof $(\eta_n - \zeta_k)$ similar to the ones derived already, requiring\nonly the expectation values on the right-hand sides, which can in turn be\nevaluated using the recursion relations in Appendix~\ref{sec:recursion}.
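\n\nAll of the $T_p(n)$ and $U_p(n)$ constraints derived above are also easy to check directly; a minimal Python sketch (the sums are simply truncated at a large cutoff $K$, so the agreement is limited by the neglected tails, with the $p=2$ case converging most slowly):\n\begin{verbatim}\nimport numpy as np\nfrom scipy.special import ai_zeros\n\nK = 5000\na, ap, _, _ = ai_zeros(K)\nzeta, eta = -a, -ap            # Ai(-zeta_k) = 0, Ai'(-eta_k) = 0\n\ndef T(p, n):                   # odd state zeta_n fixed, sum over eta_k\n    return np.sum(1.0 / (eta * (eta - zeta[n-1])**p))\n\ndef U(p, n):                   # even state eta_n fixed, sum over zeta_k\n    return np.sum(1.0 / (zeta - eta[n-1])**p) / eta[n-1]\n\nn = 3\nprint(T(5, n), 1.0 / 4.0)                # TRK\nprint(T(4, n), zeta[n-1] / 3.0)          # momentum completeness\nprint(T(2, n), 1.0)                      # force-squared\nprint(T(3, n), 0.0)                      # force times momentum, odd\nprint(U(3, n), 1.0 / (2.0 * eta[n-1]))   # force times momentum, even\nprint(T(7, n), 7.0 * zeta[n-1] / 36.0)   # Stark, odd\nprint(U(7, n), 5.0 * eta[n-1] / 36.0)    # Stark, even\n\end{verbatim}\n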
For example, because of the parity constraints in\nthe symmetric linear potential, the so-called monopole sum rule\n\\cite{bohigas}\n\\begin{equation}\n\\sum_{k \\neq n} (E_k - E_n)|\\langle n |z^2|k\\rangle|^2 =\n\\frac{2\\hbar^2}{m} \\langle n |z^2|n \\rangle\n\\label{monopole_sum_rule}\n\\end{equation}\nwill connect only odd-odd and even-even states; in this case the\nconstraint that $k \\neq n$ is indeed required. For the odd-states,\nthe corresponding constraint on the $\\zeta_n$ was found in\nRef.~\\cite{belloni_robinett_airy_zeros} to be\n\\begin{equation}\nS_7(n) = \\sum_{k \\neq n} \\frac{1}{(\\zeta_k - \\zeta_n)^7} =\n\\frac{\\zeta_n^2}{270}\n\\,.\n\\end{equation}\nFor the even-even case, we can use the new matrix element result\nin Eqn.~(\\ref{off_diagonal_2}) to evaluate\n\\begin{equation}\n\\langle \\psi_n^{(+)} | z^2 | \\psi_k^{(+)} \\rangle\n= - \\frac{12 (\\eta_n + \\eta_k)}{\\sqrt{\\eta_n \\eta_k} (\\eta_n - \\eta_k)^4}\n\\end{equation}\nwhich gives a more complicated constraint, namely\n\\begin{equation}\n\\sum_{k \\neq n} \\frac{(\\eta_n+\\eta_k)^2}{\\eta_n \\eta_k (\\eta_n -\\eta_k)^7}\n= \\frac{1}{36}\\left( \\frac{8 \\eta_n^2}{15} + \\frac{1}{5\\eta_n}\n\\right)\n\\,.\n\\end{equation}\nOnce again, we have checked all of these results numerically using\n{\\it Mathematica}$^{\\trade}$\\,\\,.\n\nThe new classes of constraints on the zeros of $Ai$ and $Ai'$ derived\nhere, from application of sum rules for the symmetric linear potential,\nare seen to be qualitatively similar to those obtained from the\nparity-restricted version of this potential, namely the quantum bouncer.\nIn contrast, the nature of the mathematical relations dictated\nby sum rules applied to the parity-restricted version of the harmonic\noscillator are qualitatively very different, as we will see in\nSec.~\\ref{sec:half_sho}, after first reviewing sum rules in the\nstandard oscillator system.\n\n\n\n\n\\section{Review of sum rules for the harmonic oscillator}\n\\label{sec:sho}\n\nBefore examining the parity-restricted version of the harmonic oscillator,\nwe briefly review the solutions for the the familiar oscillator\npotential, as well as\nthe structure of the quantum mechanical sum rules for this system.\nThe solutions for the Schr\\\"{o}dinger equation\n\\begin{equation}\n- \\frac{\\hbar^2}{2m} \\frac{d^2 \\psi_n(x)}{dx^2}\n+ \\frac{1}{2} m\\omega^2 x^2\\,\\psi_n(x) = E_n\\psi_n(x)\n\\end{equation}\ncan be written in the form\n\\begin{equation}\n\\psi_n(x) = \\frac{c_n}{\\sqrt{\\beta}}\\, H_n(y) \\, e^{-y^2\/2}\n\\qquad\n\\quad\n\\mbox{where}\n\\quad\n\\qquad\nc_n = \\frac{1}{\\sqrt{2^n n! 
\sqrt{\pi}}}\n\label{sho_states}\n\end{equation}\nwhere\n\begin{equation}\nx = \beta y\n\qquad\n\quad\n\mbox{and}\n\qquad\n\quad\n\beta \equiv \sqrt{\frac{\hbar}{m\omega}}\n\end{equation}\nwith $y$ dimensionless and the $H_n(y)$ are the Hermite polynomials.\n The corresponding energy eigenvalues are\n\begin{equation}\nE_n = (n+1\/2)\hbar \omega\n\qquad\n\quad\n\mbox{or}\n\qquad\n\quad\n\epsilon_n \equiv \frac{2E_n}{\hbar \omega} = 2n+1\n\end{equation}\nwith $n=0,1,...$\nso that the differential equation in dimensionless form can be written as\n\begin{equation}\n\psi_n''(y) = (y^2 - \epsilon_n) \psi_n(y)\n\, .\n\label{dimensionless_sho}\n\end{equation}\nThe solutions have parity given by $P_n = (-1)^n$ and the expectation\nvalue of any odd power of $x$ vanishes,\nso that, for example, $\langle n |x^{2p+1}|n\rangle = 0$.\n\nBoth $x$ and $\hat{p}$ can be written in terms of raising and lowering\noperators, $\hat{A}$ and $\hat{A}^{\dagger}$,\nand hence the multipole matrix elements exhibit an exceptionally simple\nstructure leading to absolute selection rules.\nFor example, we have from standard textbooks the relations\n\begin{eqnarray}\n\langle n |x| k \rangle\n& = & \frac{\beta}{\sqrt{2}}\n\left\{ \delta_{n,k-1} \sqrt{k} + \delta_{n,k+1} \sqrt{k+1}\right\}\n\label{full_sho_1} \\\n\langle n |x^2 | k \rangle\n& = & \frac{\beta^2}{2}\n\left\{\n\delta_{n,k-2} \sqrt{k(k-1)}\n+ \delta_{n,k} (2k+1)\n+ \delta_{n,k+2} \sqrt{(k+1)(k+2)} \right\}\n\label{full_sho_2} \\\n\langle n |x^3 | k \rangle\n& = & \frac{\beta^3}{2\sqrt{2}}\n\left\{\n\delta_{n,k-3} \sqrt{k(k-1)(k-2)}\n+ 3\delta_{n,k-1} k^{3\/2}\n+ 3\delta_{n,k+1} (k+1)^{3\/2} \right. \nonumber \\\n& &\n\qquad \qquad \qquad\n\left.\n+ \delta_{n,k+3} \sqrt{(k+1)(k+2)(k+3)}\n\right\}\n\, .\n\label{full_sho_3}\n\end{eqnarray}\nThese, and similar expressions for matrix elements of powers of $\hat{p}$,\nimply that all of the familiar sum rules discussed above\nwill be `super-convergent', namely\nthat the relevant infinite sums will actually be saturated by a finite number\nof terms, and hence satisfied in a trivial way. We will find\nthat the situation is dramatically different in the case of the oscillator\nrestricted to the half-line, at least for the matrix elements of odd\npowers of $x$, and that is the subject of the next section. The matrix\nelement relations for even powers of $x$, including\nEqn.~(\ref{full_sho_2}), will still be relevant for the restricted oscillator case with minor relabeling.\n\n\n\section{The parity-restricted harmonic oscillator}\n\label{sec:half_sho}\n\nThe solutions to the quantum mechanical problem of a particle\nin the potential in Eqn.~(\ref{half_sho}) are easily obtained\nfrom the odd-parity solutions in Eqn.~(\ref{sho_states}).\nIn dimensionless notation, we have\n\begin{equation}\n\tilde{\psi}_n(y) =\n\left\{\n\begin{array}{cc}\n\sqrt{2} \, \psi_{2n+1}(y) & \mbox{for $y \geq 0$} \\\n0 & \mbox{for $y\leq 0$}\n\end{array}\n\right.\n\label{half_sho_solutions}\n\end{equation}\nwhere $n=0,1,2,...$ for the `half'-SHO states $\tilde{\psi}_n(y)$ associated\nwith the odd solutions of the oscillator, and with an appropriate change in\noverall normalization.
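\nThe factor of $\sqrt{2}$ guarantees that these states remain normalized on the half-line, which is easily confirmed; a minimal Python sketch (dimensionless $y$, direct quadrature):\n\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\nfrom scipy.special import eval_hermite, factorial\n\ndef psi(n, y):   # half-SHO states: sqrt(2)*psi_{2n+1}(y) for y >= 0\n    c = 1.0 / np.sqrt(2.0**(2*n + 1) * factorial(2*n + 1) * np.sqrt(np.pi))\n    return np.sqrt(2.0) * c * eval_hermite(2*n + 1, y) * np.exp(-y**2 / 2)\n\nfor n in range(3):\n    for m in range(3):\n        val, _ = quad(lambda y: psi(n, y) * psi(m, y), 0.0, 12.0)\n        print(n, m, round(val, 10))   # reproduces delta_{nm}\n\end{verbatim}\n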
We will henceforth use integrals over either\nthe dimensional $x$ or dimensionless $y$ variable as deemed most useful.\nWe note for future reference that\nthe derivatives of the solutions at the origin necessary for matrix\nelement calculations are given by\n\begin{equation}\n\tilde{\psi}_n'(0) = \sqrt{2} \, c_{2n+1} \, H'_{2n+1}(0)\n=\n\frac{(-1)^n \, 2}{\sqrt{(2n+1)! \sqrt{\pi}}} (2n+1)!!\n=\n\frac{(-1)^n 2}{2^n n!} \sqrt{\frac{(2n+1)!}{\sqrt{\pi}}}\n\end{equation}\nand\n\begin{equation}\n[\tilde{\psi}_n'(0)]^2 = \frac{4}{2^{2n} (n!)^2} \frac{(2n+1)!}{\sqrt{\pi}}\n= \frac{1}{\sqrt{\pi}}\n\left[\frac{4(2n+1)!}{2^{2n} (n!)^2}\right]\n= \frac{D_n}{\sqrt{\pi}}\n\end{equation}\nwhere\n\begin{equation}\nD_n \equiv\n\frac{4(2n+1)!}{2^{2n} (n!)^2}\n\label{definition_of_d_n}\n\,.\n\end{equation}\nUsing the Stirling approximation, $n! \sim \sqrt{2\pi n} (n\/e)^n$, we\nhave for large $n$\n\begin{equation}\nD_n \rightarrow 8 \sqrt{\frac{n}{\pi}}\n\end{equation}\nwhich will be useful in examining the semi-classical limit.\n\n\nThe quantized energies for the `half'-SHO\nare then given by $\tilde{E}_n = E_{2n+1} = (2n+3\/2)\hbar\n\omega$ or $\tilde{\epsilon}_n = 4n+3$. The $\tilde{\psi}_n(y)$\nform an orthogonal set, which can be seen explicitly by using the recursion relation\nderived for oscillator solutions in Appendix~\ref{sec:recursion}.\nUsing $f(y) = 1$ in\nEqn.~(\ref{full_sho_recursion_relation}), we find that\n\begin{equation}\n(\tilde{\epsilon}_n - \tilde{\epsilon}_m)^2\n\int_{0}^{\infty} \tilde{\psi}_n(y) \, \tilde{\psi}_m(y) \, dy = 0\n\end{equation}\nso that $\langle \tilde{\psi}_m | \tilde{\psi}_n \rangle = 0$ if $n\neq m$.\n\nFor the important dipole matrix elements, we use $f(y) = y$, so that\n$f'(0) = 1$, and find that\n\begin{equation}\n\langle \tilde{\psi}_m |y| \tilde{\psi}_n \rangle =\n\int_{0}^{\infty} y\, \tilde{\psi}_n(y) \, \tilde{\psi}_m(y) \, dy\n= - \frac{\tilde{\psi}_n'(0) \tilde{\psi}_m'(0)}{2[4(n-m)^2-1]}\n\label{half_sho_dipole_matrix_element}\n\end{equation}\nsince $\tilde{\epsilon}_{n} = 4n+3$.\n\nFor the special case of $n=m$, the expectation value in the state\n$\tilde{\psi}_n$ is therefore given by\n\begin{equation}\n\langle \tilde{\psi}_n |x| \tilde{\psi}_n \rangle = \frac{D_n}{2\sqrt{\pi}}\n\beta\n\quad\n\longrightarrow\n\quad\n\frac{4\sqrt{n}}{\pi}\beta\n\label{large_n_quantum}\n\end{equation}\nin the large $n$ limit.
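\nThese matrix elements can again be checked by direct integration; a minimal Python sketch (dimensionless $y$, comparing quadrature against Eqn.~(\ref{half_sho_dipole_matrix_element})):\n\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\nfrom scipy.special import eval_hermite, factorial\n\ndef psi(n, y):   # half-SHO states for y >= 0\n    c = 1.0 / np.sqrt(2.0**(2*n + 1) * factorial(2*n + 1) * np.sqrt(np.pi))\n    return np.sqrt(2.0) * c * eval_hermite(2*n + 1, y) * np.exp(-y**2 / 2)\n\ndef dpsi0(n):    # tilde-psi_n'(0) from the closed form above\n    return ((-1)**n * 2.0 / (2.0**n * factorial(n))\n            * np.sqrt(factorial(2*n + 1) / np.sqrt(np.pi)))\n\nn, m = 1, 4\nlhs, _ = quad(lambda y: y * psi(n, y) * psi(m, y), 0.0, 12.0)\nrhs = -dpsi0(n) * dpsi0(m) / (2.0 * (4.0*(n - m)**2 - 1.0))\nprint(lhs, rhs)   # agree to quadrature accuracy\n\end{verbatim}\n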
We can confirm this large-$n$ behavior using the classical (WKB-like)\nprobability distribution for the `half'-SHO, namely\n\begin{equation}\nP_{CL}^{(n)}(x) = \frac{2}{\pi} \frac{1}{\sqrt{A_n^2-x^2}}\n\qquad\n\mbox{where}\n\qquad 0 \leq x \leq A_n\n\end{equation}\nwhere the upper classical turning point, $A_n$, is given by\n\begin{equation}\n \frac{1}{2} m \omega^2 A_n^2 = E_n = \hbar\omega (2n+3\/2)\n\quad\n\qquad\n\mbox{or}\n\qquad\n\quad\nA_n = \beta \sqrt{4n+3}\n\,.\n\end{equation}\nThe classical expectation value is then\n\begin{equation}\n\langle n |x| n \rangle_{CL}\n\equiv\n\int_{0}^{A_n} x\, P_{CL}^{(n)}(x)\,dx\n=\frac{2A_n}{\pi} = \frac{2\sqrt{4n+3}\,\beta}{\pi}\n\quad\n\longrightarrow\n\quad\n\left(\frac{4\sqrt{n}}{\pi}\right)\beta\n\label{classical_half_sho_result}\n\end{equation}\nfor large $n$, which agrees with Eqn.~(\ref{large_n_quantum}).\n\n\nThe dipole matrix elements in Eqn.~(\ref{half_sho_dipole_matrix_element})\nare clearly very different from the `full'-SHO\ncase, with each state being connected to all of the\nothers in a very simple form,\nproportional to the derivative of the wavefunction at the origin for each\nstate, and with an `energy denominator' factor. This is the only\nexample of a dipole matrix element in a model 1D system of which we are\naware for which the diagonal ($n=m$) and off-diagonal ($n\neq m$) cases\ncan be written with the same simple expression. The corresponding sum rules\nwill then be realized in a completely different manner than the\nsuper-convergent form for the more familiar oscillator.\n\n\n\nFor example, the TRK sum rule in Eqn.~(\ref{trk_sum_rule}) gives the constraint\n\begin{equation}\nD_n \sum_{k \neq n} \frac{(k-n) D_k}{[4(n-k)^2-1]^2}\n= \pi\n\label{half_sho_1}\n\end{equation}\nwhere the $D_n$ are given in Eqn.~(\ref{definition_of_d_n}). In a similar\nway, the $x$-completeness relation of Eqn.~(\ref{x_completeness}) gives\n\begin{equation}\nD_n \sum_{\textrm{all} \, k} \frac{D_k}{[4(n-k)^2-1]^2} = (8n+6)\pi\n\end{equation}\nwhere we use the fact that\n\begin{equation}\n\langle \tilde{\psi}_n |y^2 | \tilde{\psi}_n \rangle = (2n+3\/2)\n\label{half_sho_2}\n\end{equation}\nwhich follows from the matrix element in Eqn.~(\ref{full_sho_2}) with a suitable\nrelabeling. One can then combine Eqn.~(\ref{half_sho_1}) with this completeness\nconstraint to obtain\n\begin{equation}\nD_n \sum_{\textrm{all} \, k} \frac{k\,D_k}{[4(n-k)^2-1]^2}\n= (4n+1)(2n+1)\pi\n\, .\n\label{half_sho_3}\n\end{equation}\nLooking at the $n$ and $k$ dependence of the dipole matrix elements, one can confirm that the sum rules in\nEqns.~(\ref{first_potential_sum_rule})\nand (\ref{second_potential_sum_rule}), which depend on derivatives of\n$V(x)$, are not convergent; this is consistent with the discontinuous nature of the\npotential.\n\nMatrix elements involving even powers of $x$ can be written in terms\nof the results for the `full'-SHO and have the same simple `nearby\nneighbor' selection rule structure.
For example, a simple relabeling
of Eqn.~(\\ref{full_sho_2}) gives
\\begin{equation}
\\langle \\tilde{\\psi}_n |y^2| \\tilde{\\psi}_k \\rangle
=
\\frac{1}{2}
\\left\\{
\\delta_{n,k-1} \\sqrt{(2k+1)(2k)}
+ \\delta_{n,k}(4k+3)
+ \\delta_{n,k+1} \\sqrt{(2k+2)(2k+3)}
\\right\\}
\\end{equation}
and the monopole sum rule in Eqn.~(\\ref{monopole_sum_rule}) is saturated
by a finite number of terms.
Matrix element relations using odd powers of $x$, such as those in
Eqns.~(\\ref{odd_extend_1}) and (\\ref{odd_extend_2}), give increasingly
complicated relations involving the $D_n$, since the matrix elements
of $x^{2q+1}$ are all proportional to $\\langle \\tilde{\\psi}_n |x|
\\tilde{\\psi}_k \\rangle$. For example, using the recursion relation
in Eqn.~(\\ref{full_sho_recursion_relation}), we find that
\\begin{eqnarray}
\\langle \\tilde{\\psi}_n |y^3| \\tilde{\\psi}_k \\rangle
& = &
\\frac{-6 (2n+2k+3)}{[4(n-k)^2 - 9]}\\,
\\langle \\tilde{\\psi}_n |y| \\tilde{\\psi}_k \\rangle \\nonumber \\\\
& = &
\\frac{3(2n+2k+3)}{[4(n-k)^2-9][4(n-k)^2-1]}
\\left(\\tilde{\\psi}_n'(0) \\tilde{\\psi}_k'(0)\\right)
\\,.
\\end{eqnarray}
One can continue to generate increasingly complex constraints by
use of Eqns.~(\\ref{odd_extend_1}) and (\\ref{odd_extend_2}) for the non-trivial
case of matrix elements of odd powers of $x$.

The realization of the sum rules for the
parity-restricted version of the oscillator is thus completely
different from the trivial manner in which they are satisfied for the
ordinary oscillator system, and it generates an infinite number of constraints
on the $D_n$.
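These constraints are easily tested numerically. The sketch below (in
Python; the cutoff {\\tt kmax} and the helper names are ours) evaluates
truncated versions of the TRK, completeness, and combined constraints given
above, using the ratio $D_{k+1}\/D_k = (2k+3)\/(2k+2)$, which follows
directly from Eqn.~(\\ref{definition_of_d_n}):

\\begin{verbatim}
# Truncated numerical check of the 'half'-SHO sum-rule constraints;
# the expected values are pi, (8n+6) pi, and (4n+1)(2n+1) pi.
from math import pi

def D_list(kmax):
    """D_k from D_0 = 4 and the ratio D_{k+1}/D_k = (2k+3)/(2k+2)."""
    D = [4.0]
    for k in range(kmax - 1):
        D.append(D[-1] * (2*k + 3) / (2*k + 2))
    return D

kmax = 20000
D = D_list(kmax)

for n in range(4):
    trk = D[n] * sum((k - n) * D[k] / (4.0*(n - k)**2 - 1.0)**2
                     for k in range(kmax) if k != n)
    comp = D[n] * sum(D[k] / (4.0*(n - k)**2 - 1.0)**2 for k in range(kmax))
    third = D[n] * sum(k * D[k] / (4.0*(n - k)**2 - 1.0)**2 for k in range(kmax))
    print(n, trk / pi, comp / ((8*n + 6) * pi),
          third / ((4*n + 1) * (2*n + 1) * pi))   # all ratios -> 1
\\end{verbatim}

With the cutoff shown, all three printed ratios should be consistent with
unity, the truncation error being controlled by the (at worst) $k^{-5\/2}$
tails of the summands.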
By generalizing the
recursion relation derived by Goodmanson \\cite{goodmanson}
in Appendix~\\ref{sec:recursion}, one can generate the expectation values and
off-diagonal matrix elements of any power of $z$.

For example, from Ref.~\\cite{belloni_robinett_airy_zeros}
we know that expectation values of $z^p$ for the quantum bouncer
solutions are given by
\\begin{eqnarray}
\\langle n | y|n \\rangle & = &\\frac{2\\zeta_n}{3}
\\label{old_diagonal_1} \\\\
\\langle n | y^2 |n \\rangle & = & \\frac{8 \\zeta_n^2}{15}
\\label{old_diagonal_2} \\\\
\\langle n | y^3 |n \\rangle & = & \\frac{16\\zeta_n^3}{35}
+ \\frac{3}{7}
\\label{old_diagonal_3} \\\\
\\langle n | y^4|n \\rangle & = &
\\frac{128\\zeta_n^4}{315} + \\frac{80\\zeta_n}{63}
\\label{old_diagonal_4} \\\\
\\langle n | y^5|n \\rangle & = &
\\frac{256 \\zeta_n^5}{693} + \\frac{1808\\zeta_n^2}{3003}
\\label{old_diagonal_5}
\\end{eqnarray}
where $y = z\/\\rho$ and we only show the dimensionless results.
Each result has a highest-order term of order $(\\zeta_n)^p$,
followed by sub-leading terms of order $\\zeta^{p-3k}_n$, if present at all.
Using the large-$n$ expansion of the $\\zeta_n$ in
Eqn.~(\\ref{large_n_expansion}), we see that the sub-leading terms
are a factor of $(\\zeta_n)^{-3k} \\sim n^{-k}$ smaller and so become
negligible in the classical limit. This suggests that the leading terms
are indeed what one would expect from a purely classical probability
density.


To confirm this, we note that for the quantum bouncer we have
\\begin{equation}
P_{CL}^{(n)}(z) = \\frac{1}{2\\sqrt{A_n(A_n-z)}}
\\label{classical_bouncer_distribution}
\\end{equation}
where $A_n$ is the upper classical turning point, defined by $E_n = FA_n$.
If we equate the total energy with the quantum mechanical result
$E_n = (F\\rho)\\zeta_n$ and write $A_n = \\rho \\zeta_n$,
the classical probability density reduces to
\\begin{equation}
P_{CL}^{(n)}(z) = \\frac{1}{2\\rho \\sqrt{\\zeta_n(\\zeta_n-z\/\\rho)}}
\\label{purely_classical_bouncer}
\\,.
\\end{equation}
We briefly discuss, in Sec.~\\ref{sec:semiclassical_limit},
how this classical distribution can also be extracted
directly from the large $n$ limit of the exact quantum solutions.


The expectation values of moments of position are then given by
\\begin{eqnarray}
\\langle n |z^p | n \\rangle_{CL}
& = &\\int_{0}^{A_n} z^p \\, P_{CL}^{(n)}(z)\\,dz \\nonumber \\\\
& = & \\rho^p \\int_{0}^{\\zeta_n} \\frac{z^p}{2\\sqrt{\\zeta_n (\\zeta_n -z)}}
\\, dz \\nonumber \\\\
& = & \\frac{\\rho^p \\zeta_n^p}{2} \\int_{0}^{1} \\frac{y^p}{\\sqrt{1-y}}\\,dy
\\nonumber \\\\
& = & \\frac{(\\rho \\zeta_n)^p}{2}
B(p+1,1\/2) \\nonumber \\\\
& = &
\\frac{(\\rho \\zeta_n)^p}{2}
\\frac{\\Gamma(1+p)\\Gamma(1\/2)}{\\Gamma(p+3\/2)}
\\label{classical_result_for_leading_term}
\\end{eqnarray}
and this expression agrees with the leading order terms in
Eqns.~(\\ref{old_diagonal_1}) - (\\ref{old_diagonal_5}) up through $p=5$.
Using the
recursion relation of Goodmanson, reviewed and extended in the
Appendix, we can show more generally that if the leading term in the
expectation values is written as $\\langle n |y^p |n \\rangle \\approx
T_n^{(p)}$, then the recursion relation requires that
\\begin{equation}
T_n^{(q)} = \\frac{2q \\zeta_n}{(2q+1)} T_n^{(q-1)}
\\qquad
\\quad
\\mbox{or}
\\qquad
\\quad
T_n^{(p)} = \\frac{2^p p!}{(2p+1)!!} \\zeta_n^p
\\end{equation}
which agrees with the classical result in
Eqn.~(\\ref{classical_result_for_leading_term}) for all $p$ values.
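As an algebraic cross-check, the leading classical coefficients
$\\frac{1}{2}B(p+1,1\/2)$ can be generated symbolically; a short sketch (in
Python with {\\tt sympy}, which we assume is available) is

\\begin{verbatim}
# Leading classical coefficients <y^p>/zeta_n^p
#   = Gamma(1+p) Gamma(1/2) / (2 Gamma(p+3/2));
# the output should reproduce 2/3, 8/15, 16/35, 128/315, 256/693 for p = 1..5.
import sympy as sp

for p in range(1, 6):
    coeff = (sp.gamma(p + 1) * sp.gamma(sp.Rational(1, 2))
             / (2 * sp.gamma(p + sp.Rational(3, 2))))
    print(p, sp.simplify(coeff))
\\end{verbatim}

reproducing the leading fractions quoted in Eqns.~(\\ref{old_diagonal_1}) -
(\\ref{old_diagonal_5}).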
For the symmetric linear potential, the expectation values
for odd powers of $x$ vanish, but the integrals in
Eqns.~(\\ref{old_diagonal_1}) - (\\ref{old_diagonal_5}) are still useful.
For example, the expectation value of $V(z) = F|z|$ requires the result
in Eqn.~(\\ref{old_diagonal_1}). Moreover, because of the piecewise
definition of the wavefunctions in Eqns.~(\\ref{odd_states})
and (\\ref{even_states}), the integrals obtained by using the
recursion relations in Appendix~\\ref{sec:recursion} are
necessarily defined over the interval $(0,\\infty)$ and then extended
over all space, so intermediate results for integrals over the half-line
which eventually vanish due to parity constraints can still be useful.


To compare the leading and sub-leading contributions to the integrals used for
the expectation values for odd and even states, we can compare the results
in Eqns.~(\\ref{old_diagonal_1}) - (\\ref{old_diagonal_5}) to similar results
for the even states. These can be defined by using the states in
Eqn.~(\\ref{even_states}), integrated over positive values of $x$ and
normalized as for the quantum bouncer. For those cases we find
\\begin{eqnarray}
\\langle n |y| n \\rangle & = & \\frac{2\\eta_n}{3} \\\\
\\langle n |y^2| n \\rangle & = & \\frac{8\\eta_n^2}{15} + \\frac{1}{4\\eta_n} \\\\
\\langle n |y^3| n \\rangle & = & \\frac{16\\eta_n^3}{35} + \\frac{3}{5} \\\\
\\langle n |y^4| n \\rangle & = & \\frac{128 \\eta_n^4}{315} +
\\frac{64 \\eta_n}{45} \\\\
\\langle n |y^5| n \\rangle & = & \\frac{256 \\eta_n^5}{693} +
\\frac{272 \\eta_n^2}{99} + \\frac{6}{11\\eta_n}
\\,.
\\end{eqnarray}
We note that the leading terms in each case (and quite generally for all
values of $p$, using a recursion relation argument as above) are identical
(with $\\zeta_n \\leftrightarrow \\eta_n$), but that the next-to-leading orders
reflect differences between the classical and quantum probability densities.
These differences vanish in the large $n$ limit.

In the context of the `half'-SHO,
we have already noted that there is similar agreement between the
exact quantum-mechanical expectation value of $x$
in Eqn.~(\\ref{large_n_quantum})
and the corresponding classical result
in Eqn.~(\\ref{classical_half_sho_result}) in the large $n$ limit.
One can use the recursion relation in Eqn.~(\\ref{full_sho_recursion_relation})
to obtain the highest-order terms in the expectation values of $x^p$,
and one again finds agreement with the semi-classical results for large $n$.
For example, the classical result is
\\begin{equation}
\\langle n |x^p| n\\rangle _{CL}
=
\\frac{2}{\\pi} \\int_{0}^{A_n} \\frac{x^p}{\\sqrt{A_n^2 - x^2}}\\,dx
= \\frac{2(A_n)^p}{\\pi} \\int_{0}^{1} \\frac{y^p\\,dy}{\\sqrt{1-y^2}}
= \\frac{(A_n)^p}{\\sqrt{\\pi}} \\frac{\\Gamma((1+p)\/2)}{\\Gamma(1+p\/2)}
\\,.
\\end{equation}


Given that traditional sum rules involve transition matrix elements,
there is no reason to expect that semi-classical probability arguments
will provide any useful information on their evaluation.
One important exception, however, is the form
of the second-order perturbation theory result for the energy,
as in Eqn.~(\\ref{general_second_order_shift}), which is of the form
of an energy-difference weighted sum rule.
In that special case,
semi-classical expressions for the quantized energy, such as the
WKB approximation, can sometimes provide guidance on the form of the
energies, at least in the large $n$ limit.

An example of such a connection is the use of approximate WKB-type methods
in the evaluation of first-order perturbation theory results using classical
probability densities, as in Ref.~\\cite{robinett_wkb}. More surprisingly,
it has been pointed out that WKB energy quantization methods can give the
correct large $n$ behavior of the second-order energy shift due to the
Stark effect in two familiar model
systems, the harmonic oscillator and infinite well \\cite{robinett_polar}.
This approach was used in Ref.~\\cite{robinett_stark}, where the
exact result for the second-order energy shift due to a constant
external field for the symmetric linear potential was derived for the
first time, giving the results in Eqn.~(\\ref{symmetric_linear_stark_shift}).
The WKB prediction for this case is given by the quantization condition
\\begin{equation}
\\sqrt{2m} \\int_{A_{-}}^{A_{+}}
\\sqrt{E_n - (F|z| + \\overline{F}z)}\\,dz = (n+1\/2)\\hbar \\pi
\\end{equation}
where $n=0,1,2,\\ldots$ and the classical turning points are
$A_{\\pm} = \\pm E_n\/(F \\pm \\overline{F})$. The WKB prediction for the
energies is then
\\begin{equation}
E_n =
E_n^{(0)}
\\left(1 - \\frac{\\overline{F}^2}{F^2}\\right)^{2\/3}
\\approx
E_n^{(0)}
\\left(1 - \\frac{2}{3} \\left(\\frac{\\overline{F}}{F}\\right)^2
+ \\cdots \\right)
\\end{equation}
where $E_n^{(0)} = {\\cal E}_0 (3\\pi(n+3\/4)\/2)^{2\/3}$ is the zero-field
WKB approximation, which agrees with the exact results in
Eqn.~(\\ref{large_n_expansion}) for large $n$. This implies that the
first-order Stark shift vanishes (as it must by symmetry) and that the
second-order term is
\\begin{equation}
E_n^{(2)} = - \\frac{6}{9} \\left(\\frac{\\overline{F}}{F}\\right)^2
E_n^{(0)}
\\,.
\\end{equation}
As pointed out in Ref.~\\cite{robinett_stark}, this semi-classical
result brackets the two exact quantum mechanical expressions for
the even and odd states in Eqn.~(\\ref{symmetric_linear_stark_shift}),
giving it as the `average' effect.

Prompted by this partial success, we wish to examine to what extent
a similar WKB-type analysis will give reliable answers for the
first- and second-order energy shifts due to the Stark effect for the
`half'-SHO discussed above. We first note that the WKB result for the
energy eigenvalues for the potential in Eqn.~(\\ref{half_sho}), without an
external field, is given by
\\begin{equation}
\\sqrt{2m} \\int_{0}^{A_n} \\sqrt{E_n - m\\omega^2x^2\/2}\\,dx
= (n + C_L + C_R)\\hbar \\pi
\\end{equation}
where $C_L = 1\/2$ and $C_R = 1\/4$ are the appropriate matching constants
for an infinite wall and a smooth (`linear') turning point, respectively.
Evaluating the integral,
the WKB prediction is then
given by $E_n = (2n+3\/2)\\hbar \\omega$, reproducing the exact result.

The corresponding expression including a perturbing
linear field, $\\overline{V}(x)
= \\overline{F}x$, is then
\\begin{equation}
\\sqrt{2m} \\int_{0}^{A^{(+)}_n} \\sqrt{E - m\\omega^2x^2\/2 - \\overline{F}x}\\,dx
= (n + 3\/4)\\hbar \\pi
\\end{equation}
where the upper turning point is given by energy conservation to be
\\begin{equation}
A^{(+)}_n = \\frac{\\sqrt{2m\\omega^2 E_n + \\overline{F}^2} - \\overline{F}}
{m\\omega^2}
\\, .
\\end{equation}
The integral can be done in closed form, but we only require the result
expanded to second order in $\\overline{F}$, which gives
\\begin{equation}
\\frac{\\pi}{2} \\sqrt{\\frac{m}{m\\omega^2}}
\\left[ E_n - \\frac{2}{\\pi} \\sqrt{\\frac{2E_n}{m\\omega^2}}
\\overline{F}
+ \\frac{\\overline{F}^2}{2m\\omega^2}\\right]
= (n+3\/4)\\hbar \\pi
\\, .
\\end{equation}
Squaring and rearranging gives a simple quadratic equation for $E_n$, with
solutions
\\begin{equation}
E_n = R_n \\pm \\sqrt{R_n^2 - Z_n^2}
\\end{equation}
where
\\begin{equation}
Z_n \\equiv E_n^{(0)} - \\frac{\\overline{F}^2}{2m\\omega^2}
\\qquad
\\quad
\\mbox{and}
\\quad
\\qquad
R_n = Z_n + \\frac{4\\overline{F}^2}{\\pi^2 m\\omega^2}
\\,.
\\end{equation}
Solving for $E_n$, again as a series in $\\overline{F}$ (choosing the root
that satisfies the original, unsquared, quantization condition), we find that
\\begin{eqnarray}
E_n^{(0)}(WKB) & = & (2n+3\/2) \\hbar \\omega
\\label{zero_order}\\\\
E_n^{(1)}(WKB) & = & \\frac{2\\overline{F}}{\\pi}
\\sqrt{\\frac{2E_n^{(0)}}{m\\omega^2}}
\\label{first_order}\\\\
E_n^{(2)}(WKB) & = & \\left( - \\frac{1}{2} + \\frac{4}{\\pi^2}\\right)
\\frac{\\overline{F}^2}{m\\omega^2}
\\label{second_order}
\\,.
\\end{eqnarray}
The zero-order result is the standard WKB prediction noted above,
while the first-order expression coincides with first-order
perturbation theory (using the matrix element in Eqn.~(\\ref{large_n_quantum}))
in the large $n$ limit.

The second-order result is more interesting. The exact quantum mechanical
result for the second-order Stark effect for the ordinary harmonic
oscillator is $E_n^{(2)} = - \\overline{F}^2\/2m\\omega^2$, which is most
easily obtained by a simple change of variables in the original
Schr\\\"{o}dinger equation, and trivially confirmed in second-order perturbation
theory using the matrix elements in Eqn.~(\\ref{full_sho_1}). For
the `half'-SHO, the WKB prediction is still a constant negative
shift, the same for all states, but with a non-trivially different
coefficient. We can find no simple way to extract the exact second-order
result from a direct solution of the differential equation, but the
second-order perturbation theory expression in
Eqn.~(\\ref{general_second_order_shift}) can
be evaluated numerically using the dipole matrix elements in
Eqn.~(\\ref{half_sho_dipole_matrix_element}).

To compare the first- and second-order predictions from the WKB approach
with the exact results from first- and second-order perturbation theory (PT),
we plot the differences between the WKB and PT methods in
Fig.~\\ref{fig:classical_quantum_comparison},
as a function of the quantum number $n$.
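In our notation, the second-order shift can be written as
\\begin{equation}
E_n^{(2)} = c_n \\, \\frac{\\overline{F}^2}{m\\omega^2}
\\qquad
\\mbox{with}
\\qquad
c_n = \\frac{D_n}{8\\pi} \\sum_{k \\neq n}
\\frac{D_k}{(n-k)\\,[4(n-k)^2-1]^2}
\\end{equation}
which follows from Eqn.~(\\ref{half_sho_dipole_matrix_element}) together with
$E_n^{(0)} - E_k^{(0)} = 2(n-k)\\hbar\\omega$. A minimal sketch of this
numerical evaluation (in Python; the cutoff {\\tt kmax} and the helper names
are ours) is

\\begin{verbatim}
# Dimensionless second-order Stark coefficient c_n for the 'half'-SHO,
# from a truncated perturbation sum; kmax is an arbitrary cutoff.
from math import pi

def D_list(kmax):
    """D_k from D_0 = 4 and the ratio D_{k+1}/D_k = (2k+3)/(2k+2)."""
    D = [4.0]
    for k in range(kmax - 1):
        D.append(D[-1] * (2*k + 3) / (2*k + 2))
    return D

D = D_list(20000)

def c2(n):
    """Truncated sum for c_n; the tail falls off rapidly with k."""
    return D[n] / (8.0 * pi) * sum(
        D[k] / ((n - k) * (4.0*(n - k)**2 - 1.0)**2)
        for k in range(len(D)) if k != n)

print('WKB:', -0.5 + 4.0 / pi**2)   # ~ -0.0947
for n in (0, 10, 100, 1000):
    print(n, c2(n))                  # should approach the WKB value as n grows
\\end{verbatim}

consistent with the trend shown in
Fig.~\\ref{fig:classical_quantum_comparison}.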
As mentioned above,
the first-order predictions of the two approaches agree in the large $n$ limit,
which we have demonstrated here analytically.
We note the more interesting result that
the simple expression in Eqn.~(\\ref{second_order}) gives the correct
large $n$ behavior of the second-order Stark shifts for the `half'-SHO
problem, reproducing a non-trivial numerical factor which would otherwise
be difficult to extract.

We note that this is another example where a WKB approach to the
evaluation of the second-order Stark effect correctly predicts the
large $n$ behavior. This further justifies the discussion in
Ref.~\\cite{robinett_polar}, where such an approach was used to
systematically evaluate the second-order energy shifts due to an
external field for general power-law potentials, $V_{k}(x) = V_0|x\/a|^k$.

\\section{Conclusions and Discussion}

We have examined two cases of parity-related potentials, the quantum
bouncer extended to the symmetric linear potential, and the harmonic
oscillator reduced to the `half'-SHO, in order to probe the importance of
the continuity of the potential for the convergence of quantum mechanical
sum rules. We have indeed seen that the smoothness of $V(x)$ has
clear consequences for which sum rules will be realized, which in turn is
closely related to
the convergence properties of the individual matrix elements. For the
symmetric linear potential, we find new constraints on the zeros of the
derivative of the Airy function, but note that they are very similar in
functional form to those derived from the quantum bouncer. On the other hand,
the infinite set of constraints which arises from the `half'-SHO is
qualitatively very different from the super-convergent sums found
in the realization of sum rules for the more familiar oscillator.

The study of parity-related potentials is also motivated by the desire to
find examples where there is a substantial overlap between the energies
and wavefunctions (needed in the evaluation of matrix elements)
of two quantum systems as they relate to sum rules.
That connection is realized here by the fact that
the odd states of a symmetric potential remain solutions of the
parity-restricted version, so that all of the resulting energies and
wavefunctions (save a trivial normalization) are still solutions of the
parity-restricted partner potential.

Another class of quantum mechanical problems which has similarly strong
connections between the energy levels and wavefunctions is that of
super-partner potentials \\cite{sukhatme}, $V^{(-)}(x)$ and
$V^{(+)}(x)$, in the context of supersymmetric quantum mechanics (SUSY-QM).
In that case, the spectra of the two systems are identical,
except for the zero-energy ground state ($E_0^{(-)} = 0$) of the first
system, which is absent in the second.
Another reason for us to consider the `half'-SHO potential in such detail
is that it is easily extended to generate an appropriate $V^{(\\pm)}(x)$ pair
in SUSY-QM.
For example, the potential
\\begin{equation}
V^{(-)}(x) = \\frac{1}{2}m\\omega^2 x^2 - \\frac{3}{2}\\hbar \\omega
\\qquad
\\mbox{for $x\\geq 0$}
\\end{equation}
has the energy spectrum $E_n^{(-)} = 2n\\hbar \\omega$ for $n=0,1,2,\\ldots$,
with the ground state wavefunction
\\begin{equation}
\\psi_0(x) = \\frac{2x}{\\sqrt{\\beta^3\\sqrt{\\pi}}}
\\, e^{-(x\/\\beta)^2\/2}
\\qquad
\\mbox{for $x\\geq0$}
\\end{equation}
and the remaining states given by Eqn.~(\\ref{half_sho_solutions}).

Using this ground-state solution to form the super-potential, we find
\\begin{equation}
W(x) =
-\\frac{\\hbar}{\\sqrt{2m}}
\\left( \\frac{\\psi_0'(x)}{\\psi_0(x)}\\right)
=
-\\frac{\\hbar}{\\sqrt{2m}}
\\left( \\frac{1}{x} - \\frac{x}{\\beta^2}\\right)
\\end{equation}
allowing us to construct the super-partner potential
\\begin{equation}
V^{(+)}(x) = \\frac{1}{2} m\\omega^2 x^2 + \\frac{2\\hbar^2}{2mx^2}
- \\frac{1}{2} \\hbar \\omega
\\,.
\\end{equation}
This has the form of the radial equation for the three-dimensional harmonic
oscillator with the special choice of the angular momentum quantum
number $l=1$ (giving the $l(l+1) = 2$ factor in the centrifugal barrier term)
and an overall constant shift in energy. Using standard
results for the energy eigenvalues of that system, we find that
\\begin{equation}
E_n^{(+)} = \\hbar \\omega \\left(2n +l + \\frac{3}{2}\\right) - \\frac{1}{2}\\hbar
\\omega
= 2\\hbar \\omega (n+1)
\\end{equation}
for the relevant $l=1$ case. One can also use standard textbook results
to obtain the properly normalized solutions, namely
\\begin{equation}
\\psi_n^{(+)}(x) = N_n x^2\\, e^{-(x\/\\beta)^2\/2}\\, L_n^{(3\/2)}(x^2\/\\beta^2)
\\qquad
\\mbox{with}
\\qquad
N_n = \\sqrt{\\frac{2^{n+3} n!}{\\beta^5 (2n+3)!! \\sqrt{\\pi}}}
\\,.
\\end{equation}
This is one example of the many familiar super-partner potentials
\\cite{sukhatme} which can
be systematically studied in the context of quantum mechanics
to probe the delicate interplay between energy level differences and
matrix elements which must exist to guarantee the realization of the
infinite number of sum rules which one can generate using the simple
procedures outlined here.


\\section{Acknowledgments}
O.A.A., K.C., and M.B. were funded in part by a Davidson College Faculty Study and
Research Grant and by the National Science Foundation (DUE-0442581).