\\section{Introduction}\n\nThis paper addresses the numerical solution of the Dirichlet problem for the Monge-Amp\\`ere \nequation\n\\begin{equation}\n\\det D^2 u = f \\ \\text{in} \\ \\Omega, \\quad u=g \\ \\text{on} \\ \\partial \\Omega, \n\\label{m1}\n\\end{equation}\nwhere $D^2 u=\\bigg( \\frac{\\partial^2 u}{\\partial x_i \\partial x_j}\\bigg)_{i,j=1,\\ldots, n} $ \nis the Hessian of $u$\nand $f,g$ are given functions with $f>0$.\nThe domain $\\Omega \\subset \\mathbb{R}^n$ is a convex domain with\npolygonal\nboundary $\\partial \\Omega$.\n\nThe above equation is fully nonlinear in the sense that it is nonlinear in the\nhighest order derivatives. Fully nonlinear equations have in general multiple solutions,\nand even if the domain is smooth, the solution may not be smooth. \nFor the Monge-Amp\\`ere equation, the notion of generalized solution in the\nsense of Alexandrov-Bakelman and that of viscosity solution \\cite{Guti'errez2001}\nare the best known ways to\ngive a meaning to the second derivatives even when the solution is not smooth. \nTo a continuous convex function, one associates the so-called Monge-Amp\\`ere measure, and\n\\eqref{m1} is said to have a solution in the sense of Alexandrov if the density of that measure with respect to the\nLebesgue measure is equal to $f$. \nContinuous convex viscosity solutions are defined ``in terms of certain inequalities holding wherever the graph of the solution is touched on one side or the other by a smooth test function''\n\\cite{Lindenstrauss94}.\nIn the case of \n\\eqref{m1} the two notions are equivalent for $f$ continuous \\cite{Guti'errez2001}.\nWe will assume throughout this paper that $f$ is continuous on $\\Omega$ and $g$ is continuous on $\\partial \\Omega$. 
\nEquation \n\\eqref{m1} then has at most two solutions when $n=2$, \\cite{Courant1989} p. 324, and a unique\ngeneralized solution\nin the class of convex functions \\cite{Aleksandrov1961, ChengYau77}. In general, the theory of viscosity\nsolutions \\cite{Crandall1992, Caffarelli1995, Guti'errez2001} provides a framework for existence and uniqueness of solutions\nof fully nonlinear equations.\n\nThe more general Monge-Amp\\`ere equation has the form\n\\begin{equation}\n\\det D^2 u = H(x,u,D u) \n\\ \\text{in} \\ \\Omega, \\quad u=g \\ \\text{on} \\ \\partial \\Omega, \n\\label{m2}\n\\end{equation} \nwhere $D u$ denotes the gradient of $u$ and $H$ is a given Hamiltonian, at least \ncontinuous and nondecreasing in $u$.\nEquations of Monge-Amp\\`ere type appear in various geometric and variational problems, e.g. the\nMonge-Kantorovich problem, and\nin kinetic theory. They also appear in applied fields such as meteorology, fluid mechanics,\nnonlinear elasticity, antenna design, material sciences and mathematical finance.\nA huge amount of literature on theoretical questions about these equations is available.\nA selection in the areas cited above includes \\cite{Aubin1982,\nWestcott1983, Bakelman1994,\nAvellaneda1994, Caffarelli1995,\nOckendon2003,\nGangbo2000, Cullen2001, Carlen2004}.\n\nResearchers working on\nthe Monge-Kantorovich Problem, MKP, cf. 
\\cite{Gangbo2004} for background, have noted the\nproblematic lack of good numerical solvers for Monge-Amp\\`ere type equations.\nFollowing \\cite{Dean2004}, we quote from\n\\cite{Benamou2000}, ``It follows from this theoretical result that a natural \ncomputational solution of\nthe $L^2$ MKP is the numerical resolution of the Monge-Amp\\`ere equation'' \\ldots\n``Unfortunately, this fully nonlinear second-order elliptic equation has not\nreceived much attention from numerical analysts and, to the best of our knowledge,\nthere is no efficient finite-difference or finite-element methods, comparable\nto those developed for linear second-order elliptic equations\n(such as fast Poisson solvers, multigrid methods,\npreconditioned conjugate gradient methods,\\ldots).''\n\n\nExisting numerical work on Monge-Amp\\`ere type equations includes\n\\cite{Oliker1988, Kochengin1998, Caffarelli1999}, where the generalized solution\nin the sense of Alexandrov-Bakelman is approximated directly. \nOther works with proven convergence results are \\cite{Oberman2008,Oberman2010a},\nwhere finite difference schemes satisfying the conditions\nfor convergence of \\cite{Barles1991} were constructed.\nThere has been an explosion of recent numerical results for the Monge-Amp\\`ere equations. \nWe have the recent papers \\cite{Loeper2005,Mohammadi2007,Zheligovsky10,Brenner2010a, Brenner2010b}, which do not adequately address the situations\nwhere the Monge-Amp\\`ere equation does not have a smooth solution, cf. Test 2 and Test 3 in Section \\ref{Numerical}. For progress\nin this direction we refer to the series of papers\n\\cite{Dean2003,Dean2004,Dean2006} and the vanishing moment method in \\cite{Feng2009,Feng2009a,Feng2009b}.\nFinite difference methods which compute viscosity solutions of the Monge-Amp\\`ere equation \nand an iterative method amenable to finite element computations \nwere reported in \\cite{Benamou2010,Oberman2010b}. 
See also \\cite{Belgacem2006} for an optimization approach.\n\nIn this paper, we use the spline element method to compute numerical solutions of the Monge-Amp\\`ere equation.\nIt is a conforming finite element implementation with Lagrange multipliers. We will obtain conforming approximations\nfor the three dimensional Monge-Amp\\`ere equation. We extend the convergence analysis of Newton's method due to\n\\cite{Loeper2005} to bounded smooth domains using Schauder estimates proved in \\cite{Trudinger08}. \nHowever \\cite{Loeper2005} did not address the convergence of the discrete approximations.\nWe give error estimates\nfor conforming approximations of a smooth solution. The Monge-Amp\\`ere equation leads to a non-coercive variational\nproblem, a difficulty which is partially handled by the vanishing moment method (the parameter $\\epsilon$ cannot be taken equal to 0). We show that for smooth solutions, the spline\nelement method is robust for the associated singular perturbation problem. The numerical results mainly examine the performance\nof three numerical methods, Newton's method, the vanishing moment method and the Benamou-Froese-Oberman iterative method, on three test functions\nsuggested in \\cite{Dean2006}: a smooth radial solution, a solution not in $H^2(\\Omega)$ and a solution for which no exact formula is known.\nIn this paper, we will refer\nto the Benamou-Froese-Oberman iterative method as the BFO iterative method, which we extend to three dimensions.\n\nThe paper is organized as follows: In the first section, we review \nthe spline element discretization. 
\nThe following section is devoted to the variational formulations associated with Newton's method, \nfor which we give convergence results for smooth solutions,\nand with the vanishing moment method.\nHere we introduce the three dimensional version of the BFO iterative method.\nThe last section is devoted to numerical experiments.\nWe will use $C$ for a generic constant but will index specific constants.\n\n\\section{Spline element discretization} \\label{spline}\nThe spline element method has been described in \\cite{Awanou2003, Awanou2005a, Awanou2006, Baramidze2006, Hu2007}\nunder different names and more recently in \\cite{Awanou2008}.\nIt can be described as a conforming finite element implementation with Lagrange multipliers. We first outline the main steps\nof the method and discuss its advantages and a possible disadvantage. We then give more details of this approach but refer to\nthe above references for explicit formulas. \n\nFirst, start with a representation of a discontinuous piecewise polynomial as a vector in $\\mathbb{R}^N$, for some integer $N>0$.\nThen express boundary conditions and constraints, including global continuity or smoothness conditions, as linear relations.\nIn our work, we use the Bernstein basis representation \\cite{Awanou2003,Awanou2008}, which is very convenient to express smoothness conditions\nand very popular in computer aided geometric design. Hence the term ``spline'' in the name of the method.\nSplines are piecewise polynomials\nwith smoothness properties.\nOne then writes a discrete version of the equation along with a discrete version of the spaces of trial and test functions.\nThe boundary conditions and constraints are enforced using Lagrange multipliers.\nWe are led to saddle point problems which are solved by an augmented Lagrangian algorithm (sequences of linear systems of size $N \\times N$).\nThe approach here should be contrasted with other approaches where Lagrange multipliers are introduced before discretization,\ne.g. 
in \\cite{Babuska73} or the discontinuous Galerkin methods.\n \nThe spline\nelement method stands out as a robust,\nflexible, efficient and accurate method.\nIt can be applied to a wide range of PDEs in science and engineering in both two and\nthree dimensions;\nconstraints and smoothness are enforced exactly and there is no need to implement\nbasis functions with the required properties; it is particularly suitable for fourth\norder PDEs;\nno inf-sup condition is needed to approximate Lagrange multipliers which arise\ndue to the constraints, e.g. the pressure term in the Navier-Stokes equations;\none gets in a single implementation approximations of variable order.\nOther advantages of the method include the flexibility\nof using polynomials of different degrees on different elements \\cite{Hu2007}, the facility \nof implementing boundary conditions and the simplicity\nof a posteriori error estimates since the method is conforming for many problems.\nA possible disadvantage of this approach is the high number of degrees of freedom and the need to solve saddle point problems.\n\n\nLet $\\mathcal{T}$ be a conforming partition of $\\Omega$\ninto triangles or tetrahedra. We consider a general variational problem: Find $u \\in W$ such that\n\\begin{equation}\na(u,v) = \\langle l, v \\rangle \\quad \\text{for all} \\ v \\in V, \\label{var3}\n\\end{equation}\nwhere $W$ and $V$ are respectively the spaces of trial and test functions.\nWe will assume that\nthe form $l$ is bounded and linear and $a$ is a continuous \nmapping in some sense on $W \\times V$ which is linear in the argument $v$. \n\nLet $W_h$ and $V_h$ be conforming subspaces of $W$ and $V$ respectively. We can write\n$$\nW_h =\\{ c \\in \\mathbb{R}^N, R c =G \\}, \\ V_h = \\{ c \\in \\mathbb{R}^N, R c =0 \\},\n$$\nfor a suitable vector $G$ and a suitable matrix $R$ which encodes the constraints\non the solution, e.g. smoothness and boundary conditions. 
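To illustrate how, in the Bernstein basis, smoothness conditions become rows of the constraint matrix $R$, consider two quadratic pieces on consecutive intervals of equal unit length. The following sketch is a hypothetical one dimensional analogue (not code from the cited references); the coefficient vector $c$, the constraint rows and the sample function $x^2$ are illustrative assumptions.

```python
from fractions import Fraction

def bernstein_coeffs_quadratic(p0, dp0, ddp):
    # Bernstein coefficients b0, b1, b2 of the quadratic with value p0 and
    # first derivative dp0 at the left endpoint, and second derivative ddp,
    # on an interval of unit length:
    #   p(t) = b0 (1-t)^2 + 2 b1 t (1-t) + b2 t^2.
    b0 = Fraction(p0)
    b1 = b0 + Fraction(dp0, 2)
    b2 = b0 + Fraction(dp0) + Fraction(ddp, 2)
    return [b0, b1, b2]

# Coefficient vector c in R^6 for the two pieces of q(x) = x^2 on [0,1] and [1,2].
left  = bernstein_coeffs_quadratic(0, 0, 2)   # x^2 on [0,1]
right = bernstein_coeffs_quadratic(1, 2, 2)   # (1+s)^2 on [1,2], s = x-1
c = left + right

# Constraint rows, one linear relation per smoothness condition at the knot:
#   C^0 continuity:  b2 = b3           ->  [0, 0, 1, -1, 0, 0]
#   C^1 continuity:  b2 - b1 = b4 - b3 ->  [0, -1, 1, 1, -1, 0]
R = [[0, 0, 1, -1, 0, 0],
     [0, -1, 1, 1, -1, 0]]

residual = [sum(r_ij * c_j for r_ij, c_j in zip(row, c)) for row in R]
print(residual)  # a globally C^1 piecewise quadratic satisfies R c = 0
```

A globally $C^1$ piecewise quadratic satisfies both rows exactly; dropping the second row would enforce only $C^0$ continuity, which is how the method selects the smoothness of the discrete space.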
\nHere $h$ is a discretization parameter which controls the size \nof the elements in the partition.\n\nThe condition $a(u,v)=\\langle l, v \\rangle$ for all $v \\in V$ translates to\n$$\nK(c)d = L^T d \\quad \\forall d \\in V_h, \\ \\text{that is for all} \\ d \\ \\text{with} \\ R d =0,\n$$\nfor a suitable matrix $K(c)$ which depends on $c$, where $L$ is a vector of coefficients \nassociated to the linear form $l$. If for \nexample $\\langle l, v \\rangle = \\int_{\\Omega} f v$, then $L^T d=d^T M F$ where $M$ is a mass matrix and $F$ a \nvector of coefficients associated to the spline interpolant of $f$. In the linear case\n$K(c)$ can be written $c^T K$.\n\nIntroducing a Lagrange multiplier $\\lambda$, \nthe functional \n$$\nK(c)d - L^Td + \\lambda^T R d,\n$$\nvanishes identically on $V_h$. The stronger condition\n$$\nK(c) + \\lambda^T R = L^T,\n$$ \nalong with the side condition $R c =G$ are the discrete equations to be solved.\n\nBy a slight abuse of notation, after linearization by Newton's method, \nthe above nonlinear equation leads to solving systems of type\n$$\nc^T K + \\lambda^T R = L^T.\n$$ \nThe approximation $c$ of $u \\in W$ thus is a limit of a sequence of\nsolutions of systems of type\n\\begin{align*}\n\\begin{split}\n\\left[\\begin{array}{cc}K^T & R^T \\\\ R & 0 \\\\\n\\end{array} \\right]\n\\left[\n\\begin{array}{c} c \\\\ \\lambda \n\\end{array}\\right] = \\left[\n\\begin{array}{c} L\\\\G\n\\end{array}\\right].\n\\end{split}\n\\end{align*}\nIt is therefore enough to consider the linear case.\nIf we assume for simplicity that $V=W$ and that the form $a$ is bilinear, symmetric, continuous and $V$-elliptic, existence of a discrete solution\nfollows from the Lax-Milgram lemma. On the other hand, the ellipticity ensures uniqueness of the component $c$, which can \nbe retrieved by a least squares solution of the above system \\cite{Awanou2003}. \nThe Lagrange multiplier $\\lambda$ may not be unique.\nTo avoid \nsystems of large size, a variant of the augmented Lagrangian algorithm is used. 
\nFor this, we consider the sequence of problems\n\\begin{align}\n\\begin{split} \n\\left(\\begin{array}{cc}K^T & R^T \\\\ R & -\\mu M\n\\end{array} \\right) \\left[\n\\begin{array}{c} \\mathbf{c}^{(l+1)} \\\\ \\mathbf{\\lambda}^{(l+1)}\n\\end{array}\\right] = \\left[\n\\begin{array}{c} L\\\\ G-\\mu M\\lambda^{(l)}\n\\end{array}\\right], \\label{iter1}\n\\end{split}\n\\end{align}\nwhere $\\lambda^{(0)}$ is a suitable initial guess, for example $\\lambda^{(0)}=0$,\n$M$ is a suitable matrix and $\\mu>0$ is a small parameter taken in practice of the \norder of $10^{-5}$. It is possible to solve for $\\mathbf{c}^{(l+1)}$ in terms of\n$\\mathbf{c}^{(l)}$.\nA uniform convergence rate in $\\mu$ for this algorithm was shown in \\cite{Awanou2005}.\n\n\n\\section{Variational formulations}\nThe BFO iterative method for solving \\eqref{m1} has a clear variational formulation as it consists in solving a sequence\nof Poisson problems:\n\\begin{equation} \\label{BFO1}\n\\Delta u_{k+1} = \\sqrt{(\\Delta u_k)^2 + 2(f-\\det D^2 u_k)},\n\\end{equation}\nfor the two dimensional case \\cite{Benamou2010}, extended here to three dimensions as\n\\begin{equation}\n\\Delta u_{k+1} = ((\\Delta u_k)^3 + 9(f-\\det D^2 u_k))^{\\frac{1}{3}},\n\\end{equation}\n with $u_{k+1}=g$ on $\\partial \\Omega$.\nSince, for a convex function $u$, \n\\begin{equation} \\label{lemdet}\n\\det D^2 u \\leq \\frac{1}{n^n} (\\Delta u)^n,\n\\end{equation}\nthe above formulas enforce partial convexity, since $\\Delta u \\geq 0$ is a necessary condition for the Hessian of $u$ to be\npositive semidefinite. 
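As a sanity check of \\eqref{BFO1}, an exact solution of \\eqref{m1} is a fixed point of the iteration: the right-hand side reproduces $\\Delta u$. The following minimal sketch verifies this with centered finite differences for the smooth radial solution of Test 1 in Section \\ref{Numerical}, assuming the unit square as domain; it is illustrative only and is not the solver used for the numerical results below.

```python
import math

# Fixed-point check for the 2D BFO iteration: at an exact solution of
# det D^2 u = f, the quantity sqrt((Lap u)^2 + 2(f - det D^2 u)) equals
# Lap u, so the exact solution is (up to discretization error) a fixed point.
# Data of Test 1 (smooth radial solution), assumed posed on the unit square:
u = lambda x, y: math.exp((x * x + y * y) / 2.0)
f = lambda x, y: (1.0 + x * x + y * y) * math.exp(x * x + y * y)

h = 0.01  # finite difference step

def second_diffs(x, y):
    # centered second differences for u_xx, u_yy and the mixed derivative u_xy
    uxx = (u(x + h, y) - 2 * u(x, y) + u(x - h, y)) / h**2
    uyy = (u(x, y + h) - 2 * u(x, y) + u(x, y - h)) / h**2
    uxy = (u(x + h, y + h) - u(x + h, y - h)
           - u(x - h, y + h) + u(x - h, y - h)) / (4 * h**2)
    return uxx, uyy, uxy

worst = 0.0
for i in range(1, 10):
    for j in range(1, 10):
        x, y = i / 10.0, j / 10.0
        uxx, uyy, uxy = second_diffs(x, y)
        lap = uxx + uyy
        det = uxx * uyy - uxy * uxy
        rhs = math.sqrt(lap * lap + 2.0 * (f(x, y) - det))
        worst = max(worst, abs(rhs - lap))
print(worst)  # small: the exact solution is a discrete fixed point
```

In an actual BFO solver, each step would instead solve the Poisson problem with this right-hand side and $u_{k+1}=g$ on the boundary, starting from an arbitrary initial guess.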
We note that the constant 2 in \\eqref{BFO1} may be changed to 4.\nWe next discuss\nNewton's method and the vanishing moment method.\nThe use of Newton's method for proving existence of a solution of \\eqref{m1} appeared in \\cite{Bakelman65}, combined with a method of continuity\nargument, and more recently for approximation with a direct approach by finite differences in \\cite{Loeper2005} for a Monge-Amp\\`ere equation on the torus.\nWe will extend the proof of convergence of Newton's method of \\cite{Loeper2005} in H$\\ddot{\\text{o}}$lder spaces on bounded smooth domains.\nWe then characterize the Newton iterates as solutions of variational problems, and a solution of \\eqref{m1} is also shown to \nbe characterized by a variational formulation for which we derive error estimates for finite element approximations.\n\n\\subsection{Newton's method}\n\nWe denote by $C^k(\\Omega)$ the set of all functions having all derivatives of order $\\leq k$ continuous on $\\Omega$, where\n$k$ is a\nnonnegative integer or infinity, and by $C^k(\\tir{\\Omega})$ the set of all functions in $C^k(\\Omega)$\nwhose derivatives of order $\\leq k$ have continuous\nextensions to $\\tir{\\Omega}$. The norm in $C^k(\\Omega)$ is given by\n$$\n||u||_{C^k(\\Omega)} = \\sum_{j=0}^k \\, \\text{sup}_{|\\beta|=j} \\text{sup}_{\\Omega} |D^{\\beta}u(x)|.\n$$\nA function $u$ is said to be uniformly H$\\ddot{\\text{o}}$lder continuous with exponent $\\alpha, 0 <\\alpha \\leq 1$ in $\\Omega$ if\nthe quantity\n$$\n \\text{sup}_{x \\neq y} \\frac{|u(x)-u(y)|}{|x-y|^{\\alpha}}\n$$\nis finite. \nThe space $C^{k,\\alpha}(\\tir{\\Omega})$\nconsists of functions whose $k$-th order derivatives are uniformly H$\\ddot{\\text{o}}$lder\ncontinuous with exponent $\\alpha$ in $\\Omega$. 
It is a Banach space with norm\n$$\n||u||_{C^{k,\\alpha}(\\tir{\\Omega})} = ||u||_{C^k(\\Omega)} + \\text{sup}_{|\\beta|=k} \\text{sup}_{x \\neq y} \\frac{|D^{\\beta}u(x)-\nD^{\\beta} u(y)|}{|x-y|^{\\alpha}}.\n$$\nNext note that for any $n \\times n$ matrix $A$, $\\det A = 1\/n (\\text{cof} \\, A):A$, \nwhere $\\text{cof} \\, A$ is the matrix of cofactors of $A$,\nand for two $n \\times n$ matrices $M, N$, $M:N = \\sum_{i,j=1}^n M_{ij} N_{ij}$ is the Frobenius inner product of\n$M$ and $N$.\nFor any sufficiently smooth matrix field $A$ and vector field $v$,\n $\\div (A^T v) = (\\div A) \\cdot v + A:D v$. \nHere the divergence of a matrix field is the divergence operator applied row-wise.\n If we put $v= D u$, then \n$\\det D^2 u= 1\/n (\\text{cof} \\ D^2 u):D^2 u =1\/n \\, (\\text{cof} \\ D v): D v $ and\n$\\div \\big((\\text{cof} \\ D v)^T v\\big) = \\div (\\text{cof} \\ D v) \\cdot v + (\\text{cof} \\ D v): D v.$\nBut $\\div \\text{cof} \\ D v =0$, cf. for example \\cite{Evans1998} p. 440. Hence, \nsince $D^2 u$ and $\\text{cof} \\ D^2 u$ are symmetric matrices,\n\\begin{equation}\\label{detform}\n\\det D^2 u = \\frac{1}{n} (\\text{cof} \\ D^2 u): D^2 u =\n\\frac{1}{n} \\div\\big((\\text{cof} \\ D^2 u) D u \\big). \n\\end{equation}\nPut $F(u) = \\det D^2 u -f$. The operator $F$ maps $C^{m,\\alpha}$ into $C^{m-2,\\alpha}, m\\geq 2$.\nThis can be seen from the properties of the H$\\ddot{\\text{o}}$lder norm of a product \\cite{Chen98}, p. 
18.\nWe have\n$F'(u) w = (\\text{cof} \\ D^2 u): D^2 w = \\div \\big((\\text{cof} \\ D^2 u) D w \\big) $ for $u,w$ \nsufficiently smooth.\nThe Newton iterates can then equivalently be written\n\\begin{align*}\nF'(u_k) (u_{k+1}-u_k) &= -F(u_k), \\\\\n\\div \\big((\\text{cof} \\ D^2 u_k) (D u_{k+1}-D u_k)\\big) & = - \\frac{1}{n} \\div\\big((\\text{cof} \\ D^2 u_k) D u_k \\big)+f, \\\\\n(\\text{cof} \\ D^2 u_k) :(D^2 u_{k+1}-D^2 u_k) & = - \\det D^2 u_k +f.\n\\end{align*}\nWe will use the last expression as it requires less regularity on\n$u_k$.\nMore precisely, we will consider the following damped version of Newton's method: \nGiven an initial guess $u_0$, we consider the sequence $u_k$ defined by\n\\begin{align}\\label{newtonseq}\n(\\text{cof} \\ D^2 u_k) :D^2 \\theta_k & = \\frac{1}{\\tau} (f-f_k), \\quad f_k = \\det D^2 u_k, \\quad \\theta_k = u_{k+1}- u_k, \n\\end{align}\nfor a parameter $\\tau \\geq 1$. Our numerical results however use only $\\tau=1$. \n\nWe recall that the domain $\\Omega$ is uniformly convex \\cite{Hartman66} if\nthere exists a number $m_0>0$\nsuch that through every point $x_0 \\in \\partial \\Omega$, there passes a supporting hyperplane\n$\\pi$ of $\\Omega$ satisfying $\\text{dist} (x, \\pi) \\geq m_0 |x-x_0|^2$ for $x \\in \\partial \\Omega$.\nWe will need the following global regularity result \\cite{Trudinger08}.\n\n\\begin{thm}\\label{MongeSchauder}\nLet $\\Omega$ be a uniformly convex domain in $\\mathbb{R}^n$,\nwith boundary in $C^3$. Suppose $\\phi \\in C^3(\\tir{\\Omega})$, $\\inf f > 0$, and $f \\in C^{\\alpha}$ for some\n$\\alpha \\in (0, 1)$. 
Then \\eqref{m1} has a convex solution $u$ which satisfies the a priori estimate\n$$\n||u||_{C^{2,\\alpha}(\\tir{\\Omega})} \\leq C,\n$$\nwhere $C$ depends only on $n, \\alpha, \\ \\text{inf} \\ f, \\Omega, ||f||_{C^{\\alpha}(\\tir{\\Omega})}$ and $||\\phi||_{C^3}$.\n\\end{thm}\nAccording to \\cite{Trudinger08}, all assumptions in the above theorem are sharp.\nWe have the following analogue of Theorem 2.1 in \\cite{Loeper2005}.\n\\begin{thm}\nLet $\\Omega$ be a uniformly convex domain in $\\mathbb{R}^2$,\nwith boundary in $C^3$. Let $0 < m \\leq f \\leq M, f\\in C^{\\alpha}$ for some $m, M >0$ and $\\alpha \\in (0, 1)$.\nThere exists $\\tau \\geq 1$ depending on $m,f$, such that if $u_k$ is the sequence defined by \\eqref{newtonseq}, it converges in\n$C^{2,\\beta}$ to a solution $u$ of \\eqref{m1}, for every $\\beta < \\alpha$.\n\\end{thm}\n\n\\begin{proof}\nThe proof essentially depends on global H$\\ddot{\\text{o}}$lder regularity of the solution of the Monge-Amp\\`ere equation.\nTheorem \\ref{MongeSchauder} essentially gives the conditions under which such a regularity result holds on a bounded domain.\nPart of the proof has been more or less repeated in \\cite{Oberman2010a}.\nWe note that the iterates may converge to either a concave or a convex solution if both exist.\n\\end{proof}\n\n\\subsection{Variational formulation}\nUsing the divergence form of \\eqref{newtonseq}, \nthe iterates can be characterized as solutions of the problems: Find $u_{k+1} \\in H^1(\\Omega), u_{k+1}=g$ on $\\partial \\Omega$ and\n\\begin{align}\n\\begin{split}\n\\int_{\\Omega} (\\text{cof} \\ D^2 u_k) D u_{k+1} \\cdot D w \\ \\ud x & =\n\\frac{n-1}{n} \\int_{\\Omega} (\\text{cof} \\ D^2 u_k) D u_{k} \\cdot D w \\ \\ud x \\\\\n& \\qquad \\qquad - \\int_{\\Omega} f w \\ \\ud x , \\quad \\forall w \\in H_0^1(\\Omega). 
\\label{nvar1}\n\\end{split}\n\\end{align}\nWith an initial guess $u_0$ which solves $\\Delta u = n f^{1\/n}$, for $f$ in $C^{\\alpha}, 0 < \\alpha <1$, we have a sequence of uniformly elliptic problems \n(see the proof of Theorem 2.1 in \\cite{Loeper2005}) and the problems \\eqref{newtonseq} and \\eqref{nvar1} are equivalent.\nWe then know that the iterates converge to a solution of \\eqref{m1} which solves the formal variational limit of \\eqref{nvar1}:\nFind $u \\in H^n(\\Omega)$, $u=g$ on $\\partial \\Omega$ such\nthat\n\\begin{equation}\n\\int_{\\Omega} (\\text{cof} \\ D^2 u) D u \\cdot D w \\ \\ud x =\n-n \\int_{\\Omega} f w \\ \\ud x , \\quad \\forall w \\in H^n(\\Omega) \\cap H_0^1(\\Omega) \\label{var1}.\n\\end{equation}\nThe problem \\eqref{var1} is not well posed in general. For example if $g=0$, then both $u$ and $-u$ are \nsolutions. \n\nTo see that the left hand side is bounded for $u \\in H^n(\\Omega)$, notice that for $n=2$\n\\begin{align*}\n\\bigg| \\int_{\\Omega} (\\text{cof} \\ D^2 u) D u \\cdot D w \\bigg| \\leq C ||D^2u||_{L^2(\\Omega)} ||Du||_{L^4(\\Omega)} ||Dw||_{L^4(\\Omega)}.\n\\end{align*}\nNext, for $u \\in H^2(\\Omega), \\partial u\/\\partial x_i \\in H^1(\\Omega), i=1,\\ldots,n$, and by the continuity of the embedding of $H^1(\\Omega)$\nin $L^q(\\Omega)$ for all $q \\geq 1$ when $n=2$,\nthe right hand side above is bounded by \n$C ||D^2u||_{L^2(\\Omega)} ||u||_{H^2(\\Omega)} ||w||_{H^2(\\Omega)}$.\nHowever in three dimensions, the term $\\text{cof} \\ D^2 u$ involves the product of two second order derivatives. For it to be bounded,\nwe will need $u \\in H^3(\\Omega)$ so that $\\partial^2 u\/\\partial x_i \\partial x_j \\in H^1(\\Omega), i,j=1,\\ldots,n$, and we can use the\nembedding of $H^1(\\Omega)$\nin $L^q(\\Omega)$ for $1\\leq q \\leq 6$ when $n=3$. \nWe give below error estimates only for the two dimensional case using techniques borrowed from \\cite{Feng2009}. 
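The divergence form \\eqref{detform}, which underlies \\eqref{nvar1} and \\eqref{var1}, can be verified exactly on polynomials. The sketch below (illustrative only; the convex polynomial $u = x^4 + y^4 + x^2 y^2$ is an arbitrary choice) compares $\\det D^2 u$ with $\\frac{1}{2} \\div\\big((\\text{cof} \\ D^2 u) D u \\big)$ for $n=2$ using exact polynomial arithmetic.

```python
from collections import defaultdict

# Polynomials in x, y represented as dicts {(i, j): coefficient of x^i y^j}.
def diff(p, var):
    # derivative with respect to x (var=0) or y (var=1)
    q = defaultdict(float)
    for (i, j), a in p.items():
        e = (i, j)[var]
        if e > 0:
            q[(i - 1, j) if var == 0 else (i, j - 1)] += a * e
    return dict(q)

def mul(p, q):
    r = defaultdict(float)
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            r[(i + k, j + l)] += a * b
    return dict(r)

def add(*ps):
    r = defaultdict(float)
    for p in ps:
        for mono, a in p.items():
            r[mono] += a
    return {mono: a for mono, a in r.items() if a != 0}

def scale(p, s):
    return {mono: s * a for mono, a in p.items()}

# sample convex polynomial u = x^4 + y^4 + x^2 y^2
u = {(4, 0): 1.0, (0, 4): 1.0, (2, 2): 1.0}
ux, uy = diff(u, 0), diff(u, 1)
uxx, uxy, uyy = diff(ux, 0), diff(ux, 1), diff(uy, 1)

# left-hand side: det D^2 u = uxx*uyy - uxy^2
lhs = add(mul(uxx, uyy), scale(mul(uxy, uxy), -1.0))

# right-hand side: (1/2) div( (cof D^2 u) D u ), where in two dimensions
# cof D^2 u = [[uyy, -uxy], [-uxy, uxx]]
flux_x = add(mul(uyy, ux), scale(mul(uxy, uy), -1.0))
flux_y = add(scale(mul(uxy, ux), -1.0), mul(uxx, uy))
rhs = scale(add(diff(flux_x, 0), diff(flux_y, 1)), 0.5)

print(lhs == rhs)  # True: the two sides agree monomial by monomial
```

The same identity, applied with the divergence-free property of $\\text{cof} \\ D^2 u$, is what allows the integration by parts leading to \\eqref{nvar1} and \\eqref{var1}.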
\nWe note that\nin the spline element method, it is possible to impose the $C^2$ continuity requirements for a conforming finite element subspace of\n$H^3(\\Omega)$. However, for a smooth solution, $C^1$ continuity was enough for numerical evidence of convergence.\n\n\\subsection{Error estimate for 2D conforming approximation of a smooth solution}\nIn this section, we will assume that $\\Omega$ is a two dimensional domain.\nPut $V=H^2(\\Omega)$ and $V_0 = H^2(\\Omega) \\cap H_0^1(\\Omega)$.\nLet $V^h$ be a conforming finite element subspace of $H^2(\\Omega)$, \n$V^h_0$ be a conforming finite element subspace of $H^2(\\Omega) \\cap H_0^1(\\Omega)$\nwith approximation properties\n\\begin{equation} \\label{ap}\n\\text{inf}_{v_h \\in V^h} ||v-v_h||_j \\leq C_1 h^{p-j} ||v||_4, j =0,1,2 \n\\end{equation}\nfor all $v \\in H^4(\\Omega)$ for some $p\\geq 4$.\n\nFor example, in this paper, we take $V_h$ as the spline space of degree $d$ and smoothness $r \\geq 1$\n\\begin{align*}\nS^r_d(\\mathcal{T})=\\{p \\in C^r(\\Omega), \\ p_{|t} \\in P_d, \\ \\forall t \\in \\mathcal{T} \\},\n\\end{align*}\nwhere $P_d$ denotes the space of polynomials of degree $d$ in two variables and\n$\\mathcal{T}$ denotes the triangulation of the domain $\\Omega$.\nIt is known \\cite{Lai2007} that, for $d \\geq 3r+2$ and $0 \\leq m \\leq d$, \nthere exists a linear quasi-interpolation operator\n$Q$ mapping $L_1(\\Omega)$ into the spline space $S^r_d(\\mathcal{T})$ and a constant $C$\nsuch that if $f$ is in the Sobolev space $H^{m+1}(\\Omega),$\n\\begin{equation}\n| f -Q f |_{k} \\leq C h^{m+1-k} |f|_{m+1}, \\label{schum}\n\\end{equation}\nfor $0 \\leq k \\leq m$. \nFor our purposes, $m=3$ and $p=4$.\nIf $\\Omega$ is convex, the constant $C$ depends only on $d,m$ and on the smallest\nangle $\\theta_{\\triangle}$ in $\\mathcal{T}$. In the nonconvex case, $C$ depends only on \nthe Lipschitz constant associated\nwith the boundary of $\\Omega$.\nIt is also known, cf. 
\\cite{Dyer2003}, that\nthe full approximation property for spline spaces holds for certain combinations of\n$d$ and $r$ on special triangulations.\nWe have the following theorem.\n\\begin{thm} \\label{errest}\nAssume that problem \\eqref{m1} has a solution $u \\in H^4(\\Omega)$, hence in $C^2(\\tir{\\Omega})$ by Sobolev embedding. Then\nthe discrete problem: find $u_h \\in V_h$, $u_h=g$ on $\\partial \\Omega$ such\nthat\n\\begin{equation}\n-\\frac{1}{2}\\int_{\\Omega} (\\text{cof} \\ D^2 u_h) D u_h \\cdot D w_h \\ \\ud x =\n \\int_{\\Omega} f w_h \\ \\ud x , \\quad \\forall w_h \\in V_0^h, \\label{var1h}\n\\end{equation}\nhas a unique solution in a neighborhood of $I_h u$, where $I_h$\nis an interpolation operator associated with $V_h$. Moreover $||u_h-I_h(u)||_{2}$ converges at a rate of \nat least O($h$). \n\\end{thm}\nThe proof of the above theorem follows the combined fixed point and linearization method used in \\cite{Feng2009}. \nThe difference here is the assumption that the solution is smooth and the use of an inverse inequality.\nWe start with some\npreliminaries.\n\nWe consider the linear problem: Find \n$v \\in H^1_0(\\Omega)$ such that\n\\begin{align}\n\\begin{split} \\label{m11}\n\\int_{\\Omega} (\\text{cof} \\ D^2 u) D v \\cdot D w & = \\int_{\\Omega} \\phi \\, w, \\forall w \\in H^1_0(\\Omega), \\\\\n\\end{split}\n\\end{align}\nfor $\\phi \\in L^2(\\Omega)$.\n\nSince $u \\in C^2(\\Omega), \\exists \\, m, M > 0$ such that\n$$\nm \\leq \\frac{\\partial^2 u}{\\partial x_i \\partial x_j}(x) \\leq M, \\, i,j=1,2, \\forall x \\in \\Omega.\n$$ \nThe existence and uniqueness of a solution to \\eqref{m11} follows immediately from the Lax-Milgram lemma. \nSimilarly, there exists a unique solution to the problem: Find $v_h \\in V^h_0$ such that\n\\begin{align}\n\\begin{split} \\label{m11h}\n\\int_{\\Omega} (\\text{cof} \\ D^2 u) D v_h \\cdot D w_h & = \\int_{\\Omega} \\phi\\, w_h, \\forall w_h \\in V_0^h. 
\n\\end{split}\n\\end{align} \nWe note that the ellipticity constants above depend on $u$ and that, since $\\Omega$ is assumed convex, $v\\in \nH^2(\\Omega)$ by elliptic regularity.\n\nWe define a bilinear form on $H_0^1(\\Omega) \\times \nH_0^1(\\Omega)$ by\n\\begin{equation}\nB[v,w] = \\int_{\\Omega} (\\text{cof} \\ D^2 u) D v \\cdot D w,\n\\end{equation}\nand for any $v_h \\in V^h$, $v_h=g$ on $\\partial \\Omega$, we define $T(v_h)$ by \n\\begin{equation}\nB[v_h-T(v_h),\\psi_h] = \\frac{1}{2}\\int_{\\Omega} (\\text{cof} \\ D^2 v_h) D v_h \\cdot D \\psi_h \\ \\ud x \n+ \\int_{\\Omega} f \\psi_h \\ \\ud x , \\quad \\forall \\psi_h \\in V_0^h.\n\\end{equation}\nSince the solution of the above problems exists and is in $V^h_0$, $T(v_h) \\in V^h$, $T(v_h)=g$ on $\\partial \\Omega$. A fixed point of the \nnonlinear operator $T$ corresponds to a solution of \\eqref{var1h} and conversely if $v_h$ is a solution of \n\\eqref{var1h}, then $v_h$ is a fixed point of $T$. We will show that $T$ has a unique fixed point in a \nneighborhood of $I_h(u)$. Put\n$$\nB_h(\\rho) = \\{v_h \\in V_h, v_h=g \\, \\text{on} \\, \\partial \\Omega, ||v_h-I_h u||_2 \\leq \\rho \\}.\n$$\nFirst, we note that\n\\begin{equation}\nB[v_h-T(v_h),\\psi_h] = -\\int_{\\Omega} \\det D^2 v_h \\psi_h \\ \\ud x \n+ \\int_{\\Omega} f \\psi_h \\ \\ud x , \\quad \\forall \\psi_h \\in V_0^h.\n\\end{equation}\n\\begin{lemma} There exists $C_2>0$ such that\n$$\n||I_h u -T(I_h u)||_2 \\leq C_2 h^{p-3} ||u||_4.\n$$\n\\end{lemma}\n\\begin{proof}\nPut $w_h=I_h u -T(I_h u)$.\nUsing $\\det D^2 u =f$, we have\n\\begin{equation*}\nB[w_h,w_h] = \\int_{\\Omega} \\big( \\det D^2 u - \\det D^2 I_h u \\big) w_h \\ud x. \n\\end{equation*}\nNow, let $I_h^{\\epsilon} (u)$ be a mollifier of $I_h u$. 
We have\n\\begin{align*}\nB[w_h,w_h] & = \\int_{\\Omega} \\big(\\det D^2 u - \\det D^2 I_h^{\\epsilon} (u) \\big) w_h \\ud x+ \\int_{\\Omega} \\big(\\det D^2 I_h^{\\epsilon} (u) - \\det D^2 I_h u \\big) w_h \\ud x\\\\\n& = \\int_{\\Omega} \\bigg( \\text{cof} \\big( t D^2 u +(1-t) D^2 I_h^{\\epsilon} (u)\\big): D^2(u-I_h^{\\epsilon} (u)) \\bigg) w_h \\ud x\\\\\n& \\qquad \\qquad + \\int_{\\Omega} \\big(\\det D^2 I_h^{\\epsilon} (u) - \\det D^2 I_h u \\big) w_h \\ud x\\, \\text{for some} \\, t \\in [0,1] \\\\\n& \\leq ||\\text{cof} \\big( t D^2 u +(1-t) D^2 I_h^{\\epsilon} (u)\\big)||_{\\infty} ||u-I_h^{\\epsilon} (u)||_2 ||w_h||_0 \\\\\n& \\qquad \\qquad \\qquad + ||\\det D^2 I_h^{\\epsilon} (u) - \\det D^2 I_h u||_0 ||w_h||_0\\\\\n& \\leq C ||D^2 u||_{\\infty}||u-I_h(u)||_2 ||w_h||_0 \\leq C M h^{p-2}||u||_4 ||w_h||_0,\n\\end{align*}\nsince $||\\det D^2 I_h^{\\epsilon} (u) - \\det D^2 I_h u||_0 \\to 0$ as $\\epsilon \\to 0$. We conclude that\n$$\n||w_h||_1^2 \\leq C h^{p-2}||u||_4 ||w_h||_0, \\ \\text{and} \\ ||w_h||_2 \\leq \\frac{C}{h} ||w_h||_1 \\leq C h^{p-3}||u||_4,\n$$\nusing the coercivity of the bilinear form $B$ with a constant $C$ which depends on $m$ and $M$,\nand an inverse estimate.\n\\end{proof}\n\n\\begin{lemma} There exist $h_0 > 0$ and $\\rho_0 = \\rho_0(h_0) > 0$ such that $T$ is a contraction mapping in the ball\n$B_h(\\rho_0)$ with a contraction factor 1\/2.\n\\end{lemma}\n\\begin{proof} For $v_h, w_h \\in B_h(\\rho_0)$, and $\\psi_h \\in V_0^h$, let $v_h^{\\epsilon}$ and $w_h^{\\epsilon}$\ndenote mollifiers of $v_h$\nand $w_h$ respectively.\n\\begin{align*}\nB[T(v_h)-T(w_h), \\psi_h] & = B[T(v_h)-v_h, \\psi_h]+ B[v_h-w_h, \\psi_h]+B[w_h-T(w_h), \\psi_h] \\\\\n& = \\int_{\\Omega} (\\det D^2 v_h - \\det D^2 w_h) \\psi_h \\ud x \\\\\n& \\qquad \\qquad + \\int_{\\Omega} \n(\\text{cof} \\ D^2 u) (D v_h - D w_h) \\cdot D \\psi_h \\ \\ud x \n \\\\\n& = A_{\\epsilon} + \\int_{\\Omega} \n(\\text{cof} \\ D^2 u) (D v_h - D w_h) \\cdot D \\psi_h \\ \\ud x \\\\\n& \\qquad \\qquad + 
\n\\int_{\\Omega} (\\det D^2 v_h^{\\epsilon} - \\det D^2 w_h^{\\epsilon}) \\psi_h \\ud x,\n\\end{align*}\nwhere $A_{\\epsilon} = \\int_{\\Omega} (\\det D^2 v_h - \\det D^2 v_h^{\\epsilon}) \\psi_h \\ud x\n+ \\int_{\\Omega} (\\det D^2 w_h^{\\epsilon} - \\det D^2 w_h) \\psi_h \\ud x \\to 0$ as $\\epsilon \\to 0$.\nWe have, for some $t \\in [0,1]$,\n\\begin{align*}\nB[T(v_h)-T(w_h), \\psi_h] & = \n A_{\\epsilon} + \\int_{\\Omega} \n(\\text{cof} \\ D^2 u) (D v_h - D w_h) \\cdot D \\psi_h \\ \\ud x \\\\\n& \\qquad \\qquad - \\int_{\\Omega} \\text{cof} \\big( t D^2 v_h^{\\epsilon} + (1-t) D^2 w_h^{\\epsilon}\\big) (D v_h^{\\epsilon} - D w_h^{\\epsilon}) \\cdot D \\psi_h\n\\ud x\\\\\n& = \\int_{\\Omega} \\big(\\text{cof} \\ D^2 u- \\text{cof} \\big( t D^2 v_h^{\\epsilon} + (1-t) D^2 w_h^{\\epsilon}\\big) \\big) (D v_h - D w_h) \\cdot D \\psi_h \\ \\ud x \n\\\\\n& \\qquad \\qquad +A_{\\epsilon} + B_{\\epsilon},\n\\end{align*}\nwhere $B_{\\epsilon}=\\int_{\\Omega} \\text{cof} \\big( t D^2 v_h^{\\epsilon} + (1-t) D^2 w_h^{\\epsilon}\\big) (\nD v_h - D w_h -D v_h^{\\epsilon} + D w_h^{\\epsilon}) \\cdot D \\psi_h\n\\ud x$ $\\to 0$ as $\\epsilon \\to 0$.\nPut $\\Psi_{\\epsilon}=\\text{cof} \\ D^2 u- \\text{cof} \\big( t D^2 v_h^{\\epsilon} + (1-t) D^2 w_h^{\\epsilon}\\big)$. 
We have \n\\begin{align*}\n||\\Psi_{\\epsilon}||_0 & = \n||D^2 u- D^2 v_h^{\\epsilon} - (1-t) ( D^2 w_h^{\\epsilon}-D^2 v_h^{\\epsilon})||_0 \\leq ||D^2 u- D^2 v_h^{\\epsilon}||_0+ ||D^2 w_h^{\\epsilon}-D^2 v_h^{\\epsilon}||_0 \\\\\n& \\leq ||D^2 u - D^2 I_h u||_0 + ||D^2 I_h u - D^2 v_h||_0+ || D^2 v_h - D^2 v_h^{\\epsilon}||_0\n+||D^2 w_h^{\\epsilon}-D^2 w_h||_0 \\\\\n& \\qquad +||D^2 w_h - D^2 v_h||_0+ ||D^2 v_h-D^2 v_h^{\\epsilon}||_0\\\\\n& \\leq C h^{p-2} ||u||_4 + 3 \\rho_0 + 2 ||v_h-v_h^{\\epsilon}||_2 +||w_h-w_h^{\\epsilon}||_2.\n\\end{align*}\nAs $\\epsilon \\to 0$, we obtain \n\\begin{align*}\nB[T(v_h)-T(w_h), \\psi_h] & \\leq C(h^{p-2} ||u||_4 + \\rho_0) ||v_h-w_h||_2 ||\\psi_h||_2.\n\\end{align*}\nBy coercivity and an inverse estimate,\n$$\n||T(v_h)-T(w_h)||_2 \\leq C(h^{p-3} ||u||_4 + \\frac{\\rho_0}{h}) ||v_h-w_h||_2.\n$$\nFirst choose $h$\nsuch that $C h^{p-3} ||u||_4 \\leq 1\/4$, then choose $\\rho_0$ such that $C \\rho_0\/h \\leq 1\/4$. The result follows.\n\\end{proof}\n\\begin{proof} (of Theorem \\ref{errest})\nLet $\\rho_1= 2 C_2 h^{p-3} ||u||_4 $. We first show that $T$ maps $B_h(\\rho_1)$ into itself. For $v_h \\in B_h(\\rho_1)$,\n\\begin{align*}\n||I_h u -T(v_h)||_2 & \\leq ||I_h u -T(I_h u)||_2 + ||T(I_h u) -T(v_h)||_2 \\leq \\frac{\\rho_1}{2} + \\frac{1}{2} ||I_h u - v_h||_2 \\\\\n& \\leq \\frac{\\rho_1}{2} + \\frac{\\rho_1}{2} = \\rho_1. \n\\end{align*}\nBy the Banach fixed point theorem, $T$ has a unique fixed point in\n$B_h(\\rho_1)$ which is $u_h$, i.e. $T(u_h)=u_h$. Next,\n\\begin{align*}\n||u-u_h||_2 & \\leq ||u-I_h u||_2 + ||I_h u - u_h||_2 \\leq C_1 h^{p-2} ||u||_4 + ||I_h u - T(u_h)||_2 \\\\\n& \\leq C_1 h^{p-2} ||u||_4 + 2 C_2 h^{p-3} ||u||_4 \\leq C h^{p-3} ||u||_4,\n\\end{align*}\nfor $h$ sufficiently small. 
We have the result.\n\\end{proof}\n\n\n\n\\subsection{Vanishing moment methodology}\nThe vanishing moment approach to \\eqref{m1} consists in computing a solution of the\nsingular perturbation problem\n\\begin{align} \\label{var2}\n- \\epsilon \\Delta^2 u + \\det D^2 u = f,\n\\ \\text{in} \\ \\Omega, \\quad u=g, \\ \\Delta u = \\epsilon^2 \\ \\text{on} \\ \\partial \\Omega.\n\\end{align}\nIt is an analogue of the singular perturbation problem\n$$\n\\epsilon \\Delta^2 u -\\Delta u = f \\ \\text{in} \\ \\Omega, \\quad u=0, \\ \\frac{\\partial u}{\\partial n} = 0 \\ \\text{on} \\ \\partial \\Omega,\n$$\nwhich was addressed numerically in \\cite{Winther2001} and also in the spline element method \\cite{Awanou2008}. The analogy holds as many properties\nof the Laplace operator have a counterpart for the Monge-Amp\\`ere operator.\n\nNewton's iterates in the vanishing moment formulation consist in solving the following sequence of problems:\nFind $u_{k+1} \\in H^2(\\Omega)$ with $u_{k+1}=g$ on $\\partial \\Omega$ such that\n\\begin{align}\\label{newtonvm}\n\\begin{split}\n\\epsilon \\int_{\\Omega} \\Delta u_{k+1} \\Delta v \\ud x+ \\int_{\\Omega} (\\text{cof} \\ D^2 u_k) D u_{k+1} \\cdot D v \\ \\ud x = \\frac{n-1}{n} & \\int_{\\Omega} (\\text{cof} \\ D^2 u_k) D u_{k} \\cdot D v \\ \\ud x \\\\\n\\qquad \\qquad + \\epsilon^3 \\int_{\\partial \\Omega} \\frac{\\partial v}{\\partial n} \\ud s \n-\\int_{\\Omega} f v \\ \\ud x , \\quad \\forall v \\in H^2(\\Omega) \\cap H_0^1(\\Omega).\n\\end{split}\n\\end{align}\nFormally, as $\\epsilon$ approaches $0$, the solution of the above problem degenerates to the solution of \\eqref{nvar1}, a result which will be illustrated numerically in the next section.\n\n\n\n\n\n\n\\section{Numerical results} \\label{Numerical}\nThe first two subsections are devoted to two-dimensional and three-dimensional numerical results, respectively.\nThe three methods are compared on three test functions for the 2D experiments.\n\nTest 1: A smooth solution 
$u(x,y)=e^{(x^2+y^2)\/2}$ so that \n$f(x,y)=(1+x^2+y^2)e^{(x^2+y^2)}$ and $g(x,y)=e^{(x^2+y^2)\/2}$ on $\\partial \\Omega$.\n\nTest 2: A solution not in $H^2(\\Omega)$, $u(x,y)=-\\sqrt{2-x^2-y^2}$ \nso that $f(x,y)=2\/(2-x^2-y^2)^2$ and $g(x,y)=-\\sqrt{2-x^2-y^2}$ on $\\partial \\Omega$.\n\nTest 3: No exact solution is known. Solutions are either convex or concave. Here\n$f(x,y)=1$ and $g(x,y)=0$. \n\nWe give numerical evidence of the robustness of the spline element method for the singular perturbation problem\nassociated to the vanishing moment methodology. \nFormally, as $\\epsilon$ approaches $0$, the problem \\eqref{var2} degenerates to the problem \\eqref{m1}, which can be solved by Newton's method\nwhen a smooth solution exists. We show here numerically that the solution of \\eqref{newtonvm} \nconverges to that of \\eqref{nvar1} as $\\epsilon$ approaches $0$. Unlike \\cite{Feng2009b},\nthere are no boundary layer issues with the spline element method.\nThese results are illustrated in Tables \\ref{2Drobust} and \\ref{3Drobust}.\n\n\nIn general, the vanishing moment method (with Newton) gives the same result as Newton's method for $\\epsilon = 10^{-9}$.\nIn particular, Newton's method diverges for the nonsmooth solutions of Tests 2 and 3. However, with $\\epsilon$ large,\nwhich means that the equation solved is much further from the actual problem,\ndivergence can be avoided in the vanishing moment methodology. \nWe refer to the problem \\eqref{m1} as reduced\nin the tables.\n\nIn the two-dimensional case of Test 3, both concave and convex solutions can be computed by either changing the initial guess or the structure of\nthe approximations.\n\\begin{enumerate}\n\\item Newton's method: initial guess $\\pm u_0$,\n\n\\item Vanishing moment: $ \\pm \\epsilon$ and initial guess $\\pm u_0$,\n\n\\item BFO iterative method: $\\Delta u_{k+1} = \\pm \\sqrt{(\\Delta u_k)^2 + 2(f-\\det D^2 u_k)}$. 
\n\\end{enumerate}\nSince Newton's method diverges for Test 3, we illustrate this feature of the method on \na fourth test on a non-square domain. This also helps contrast with finite difference methods.\n\nTest 4: The domain is the unit circle which is discretized with a Delaunay triangulation\nwith 824 triangles and the test functions are $u(x,y)=e^{(x^2+y^2)\/2}$ \n(convex) and $u(x,y)=-e^{(x^2+y^2)\/2}$ (concave) with the corresponding right hand side and boundary conditions.\n\nSince none of the methods perform convincingly on Test 2 in the spline element framework, the methods are tested \nin the three-dimensional case on two other test functions, analogues of Test 1 and Test 3.\nWe only consider the performance of the BFO and vanishing moment methods.\n\nTest 5: $u(x,y,z)=e^{(x^2+y^2+z^2)\/3}$ so that\n$f(x,y,z)=8\/81(3+2(x^2+y^2+z^2))e^{(x^2+y^2+z^2)}$ and $g(x,y,z)=e^{(x^2+y^2+z^2)\/3}$ on \n$\\partial \\Omega$. \n\nTest 6: $f(x,y,z)=1$ and $g(x,y,z)=0$. \n\nThe initial guess for Newton's iterations is the solution of the Poisson equation\n$\\Delta u = n f^{1\/n}, \\ n=2,3, \\ \\text{in} \\ \\Omega, \\quad u=g \\ \\text{on} \\ \\partial \\Omega$.\n\nWe also illustrate the performance of the 3D BFO method on a degenerate Monge-Amp\\`ere equation.\n\nTest 7: $f(x,y,z)=0$ and $g(x,y,z)=|x-1\/2|$.\n\n\nIn the two-dimensional case, to approximate a concave solution, one should solve $\\Delta u = -2 \\sqrt{f}$. \nBut unless $u=0$ on $\\partial \\Omega$, as in Test 3, it is not clear which boundary condition to use.\nNote that if $u$ is a smooth convex function, $\\Delta u \\geq 0$. 
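\nIndeed, the choice $\\Delta u = n f^{1\/n}$ is consistent with convexity. Denoting by $\\lambda_1, \\ldots, \\lambda_n \\geq 0$ the eigenvalues of $D^2 u$ for a smooth convex $u$, the arithmetic-geometric mean inequality gives\n$$\n\\Delta u = \\sum_{i=1}^{n} \\lambda_i \\geq n \\Big( \\prod_{i=1}^{n} \\lambda_i \\Big)^{1\/n} = n (\\det D^2 u)^{1\/n} = n f^{1\/n},\n$$\nso the Poisson problem used for the initial guess can be viewed as enforcing the equality case of this inequality.\n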
\nTo compute the concave solution of Test 4,\nwe first solved the Monge-Amp\\`ere equation with the negative of the boundary condition, then used\nthe negative of that solution as an initial guess.\nHowever, a good initial guess could not be found when we uniformly refined a Delaunay triangulation of the circle with 143 triangles, \nbut convergence was obtained with the choice in Test 4, perhaps because the discretized domain is then a better approximation of the smooth one.\nFor the vanishing moment calculations, the initial guess was taken as the biharmonic regularization\nof a suitable Poisson equation, for example,\n$-\\epsilon \\Delta^2 u + \\Delta u = n f^{1\/n}, \\ n=2,3, \\ \\text{in} \\ \\Omega, \\quad u=g, \\ \\Delta u =\\epsilon^2 \\ \\text{on} \\ \\partial \\Omega$.\nWe simply took the zero function as initial guess in the BFO method.\nUnless otherwise stated, we use $C^1$ splines for all the numerical experiments. \nThis includes the BFO iterative method, which requires only solving Poisson equations, since in that case we obtained better\nnumerical results (smooth graphs) for Test 3.\nWe list $n_{\\text{it}}$, the number of iterations of the BFO method.\nWe do not claim that our numerical solutions are convex but that they approximate convex functions.\nConvexity (or concavity) is not explicitly enforced in the numerical iterations.\n\n\\subsection{Two-dimensional Monge-Amp\\`ere equation}\nThe computational domain is the unit square $[0,1]^2$\nwhich is first divided into squares of side length $h$. 
Then each square is \ndivided into two triangles by the diagonal with negative slope.\nAs evidenced in the last line of Table \\ref{BFOsmooth}, we noted a degradation of the performance of the BFO iterative method\neven for a smooth solution when the number of degrees of freedom is large, either for smaller $h$ or higher degree.\nThis may be an indication that the method is not suitable for a general finite element implementation\nbut is more likely a consequence of the conditioning properties of the iterative method \\eqref{iter1}.\nFor Test 2, we display the results for $C^1$ cubic splines, but much higher order accuracy, of the order of $10^{-5}$,\nwas obtained with $C^0$ splines. We caution that in our implementation, this may lead to nonsmooth numerical solutions.\n\nFor Test 3, we display both the graph and the contour of both convex and concave solutions.\nTo get good results with the vanishing moment method, we experimented with a combination of\nthe parameters $\\epsilon$\nand $h$. 
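\nIncidentally, the difficulty in Test 2 can be seen from a direct computation: for $u(x,y)=-\\sqrt{2-x^2-y^2}$,\n$$\nD^2 u = (2-x^2-y^2)^{-3\/2}\\left(\n \\begin{array}{cc}\n 2-y^2 & xy \\\\\n xy & 2-x^2 \\\\\n \\end{array}\n \\right), \\quad \\det D^2 u = \\frac{2}{(2-x^2-y^2)^2},\n$$\nso the second derivatives blow up at the corner $(1,1)$ of the unit square, which is why $u \\notin H^2(\\Omega)$.\n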
\n\n\\begin{table}\n\\begin{tabular}{|c|c|c|c|} \\hline\nd & $L^{2}$ norm & $H^{1}$ norm & $H^{2}$ norm \\\\ \\hline\n3 & 1.0610 $10^{-3}$ & 1.1101 $10^{-2}$ & 1.6383 $10^{-1}$ \\\\ \\hline\n4 & 3.5127 $10^{-5}$ & 4.8553 $10^{-4}$ & 9.0596 $10^{-3}$ \\\\ \\hline\n5 & 4.1572 $10^{-6}$ & 6.5142 $10^{-5}$ & 1.9364 $10^{-3}$ \\\\ \\hline\n6 & 1.9685 $10^{-7}$ & 3.6401 $10^{-6}$ & 1.4774 $10^{-4}$ \\\\ \\hline\n7 & 2.2699 $10^{-8}$ & 4.1498 $10^{-7}$ & 2.2424 $10^{-5}$ \\\\ \\hline\n8 & 1.2430 $10^{-9}$ & 2.2586 $10^{-8}$ & 1.5479 $10^{-6}$ \\\\ \\hline\n\\end{tabular}\n\\bigskip\n\\caption{2D Newton's method for Test 1, $h=1\/2$}\n\\end{table}\n\n\\begin{table}\n\\begin{tabular}{|c|c|c|c|} \\hline\nd & $L^{2}$ norm & $H^{1}$ norm & $H^{2}$ norm \\\\ \\hline\n3 & 1.2809 $10^{-4}$ & 2.6554 $10^{-3}$& 8.9587 $10^{-2}$ \\\\ \\hline\n4 & 1.6278 $10^{-6}$ & 4.5619 $10^{-5}$& 1.7395 $10^{-3}$ \\\\ \\hline\n5 & 1.1531 $10^{-7}$ & 2.3916 $10^{-6}$& 1.3444 $10^{-4}$ \\\\ \\hline\n6 & 1.7705 $10^{-9}$ & 6.8506 $10^{-8}$& 5.5403 $10^{-6}$ \\\\ \\hline\n7 & 1.4548 $10^{-10}$ & 3.7545 $10^{-9}$& 3.9490 $10^{-7}$ \\\\ \\hline\n8 & 8.1014 $10^{-12}$ & 5.3353 $10^{-10}$& 7.2159 $10^{-8}$ \\\\ \\hline\n\\end{tabular}\n\\bigskip\n\\caption{2D Newton's method for Test 1, $h=1\/4$}\n\\end{table}\n\n\\begin{table} \\label{2Drobust}\n\\begin{tabular}{|c|c|c|c|} \\hline\n$\\epsilon$ & $L^{2}$ norm & $H^{1}$ norm & $H^{2}$ norm \\\\ \\hline\n$10^{-3}$ & 1.7271$10^{-3}$ & 2.0910$10^{-2}$ & 8.6338$10^{-1}$ \\\\ \\hline\n$10^{-4}$ & 1.8563$10^{-4}$ & 3.5581$10^{-3}$ & 2.0050$10^{-1}$ \\\\ \\hline\n$10^{-5}$ & 1.8917$10^{-5}$ & 4.0700$10^{-4}$ & 2.4119$10^{-2}$ \\\\ \\hline\n$10^{-6}$ & 1.8182$10^{-6}$ & 4.0775$10^{-5}$ & 2.4388$10^{-3}$ \\\\ \\hline\n$10^{-7}$ & 1.2441$10^{-7}$ & 4.0951$10^{-6}$ & 2.4949$10^{-4}$ \\\\ \\hline\n$10^{-8}$ & 1.0119$10^{-7}$ & 2.2962$10^{-6}$ & 1.3029$10^{-4}$ \\\\ \\hline\n$10^{-9}$ & 1.1384$10^{-7}$ & 2.3790$10^{-6}$ & 1.3382$10^{-4}$ \\\\ \\hline\n$10^{-10}$ & 
1.1516$10^{-7}$ & 2.3903$10^{-6}$ & 1.3438$10^{-4}$ \\\\ \\hline\n$10^{-11}$ & 1.1530$10^{-7}$ & 2.3914$10^{-6}$ & 1.3443$10^{-4}$ \\\\ \\hline\n$10^{-14}$ & 1.1531$10^{-7}$ & 2.3916$10^{-6}$ & 1.3444$10^{-4}$ \\\\ \\hline\nReduced & 1.1531 $10^{-7}$ & 2.3916 $10^{-6}$ & 1.3444 $10^{-4}$\\\\ \\hline\n\\end{tabular}\n\\bigskip\n\\caption{2D numerical robustness Test 1, $h=1\/4$, $d=5$. \n}\n\\end{table}\n\n\\begin{table}\n\\begin{tabular}{|c|c|c|c|c|} \\hline\n$h$ & $n_{\\text{it}}$ & $L^{2}$ norm & $H^{1}$ norm & $H^{2}$ norm \\\\ \\hline\n$1\/2^1$ & 41 & 2.8275 $10^{-6}$ & 6.1372 $10^{-5}$ & 1.8845 $10^{-3}$ \\\\ \\hline\n$1\/2^2$ & 37 & 5.4642 $10^{-8}$ & 2.1971 $10^{-6}$ & 1.2972 $10^{-4}$ \\\\ \\hline\n$1\/2^3$ & 38 & 8.3164 $10^{-10}$ & 7.2252 $10^{-8}$ & 8.4790 $10^{-6}$ \\\\ \\hline\n$1\/2^4$ & 37 & 2.7871 $10^{-9}$ & 1.4089 $10^{-8}$ & 1.0809 $10^{-6}$ \\\\ \\hline\n\\end{tabular}\n\\bigskip\n\\caption{BFO iterative method for Test 1, $d=5$}\\label{BFOsmooth}\n\\end{table}\n\n\\begin{table}\n\\begin{minipage}{2in} %\n\\begin{tabular}{|c|c|c|} \\hline\n$h$ & $L^{2}$ norm & $H^{1}$ norm \\\\ \\hline\n$1\/2^1$ & 2.1954$10^{-2}$ & 1.6409$10^{-1}$ \\\\ \\hline\n$1\/2^2$ & 3.6097$10^{-3}$ & 6.1405$10^{-2}$ \\\\ \\hline\n$1\/2^3$ & 1.0685$10^{-3}$ & 4.0978$10^{-2}$ \\\\ \\hline\n$1\/2^4$ & 5.0838$10^{-3}$ & 2.8048$10^{-1}$ \\\\ \\hline\n$1\/2^5$ & 2.5797$10^{+3}$ & 2.2688$10^{+5}$ \\\\ \\hline\n$1\/2^6$ & 1.8452$10^{+4}$ & 3.5922$10^{+6}$ \\\\ \\hline\n\\end{tabular}\n\\end{minipage}\n\\hspace{1cm}\n\\begin{minipage}{2in}\n\\begin{tabular}{|c|c|c|c|} \\hline\n$h$ & $n_{\\text{it}}$ & $L^{2}$ norm & $H^{1}$ norm \\\\ \\hline\n$1\/2^1$ & 50 & 2.3921 $10^{-1}$ & 1.1900 \\\\ \\hline\n$1\/2^2$ & 159 & 1.2585 $10^{-1}$ & 7.1292 $10^{-1}$ \\\\ \\hline\n$1\/2^3$ & 151 & 1.0341 $10^{-1}$ & 6.4299 $10^{-1}$ \\\\ \\hline\n$1\/2^4$ & 160 & 9.6031 $10^{-2}$ & 6.2088 $10^{-1}$ \\\\ \\hline\n$1\/2^5$ & 199 & 9.4551 $10^{-2}$ & 6.2453 $10^{-1}$ \\\\ \\hline\n$1\/2^6$ & 8 & 1.6977 
$10^{-2}$ & 2.2925 $10^{-1}$ \\\\ \\hline\n\\end{tabular}\n\\end{minipage}\n\\bigskip\n\\caption{Newton's method and BFO iterative method for Test 2, $d=3$}\n\\end{table}\n\n\n\n\n\n\\begin{table}\n\\begin{minipage}{2in}\n\\begin{tabular}{|c|c|c|} \\hline\n$h$ & $L^{2}$ norm & $H^{1}$ norm \\\\ \\hline\n$1\/2^1$ & 7.6680$10^{-3}$ & 7.4491$10^{-2}$ \\\\ \\hline\n$1\/2^2$ & 1.4536$10^{-3}$ & 3.9244$10^{-2}$ \\\\ \\hline\n$1\/2^3$ & 9.8727$10^{-3}$ & 2.5112$10^{-1}$ \\\\ \\hline\n$1\/2^4$ & 5.6819$10^{-3}$ & 2.4927$10^{-1}$ \\\\ \\hline\n$1\/2^5$ & 1.9830 $10^{+4}$ & 1.1812 $10^{+6}$ \\\\ \\hline\n\\end{tabular}\n\\end{minipage}\n\\hspace{1cm}\n\\begin{minipage}{2in}\n\\begin{tabular}{|c|c|c|} \\hline\n$h$ & $L^{2}$ norm & $H^{1}$ norm \\\\ \\hline\n$1\/2^1$ & 7.8254$10^{-3}$ & 9.3184$10^{-2}$ \\\\ \\hline\n$1\/2^2$ & 1.0646$10^{-2}$ & 9.5201$10^{-2}$ \\\\ \\hline\n$1\/2^3$ & 1.1306$10^{-2}$ & 9.6154$10^{-2}$ \\\\ \\hline\n$1\/2^4$ & 1.1500$10^{-2}$ & 9.1336$10^{-2}$ \\\\ \\hline\n$1\/2^5$ & 1.1625$10^{-2}$ & 8.7785$10^{-2}$ \\\\ \\hline\n $1\/2^6$ & 1.1681$10^{-2}$ & 8.5632$10^{-2}$ \\\\ \\hline\n\\end{tabular}\n\\end{minipage}\n\\bigskip\n\\caption{Vanishing moment for Test 2, $\\epsilon=10^{-3}$ and $\\epsilon=10^{-2}$, $d=5$}\n\\end{table}\n\n\\begin{figure}[tbp]\n\\begin{center}\n\\includegraphics[angle=0, height=5cm]{VMConvexC4N4d3u4e3.png}\n\\includegraphics[angle=0, height=3.5cm]{VMContourConvexC4N4d3u4e3.png}\n\\end{center}\n\\caption{Vanishing moment on Test 3 (convex solution), $h=1\/2^4, d=3, \\epsilon=10^{-3}$.}\n\\end{figure}\n\n\\begin{figure}[tbp]\n\\begin{center}\n\\includegraphics[angle=0, height=5cm]{VMConvaveC4N5d5u4e2.png}\n\\includegraphics[angle=0, height=3.5cm]{VMContourConvaveC4N5d5u4e2.png}\n\\end{center}\n\\caption{Vanishing moment on Test 3 (concave solution), $h=1\/2^4, d=5, \\epsilon=-10^{-2}$.}\n\\end{figure}\n\n\\begin{figure}[tbp]\n\\begin{center}\n\\includegraphics[angle=0, height=5cm]{BFOConvexC4N4d3u4.png}\n\\includegraphics[angle=0, 
height=3.5cm]{BFOContourConvexC4N4d3u4.png}\n\\end{center}\n\\caption{BFO iterative method on Test 3 (convex solution), $h=1\/2^4, d=3$.}\n\\end{figure}\n\n\\begin{figure}[tbp]\n\\begin{center}\n\\includegraphics[angle=0, height=5cm]{BFOConcaveC4N4d3u4.png}\n\\includegraphics[angle=0, height=3.5cm]{BFOContourConcaveC4N4d3u4.png}\n\\end{center}\n\\caption{BFO iterative method on Test 3 (concave solution), $h=1\/2^4, d=3$.}\n\\end{figure}\n\n\\begin{figure}[tbp]\n\\begin{center}\n\\includegraphics[angle=0, height=5cm]{Circle08Concave824d5r1C1.png}\n\\includegraphics[angle=0, height=5cm]{Circle08Convex824d5r1C1.png}\n\\end{center}\n\\caption{Approximations of smooth concave and convex solutions on a non-rectangular domain, Test 4, $d=5, r=1$}\n\\end{figure}\n\n\n\\subsection{Three-dimensional Monge-Amp\\`ere equation}\nWe used two computational domains, both on the unit cube $[0,1]^3$\nwhich is first divided into six tetrahedra (Domain 1 for Test 5) or twelve tetrahedra (Domain 2 for Test 6) forming a tetrahedral partition $\\mathcal{T}_1$. 
\nThis partition is uniformly refined\nfollowing a strategy introduced in \\cite{Awanou2003}, similar to the one of \\cite{Ong94},\nresulting in successive levels of refinement $\\mathcal{T}_k$, $k=2,3,\\ldots$.\nFor Test 6, we plot the graph of the function as well as its contour in the plane $x=1\/2$ and slices \nin the $x$-direction.\n\n\\begin{table}\n\\begin{tabular}{|c|c|c|c|} \\hline\nd & $L^{2}$ norm & $H^{1}$ norm & $H^{2}$ norm \\\\ \\hline\n3 & 1.2338 $10^{-2}$ & 7.6984 $10^{-2}$ & 4.4411 $10^{-1}$ \\\\ \\hline\n4 & 1.6289 $10^{-3}$ & 1.4719 $10^{-2}$ & 1.3983 $10^{-1}$ \\\\ \\hline\n5 & 1.5333 $10^{-3}$ & 8.7312 $10^{-3}$ & 6.0412 $10^{-2}$\\\\ \\hline\n6 & 1.2324 $10^{-4}$ & 9.7171 $10^{-4}$ & 1.0584 $10^{-2}$ \\\\ \\hline\n\\end{tabular}\n\\bigskip\n\\caption{Newton's method Test 5, Domain 1 on $\\mathcal{T}_1$}\n\\end{table}\n\n\\begin{table}\n\\begin{tabular}{|c|c|c|c|} \\hline\nd & $L^{2}$ norm & $H^{1}$ norm & $H^{2}$ norm \\\\ \\hline\n3 & 3.1739 $10^{-3}$ & 2.3005 $10^{-2}$ & 2.4496 $10^{-1}$ \\\\ \\hline\n4 & 3.2786 $10^{-4}$ & 3.5626 $10^{-3}$ & 5.2079 $10^{-2}$ \\\\ \\hline\n5 & 2.4027 $10^{-5}$ & 3.9210 $10^{-4}$ & 8.8868 $10^{-3}$ \\\\ \\hline\n6 & 1.3821 $10^{-6}$ & 2.2369 $10^{-5}$ & 6.0918 $10^{-4}$ \\\\ \\hline\n\\end{tabular}\n\\bigskip\n\\caption{Newton's method Test 5, Domain 1 on $\\mathcal{T}_2$}\n\\end{table}\n\n\\begin{table}\\label{3Drobust}\n\\begin{tabular}{|c|c|c|c|} \\hline\n$\\epsilon$ & $L^{2}$ norm & $H^{1}$ norm & $H^{2}$ norm \\\\ \\hline\n$10^{-1}$ & 6.6870 $10^{-2}$ & 3.9292 $10^{-1}$ & 2.8852 \\\\ \\hline\n$10^{-2}$ & 1.8832 $10^{-2}$ & 1.3137 $10^{-1}$ & 1.5882 \\\\ \\hline\n$10^{-3}$ & 2.4237 $10^{-3}$ & 2.5273 $10^{-2}$ & 5.3206 $10^{-1}$ \\\\ \\hline\n$10^{-4}$ & 2.5661 $10^{-4}$ & 3.2633 $10^{-3}$ & 7.9936 $10^{-2}$ \\\\ \\hline\n$10^{-5}$ & 3.1058 $10^{-5}$ & 5.0367 $10^{-4}$ & 1.2543 $10^{-2}$ \\\\ \\hline\n$10^{-6}$ & 2.3519 $10^{-5}$ & 3.9165 $10^{-4}$ & 8.9744 $10^{-3}$ \\\\ \\hline\n$10^{-7}$ & 
2.3964 $10^{-5}$ & 3.9193 $10^{-4}$ & 8.8921 $10^{-3}$ \\\\ \\hline\n$10^{-10}$ & 2.4027 $10^{-5}$ & 3.9210 $10^{-4}$ & 8.8868 $10^{-3}$ \\\\ \\hline \nReduced & 2.4027 $10^{-5}$ & 3.9210 $10^{-4}$ & 8.8868 $10^{-3}$ \\\\ \\hline\n\\end{tabular} \n\\bigskip\n\\caption{3D numerical robustness Test 5, Domain 1 on $\\mathcal{T}_2$, $d=5$}\n\\end{table}\n\n\\begin{table}\n\\begin{tabular}{|c|c|c|c|} \\hline\n2 & 3.1739 $10^{-3}$ & 2.3005 $10^{-2}$ & 2.4496 $10^{-1}$ \\\\ \\hline\n3 & 1.6859 $10^{-2}$ & 1.0519 $10^{-1}$ & 9.1615 $10^{-1}$ \\\\ \\hline\n59 & 1.1283 $10^{-3}$ & 7.1385 $10^{-3}$ & 7.3671 $10^{-2}$ \\\\ \\hline\n38 & 2.1423 $10^{-4}$ & 1.4452 $10^{-3}$ & 1.8083 $10^{-2}$ \\\\ \\hline\n35 & 4.5582 $10^{-5}$ & 3.0440 $10^{-4}$ & 4.0506 $10^{-3}$ \\\\ \\hline\n\\end{tabular} \n\\bigskip\n\\caption{BFO iterative method for Test 5, on $\\mathcal{T}_2$, $d=5$}\n\\end{table}\n\n\\begin{figure}[tbp]\n\\begin{center}\n\\includegraphics[angle=0, height=6cm]{graphxhalfD3N2d3r2.png}\n\\includegraphics[angle=0, height=5cm]{contourxhalfD3N2d3r2.png}\n\\end{center}\n\\bigskip\n\\caption{Vanishing moment Test 6, on $\\mathcal{T}_3$, $r=2, d=3$}\n\\end{figure}\n\n\n\\begin{figure}[tbp]\n\\begin{center}\n\\includegraphics[angle=0, height=6cm]{BFOxhalfgraphC2d5D2r1u5.png}\n\\includegraphics[angle=0, height=5cm]{BFOxhalfcontourC2d5D2r1u5.png}\n\\end{center}\n\\bigskip\n\\caption{BFO iterative method Test 6, on $\\mathcal{T}_3$, $d=5, r=1$}\n\\end{figure}\n\n\\begin{figure}[tbp]\n\\begin{center}\n\\includegraphics[angle=0, height=5cm]{slicesxD3N2d3r2.png}\n\\includegraphics[angle=0, height=5cm]{BFOxsliceC2d5D2r1u5.png}\n\\end{center}\n\\bigskip\n\\caption{Slices in the $x$-direction, Test 6 on Domain 2 and $\\mathcal{T}_3$: vanishing moment $d=3, r=2, \\epsilon=10^{-5}$ and BFO $d=5, r=1$}\n\\end{figure}\n\n\\begin{figure}[tbp]\n\\begin{center}\n\\includegraphics[angle=0, height=7cm]{BFOC27D2N2r1d5.png}\n\\end{center}\n\\bigskip\n\\caption{BFO Test 7 on Domain 2 and $ 
\\mathcal{T}_3$, $d=5, r=1$}\n\\end{figure}\n\n\n\n\n\\section{Concluding Remarks}\n\n\\begin{rem}\nFor the finite element approximation of \\eqref{m1}, we note the remark in \\cite{Dean2006}, \n``Newton's and conjugate gradients methods may be well-suited for the solution of \\ldots \ncombines the difficulty of both harmonic and bi-harmonic problems, making the approximation a delicate\nmatter, albeit solvable \\ldots If \\ldots has no solution, we can expect the divergence of the Newton \\ldots''\nWe have established that Newton's method performs well for smooth solutions. Another problem which also combines\nthe difficulty of both the harmonic and biharmonic problems is a singular perturbation problem we addressed in\n\\cite{Awanou2008} by the spline element method. Here it is seen that for the singular perturbation problem arising from the vanishing\nmoment methodology, the spline element method is robust.\nMoreover, we note that numerical results for smooth solutions using Newton's method \nin the spline element method are more accurate than what can be achieved using Argyris elements \nand the vanishing moment methodology \\cite{Feng2009b}.\n\\end{rem}\n\n\\begin{rem}\nIt is still not known whether the BFO iterative method always converges, even in the case of a smooth solution. Nor is it known\nwhether Newton's method always converges on a non-smooth domain. We have not addressed the convergence of the vanishing moment\nmethodology to viscosity solutions as these results have been announced in \\cite{Feng2009b}. \n\\end{rem}\n\n\n\\begin{rem} We used $C^1$ cubic splines in most of the approximations even though they do not have full approximation\npower on general meshes. This reduced the computational cost. One may use, \nfor full approximation power, special meshes as in \\cite{Dyer2003} or \\cite{Schumaker2009}.\n\\end{rem}\n\n\\begin{rem}\nThe BFO iterative method introduced in \\cite{Benamou2010} was tested on some very singular right hand sides. 
It was noted that\nit is slower than another iterative method based on a different algebraic manipulation of the Monge-Amp\\`ere equation. The latter\ndoes not seem directly amenable to finite element computations. We note that for singular sources, specialized finite elements may have to be used\nand, even in the finite difference context, specialized finite difference methods \\cite{Tornberg03} \nor fast Poisson solvers \\cite{Swarztrauber84} or \npreconditioners could have been used.\n\\end{rem}\n\\begin{rem}\nThe problem: Find $u$ such that\n\\begin{equation}\n\\int_{\\Omega} (\\text{cof} \\ D^2 u) D u \\cdot D v \\ \\ud x =\n-n \\int_{\\Omega} f v \\ \\ud x ,\n\\end{equation} \nis the Euler-Lagrange equation of the functional\n\\begin{equation}\n\\mathcal{J}(v) = \\int_{\\Omega} ( \\text{cof} \\ D^2 v) D v \\cdot D v \\ \\ud x + 2 n\n\\int_{\\Omega} f v \\ \\ud x. \\label{fun1}\n\\end{equation}\nIf $v=0$ on $\\partial \\Omega$, we have\n$$\n\\mathcal{J}(v) = -n \\int_{\\Omega} (\\det D^2 v) v \\ \\ud x + 2 n\n\\int_{\\Omega} f v \\ \\ud x,\n$$\nand a generalized solution of \\eqref{m1} has been shown in \\cite{Bakelman1983,Tso1990} to be a minimizer of a related functional\non the set of convex functions. \n\\end{rem}\n\\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
The main defect of Hilbert's approach lies in the ignorance of the indexes of the problems, which was pointed out by F. Noether. For this reason, nowadays these problems are usually called Riemann-Hilbert problems (for short, RHPs). In the past almost thirty years, a remarkable development is that one can construct some (usually, $2\\times2$) matrix-valued RHPs to characterize many different types of orthogonal polynomials with respect to general weight functions or probability measures. These RHPs are usually called Riemann-Hilbert characterizations (simply, RH characterizations; or more simply, RHCs) for the corresponding orthogonal polynomials.\n\nIn fact, RHPs appear in many different settings. There are many systematic approaches to formulate RH characterizations for some interesting problems in modern studies. Nevertheless, RH characterizations for orthogonal polynomials come ``out of the blue'' according to Deift's view \\cite{deift1}. In this regard, the first breakthrough was due to Fokas, Its and Kitaev \\cite{fik}. There they proposed the RH characterization for orthogonal polynomials on the real line (simply, OPRL). 
More precisely, they formulated the following $2\\times2$ matrix-valued RHP for a $2\\times2$ matrix-valued function $\\mathcal{Y}: \\mathbb{C}\\setminus \\mathbb{R}\\rightarrow \\mathbb{C}^{2\\times 2}$ satisfying\n\\begin{equation} (\\mbox{RHC for OPRL})\\,\\,\n\\begin{cases}\n\\mathcal{Y}\\,\\, \\mbox{is analytic in}\\,\\,\\mathbb{C}\\setminus \\mathbb{R},\\vspace{2mm}\\\\\n\\mathcal{Y}^{+}(x)=\\mathcal{Y}^{-}(x)\\left(\n \\begin{array}{cc}\n 1 & w(x) \\\\\n 0 & 1 \\\\\n \\end{array}\n \\right)\n \\,\\,\\mbox{for} \\,\\,x\\in \\mathbb{R},\\vspace{2mm}\\\\\n\\mathcal{Y}(z)=\\left(I+O\\left(\\frac{1}{z}\\right)\\right)\\left(\n \\begin{array}{cc}\n z^{n} & 0 \\\\\n 0 & z^{-n} \\\\\n \\end{array}\n \\right)\n \\,\\,\\mbox{as}\\,\\, z\\rightarrow \\infty,\n\\end{cases}\n\\end{equation}\nwhere $w$ is a weight function on $\\mathbb{R}$ and $I$ is the $2\\times2$ identity matrix.\n\nIn \\cite{bdj}, Baik, Deift and Johansson proposed the RH characterization for orthogonal polynomials on the unit circle (concisely, OPUC). 
That is, for a $2\\times2$ matrix-valued function $Y: \\mathbb{C}\\setminus \\partial \\mathbb{D}\\rightarrow \\mathbb{C}^{2\\times 2}$, the following conditions are fulfilled:\n\\begin{equation} (\\mbox{RHC for OPUC})\\,\\,\n\\begin{cases}\nY\\,\\, \\mbox{is analytic in}\\,\\,\\mathbb{C}\\setminus \\partial \\mathbb{D},\\vspace{2mm}\\\\\nY^{+}(t)=Y^{-}(t)\\left(\n \\begin{array}{cc}\n 1 & t^{-n}w(t) \\\\\n 0 & 1 \\\\\n \\end{array}\n \\right)\n \\,\\,\\mbox{for} \\,\\,t\\in \\partial \\mathbb{D},\\vspace{2mm}\\\\\nY(z)=\\left(I+O(\\frac{1}{z})\\right)\\left(\n \\begin{array}{cc}\n z^{n} & 0 \\\\\n 0 & z^{-n} \\\\\n \\end{array}\n \\right)\n \\,\\,\\mbox{as}\\,\\, z\\rightarrow \\infty,\n\\end{cases}\n\\end{equation}\nwhere $\\mathbb{D}$ is the unit disc, $\\partial \\mathbb{D}$ is the unit circle, $w$ is a weight function on $\\partial \\mathbb{D}$ and $I$ is the $2\\times2$ identity matrix.\n\nWith respect to orthogonal trigonometric polynomials (simply, OTP), in \\cite{dd08}, Du and the author constructed the RH characterization for them. 
More precisely, it is the following $2\\times2$ matrix-valued RHP: for a $2\\times2$ matrix-valued function $\\mathfrak{Y}: \\mathbb{C}\\setminus \\partial \\mathbb{D}\\rightarrow \\mathbb{C}^{2\\times 2}$ satisfying\n\\begin{equation} (\\mbox{RHC for OTP})\\,\\,\n\\begin{cases}\n\\mathfrak{Y}\\,\\, \\mbox{is analytic in}\\,\\,\\mathbb{C}\\setminus \\partial \\mathbb{D},\\vspace{2mm}\\\\\n\\mathfrak{Y}^{+}(t)=\\mathfrak{Y}^{-}(t)\\left(\n \\begin{array}{cc}\n 1 & t^{-2n}w(t) \\\\\n 0 & 1 \\\\\n \\end{array}\n \\right)\n \\,\\,\\mbox{for} \\,\\,t\\in \\partial \\mathbb{D},\\vspace{2mm}\\\\\n\\mathfrak{Y}(z)=\\left(I+O(\\frac{1}{z})\\right)\\left(\n \\begin{array}{cc}\n z^{2n} & 0 \\\\\n 0 & z^{-2n+1} \\\\\n \\end{array}\n \\right)\n \\,\\,\\mbox{as}\\,\\, z\\rightarrow \\infty,\\vspace{2mm}\\\\\n\\mathfrak{Y}_{11}(0)=\\mathfrak{Y}_{21}(0)=0,\n\\end{cases}\n\\end{equation}\nwhere $\\mathbb{D}$ is the unit disc, $\\partial \\mathbb{D}$ is the unit circle, $w$ is a weight function on $\\partial \\mathbb{D}$ and $I$ is the $2\\times2$ identity matrix.\n\nObserving all of the above RH characterizations, the innovation from OPRL to OPUC lies in the 12 entry of the jump matrix, with $t^{-n}w$ replacing $w$. However, from OPUC to OTP, besides the 12 entry $t^{-2n}w$ in place of $t^{-n}w$ in the jump matrix, there is another completely new innovation. That is, in the matrix describing the growth conditions at $\\infty$ for $\\mathfrak{Y}$, the 11 entry $z^{n}$ is replaced by $z^{2n}$ and the 22 entry $z^{-n}$ is replaced by $z^{-2n+1}$. Moreover, the 11 and 21 entries are prescribed to be $0$ at the origin. Based on such innovations, a surprisingly remarkable fact is that the RHP (1.3) characterizes both OTP and OPUC. For this reason, Du and the author discovered and established the mutual representation theorem for OTP and OPUC. It becomes a bridge connecting these two isolated classes of orthogonal polynomials. 
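\nFor orientation, we recall that the RHP (1.2) encodes OPUC explicitly: as is well known (cf. \\cite{bdj}), its unique solution is given by\n$$\nY(z)=\\left(\n \\begin{array}{cc}\n \\Phi_{n}(z) & \\frac{1}{2\\pi i}\\int_{\\partial \\mathbb{D}} \\frac{\\Phi_{n}(\\tau)\\tau^{-n}w(\\tau)}{\\tau-z}\\, d\\tau \\\\\n -\\kappa_{n-1}^{2}\\Phi_{n-1}^{*}(z) & -\\frac{\\kappa_{n-1}^{2}}{2\\pi i}\\int_{\\partial \\mathbb{D}} \\frac{\\Phi_{n-1}^{*}(\\tau)\\tau^{-n}w(\\tau)}{\\tau-z}\\, d\\tau \\\\\n \\end{array}\n \\right),\n$$\nwhere $\\Phi_{n}$ is the monic orthogonal polynomial of degree $n$ with respect to $w$, $\\Phi_{n-1}^{*}(z)=z^{n-1}\\overline{\\Phi_{n-1}(1\/\\overline{z})}$ is the reversed polynomial and $\\kappa_{n-1}$ is the leading coefficient of the corresponding orthonormal polynomial of degree $n-1$.\n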
However, it should be pointed out that the RHC (1.2) for OPUC can also serve as a RHC for OTP when replacing $n$ with $2n-1$ (see Remark 3.2 in \\cite{dd08}). These two RHCs can be transformed to each other by an explicit $2\\times2$ matrix-valued multiplier (see the uniqueness part of the proof of Theorem 3.1 in \\cite{dd08}). Nevertheless, we still use the RHP (1.3) as the RH characterization for OTP in the present paper.\n\nIn addition, for general orthogonal polynomials (simply, GOP), the author also formulated a semi-conjugate $2\\times2$ matrix-valued boundary value problem to characterize orthogonal polynomials on an arbitrary smooth Jordan curve in $\\mathbb{C}$ (see \\cite{d}). But it is not a RHP since the semi-conjugate operator appears. More precisely, for a $2\\times2$ matrix-valued function $\\mathrm{Y}: \\mathbb{C}\\setminus \\Gamma\\rightarrow \\mathbb{C}^{2\\times 2}$, the following conditions are satisfied:\n\\begin{equation} (\\mbox{RHC for GOP})\\,\\,\n\\begin{cases}\n\\mathrm{Y}\\,\\, \\mbox{is analytic in}\\,\\,\\mathbb{C}\\setminus \\Gamma,\\vspace{2mm}\\\\\n(\\mathrm{D}\\mathrm{Y})^{+}(t)=(\\mathrm{D}\\mathrm{Y})^{-}(t)\\left(\n \\begin{array}{cc}\n 1 & w(t)s^{\\prime}(t) \\\\\n 0 & 1 \\\\\n \\end{array}\n \\right)\n \\,\\,\\mbox{for} \\,\\,t\\in \\Gamma,\\vspace{2mm}\\\\\n\\mathrm{Y}(z)=\\left(I+O(\\frac{1}{z})\\right)\\left(\n \\begin{array}{cc}\n z^{n} & 0 \\\\\n 0 & z^{-n} \\\\\n \\end{array}\n \\right)\n \\,\\,\\mbox{as}\\,\\, z\\rightarrow \\infty,\n\\end{cases}\n\\end{equation}\nwhere $\\Gamma$ is an arbitrary smooth Jordan curve in $\\mathbb{C}$ oriented counter-clockwise, $s(t)$ is the arc-length function, $w$ is a weight function on $\\Gamma$, $I$ is the $2\\times2$ identity matrix and the semi-conjugate operator $\\mathrm{D}$ is defined by\n\\begin{equation}\n ({\\mathrm{D}}\\mathrm{Y})(z)=\\left(\n \\begin{array}{ll}\n \\overline{\\mathrm{Y}_{11}(z)} \\hspace{2mm} \\mathrm{Y}_{12}(z) \\\\\n \\overline{\\mathrm{Y}_{21}(z)} 
\\hspace{2mm} \\mathrm{Y}_{22}(z)\n \\end{array}\n \\right)\n \\quad {\\mathrm{for}} \\quad \\mathrm{Y}(z)=\n \\left(\n \\begin{array}{ll}\n \\mathrm{Y}_{11}(z) \\hspace{2mm} \\mathrm{Y}_{12}(z) \\\\\n \\mathrm{Y}_{21}(z) \\hspace{2mm} \\mathrm{Y}_{22}(z)\n \\end{array}\n \\right),\n\\end{equation}\nin which $\\overline{z}$ is the complex conjugate of $z$.\n\nAs some applications of the mutual representation theorem, four-term recurrences, Christoffel-Darboux formulae and some properties of zeros for OTP were obtained in \\cite{dd08}. In fact, by the mutual representation theorem, some important theorems (such as Favard, Baxter, Geronimus, Rakhmanov, Szeg\\\"o and the\nstrong Szeg\\\"o) in the theory of OPUC can be established for OTP. This is one of the themes in the present paper.\n\nAt present, together with the nonlinear steepest descent method due to Deift and Zhou \\cite{dz}, Riemann-Hilbert problems are mainly applied to asymptotic analysis problems in integrable systems, orthogonal polynomials, combinatorics and random matrices, etc. \\cite{bdj,deift,mehta}. However, besides asymptotic analysis \\cite{bdj,dd06,dd08,dz}, Riemann-Hilbert problems can also be applied to some analytic and algebraic problems, such as deriving difference equations, differential equations and so on \\cite{deift,deift1}. As an example to show the subtlety and power of RHPs in this respect, a new proof is given for Szeg\\\"o recursions of OPUC and four-term recurrences of OTP by using their RH characterizations in Section 4. 
As a byproduct, some new identities on Cauchy integrals for both OPUC and OTP as well as Hilbert transforms for OTP are also obtained.\n\nThis paper is organized as follows.\n\nIn Section 2, some definitions and notations are introduced involving OPUC and OTP as well as some related coefficients such as Verblunsky coefficients; the mutual representation theorem and some of its consequences are also given.\n\nIn Section 3, some theorems for OTP are obtained by means of the mutual representation theorem, including Favard, Baxter, Geronimus,\nRakhmanov, Szeg\\\"o and the strong Szeg\\\"o theorems, which are important in the theory of OPUC. However, the Favard theorem in this section is only in a weak form.\n\nAs stated above, in Section 4, some identities such as Szeg\\\"o recursions of OPUC, four-term recurrences of OTP and so on are obtained by using the RH characterizations for OPUC and OTP, respectively.\n\nThe final section is mainly devoted to proving a Favard theorem stronger than the one in Section 3.\n\n\n\n\n\\section{Mutual representation and its consequences}\nLet $\\mathbb{D}$ be the unit disc in the complex plane,\n$\\partial\\mathbb{D}$ be the unit circle and $\\mu$ be a nontrivial\nprobability measure on $\\partial \\mathbb{D}$ (i.e. with infinite\nsupport, nonnegative and $\\mu(\\partial \\mathbb{D})=1$). 
Throughout this paper,\nby decomposition, we always write\n\\begin{equation}\nd\\mu(\\tau)=w(\\tau)\\frac{d\\tau}{2\\pi i\\tau}+d\\mu_{s}(\\tau),\n\\end{equation}\nwhere $\\tau\\in\\partial \\mathbb{D}$, $w(\\tau)=2\\pi i\\tau\nd\\mu_{ac}\/d\\tau$ in which $d\\mu_{ac}$ is the absolutely continuous part of $d\\mu$, and $d\\mu_{s}$ is the singular part of $d\\mu$.\n\nWe introduce two classes of inner products. One is complex, defined by\n\\begin{equation}\n\\langle f, g\\rangle_{\\mathbb{C}}=\\int_{\\partial\n\\mathbb{D}}\\overline{f(\\tau)}g(\\tau)d\\mu(\\tau)\n\\end{equation}\nwith norm $||f||_{\\mathbb{C}}=[\\int_{\\partial\n\\mathbb{D}}|f(\\tau)|^{2}d\\mu(\\tau)]^{1\/2}$, where $f, g$ are complex\nintegrable functions on $\\partial \\mathbb{D}$. The other is real and defined by\n\\begin{equation}\n\\langle f, g\\rangle_{\\mathbb{R}}=\\int_{\\partial\n\\mathbb{D}}f(\\tau)g(\\tau)d\\mu(\\tau)\n\\end{equation}\nwith norm $||f||_{\\mathbb{R}}=[\\int_{\\partial\n\\mathbb{D}}|f(\\tau)|^{2}d\\mu(\\tau)]^{1\/2}$, where $f, g$ are real\nintegrable functions on $\\partial \\mathbb{D}$.\n\nApplying the Gram-Schmidt procedure with respect to the complex inner product (2.2)\nto the following system\n\\begin{equation*}\n\\{1,z,z^{2},\\ldots,z^{n},\\ldots\\},\n\\end{equation*}\nwhere $z\\in \\mathbb{C}$, we get the unique system $\\{\\Phi_{n}(z)\\}$\nof monic orthogonal polynomials on the unit circle with respect to\n$\\mu$ satisfying\n\\begin{equation}\n\\langle\\Phi_{n},\\Phi_{m}\\rangle_{\\mathbb{C}}=\\kappa_{n}^{-2}\\delta_{nm}\\,\\,\\,\\text{with}\\,\\,\\,\n\\kappa_{n}>0.\n\\end{equation}\nThen the orthonormal polynomials $\\varphi_{n}(z)$ on the unit circle\nsatisfy\n\\begin{equation}\n\\langle\\varphi_{n},\\varphi_{m}\\rangle_{\\mathbb{C}}=\\delta_{nm}\\,\\,\\,\\text{and}\\,\\,\\,\n\\varphi_{n}(z)=\\kappa_{n}\\Phi_{n}(z).\n\\end{equation}\n\nFor any polynomial $Q_{n}$ of order $n$, its reversed polynomial\n$Q_{n}^{*}$ is defined 
by\n\\begin{equation}\nQ_{n}^{*}(z)=z^{n}\\overline{Q_{n}(1\/\\overline{z})}.\n\\end{equation}\n\nOne famous property of OPUC is Szeg\\\"o recurrence \\cite{sze}, i.e.\n\\begin{equation}\n\\Phi_{n+1}(z)=z\\Phi_{n}(z)-\\overline{\\alpha}_{n}\\Phi^{*}_{n}(z),\n\\end{equation}\nwhere $\\alpha_{n}=-\\overline{\\Phi_{n+1}(0)}$ are called Verblunsky\ncoefficients. It is well known that $\\alpha_{n}\\in \\mathbb{D}$ for\n$n\\in \\mathbb{N}\\cup\\{0\\}$. By convention, $\\alpha_{-1}=-1$ (see\n\\cite{sim1}). Szeg\\\"o recurrence (2.7) is extremely useful in the\ntheory of OPUC. Especially, Verblunsky coefficients play an\nimportant role in many interesting problems for OPUC (see \\cite{sim1,sim2}).\n\nUsing the real inner product (2.3) and Gram-Schmidt procedure to the\nfollowing over $\\mathbb{R}$ linearly independent ordered set\n \\begin{equation}\n\\Big\\{1, \\frac {z-z^{-1}}{2i}, \\frac {z+z^{-1}}{2}, \\ldots,\n \\frac{z^{n}-z^{-n}}{2i}, \\frac{z^{n}+z^{-n}}{2}, \\ldots \\Big\\},\n \\end{equation}\nwhere $z\\in \\mathbb{C}\\setminus\\{0\\}$, we get the unique system\n\\begin{equation}\n\\{1, b_{1}\\pi_{1}(z), a_{1}\\sigma_{1}(z), \\ldots, b_{n}\\pi_{n}(z),\na_{n}\\sigma_{n}(z), \\ldots\\}\n\\end{equation} of\nthe ``monic\" orthogonal Laurent polynomials (concisely, OLP) of the first class on the unit circle\nwith respect to $\\mu$ fulfilling\n\\begin{equation}\n\\langle\\pi_{m},\\sigma_{n}\\rangle_{\\mathbb{R}}=0,\n\\langle\\pi_{m},\\pi_{n}\\rangle_{\\mathbb{R}}=\\langle\\sigma_{m},\\sigma_{n}\\rangle_{\\mathbb{R}}=\\delta_{mn},\\,\\,\\,m,n=1,2,\\ldots\n\\end{equation}\nand\n\\begin{equation}\na_{n}\\sigma_{n}(z)=\\frac{z^{n}+z^{-n}}{2}-\\beta_{n}b_{n}\\pi_{n}(z)-\\imath_{n}a_{n-1}\\sigma_{n-1}(z)\n-\\jmath_{n}b_{n-1}\\pi_{n-1}(z)+\\text{lower order}\n\\end{equation}\nas well as\n\\begin{equation}\nb_{n}\\pi_{n}(z)=\\frac{z^{n}-z^{-n}}{2i}-\\varsigma_{n}a_{n-1}\\sigma_{n-1}(z)\n-\\zeta_{n}b_{n-1}\\pi_{n-1}(z)+\\text{lower order},\n\\end{equation}\nwhere 
$a_{n},b_{n}>0$, which are respectively the norms of the\n``monic\" orthogonal Laurent polynomials of the first class given by the right-hand sides\nof (2.11) and (2.12),\n\\begin{equation}\n\\beta_{n}=\\langle\\frac{z^{n}+z^{-n}}{2},b_{n}^{-1}\\pi_{n}\\rangle_{\\mathbb{R}},\n\\end{equation}\n\\begin{equation}\n\\imath_{n}=\\langle\\frac{z^{n}+z^{-n}}{2},a_{n-1}^{-1}\\sigma_{n-1}\\rangle_{\\mathbb{R}},\\,\\,\n\\jmath_{n}=\\langle\\frac{z^{n}+z^{-n}}{2},b_{n-1}^{-1}\\pi_{n-1}\\rangle_{\\mathbb{R}}\n\\end{equation}\nand\n\\begin{equation}\n\\varsigma_{n}=\\langle\\frac{z^{n}-z^{-n}}{2i},a_{n-1}^{-1}\\sigma_{n-1}\\rangle_{\\mathbb{R}},\\,\\,\n\\zeta_{n}=\\langle\\frac{z^{n}-z^{-n}}{2i},b_{n-1}^{-1}\\pi_{n-1}\\rangle_{\\mathbb{R}}.\n\\end{equation}\n\nThroughout, as a convention, take $\\sigma_{0}=1$, $\\pi_{0}=0$ and\n$\\beta_{0}=0$ as well as $a_{0}=b_{0}=1$.\n\nIndeed, identifying the unit circle with the interval $[0,2\\pi)$\nvia the map $\\theta\\rightarrow e^{i\\theta}$, we get the\northonormal trigonometric polynomials of the first class $\\pi_{n}(\\theta)$ and\n$\\sigma_{n}(\\theta)$ for the over $\\mathbb{R}$ linearly independent ordered trigonometric system\n\\begin{equation}\n\\{1, \\sin\\theta, \\cos\\theta, \\ldots, \\sin n\\theta, \\cos n\\theta,\n\\ldots\\}\n\\end{equation}\nby the above process when $z=e^{i\\theta},\\,\\theta\\in [0,2\\pi)$.\n\nAs noted in the introduction, by the uniqueness of the solution of the RHC (1.3), we have the following mutual representation theorem for OPUC and OTP.\n\n\\begin{thm}[\\!\\!\\cite{dd08}]\nLet $\\mu$ be a nontrivial probability measure on the unit circle\n$\\partial \\mathbb{D}$, $\\{1, \\pi_{n}, \\sigma_{n}\\}$ be\nthe unique system of the orthonormal Laurent polynomials of the first class on\nthe unit circle with respect to $\\mu$, and $\\{\\Phi_{n}\\}$ be the\nunique system of the monic orthogonal polynomials on the unit circle\nwith respect to $\\mu$. 
Then for any $z\\in \\mathbb{C}$ and $n\\in\n\\mathbb{N}$,\n\\begin{equation}\n\\Phi_{2n-1}(z)=z^{n-1}[a_{n}\\sigma_{n}(z)+(\\beta_{n}+i)b_{n}\\pi_{n}(z)]\n\\end{equation}\nand\n\\begin{equation}\n\\kappa^{2}_{2n}\\Phi^{*}_{2n}(z)=\\frac{1}{2}z^{n}[a^{-1}_{n}(1+\\beta_{n}i)\\sigma_{n}(z)\n-ib^{-1}_{n}\\pi_{n}(z)],\n\\end{equation}\nwhere $\\kappa_{n}$ is the leading coefficient of the orthonormal\npolynomial of order $n$ on the unit circle with respect to $\\mu$, $\\kappa_{n}=\\|\\Phi_{n}\\|^{-1}_{\\mathbb{C}}$, $\\Phi_{n}^{*}$ is the reversed polynomial of $\\Phi_{n}$, and $a_{n},\nb_{n}, \\beta_{n}$ are given in (2.11)-(2.13).\n\\end{thm}\n\nDenote\n$\\Lambda_{n}=-\\frac{1}{2}[a_{n}^{-2}(1+\\beta_{n}^{2})+b_{n}^{-2}]i$,\nby (2.17) and (2.18), we obtain\n\\begin{thm}\n\\begin{equation}\na_{n}\\sigma_{n}(z)=-\\frac{1}{2}z^{-n}[\\Lambda_{n}^{-1}b_{n}^{-2}iz\\Phi_{2n-1}(z)-(1-\\beta_{n}i)\\Phi_{2n}^{*}(z)]\n\\end{equation}\nand\n\\begin{equation}\nb_{n}\\pi_{n}(z)=-\\frac{1}{2}z^{-n}[\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)z\\Phi_{2n-1}(z)-i\\Phi_{2n}^{*}(z)]\n\\end{equation}\nfor $n\\in \\mathbb{N}$ and $z\\in \\mathbb{C}\\setminus\\{0\\}$.\n\\end{thm}\n\n\n\nAs some consequences, we have\n\\begin{thm} [\\!\\!\\cite{dd08}]\n\\begin{equation}\n\\kappa_{2n}^{2}=\\frac{1}{4}[a_{n}^{-2}(1+\\beta_{n}^{2})+b_{n}^{-2}]\n\\end{equation}\nfor $n\\in \\mathbb{N}\\cup\\{0\\}$.\n\\end{thm}\n\n\\begin{thm}\n\\begin{equation}\n\\alpha_{2n-1}=\\frac{1}{4}\\kappa_{2n}^{-2}[b_{n}^{-2}-a_{n}^{-2}(1-\\beta_{n}^{2})]-\\frac{1}{2}\\kappa_{2n}^{-2}a_{n}^{-2}\n\\beta_{n}i\n\\end{equation}\nand\n\\begin{equation}\n\\alpha_{2n-2}=\\frac{1}{2}(\\imath_{n}+\\beta_{n-1}\\varsigma_{n}-\\zeta_{n})-\\frac{i}{2}(\\jmath_{n}-\\imath_{n}\\beta_{n-1}\n+\\varsigma_{n})\n\\end{equation}\nfor $n\\in \\mathbb{N}$.\n\\end{thm}\n\\begin{proof}\n(2.22) is referred to \\cite{dd08}. 
(2.23) follows from (2.11),\n(2.12), (2.17) and the fact\n$\\alpha_{2n-2}=-\\overline{\\Phi_{2n-1}(0)}.$\n\\end{proof}\n\nSince $\\kappa_{n}^{2}\/\\kappa_{n+1}^{2}=1-|\\alpha_{n}|^{2}$ for $n\\in\n\\mathbb{N}\\cup\\{0\\}$, by Theorem 2.3 and 2.4, we get\n\n\\begin{thm}\n\\begin{equation}\n\\kappa_{2n-1}^{2}=[a_{n}^{2}+b_{n}^{2}(1+\\beta^{2}_{n})]^{-1}\n\\end{equation}\nfor $n\\in \\mathbb{N}$.\n\\end{thm}\n\nTherefore, by (2.21) and (2.24), we obtain\n\\begin{thm}\n\\begin{equation}\n\\lim_{n\\rightarrow\n\\infty}a_{n}b_{n}=\\frac{1}{2}\\exp\\Big(\\frac{1}{2\\pi i}\\int_{\\partial\n\\mathbb{D}}\\log w(\\tau)\\frac{d\\tau}{\\tau}\\Big)\n\\end{equation}\nand\n\\begin{equation}\n\\lim_{n\\rightarrow\n\\infty}[a_{n}^{2}+b_{n}^{2}(1+\\beta^{2}_{n})]=\\exp\\Big(\\frac{1}{2\\pi\ni}\\int_{\\partial \\mathbb{D}}\\log w(\\tau)\\frac{d\\tau}{\\tau}\\Big).\n\\end{equation}\n\\end{thm}\n\\begin{proof}\nSince (see \\cite{sze,sim1})\n\\begin{equation}\n\\lim_{n\\rightarrow \\infty}\\kappa_{n}^{-2}=\\exp\\Big(\\frac{1}{2\\pi\ni}\\int_{\\partial \\mathbb{D}}\\log w(\\tau)\\frac{d\\tau}{\\tau}\\Big),\n\\end{equation}\nthen (2.26) follows from (2.24) whereas (2.25) follows from\n\\begin{equation}\n\\kappa_{2n-1}^{2}\\kappa_{2n}^{2}=\\frac{1}{4}a_{n}^{-2}b_{n}^{-2}\n\\end{equation}\nwhich holds by (2.21) and (2.24).\n\\end{proof}\n\nIn addition, we also have\n\\begin{thm}\n\\begin{align}\n&[a_{n}^{-2}(1+\\beta_{n}^{2})+b_{n}^{-2}][a_{n+1}^{2}+b_{n+1}^{2}(1+\\beta_{n+1}^{2})]+(\\imath_{n+1}+\\beta_{n}\\varsigma_{n+1}-\\zeta_{n+1})^{2}\\nonumber\\\\\n&+(\\jmath_{n+1}-\\imath_{n+1}\\beta_{n}\n+\\varsigma_{n+1})^{2}=4\n\\end{align}\nfor $n\\in \\mathbb{N}\\cup\\{0\\}$.\n\\end{thm}\n\\begin{proof}\nIt immediately follows from (2.21), (2.23) and (2.24) since\n$\\kappa_{2n}^{2}\/\\kappa_{2n+1}^{2}=1-|\\alpha_{2n}|^{2}$ for $n\\in\n\\mathbb{N}\\cup\\{0\\}$.\n\\end{proof}\n\nIn the rest of this section, we give another identity on the coefficients $a_{n}, b_{n},\\beta_{n}$ of OTP and 
$\\alpha_{n}, \\kappa_{n}$ of OPUC. The main idea will also be used in Section 5 below. To do so, we need the following simple facts.\n\\begin{lem}\nLet $\\Phi_{n}$ be the monic orthogonal polynomial on the unit circle of order $n$ with respect to $\\mu$, and $\\Phi^{*}_{n}$ be the reversed polynomial of $\\Phi_{n}$, then\n\\begin{equation}\n\\langle 1,\\Phi_{n}^{*}\\rangle_{\\mathbb{R}}=\\int_{\\partial \\mathbb{D}}\\Phi_{n}^{*}(\\tau)d\\mu(\\tau)=\\kappa_{n}^{-2}\n\\end{equation}\nand\n\\begin{equation}\n\\langle 1,z\\Phi_{n}\\rangle_{\\mathbb{R}}=\\int_{\\partial \\mathbb{D}}\\tau\\Phi_{n}(\\tau)d\\mu(\\tau)=\\alpha_{n}^{-1}\\Big(\\kappa_{n}^{-2}-\\kappa_{n+1}^{-2}\\Big),\n\\end{equation}\nwhere the Verblunsky coefficient $\\alpha_{n}$ is restricted in $\\mathbb{D}\\setminus\\{0\\}$.\n\\end{lem}\n\\begin{proof}\nBy (2.6), we have\n\\begin{align}\n\\int_{\\partial \\mathbb{D}}\\Phi_{n}^{*}(\\tau)d\\mu(\\tau)=\\int_{\\partial \\mathbb{D}}\\tau^{n}\\overline{\\Phi_{n}(\\tau)}d\\mu(\\tau)=\\langle z^{n},\\Phi_{n}\\rangle_{\\mathbb{C}}=\\langle \\Phi_{n},\\Phi_{n}\\rangle_{\\mathbb{C}}.\n\\end{align}\nThus (2.30) holds on account of (2.4). 
For $\\alpha_{n}\\in\\mathbb{D}\\setminus\\{0\\}$, by Szeg\\\"o recurrence (2.7) (or see (4.12) below),\n\\begin{align}\n\\int_{\\partial \\mathbb{D}}\\tau\\Phi_{n}(\\tau)d\\mu(\\tau)=\\alpha_{n}^{-1}\\Big[\\int_{\\partial \\mathbb{D}}\\Phi_{n}^{*}(\\tau)d\\mu(\\tau)-\\int_{\\partial \\mathbb{D}}\\Phi_{n+1}^{*}(\\tau)d\\mu(\\tau)\\Big].\n\\end{align}\nTherefore, (2.31) follows from (2.30).\n\\end{proof}\n\n\\begin{thm}\n\\begin{align}\n&\\alpha_{2n-1}\\beta_{n}+\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)(1+\\alpha_{2n-1})\\kappa_{2n-1}^{-2}\\nonumber\n\\\\-&\\frac{1}{4}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)+\\alpha_{2n-1}b_{n}^{-2}i\\Big]\\kappa_{2n}^{-2}=0\n\\end{align}\nfor $n\\in \\mathbb{N}$.\n\\end{thm}\n\\begin{proof}\nIn the case of $\\alpha_{2n-1}=0$, since $\\kappa_{2n-1}=\\kappa_{2n}$ by $\\kappa_{2n-1}^{2}\/\\kappa_{2n}^{2}=1-|\\alpha_{2n-1}|^{2}$, it is easy to get (2.34).\nSo in what follows, we always assume that $\\alpha_{2n-1}\\in \\mathbb{D}\\setminus\\{0\\}$.\n\nBy Theorem 2.2, Lemma 2.8 and Szeg\\\"o recurrence, we have\n\\begin{align}\n\\langle z^{n},b_{n}\\pi_{n}\\rangle_{\\mathbb{R}}=&\\langle z^{n},-\\frac{1}{2}z^{-n}[\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)z\\Phi_{2n-1}-i\\Phi_{2n}^{*}]\\rangle_{\\mathbb{R}}\\nonumber\\\\\n=&-\\frac{1}{2}\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)\\langle 1,z\\Phi_{2n-1}\\rangle_{\\mathbb{R}}+\\frac{i}{2}\\langle 1,\\Phi_{2n}^{*}\\rangle_{\\mathbb{R}}\\nonumber\\\\\n=&-\\frac{1}{2}\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}\\kappa_{2n-1}^{-2}+\\Big[\\frac{1}{2}\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+\\frac{i}{2}\\Big]\\kappa_{2n}^{-2}\\nonumber\n\\end{align}\nand\n\\begin{align}\n\\langle z^{-n},b_{n}\\pi_{n}\\rangle_{\\mathbb{R}}=&\\langle 
z^{-n},-\\frac{1}{2}z^{-n}[\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)z\\Phi_{2n-1}-i\\Phi_{2n}^{*}]\\rangle_{\\mathbb{R}}\\nonumber\\\\\n=&-\\frac{1}{2}\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)\\langle 1,z^{-(2n-1)}\\Phi_{2n-1}\\rangle_{\\mathbb{R}}+\\frac{i}{2}\\langle 1,z^{-2n}\\Phi_{2n}^{*}\\rangle_{\\mathbb{R}}\\nonumber\\\\\n=&-\\frac{1}{2}\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)\\langle z^{(2n-1)},\\Phi_{2n-1}\\rangle_{\\mathbb{C}}+\\frac{i}{2}\\overline{\\langle 1,\\Phi_{2n}\\rangle}_{\\mathbb{R}}\\nonumber\\\\\n=&-\\frac{1}{2}\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)\\langle \\Phi_{2n-1},\\Phi_{2n-1}\\rangle_{\\mathbb{C}}\\nonumber\\\\\n=&-\\frac{1}{2}\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)\\kappa_{2n-1}^{-2}.\n\\end{align}\nThus\n\\begin{align}\n\\beta_{n}=&\\langle\\frac{z^{n}+z^{-n}}{2},b_{n}^{-1}\\pi_{n}\\rangle_{\\mathbb{R}}=b_{n}^{-2}\\langle\\frac{z^{n}+z^{-n}}{2},b_{n}\\pi_{n}\\rangle_{\\mathbb{R}}\\nonumber\\\\\n=&-\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}+1\\Big)\\kappa_{2n-1}^{-2}\\nonumber\\\\\n&+\\frac{1}{4}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+b_{n}^{-2}i\\Big]\\kappa_{2n}^{-2}.\n\\end{align}\nMultiplying by $\\alpha_{2n-1}$ on two sides of (2.36), (2.34) immediately follows.\n\\end{proof}\n\n\\begin{rem}\nNoting\n\\begin{equation}\n\\Lambda_{n}=-\\frac{1}{2}[a_{n}^{-2}(1+\\beta_{n}^{2})+b_{n}^{-2}]i=-2\\kappa_{2n}^{2}i,\n\\end{equation}\nwe can also get (2.34) by directly invoking (2.21), (2.22) and (2.24) together.\n\\end{rem}\n\n\n\n\n\\section{Favard, Baxter, Geronimus,\nRakhmanov and Szeg\\\"o theorems}\n\nIn the present section, some theorems are obtained for orthogonal trigonometric polynomials, such\nas Favard, Baxter, Geronimus, Rakhmanov theorems and so on, which play important roles in the theory\nof OPUC\n\\cite{sim1,sim2}.\n\n\\subsection{Weak Favard Theorem} We begin with a weak Favard Theorem for OTP.\nFavard theorem for OPRL is 
about\nthe orthogonality of a system of polynomials which satisfies a\nthree-term recurrence with appropriate coefficients \\cite{sze,ma}.\nIts OPUC version is well-known and also called Verblunsky theorem\n\\cite{sim1,enzg}, that is, if $\\{\\alpha_{n}^{(0)}\\}_{n=0}^{\\infty}$ is a\nsequence of complex numbers in $\\mathbb{D}$, then there exists a\nunique measure $d\\mu$ such that $\\alpha_{n}(d\\mu)=\\alpha_{n}^{(0)}$,\nwhere $\\alpha_{n}(d\\mu)$ are the associated Verblunsky coefficients\nof $d\\mu$.\n\nFor orthogonal trigonometric polynomials, we have the following Favard theorem in a weak form.\n\\begin{thm} Let\n$\\{(a_{n}^{(0)},b_{n}^{(0)},\\beta_{n}^{(0)})\\}_{n=0}^{\\infty}$ with\n$a_{0}^{(0)},b_{0}^{(0)}=1$ and $\\beta_{0}^{(0)}=0$ be a system of\nthree-tuples of real numbers satisfying\n\\begin{align}\n&[(a_{n}^{(0)})^{2}+(b_{n}^{(0)})^{2}(1+(\\beta_{n}^{(0)})^{2})]\n[(a_{n+1}^{(0)})^{2}+(b_{n+1}^{(0)})^{2}(1+(\\beta_{n+1}^{(0)})^{2})]<4(a_{n}^{(0)})^{2}(b_{n}^{(0)})^{2}\n\\end{align}\nwith $a_{n}^{(0)},b_{n}^{(0)}>0$ for $n\\in \\mathbb{N}\\cup\\{0\\}$,\nthen there exists a nontrivial probability measure $d\\mu$ on\n$\\partial \\mathbb{D}$ such that $a_{n}(d\\mu)=a_{n}^{(0)}$,\n$b_{n}(d\\mu)=b_{n}^{(0)}$ and $\\beta_{n}(d\\mu)=\\beta_{n}^{(0)}$,\nwhere $a_{n}(d\\mu),b_{n}(d\\mu),\\beta_{n}(d\\mu)$ are associated\ncoefficients of $d\\mu$ defined by (2.11)-(2.13).\n\\end{thm}\n\n\\begin{proof}\nFor $n\\in \\mathbb{N}\\cup\\{0\\}$, define 
\\begin{equation}\n\\kappa_{2n}^{(0)}=\\frac{1}{2}\\Big[(a_{n}^{(0)})^{-2}\\big(1+(\\beta_{n}^{(0)})^{2}\\big)+(b_{n}^{(0)})^{-2}\\Big]^{\\frac{1}{2}}\n\\end{equation}\nand\n\\begin{equation}\n\\kappa_{2n+1}^{(0)}=\\Big[(a_{n+1}^{(0)})^{2}+(b_{n+1}^{(0)})^{2}\\big(1+(\\beta_{n+1}^{(0)})^{2}\\big)\\Big]^{-\\frac{1}{2}}.\n\\end{equation}\n\nLet\n\\begin{equation}\n\\alpha_{2n-1}^{(0)}=\\frac{1}{4}(\\kappa_{2n}^{(0)})^{-2}\\Big[(b_{n}^{(0)})^{-2}-(a_{n}^{(0)})^{-2}\\big(1-(\\beta_{n}^{(0)})^{2}\\big)\\Big]\n-\\frac{1}{2}(\\kappa_{2n}^{(0)})^{-2}(a_{n}^{(0)})^{-2}\n(\\beta_{n}^{(0)})i,\n\\end{equation}\nthen $\\alpha_{2n-1}^{(0)}\\in \\mathbb{D}$ since\n\\begin{equation}\n\\Big|\\alpha_{2n-1}^{(0)}\\Big|^{2}=\\frac{(\\kappa_{2n}^{(0)})^{4}-\\frac{1}{4}(a_{n}^{(0)})^{-2}(b_{n}^{(0)})^{-2}}\n{(\\kappa_{2n}^{(0)})^{4}}=1-\\frac{(\\kappa_{2n-1}^{(0)})^{2}}{(\\kappa_{2n}^{(0)})^{2}}\n\\end{equation}\nand $a_{n}^{(0)},b_{n}^{(0)}>0$. Note that (3.1) is equivalent to\n\\begin{equation}\n\\frac{\\kappa_{2n}^{(0)}}{\\kappa_{2n+1}^{(0)}}<1.\n\\end{equation}\n\nArbitrarily choose a sequence $\\{\\alpha_{2n}^{(0)}\\}_{n=0}^{\\infty}$\nsuch that\n\\begin{equation}\n\\Big|\\alpha_{2n}^{(0)}\\Big|=\\sqrt{1-(\\kappa_{2n}^{(0)})^{2}\\big\/(\\kappa_{2n+1}^{(0)})^{2}}\n\\end{equation}\nand fix it, then $\\alpha_{2n}^{(0)}\\in\\mathbb{D}$ for $n\\in\n\\mathbb{N}\\cup\\{0\\}$ by (3.6).\n\nTherefore, for this fixed sequence\n$\\{\\alpha_{n}^{(0)}\\}_{n=0}^{\\infty}$, by Verblunsky theorem, there\nexists a unique nontrivial probability measure $d\\mu$ on $\\partial\n\\mathbb{D}$ such that\n\\begin{equation}\n\\alpha_{n}(d\\mu)=\\alpha_{n}^{(0)}\n\\end{equation} for $n\\in\n\\mathbb{N}\\cup\\{0\\}$. 
Then for $n\\in \\mathbb{N}\\cup\\{0\\}$,\n\\begin{equation}\n\\kappa_{n}(d\\mu)=\\kappa_{n}^{(0)}\n\\end{equation}\nsince\n$\\kappa_{n}(d\\mu)=\\prod_{j=0}^{n-1}(1-|\\alpha_{j}(d\\mu)|^{2})^{-\\frac{1}{2}}$\n(see \\cite{sim1}).\n\nSuppose that $\\{\\Phi_{n}(d\\mu,z)\\}_{n=0}^{\\infty}$ is the sequence\nof monic orthogonal polynomials on the unit circle with respect to\n$d\\mu$, set\n\\begin{equation}\n\\Sigma_{n}(z)=-\\frac{1}{2}z^{-n}[(\\Lambda_{n}^{(0)})^{-1}(b_{n}^{(0)})^{-2}iz\\Phi_{2n-1}(d\\mu,z)\n-(1-\\beta_{n}^{(0)}i)\\Phi_{2n}^{*}(d\\mu,z)]\n\\end{equation}\nand\n\\begin{equation}\n\\Pi_{n}(z)=-\\frac{1}{2}z^{-n}[(\\Lambda_{n}^{(0)})^{-1}(a_{n}^{(0)})^{-2}(1+\\beta_{n}^{(0)}i)z\\Phi_{2n-1}(d\\mu,z)\n-i\\Phi_{2n}^{*}(d\\mu,z)]\n\\end{equation}\nfor $n\\in \\mathbb{N}$ and $z\\in \\mathbb{C}\\setminus\\{0\\}$, where\n$\\Lambda_{n}^{(0)}=-\\frac{1}{2}\\Big[(a_{n}^{(0)})^{-2}\\big(1+(\\beta_{n}^{(0)})^{2}\\big)+(b_{n}^{(0)})^{-2}\\Big]i$.\nObviously,\n\\begin{equation}\n\\Lambda_{n}^{(0)}=-2(\\kappa_{2n}^{(0)})^{2}i.\n\\end{equation}\nBy Szeg\\\"o recurrence and (3.8),\n\\begin{equation}\nz\\Phi_{2n-1}(d\\mu,z)=\\Phi_{2n}(d\\mu,z)+\\overline{\\alpha^{(0)}_{2n-1}}\\Phi^{*}_{2n-1}(d\\mu,z).\n\\end{equation}\nHence by the orthogonality of $\\Phi_{n}(d\\mu, z)$ and\n$\\Phi_{n}^{*}(d\\mu, z)$, we get\n\\begin{equation}\n\\langle z^{\\pm j}, \\Sigma_{n}\\rangle_{\\mathbb{R}}=\\langle z^{\\pm j},\n\\Pi_{n}\\rangle_{\\mathbb{R}}=0,\\,\\,\\,\\,j=0,1,\\ldots,n-1.\n\\end{equation}\nMoreover,\n\\begin{equation}\n\\langle z^{n},\n\\Sigma_{n}\\rangle_{\\mathbb{R}}=(a_{n}^{(0)})^{2}\\overline{\\alpha^{(0)}_{2n-1}}+\\frac{1}{2}(\\kappa_{2n}^{(0)})^{-2}(1-\\beta_{n}^{(0)}i),\n\\end{equation}\n\\begin{equation}\n\\langle z^{-n}, \\Sigma_{n}\\rangle_{\\mathbb{R}}=(a_{n}^{(0)})^{2},\n\\end{equation}\n\\begin{equation}\n\\langle 
z^{n},\n\\Pi_{n}\\rangle_{\\mathbb{R}}=(b_{n}^{(0)})^{2}(\\beta_{n}^{(0)}-i)\\overline{\\alpha^{(0)}_{2n-1}}+\\frac{1}{2}(\\kappa_{2n}^{(0)})^{-2}i,\n\\end{equation}\nand\n\\begin{equation}\n\\langle z^{-n},\n\\Pi_{n}\\rangle_{\\mathbb{R}}=(b_{n}^{(0)})^{2}(\\beta_{n}^{(0)}-i)\n\\end{equation}\nfollow from (3.9), (3.12) and the fact\n$||\\Phi_{n}(d\\mu)||_{\\mathbb{R}}^{2}=||\\Phi_{n}^{*}(d\\mu)||_{\\mathbb{R}}^{2}=[\\kappa_{n}(d\\mu)]^{^{-2}}$\nas well as\n$(\\kappa_{2n-1}^{(0)})^{2}(\\kappa_{2n}^{(0)})^{2}=\\frac{1}{4}(a_{n}^{(0)})^{-2}(b_{n}^{(0)})^{-2}$.\nBy (3.4),\n\\begin{equation}\n\\overline{\\alpha^{(0)}_{2n-1}}-1=-\\frac{1}{2}(\\kappa_{2n}^{(0)})^{-2}(a_{n}^{(0)})^{-2}(1-\\beta_{n}^{(0)}i)\n\\end{equation}\nand\n\\begin{equation}\n\\overline{\\alpha^{(0)}_{2n-1}}+1=\\frac{1}{2}(\\kappa_{2n}^{(0)})^{-2}\\Big[(a_{n}^{(0)})^{-2}(\\beta_{n}^{(0)})^{2}\n+(b_{n}^{(0)})^{-2}\\Big]+\\frac{1}{2}(\\kappa_{2n}^{(0)})^{-2}(a_{n}^{(0)})^{-2}\\beta_{n}^{(0)}i.\n\\end{equation}\nSo\n\\begin{equation}\n\\langle \\frac{z^{n}+z^{-n}}{2},\n\\Sigma_{n}\\rangle_{\\mathbb{R}}=(a_{n}^{(0)})^{2},\\,\\,\\,\\langle\n\\frac{z^{n}-z^{-n}}{2i},\n\\Pi_{n}\\rangle_{\\mathbb{R}}=(b_{n}^{(0)})^{2}\n\\end{equation}\nand\n\\begin{equation}\n\\langle \\frac{z^{n}-z^{-n}}{2i}, \\Sigma_{n}\\rangle_{\\mathbb{R}}=0\n\\end{equation}\nas well as\n\\begin{equation}\n\\langle \\frac{z^{n}+z^{-n}}{2},\n\\Pi_{n}\\rangle_{\\mathbb{R}}=(b_{n}^{(0)})^{2}\\beta_{n}^{(0)}.\n\\end{equation}\n\nIn addition, it is easy to check that the\ncoefficients of $z^{n}$ and $z^{-n}$ in $\\Pi_{n}(z)$ are\nrespectively $\\frac{1}{2i}$ and $-\\frac{1}{2i}$ whereas both of ones\nin $\\Sigma_{n}(z)-\\beta_{n}^{(0)}\\Pi_{n}(z)$ are $\\frac{1}{2}$. 
Noting\n(3.14) and (3.22), this fact means that $\\Sigma_{n}(z)$ and\n$\\Pi_{n}(z)$ are just the ``monic\" orthogonal Laurent\npolynomials of the first class on the unit circle with respect to $d\\mu$, i.e.\n\\begin{equation}\n\\Sigma_{n}(z)=a_{n}(d\\mu)\\sigma_{n}(d\\mu,z)\\,\\,\\,\\,\\,\\text{and}\\,\\,\\,\\,\\,\\Pi_{n}(z)=b_{n}(d\\mu)\\pi_{n}(d\\mu,z).\n\\end{equation}\nSince\n\\begin{equation}\n\\langle a_{n}(d\\mu)\\sigma_{n}(d\\mu),\na_{n}(d\\mu)\\sigma_{n}(d\\mu)\\rangle_{\\mathbb{R}}=a_{n}^{2}(d\\mu),\n\\end{equation}\n\\begin{equation}\n\\langle b_{n}(d\\mu)\\pi_{n}(d\\mu),\nb_{n}(d\\mu)\\pi_{n}(d\\mu)\\rangle_{\\mathbb{R}}=b_{n}^{2}(d\\mu)\n\\end{equation}\nand\n\\begin{equation}\n\\langle \\frac{z^{n}+z^{-n}}{2},\nb_{n}(d\\mu)\\pi_{n}(d\\mu)\\rangle_{\\mathbb{R}}=b_{n}^{2}(d\\mu)\\beta_{n}(d\\mu),\n\\end{equation}\nby (3.21) and (3.23) we obtain\n\\begin{equation}\na_{n}(d\\mu)=a_{n}^{(0)},\\,\\,\\,b_{n}(d\\mu)=b_{n}^{(0)},\\,\\,\\,\\beta_{n}(d\\mu)=\\beta_{n}^{(0)}.\n\\end{equation}\n\\end{proof}\n\n\\begin{rem}\nFor the sequence of three-tuples\n$(a_{n}^{(0)},b_{n}^{(0)},\\beta_{n}^{(0)})$ fulfilling (3.1) alone, the measure $d\\mu$ in (3.28) is not unique since, by the above proof, the sequence\ndefinitely determines the Verblunsky coefficients with odd subscripts but\nnot those with even subscripts.\n\nFor $n\\in \\mathbb{N}$, set\n\\begin{equation}\n\\imath_{n}(d\\mu)=\\langle\\frac{z^{n}+z^{-n}}{2},(a_{n-1}^{(0)})^{-1}\\sigma_{n-1}(d\\mu)\\rangle_{\\mathbb{R}},\n\\end{equation}\n\\begin{equation}\n\\jmath_{n}(d\\mu)=\\langle\\frac{z^{n}+z^{-n}}{2},(b_{n-1}^{(0)})^{-1}\\pi_{n-1}(d\\mu)\\rangle_{\\mathbb{R}},\n\\end{equation}\n\\begin{equation}\n\\varsigma_{n}(d\\mu)=\\langle\\frac{z^{n}-z^{-n}}{2i},(a_{n-1}^{(0)})^{-1}\\sigma_{n-1}(d\\mu)\\rangle_{\\mathbb{R}},\n\\end{equation}\nand\n\\begin{equation}\n\\zeta_{n}(d\\mu)=\\langle\\frac{z^{n}-z^{-n}}{2i},(b_{n-1}^{(0)})^{-1}\\pi_{n-1}(d\\mu)\\rangle_{\\mathbb{R}}.\n\\end{equation}\nThen the measure 
$d\\mu$ is unique for the sequence of seven-tuples\n\\begin{equation}\n(a_{n}^{(0)},b_{n}^{(0)},\\beta_{n}^{(0)},\\imath_{n}(d\\mu),\\jmath_{n}(d\\mu),\\varsigma_{n}(d\\mu),\\zeta_{n}(d\\mu))\n\\end{equation}\nsatisfying (3.1), by Theorem 2.4 and the Verblunsky theorem. Since $d\\mu$\nis partly dependent on $(a_{n}^{(0)},b_{n}^{(0)},\\beta_{n}^{(0)})$ and\n$\\imath_{n}(d\\mu),\\jmath_{n}(d\\mu),\\varsigma_{n}(d\\mu),\\zeta_{n}(d\\mu)$\nare dependent on $d\\mu$, $a_{n}^{(0)}$ and $b_{n}^{(0)}$, the\nsequence of seven-tuples (3.33) satisfying (3.1) is partly dependent on the\nsequence of three-tuples $(a_{n}^{(0)},b_{n}^{(0)},\\beta_{n}^{(0)})$\nfulfilling (3.1). Considering the uniqueness of $d\\mu$ for the\nsequence of (3.33) with (3.1), we say that $d\\mu$ is selectively\nunique for the sequence\n$\\{(a_{n}^{(0)},b_{n}^{(0)},\\beta_{n}^{(0)})\\}_{n=0}^{\\infty}$\nsatisfying (3.1) and $a_{n}^{(0)},b_{n}^{(0)}>0$ as well as\n$a_{0}^{(0)}=b_{0}^{(0)}=1$ and $\\beta_{0}^{(0)}=0$. In Section 5, we will give a strong Favard theorem which illuminates in detail the relation between the uniqueness of $d\\mu$ and a sequence of seven-tuples, $\\{(a_{n}^{(0)}, b_{n}^{(0)},\\beta_{n}^{(0)},\\imath_{n}^{(0)},\\jmath_{n}^{(0)},\\varsigma_{n}^{(0)},\\zeta_{n}^{(0)})\\}$, with some additional properties.\n\\end{rem}\n\nSimilarly, by Theorems 2.3, 2.4, 2.7 and using the corresponding theorems for OPUC, we also have the Baxter, Geronimus,\nRakhmanov, Szeg\\\"o and strong Szeg\\\"o theorems for OTP in what follows.\n\\subsection{Baxter Theorem}\nLet\n\\begin{equation}\nc_{n}=\\int_{\\partial \\mathbb{D}}\\overline{\\tau}^{n}d\\mu(\\tau),\n\\,\\,\\,n\\in \\mathbb{N}\\cup\\{0\\}\n\\end{equation}\nbe the moments of $\\mu$. Baxter theorem for OPUC states that\n$\\sum_{n=0}^{\\infty}|\\alpha_{n}|<\\infty$ if and only if\n$\\sum_{n=0}^{\\infty}|c_{n}|<\\infty$ and\n$d\\mu(\\tau)=w(\\tau)\\frac{d\\tau}{2\\pi i\\tau}$ with $w(\\tau)$\ncontinuous and $\\min_{\\tau\\in\\partial \\mathbb{D}}w(\\tau)>0$.\n\nFor 
orthogonal trigonometric polynomials, we have Baxter theorem as follows.\n\\begin{thm}\nLet $\\mu$ be a nontrivial probability measure on $\\partial\n\\mathbb{D}$, $a_{n}, b_{n}, \\beta_{n}$ be the associated\ncoefficients given in (2.11)-(2.13) and $c_{n}$ be the moments of\n$\\mu$ defined by (3.34), then\n\\begin{align}\n&\\sum_{n=0}^{\\infty}\\sqrt{1-\\frac{1}{4}[a_{n}^{-2}(1+\\beta_{n}^{2})+b_{n}^{-2}][a_{n+1}^{2}+b_{n+1}^{2}(1+\\beta_{n+1}^{2})]}\n\\nonumber\\\\\n+&\\sum_{n=0}^{\\infty}\\sqrt{\\frac{a_{n}^{4}+b_{n}^{4}(1+\\beta_{n}^{2})^{2}+2a_{n}^{2}b_{n}^{2}(\\beta_{n}^{2}-1)}\n{a_{n}^{4}+b_{n}^{4}(1+\\beta_{n}^{2})^{2}+2a_{n}^{2}b_{n}^{2}(\\beta_{n}^{2}+1)}}<\\infty\n\\end{align}\nis equivalent to $\\sum_{n=0}^{\\infty}|c_{n}|<\\infty$ and\n$d\\mu(\\tau)=w(\\tau)\\frac{d\\tau}{2\\pi i\\tau}$ with $w(\\tau)$\ncontinuous and $\\min_{\\tau\\in\\partial \\mathbb{D}}w(\\tau)>0$.\n\\end{thm}\n\n\n\\subsection{Geronimus Theorem} To discuss Geronimus theorem, it is\nnecessary to introduce some basic notions of the Schur algorithm (see\n\\cite{sim1}).\n\nAn analytic function $F$ on $\\mathbb{D}$ is called a Carath\\'eodory\nfunction if and only if $F(0)=1$ and $\\Re F(z)>0$ on $\\mathbb{D}$.\nAn analytic function $f$ on $\\mathbb{D}$ is called a Schur function\nif and only if $\\sup_{z\\in \\mathbb{D}}|f(z)|<1$. 
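These two notions can be made concrete numerically. The sketch below is an illustration only (the weight $w(\theta)=1+\frac{1}{2}\cos\theta$ and the grid quadrature are assumptions, not from the paper): it evaluates the Carath\'eodory function $F(z)=\int_{\partial\mathbb{D}}\frac{\tau+z}{\tau-z}d\mu(\tau)$ of a measure and the function $f(z)=\frac{1}{z}\frac{F(z)-1}{F(z)+1}$, and checks the defining properties $F(0)=1$, $\Re F>0$ and $|f|<1$ at sample points of $\mathbb{D}$.

```python
import numpy as np

# Assumed example weight w(theta) = 1 + cos(theta)/2 on the unit circle;
# uniform-grid quadrature of d mu = w(theta) dtheta / (2 pi).
theta = np.linspace(0.0, 2.0 * np.pi, 4001)[:-1]
tau = np.exp(1j * theta)
dmu = (1.0 + 0.5 * np.cos(theta)) / len(theta)

def F(z):
    # Caratheodory function of the measure
    return np.sum((tau + z) / (tau - z) * dmu)

def f(z):
    # associated Schur function
    return (F(z) - 1.0) / (z * (F(z) + 1.0))

# Sample points in the open unit disc
pts = [r * np.exp(1j * p) for r in (0.3, 0.6, 0.9)
       for p in np.linspace(0.0, 6.28, 7)]
print(abs(F(0.0) - 1.0), max(abs(f(z)) for z in pts))
```

For this particular weight one can check by hand that $F(z)=1+z/2$ and hence $f(z)=1/(4+z)$, so $f$ is indeed a Schur function.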
Let\n\\begin{equation}\nF(z)=\\int_{\\partial\\mathbb{D}}\\frac{\\tau+z}{\\tau-z}d\\mu(\\tau)\n\\end{equation}\nbe an associated Carath\\'eodory function of $\\mu$, then\n\\begin{equation}\nf(z)=\\frac{1}{z}\\frac{F(z)-1}{F(z)+1}\n\\end{equation}\nis a Schur function related to $\\mu$.\n\nStarting with a Schur function $f_{0}$, the Schur algorithm\nprovides an approach to successively map one Schur function to\nanother by a sequence of transforms of the form\n\\begin{equation}\n\\begin{cases}\nf_{n+1}(z)=\\displaystyle\\frac{1}{z}\\frac{f_{n}(z)-\\gamma_{n}}{1-\\overline{\\gamma}_{n}f_{n}(z)},\\\\[4mm]\n\\gamma_{n}=f_{n}(0).\n\\end{cases}\n\\end{equation}\n$f_{n}$ are called Schur iterates and $\\gamma_{n}$ are called Schur\nparameters associated to $f_{0}$. Due to Schur, it is well known\nthat there is a one-to-one correspondence between the set of Schur\nfunctions which are not finite Blaschke products and the set of\nsequences $\\{\\gamma_{n}\\}_{n=0}^{\\infty}$ in $\\mathbb{D}$.\nGeronimus theorem for OPUC asserts that if $\\mu$ is a nontrivial\nprobability measure on $\\partial \\mathbb{D}$, the Schur parameters\n$\\{\\gamma_{n}\\}_{n=0}^{\\infty}$ associated to $f_{0}$ related to\n$\\mu$ defined by (3.36) and (3.37) are identical to the Verblunsky\ncoefficients $\\{\\alpha_{n}\\}_{n=0}^{\\infty}$.\n\nFor orthogonal trigonometric polynomials, we have Geronimus theorem as follows.\n\\begin{thm}\nLet $\\mu$ be a nontrivial probability measure on $\\partial\n\\mathbb{D}$, if $\\gamma_{n}$ are Schur parameters and $a_{n}$,\n$b_{n}$, $\\beta_{n}$, $\\imath_{n}$, $\\jmath_{n}$, $\\varsigma_{n}$,\n$\\zeta_{n}$ are coefficients associated to $\\mu$ defined by\n(2.11)-(2.15), 
then\n\\begin{equation}\n\\gamma_{2n-1}=\\frac{a_{n}^{2}-b_{n}^{2}(1-\\beta_{n}^{2})}{a_{n}^{2}+b_{n}^{2}(1+\\beta_{n}^{2})}\n-\\frac{2b_{n}^{2}\\beta_{n}}{a_{n}^{2}+b_{n}^{2}(1+\\beta_{n}^{2})}i\n\\end{equation}\nand\n\\begin{equation}\n\\gamma_{2n-2}=\\frac{1}{2}(\\imath_{n}+\\beta_{n-1}\\varsigma_{n}-\\zeta_{n})-\\frac{i}{2}(\\jmath_{n}-\\imath_{n}\\beta_{n-1}\n+\\varsigma_{n})\n\\end{equation}\nfor $n\\in \\mathbb{N}$.\n\\end{thm}\n\n\n\n\\subsection{Rakhmanov Theorem and Szeg\\\"o Theorem}\nLet $d\\mu$ have the decomposition form (2.1),\n$\\{\\alpha_{n}\\}_{n=0}^{\\infty}$ be the Verblunsky coefficients of\n$\\mu$, Rakhmanov theorem for OPUC states that if $w(\\tau)>0$ for\na.e. $\\tau\\in\n\\partial\\mathbb{D}$, then $\\lim_{n\\rightarrow\\infty}|\\alpha_{n}|=0$.\nIts OTP version is as follows.\n\\begin{thm}\nLet $\\mu$ be a nontrivial probability measure on $\\partial\n\\mathbb{D}$ with the decomposition form (2.1), $a_{n}, b_{n},\n\\beta_{n}$ be the associated coefficients of $\\mu$ given in\n(2.11)-(2.13). If $w(\\tau)>0$ for a.e. 
$\\tau\\in \\partial\\mathbb{D}$,\nthen\n\\begin{equation}\n\\lim_{n\\rightarrow\\infty}\\frac{a_{n}^{4}+b_{n}^{4}(1+\\beta_{n}^{2})^{2}+2a_{n}^{2}b_{n}^{2}(\\beta_{n}^{2}-1)}\n{a_{n}^{4}+b_{n}^{4}(1+\\beta_{n}^{2})^{2}+2a_{n}^{2}b_{n}^{2}(\\beta_{n}^{2}+1)}=0\n\\end{equation}\nand\n\\begin{equation}\n\\lim_{n\\rightarrow\\infty}\\frac{1}{4}[a_{n}^{-2}(1+\\beta_{n}^{2})+b_{n}^{-2}][a_{n+1}^{2}+b_{n+1}^{2}(1+\\beta_{n+1}^{2})]=1.\n\\end{equation}\n\\end{thm}\n\n\nSzeg\\\"o theorem for OPUC shows that\n\\begin{equation}\n\\prod_{n=0}^{\\infty}(1-|\\alpha_{n}|^{2})=\\exp\\Big(\\frac{1}{2\\pi\ni}\\int_{\\partial \\mathbb{D}}\\log w(\\tau)\\frac{d\\tau}{\\tau}\\Big).\n\\end{equation}\n\n Especially,\n \\begin{equation}\n \\sum_{n=0}^{\\infty}|\\alpha_{n}|^{2}<\\infty\\Longleftrightarrow\n\\frac{1}{2\\pi i}\\int_{\\partial \\mathbb{D}}\\log\nw(\\tau)\\frac{d\\tau}{\\tau}>-\\infty.\n \\end{equation}\n\nIts analog for OTP is\n\\begin{thm}\nLet $\\mu$ be a nontrivial probability measure on $\\partial\n\\mathbb{D}$ with the decomposition form (2.1), $a_{n}, b_{n},\n\\beta_{n}$ be the associated coefficients of $\\mu$ given in\n(2.11)-(2.13). 
Then\n\\begin{equation}\n\\prod_{n=0}^{\\infty}\\frac{a_{n+1}^{2}+b_{n+1}^{2}(1+\\beta_{n+1}^{2})}{a_{n}^{2}+b_{n}^{2}(1+\\beta_{n}^{2})}=\n\\exp\\Big(\\frac{1}{2\\pi i}\\int_{\\partial \\mathbb{D}}\\log\nw(\\tau)\\frac{d\\tau}{\\tau}\\Big).\n\\end{equation}\nIn particular,\n\\begin{align}\n&\\sum_{n=0}^{\\infty}\\left\\{1-\\frac{1}{4}[a_{n}^{-2}(1+\\beta_{n}^{2})+b_{n}^{-2}][a_{n+1}^{2}+b_{n+1}^{2}(1+\\beta_{n+1}^{2})]\\right\\}\n\\nonumber\\\\\n+&\\sum_{n=0}^{\\infty}\\frac{a_{n}^{4}+b_{n}^{4}(1+\\beta_{n}^{2})^{2}+2a_{n}^{2}b_{n}^{2}(\\beta_{n}^{2}-1)}\n{a_{n}^{4}+b_{n}^{4}(1+\\beta_{n}^{2})^{2}+2a_{n}^{2}b_{n}^{2}(\\beta_{n}^{2}+1)}<\\infty\n\\end{align}\nis equivalent to $\\displaystyle\\frac{1}{2\\pi i}\\int_{\\partial\n\\mathbb{D}}\\log w(\\tau)\\frac{d\\tau}{\\tau}>-\\infty$.\n\\end{thm}\n\n\n\\subsection{The Strong Szeg\\\"o Theorem}Let $d\\mu$ have the decomposition form\n(2.1) satisfying the Szeg\\\"o condition\n\\begin{equation}\n\\frac{1}{2\\pi i}\\int_{\\partial \\mathbb{D}}\\log\nw(\\tau)\\frac{d\\tau}{\\tau}>-\\infty,\n\\end{equation}\nit is customary to introduce the Szeg\\\"o function\n\\begin{equation}\nD(z)=\\exp\\Big(\\frac{1}{4\\pi i}\\int_{\\partial\n\\mathbb{D}}\\frac{\\tau+z}{\\tau-z}\\log w(\\tau)\\frac{d\\tau}{\\tau}\\Big).\n\\end{equation}\nIt is easy to see that $D(z)$ is analytic and nonvanishing in\n$\\mathbb{D}$, lies in the Hardy space $H^{2}(\\mathbb{D})$ and\n$\\lim_{r\\uparrow1}D(r\\tau)=D(\\tau)$ for a.e. $\\tau\\in\\partial\n\\mathbb{D}$ as well as $|D(\\tau)|^{2}=w(\\tau)$. 
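The Szeg\"o-type identities above admit a quick numerical illustration. The sketch below (the example weight $w(\theta)=1+\frac{1}{2}\cos\theta$ and the grid quadrature are assumptions for illustration, not part of the paper) computes Verblunsky coefficients from the moments and compares $\prod_{n}(1-|\alpha_{n}|^{2})$ with the geometric mean $\exp\big(\frac{1}{2\pi}\int_{0}^{2\pi}\log w(\theta)\,d\theta\big)$, in accordance with the Szeg\"o theorem for OPUC quoted in (3.43).

```python
import numpy as np

# Assumed example weight w(theta) = 1 + cos(theta)/2 with moments
# c_0 = 1, c_{+-1} = 1/4, c_k = 0 otherwise.
def moment(k):
    return 1.0 if k == 0 else (0.25 if abs(k) == 1 else 0.0)

def verblunsky(n):
    """alpha_n = -conj(Phi_{n+1}(0)) via the Toeplitz system for the monic
    Phi_{n+1}; this weight is real and even, so the coefficients are real."""
    m = n + 1
    T = np.array([[moment(j - k) for k in range(m)] for j in range(m)])
    rhs = np.array([-moment(j - m) for j in range(m)])
    return -np.linalg.solve(T, rhs)[0]

prod = 1.0
for n in range(30):
    prod *= 1.0 - verblunsky(n) ** 2

theta = np.linspace(0.0, 2.0 * np.pi, 4001)[:-1]
gmean = np.exp(np.mean(np.log(1.0 + 0.5 * np.cos(theta))))
print(prod, gmean)  # both approximately (2 + sqrt(3))/4
```

For this particular weight the geometric mean has the closed form $(2+\sqrt{3})/4$, which the truncated product matches to high accuracy since the $\alpha_{n}$ decay geometrically here.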
Let\n\\begin{equation}\nD(z)=\\exp\\Big(\\frac{1}{2}\\hat{L}_{0}+\\sum_{n=1}^{\\infty}\\hat{L}_{n}z^{n}\\Big),\\,\\,\\,z\\in\n\\mathbb{D}.\n\\end{equation}\nThe sharpest form of the strong Szeg\\"o theorem\nfor OPUC, due to Ibragimov (see \\cite{sim1}), states that\n\\begin{equation}\n\\sum_{n=0}^{\\infty}n|\\alpha_{n}|^{2}<\\infty\\Longleftrightarrow\nd\\mu_{s}=0 \\,\\,\\,\\text{and}\n\\,\\,\\,\\sum_{n=0}^{\\infty}n|\\hat{L}_{n}|^{2}<\\infty.\n\\end{equation}\n\nThe corresponding result for OTP is stated in the following theorem.\n\\begin{thm}\nLet $\\mu$ be a nontrivial probability measure on $\\partial\n\\mathbb{D}$ with the decomposition form (2.1) satisfying the Szeg\\"o\ncondition (3.47), $a_{n}, b_{n}, \\beta_{n}$ be the associated\ncoefficients of $\\mu$ given in (2.11)-(2.13), and\n$\\{\\hat{L}_{n}\\}_{n=0}^{\\infty}$ be the Taylor coefficients of the logarithm of the\nSzeg\\"o function $D(z)$ at $z=0$, which are defined by (3.48) and\n(3.49). Then\n\\begin{align}\n&\\sum_{n=0}^{\\infty}2n\\left\\{1-\\frac{1}{4}[a_{n}^{-2}(1+\\beta_{n}^{2})+b_{n}^{-2}][a_{n+1}^{2}+b_{n+1}^{2}(1+\\beta_{n+1}^{2})]\\right\\}\n\\nonumber\\\\\n+&\\sum_{n=0}^{\\infty}(2n-1)\\left\\{\\frac{a_{n}^{4}+b_{n}^{4}(1+\\beta_{n}^{2})^{2}+2a_{n}^{2}b_{n}^{2}(\\beta_{n}^{2}-1)}\n{a_{n}^{4}+b_{n}^{4}(1+\\beta_{n}^{2})^{2}+2a_{n}^{2}b_{n}^{2}(\\beta_{n}^{2}+1)}\\right\\}<\\infty\n\\end{align}\nis equivalent to $d\\mu_{s}=0$ and\n$\\sum_{n=0}^{\\infty}n|\\hat{L}_{n}|^{2}<\\infty$.\n\\end{thm}\n\n\nIn the above, by the mutual representation theorem for OTP and OPUC,\nwe obtain some classical theorems for orthogonal trigonometric\npolynomials corresponding to the ones for orthogonal polynomials on the\nunit circle. In fact, by this theorem, we can\nobtain many more results for orthogonal trigonometric\npolynomials. 
For example, the important and useful Bernstein-Szeg\\\"o\nmeasure can be expressed in terms of orthogonal trigonometric\npolynomials as follows\n\\begin{equation*}\nd\\mu_{n}=\n\\begin{cases}\n\\displaystyle\\frac{a_{m}^{2}+b_{m}^{2}(1+\\beta^{2}_{m})}{|a_{m}\\sigma_{m}(\\theta)+(\\beta_{m}+i)b_{m}\\pi_{m}(\\theta)|^{2}}\n\\frac{d\\theta}{2\\pi},\\,\\,\\,n=2m-1,\\\\[3mm]\n\\displaystyle\\frac{a_{m}^{2}b_{m}^{2}}{a_{m}^{2}+b_{m}^{2}(1+\\beta^{2}_{m})}\\frac{1}{|a_{m}^{-1}(\\beta_{m}-i)\\sigma_{m}(\\theta)\n-b_{m}^{-1}\\pi_{m}(\\theta)|^{2}} \\frac{d\\theta}{2\\pi},\\,\\,\\,n=2m.\n\\end{cases}\n\\end{equation*}\n\n\\section{Identities from Riemann-Hilbert Characterizations}\n\nIn this section, by applying the corresponding RH characterizations, we obtain some identities for OPUC and OTP including Szeg\\\"o recursions for OPUC, four-term recurrences for OTP and some new identities on Cauchy integrals for OPUC and OTP as well as Hilbert transforms for OTP. Let $H(\\partial \\mathbb{D})$ denote the set of all complex-valued and H\\\"older continuous functions defined on $\\partial \\mathbb{D}$. 
For simplicity, we always assume that the weight function $w\\in H(\\partial \\mathbb{D})$ in what follows.\n\n\\subsection{The case of OPUC} The RH characterization for OPUC is uniquely solvable as follows\n\\begin{thm}[\\!\\!\\cite{bdj,dd06}]\nThe RHP (1.2) has a unique solution given by\n\\begin{equation}\nY(z)=\\left(\n \\begin{array}{cc}\n \\Phi_{n}(z) & C[\\tau^{-n}\\Phi_{n}w](z) \\\\\n -\\kappa_{n-1}^{2}\\Phi_{n-1}^{*}(z) & -\\kappa_{n-1}^{2}C[\\tau^{-n}\\Phi_{n-1}^{*}w](z) \\\\\n \\end{array}\n\\right),\n\\end{equation}\nwhere $\\Phi_{n}$ is the monic orthogonal polynomials on the unit circle of order $n$ with respect to the weight $w$, $\\Phi_{n-1}^{*}$ is the reversed polynomial of $\\Phi_{n-1}$, $\\kappa_{n-1}$ is given as in (2.4), and $C$ is the Cauchy integral operator.\n\\end{thm}\n\nConsider the Schwarz reflection of $Y$ defined by\n\\begin{equation}\nY_{1}(z)=\\overline{Y\\left(\\frac{1}{\\overline{z}}\\right)},\\,\\,\\,\\,z\\in \\mathbb{C}\\setminus\\partial \\mathbb{D},\n\\end{equation}\nthen $Y_{1}^{+}(t)=\\overline{Y^{-}(t)}$ and $Y_{1}^{-}(t)=\\overline{Y^{+}(t)}$ for $t\\in \\partial \\mathbb{D}$. 
Therefore, by using the boundary condition in RHP (1.2), we have\n\\begin{equation}\nY_{1}^{+}(t)=Y_{1}^{-}(t)\\left(\n \\begin{array}{cc}\n 1 & -t^{n}w(t) \\\\\n 0 & 1 \\\\\n \\end{array}\n \\right),\\,\\,\\, t\\in \\partial \\mathbb{D}.\n\\end{equation}\nBy a direct evaluation,\n\\begin{equation}\n\\lim_{z\\rightarrow \\infty}Y_{1}(z)=\\left(\n \\begin{array}{cc}\n -\\alpha_{n-1} & \\kappa_{n}^{-2} \\\\\n -\\kappa_{n-1}^{2} & -\\overline{\\alpha}_{n-1} \\\\\n \\end{array}\n \\right).\n\\end{equation}\nMoreover, by the growth condition at $\\infty$ in RHP (1.2), we have\n\\begin{equation}\n\\lim_{z\\rightarrow 0}Y_{1}(z)\\left(\n \\begin{array}{cc}\n z^{n} & 0 \\\\\n 0 & z^{-n} \\\\\n \\end{array}\n \\right)=I.\n\\end{equation}\nLet\n\\begin{align}\nY_{2}(z)=\n\\left(\n \\begin{array}{cc}\n -\\overline{\\alpha}_{n-1} & -\\kappa_{n}^{-2} \\\\\n -\\kappa_{n-1}^{2} & \\alpha_{n-1} \\\\\n \\end{array}\n \\right)Y_{1}(z)\\left(\n \\begin{array}{cc}\n z^{n} & 0 \\\\\n 0 & -z^{-n} \\\\\n \\end{array}\n \\right).\n\\end{align}\nNoting\n\\begin{equation}\n1-|\\alpha_{n-1}|^{2}=\\left(\\frac{\\kappa_{n-1}}{\\kappa_{n}}\\right)^{2},\n\\end{equation}\nby (4.4), (4.6) and simple calculations,\n\\begin{equation}\n\\lim_{z\\rightarrow \\infty}Y_{2}(z)\\left(\n \\begin{array}{cc}\n z^{-n} & 0 \\\\\n 0 & z^{n} \\\\\n \\end{array}\n \\right)=I.\n\\end{equation}\nBy (4.3) and (4.8), $Y_{2}$ satisfies the RH characterization (1.2) for OPUC, viz.\n\\begin{equation} (\\mbox{RHP for $Y_{2}$})\\,\\,\n\\begin{cases}\nY_{2}\\,\\, \\mbox{is analytic in}\\,\\,\\mathbb{C}\\setminus \\partial \\mathbb{D},\\vspace{2mm}\\\\\nY_{2}^{+}(t)=Y_{2}^{-}(t)\\left(\n \\begin{array}{cc}\n 1 & t^{-n}w(t) \\\\\n 0 & 1 \\\\\n \\end{array}\n \\right)\n \\,\\,\\mbox{for} \\,\\,t\\in \\partial \\mathbb{D},\\vspace{2mm}\\\\\nY_{2}(z)=\\left(I+O(\\frac{1}{z})\\right)\\left(\n \\begin{array}{cc}\n z^{n} & 0 \\\\\n 0 & z^{-n} \\\\\n \\end{array}\n \\right)\n \\,\\,\\mbox{as}\\,\\, z\\rightarrow 
\\infty.\n\\end{cases}\n\\end{equation}\n\nBy the uniqueness, $Y_{2}(z)=Y(z)$ for $z\\in \\mathbb{C}\\setminus \\partial \\mathbb{D}$. Namely,\n\n\n\n\n\\begin{equation}\nY(z)=\\left(\n \\begin{array}{cc}\n -\\overline{\\alpha}_{n-1} & -\\kappa_{n}^{-2} \\\\\n -\\kappa_{n-1}^{2} & \\alpha_{n-1} \\\\\n \\end{array}\n \\right)\\overline{Y\\left(\\frac{1}{\\overline{z}}\\right)}\\left(\n \\begin{array}{cc}\n z^{n} & 0 \\\\\n 0 & -z^{-n}\\\\\n \\end{array}\n \\right),\\,\\,z\\in \\mathbb{C}\\setminus\\partial\\mathbb{D}.\n\\end{equation}\n\nBy the above arguments, we have\n\\begin{thm}\nLet $\\Phi_{n}$, $\\Phi_{n-1}^{*}$, $\\alpha_{n-1}$, $\\kappa_{n}$ be as above, then\n\\begin{enumerate}\n \\item [(A)] The identities\n \\begin{equation}\n \\Phi_{n}(z)=-\\overline{\\alpha}_{n-1}\\Phi_{n}^{*}(z)+\\frac{\\kappa_{n-1}^{2}}{\\kappa_{n}^{2}}z\\Phi_{n-1}(z)\n \\end{equation}\n and\n \\begin{equation}\n \\Phi_{n}^{*}(z)=\\Phi_{n-1}^{*}(z)-\\alpha_{n-1}z\\Phi_{n-1}(z)\n \\end{equation}\n hold for $z\\in \\mathbb{C}$;\n \\item [(B)] The identities\n \\begin{align}\n C[\\Phi_{n}w](z)=&z^{n}\\Big[\\overline{\\alpha}_{n-1}\\overline{C[\\Phi_{n}w]\\left(\\frac{1}{\\overline{z}}\\right)}\n -\\frac{\\kappa_{n-1}^{2}}{\\kappa_{n}^{2}}\\overline{C[\\Phi_{n-1}^{*}w]\\left(\\frac{1}{\\overline{z}}\\right)}+\\frac{1}{\\kappa_{n}^{2}}\\Big]\n \\end{align}\n and\n \\begin{align}\n C[\\Phi_{n-1}^{*}w](z)=&-z^{n}\\Big[\\overline{C[\\Phi_{n}w]\\left(\\frac{1}{\\overline{z}}\\right)}\n +\\alpha_{n-1}\\overline{C[\\Phi_{n-1}^{*}w]\\left(\\frac{1}{\\overline{z}}\\right)}-\\frac{1+\\alpha_{n-1}}{\\kappa_{n-1}^{2}}\\Big]\n \\end{align}\n hold for $z\\in \\mathbb{C}\\setminus\\partial\\mathbb{D}$.\n\\end{enumerate}\n\\end{thm}\n\n\\begin{proof}\n(4.11) and (4.12) are obtained by identifying the 11 and 21 entries in left hand side with the ones in right hand side of (4.10).\n\nBy the orthogonality, it is easy to get that (see 
\\cite{dd06})\n\\begin{equation}\nC[\\tau^{-n}\\Phi_{n}w](z)=z^{-n}C[\\Phi_{n}w](z)\n\\end{equation}\nand\n\\begin{equation}\n-\\kappa_{n-1}^{2}C[\\tau^{-n}\\Phi_{n-1}^{*}w](z)=z^{-n}\\Big(-\\kappa_{n-1}^{2}C[\\Phi_{n-1}^{*}w](z)+1\\Big).\n\\end{equation}\nBy identifying the 12 and 22 entries in two sides of (4.10), (4.13) and (4.14) respectively follow from (4.15) and (4.16).\n\\end{proof}\n\n\\begin{rem}\nThe identities (4.11) and (4.12) are just the classical Szeg\\\"o recursions. They are equivalent to each other.\n\\end{rem}\n\n\\subsection{The case of OTP} The RH characterization for OTP is uniquely solvable as follows\n\n\n\\begin{thm}[\\!\\!\\cite{dd08}] The RHP (1.3) has a unique solution given by\n\\begin{equation}\n\\mathfrak{Y}(z)=\\left(\n \\begin{array}{cc}\n z^{n}L(\\sigma_{n}, \\pi_{n})(z) & C[\\tau^{-n}L(\\sigma_{n}, \\pi_{n})w](z) \\\\\n z^{n}\\mathcal{L}(\\sigma_{n-1}, \\pi_{n-1})(z) & C[\\tau^{-n}\\mathcal{L}(\\sigma_{n-1}, \\pi_{n-1})w](z) \\\\\n \\end{array}\n\\right),\n\\end{equation}\nwhere \\begin{equation}\nL(\\sigma_{n}, \\pi_{n})(z)=\\lambda_{1,n}a_{n}\\sigma_{n}(z)+\\lambda_{2,n}b_{n}\\pi_{n}(z),\n\\end{equation}\n\\begin{equation}\n\\mathcal{L}(\\sigma_{n-1}, \\pi_{n-1})(z)=\\lambda_{3,n-1}a_{n-1}\\sigma_{n-1}(z)+\\lambda_{4,n-1}b_{n-1}\\pi_{n-1}(z)\n\\end{equation}\nin which $\\sigma_{n}$ and $\\pi_{n}$ are the orthonormal Laurent polynomials of the first class on the unit circle with respect to the weight $w$, $a_{n}, b_{n}, \\beta_{n}$ are given in (2.11)-(2.13), $\\lambda_{1,n}=1$, $\\lambda_{2,n}=\\beta_{n}+i$, $\\lambda_{3,n-1}=-\\frac{1}{2}a_{n-1}^{-2}(1+\\beta_{n-1} i)$, $\\lambda_{4,n-1}=\\frac{1}{2}b_{n-1}^{-2}i$, and $C$ is the Cauchy integral operator.\n\\end{thm}\n\nSet\n\\begin{equation}\n\\mathfrak{Y}_{1}(z)=\\overline{\\mathfrak{Y}\\left(\\frac{1}{\\overline{z}}\\right)},\\,\\,\\,\\,z\\in \\mathbb{C}\\setminus\\partial \\mathbb{D},\n\\end{equation}\nthen $\\mathfrak{Y}_{1}^{+}(t)=\\overline{\\mathfrak{Y}^{-}(t)}$ and 
$\\mathfrak{Y}_{1}^{-}(t)=\\overline{\\mathfrak{Y}^{+}(t)}$ for $t\\in \\partial \\mathbb{D}$. Therefore, by the boundary and growth conditions in RHP (1.3), we have\n\\begin{equation}\n\\mathfrak{Y}_{1}^{+}(t)=\\mathfrak{Y}_{1}^{-}(t)\\left(\n \\begin{array}{cc}\n 1 & -t^{2n}w(t) \\\\\n 0 & 1 \\\\\n \\end{array}\n \\right),\\,\\,\\, t\\in \\partial \\mathbb{D}\n\\end{equation}\nand\n\\begin{equation}\n\\lim_{z\\rightarrow 0}\\mathfrak{Y}_{1}(z)\\left(\n \\begin{array}{cc}\n z^{2n} & 0 \\\\\n 0 & z^{-2n+1} \\\\\n \\end{array}\n \\right)\n=I.\n\\end{equation}\nMoreover, by straightforward calculations, we have\n\\begin{equation}\n\\lim_{z\\rightarrow \\infty}\\mathfrak{Y}_{1}(z)\\left(\n \\begin{array}{cc}\n z & 0 \\\\\n 0 & 1 \\\\\n \\end{array}\n \\right)\n=\\triangle=\\left(\n \\begin{array}{cc}\n \\triangle_{11} & \\triangle_{12} \\\\\n \\triangle_{21} & \\triangle_{22} \\\\\n \\end{array}\n \\right),\n\\end{equation}\nwhere\n\\begin{align}\n\\triangle_{11}&=-\\frac{1}{2}(\\imath_{n}+\\beta_{n-1}\\varsigma_{n}-\\zeta_{n})+\\frac{i}{2}(\\jmath_{n}-\\imath_{n}\\beta_{n-1}\n+\\varsigma_{n})\\nonumber\\\\\n&=-\\alpha_{2n-2}\\,\\, (\\mbox{by Theorem 2.4}),\\\\\n\\triangle_{12}&=a_{n}^{2}+b_{n}^{2}(1+\\beta_{n}^{2})=\\kappa_{2n-1}^{-2}\\,\\, (\\mbox{by Theorem 2.5}),\\\\\n\\triangle_{21}&=-\\frac{1}{4}\\Big(a_{n-1}^{-2}(1+\\beta_{n-1}^{2})+b_{n-1}^{-2}\\Big)=-\\kappa_{2n-2}^{2}\\,\\, (\\mbox{by Theorem 2.3}),\\\\\n\\triangle_{22}&=-\\frac{1}{2}(\\imath_{n}+\\varsigma_{n}\\beta_{n-1}-\\zeta_{n})-\\frac{i}{2}(\\jmath_{n}-\\imath_{n}\\beta_{n-1}+\\varsigma_{n})=-\\overline{\\alpha}_{2n-2}.\n\\end{align}\n\nLet\n\\begin{align}\n\\mathfrak{Y}_{2}(z)\n=\\left(\n \\begin{array}{cc}\n \\triangle_{22} & -\\triangle_{12} \\\\\n \\triangle_{21} & -\\triangle_{11} \\\\\n \\end{array}\n \\right)\\mathfrak{Y}_{1}(z)\\left(\n \\begin{array}{cc}\n z^{2n+1} & 0 \\\\\n 0 & -z^{-2n+1} \\\\\n \\end{array}\n \\right).\n\\end{align}\nNoting (or by Theorem 
2.7)\n\\begin{equation}\n\\det\\triangle=|\\alpha_{2n-2}|^{2}+\\left(\\frac{\\kappa_{2n-2}}{\\kappa_{2n-1}}\\right)^{2}=1,\n\\end{equation}\nby (4.23) and (4.28),\n\\begin{equation}\n\\lim_{z\\rightarrow \\infty}\\mathfrak{Y}_{2}(z)\\left(\n \\begin{array}{cc}\n z^{-2n} & 0 \\\\\n 0 & z^{2n-1} \\\\\n \\end{array}\n \\right)=I.\n\\end{equation}\nThus $\\mathfrak{Y}_{2}$ satisfies the following RHP (i.e. the RH characterization (1.3) for OTP)\n\\begin{equation} (\\mbox{RHP for $\\mathfrak{Y}_{2}$})\\,\\,\n\\begin{cases}\n\\mathfrak{Y}_{2}\\,\\, \\mbox{is analytic in}\\,\\,\\mathbb{C}\\setminus \\partial \\mathbb{D},\\vspace{2mm}\\\\\n\\mathfrak{Y}_{2}^{+}(t)=\\mathfrak{Y}_{2}^{-}(t)\\left(\n \\begin{array}{cc}\n 1 & t^{-2n}w(t) \\\\\n 0 & 1 \\\\\n \\end{array}\n \\right)\n \\,\\,\\mbox{for} \\,\\,t\\in \\partial \\mathbb{D},\\vspace{2mm}\\\\\n\\mathfrak{Y}_{2}(z)=\\left(I+O(\\frac{1}{z})\\right)\\left(\n \\begin{array}{cc}\n z^{2n} & 0 \\\\\n 0 & z^{-2n+1} \\\\\n \\end{array}\n \\right)\n \\,\\,\\mbox{as}\\,\\, z\\rightarrow \\infty,\\vspace{2mm}\\\\\n(\\mathfrak{Y}_{2})_{11}(0)=(\\mathfrak{Y}_{2})_{21}(0)=0.\n\\end{cases}\n\\end{equation}\nBy the uniqueness of RHP (1.3), $\\mathfrak{Y}_{2}(z)=\\mathfrak{Y}(z)$ for any $z\\in \\mathbb{C}\\setminus \\partial \\mathbb{D}$. 
That is,\n\\begin{align}\n\\mathfrak{Y}(z)\n=\\left(\n \\begin{array}{cc}\n \\triangle_{22} & -\\triangle_{12} \\\\\n \\triangle_{21} & -\\triangle_{11} \\\\\n \\end{array}\n \\right)\\overline{\\mathfrak{Y}\\left(\\frac{1}{\\overline{z}}\\right)}\\left(\n \\begin{array}{cc}\n z^{2n+1} & 0 \\\\\n 0 & -z^{-2n+1} \\\\\n \\end{array}\n \\right),\\,\\,\\,z\\in \\mathbb{C}\\setminus\\partial \\mathbb{D}.\n\\end{align}\n\nIn order to derive some identities for OTP, we introduce reflectional sets, reflectional and auto-reflectional functions for the unit circle $\\partial \\mathbb{D}$.\n\n\\begin{defn}\nA set $\\Sigma$ is called a reflectional set for the unit circle $\\partial \\mathbb{D}$, or\nsimply a reflectional set, if $1\/z\\in\\Sigma$ whenever $z\\in\\Sigma$, in which case $z$ and $1\/z$ are called reflections of each other. For example, $\\mathbb{C} \\setminus\\{0\\}$ is\na reflectional set for the unit circle.\n\\end{defn}\n\n\\begin{defn}\nIf $f$ is defined on a reflectional set $\\Sigma$, set\n\\begin{equation}\nf_{*}(z)=\\overline{f\\left(1\/\\overline{z}\\right)},\\,\\,z\\in\\Sigma,\n\\end{equation}\nthen $f_{*}$ is called the reflectional function of $f$ for the unit circle in $\\Sigma$, or simply the reflection of $f$.\n\\end{defn}\n\n\\begin{defn}\nIf $f$ is defined on a reflectional set $\\Sigma$ such that\n\\begin{equation}\nf(z)=f_{*}(z),\\,\\,z\\in\\Sigma,\n\\end{equation}\nthen $f$ is called an auto-reflectional function for the unit circle in $\\Sigma$, or simply an auto-reflection.\n\\end{defn}\n\n\\begin{lem}\nLet $\\mu$ be a nontrivial probability measure on the unit circle $\\partial \\mathbb{D}$, and\n$\\{1, \\sigma_{n}, \\pi_{n}\\}$ be the unique system of orthonormal Laurent polynomials of the first class on the unit circle with\nrespect to $\\mu$, then $\\sigma_{n}, \\pi_{n}$ are auto-reflectional for the unit circle in $\\mathbb{C}\\setminus\\{0\\}$.\n\\end{lem}\n\n\\begin{proof}\nIt follows immediately from the fact that $\\displaystyle\\frac{z^{n}+z^{-n}}{2}$ and 
$\\displaystyle\\frac{z^{n}-z^{-n}}{2i}$ are auto-reflectional for the unit circle in $\\mathbb{C}\\setminus\\{0\\}$ and all of the coefficients are real-valued.\n\\end{proof}\n\nWith the above preliminaries, we have\n\\begin{thm}\nLet $\\sigma_{n}$, $\\pi_{n}$, $a_{n}$, $b_{n}$, $\\triangle_{kl}$, $\\lambda_{m,n}$ be as above, then\n\\begin{enumerate}\n \\item [($\\mathcal{A}$)] The identities\n \\begin{align}\n &(\\lambda_{1,n}z^{-1}-\\overline{\\lambda}_{1,n}\\triangle_{22})a_{n}\\sigma_{n}(z)+(\\lambda_{2,n}z^{-1}-\\overline{\\lambda}_{2,n}\\triangle_{22})b_{n}\\pi_{n}(z)\\nonumber\\\\\n =&-\\triangle_{12}\\Big(\\overline{\\lambda}_{3,n-1}a_{n-1}\\sigma_{n-1}(z)+\\overline{\\lambda}_{4,n-1}b_{n-1}\\pi_{n-1}(z)\\Big)\n \\end{align}\n and\n \\begin{align}\n &(\\lambda_{3,n-1}z^{-1}+\\overline{\\lambda}_{3,n-1}\\triangle_{11})a_{n-1}\\sigma_{n-1}(z)+(\\lambda_{4,n-1}z^{-1}+\\overline{\\lambda}_{4,n-1}\\triangle_{11})b_{n-1}\\pi_{n-1}(z)\\nonumber\\\\\n =&\\triangle_{21}\\Big(\\overline{\\lambda}_{1,n}a_{n}\\sigma_{n}(z)+\\overline{\\lambda}_{2,n}b_{n}\\pi_{n}(z)\\Big)\n \\end{align}\n hold for $z\\in \\mathbb{C}\\setminus\\{0\\}$;\n \\item [($\\mathcal{B}$)] The identities\n \\begin{align}\n &C[\\tau^{-n}(\\lambda_{1,n}a_{n}\\sigma_{n}+\\lambda_{2,n}b_{n}\\pi_{n})w](z)\\nonumber\\\\\n =&z^{-2(n-1)}\\Big[\\triangle_{22}C[\\tau^{n-1}(\\overline{\\lambda}_{1,n}a_{n}\\sigma_{n}+\\overline{\\lambda}_{2,n}b_{n}\\pi_{n})w](z)\\nonumber\\\\\n &-\\triangle_{12}C[\\tau^{n-1}(\\overline{\\lambda}_{3,n-1}a_{n-1}\\sigma_{n-1}+\\overline{\\lambda}_{4,n-1}b_{n-1}\\pi_{n-1})w](z)\\Big]\n \\end{align}\n and\n \\begin{align}\n &C[\\tau^{-n}(\\lambda_{3,n-1}a_{n-1}\\sigma_{n-1}+\\lambda_{4,n-1}b_{n-1}\\pi_{n-1})w](z)\\nonumber\\\\\n =&z^{-2(n-1)}\\Big[\\triangle_{21}C[\\tau^{n-1}(\\overline{\\lambda}_{1,n}a_{n}\\sigma_{n}+\\overline{\\lambda}_{2,n}b_{n}\\pi_{n})w](z)\\nonumber\\\\\n 
&-\\triangle_{11}C[\\tau^{n-1}(\\overline{\\lambda}_{3,n-1}a_{n-1}\\sigma_{n-1}+\\overline{\\lambda}_{4,n-1}b_{n-1}\\pi_{n-1})w](z)\\Big]\n \\end{align}\n hold for $z\\in \\mathbb{C}\\setminus(\\partial\\mathbb{D}\\cup\\{0\\})$.\n \\end{enumerate}\n \\end{thm}\n\n \\begin{proof}\n By Lemma 4.8, (4.35) and (4.36) are obtained by identifying the 11 and 21 entries in LHS with the ones in RHS of (4.32). Since\n \\begin{equation}\n \\overline{C[\\tau^{-n}L(\\sigma_{n}, \\pi_{n})w]\\left(\\frac{1}{z}\\right)}=-zC[\\tau^{n-1}\\overline{L(\\sigma_{n}, \\pi_{n})}w](z)\n \\end{equation}\n and\n \\begin{equation}\n \\overline{C[\\tau^{-n}\\mathcal{L}(\\sigma_{n-1}, \\pi_{n-1})w]\\left(\\frac{1}{z}\\right)}=-zC[\\tau^{n-1}\\overline{\\mathcal{L}(\\sigma_{n-1}, \\pi_{n-1})}w](z)\n \\end{equation}\n for $z\\in \\mathbb{C}\\setminus(\\partial\\mathbb{D}\\cup\\{0\\})$,\n (4.37) and (4.38) follow from comparing the 12 and 22 entries with each other in both sides of (4.32).\n \\end{proof}\n\n In the above theorem, when $z$ is restricted to $\\partial \\mathbb{D}$, we obtain the following four-term recurrences for OTP.\n \\begin{thm}\nLet $\\sigma_{n}$, $\\pi_{n}$, $a_{n}$, $b_{n}$, $\\triangle_{kl}$, $\\lambda_{m,n}$ be as above, then\n\\begin{enumerate}\n\\item [($\\mathfrak{A}$)] The identities\n\\begin{align}\n &(\\lambda_{1,n}e^{-i\\theta}-\\overline{\\lambda}_{1,n}\\triangle_{22})a_{n}\\sigma_{n}(\\theta)+(\\lambda_{2,n}e^{-i\\theta}-\\overline{\\lambda}_{2,n}\\triangle_{22})b_{n}\\pi_{n}(\\theta)\\nonumber\\\\\n =&-\\triangle_{12}\\Big(\\overline{\\lambda}_{3,n-1}a_{n-1}\\sigma_{n-1}(\\theta)+\\overline{\\lambda}_{4,n-1}b_{n-1}\\pi_{n-1}(\\theta)\\Big)\n \\end{align}\n and\n \\begin{align}\n &(\\lambda_{3,n-1}e^{-i\\theta}+\\overline{\\lambda}_{3,n-1}\\triangle_{11})a_{n-1}\\sigma_{n-1}(\\theta)+(\\lambda_{4,n-1}e^{-i\\theta}+\\overline{\\lambda}_{4,n-1}\\triangle_{11})b_{n-1}\\pi_{n-1}(\\theta)\\nonumber\\\\\n 
=&\\triangle_{21}\\Big(\\overline{\\lambda}_{1,n}a_{n}\\sigma_{n}(\\theta)+\\overline{\\lambda}_{2,n}b_{n}\\pi_{n}(\\theta)\\Big)\n \\end{align}\n hold for $\\theta\\in [0, 2\\pi)$;\n \\item [($\\mathfrak{B}$)] The identities\n \\begin{align}\n &H[\\tau^{-n}(\\lambda_{1,n}a_{n}\\sigma_{n}+\\lambda_{2,n}b_{n}\\pi_{n})w](e^{i\\theta})\\nonumber\\\\\n =&e^{-i[2(n-1)\\theta]}\\Big[\\triangle_{22}H[\\tau^{n-1}(\\overline{\\lambda}_{1,n}a_{n}\\sigma_{n}+\\overline{\\lambda}_{2,n}b_{n}\\pi_{n})w](e^{i\\theta})\\nonumber\\\\\n &-\\triangle_{12}H[\\tau^{n-1}(\\overline{\\lambda}_{3,n-1}a_{n-1}\\sigma_{n-1}+\\overline{\\lambda}_{4,n-1}b_{n-1}\\pi_{n-1})w](e^{i\\theta})\\Big]\n \\end{align}\n and\n \\begin{align}\n &H[\\tau^{-n}(\\lambda_{3,n-1}a_{n-1}\\sigma_{n-1}+\\lambda_{4,n-1}b_{n-1}\\pi_{n-1})w](e^{i\\theta})\\nonumber\\\\\n =&e^{-i[2(n-1)\\theta]}\\Big[\\triangle_{21}H[\\tau^{n-1}(\\overline{\\lambda}_{1,n}a_{n}\\sigma_{n}+\\overline{\\lambda}_{2,n}b_{n}\\pi_{n})w](e^{i\\theta})\\nonumber\\\\\n &-\\triangle_{11}H[\\tau^{n-1}(\\overline{\\lambda}_{3,n-1}a_{n-1}\\sigma_{n-1}+\\overline{\\lambda}_{4,n-1}b_{n-1}\\pi_{n-1})w](e^{i\\theta})\\Big]\n \\end{align}\n hold for $\\theta\\in [0, 2\\pi)$, where $H$ is the Hilbert transform on the unit circle, i.e.\n \\begin{equation}\n Hf(t)=P.V. \\frac{1}{\\pi}\\int_{\\partial \\mathbb{D}}\\frac{f(\\tau)}{t-\\tau}d\\tau,\\,\\,t\\in\\partial \\mathbb{D}\n \\end{equation}\n in which $f\\in H(\\partial \\mathbb{D})$.\n\\end{enumerate}\n\\end{thm}\n\n\\begin{proof}\n(4.41) and (4.42) follow immediately by identifying $e^{i\\theta}\\in \\partial \\mathbb{D}$ with $\\theta\\in[0,2\\pi)$ in (4.35) and (4.36). 
By the well-known Plemelj formula, viz.\n\\begin{equation}\nC^{\\pm}f(t)=\\pm\\frac{1}{2}f(t)+\\frac{i}{2}Hf(t),\\,\\,t\\in \\partial \\mathbb{D},\n\\end{equation}\nwhere $f\\in H(\\partial \\mathbb{D})$, letting $z\\in \\mathbb{D}\\rightarrow t=e^{i\\theta}$ (or $z\\in \\mathbb{C}\\setminus\\overline{\\mathbb{D}}\\rightarrow t=e^{i\\theta}$), (4.43) and (4.44) easily follow from (4.37), (4.38), (4.41) and (4.42).\n\\end{proof}\n\n \\begin{rem}\n The identities (4.35), (4.36), (4.41) and (4.42) are four-term recurrences for OTP (more precisely, the former two are for OLP of the first class). They are equivalent to each other and also to the ones in \\cite{dd08}.\n \\end{rem}\n\n \\begin{rem}\n By a similar strategy, we can also apply the RH characterization (1.2) for OPUC to derive the above identities for OTP (or OLP of the first class) in Theorems 4.9 and 4.10 with $2n-1$ in place of $n$, as stated in the Introduction.\n \\end{rem}\n\n \\begin{rem}\n By the mutual representation theorem (Theorem 2.1) and Theorem 4.2, we can directly obtain some identities for OTP in different forms. They are equivalent to (4.35)-(4.38) and (4.41)-(4.44). By this approach, the four-term recurrences (corresponding to (4.35), (4.36), (4.41) and (4.42)) were obtained in \\cite{dd08}.\n \\end{rem}\n\n\\section{A Strong Favard Theorem}\n\nTheorem 3.1 tells us that there exist many nontrivial probability measures $d\\mu$ corresponding to any fixed system of three-tuples $\\{(a_{n}^{(0)},b_{n}^{(0)},\\beta_{n}^{(0)})\\}$ satisfying (3.1). 
That is to say, the system of three-tuples $\\{(a_{n}^{(0)},b_{n}^{(0)},\\beta_{n}^{(0)})\\}$ with (3.1) is not sufficient to uniquely determine the nontrivial probability measure $d\\mu$.\nAs stated in Remark 3.2, we need to consider a system of seven-tuples $\\{(a_{n}^{(0)}, b_{n}^{(0)},\\beta_{n}^{(0)},$ $\\imath_{n}^{(0)},\\jmath_{n}^{(0)},\\varsigma_{n}^{(0)},\\zeta_{n}^{(0)})\\}$ with some suitable properties in order to uniquely determine the nontrivial probability measure $d\\mu$. In what follows, we will discuss this in detail.\n\nFirst, we give some further relations among the coefficients $a_{n}, b_{n},\\beta_{n},\\imath_{n},\\jmath_{n},\\varsigma_{n},\\zeta_{n}$ of OTP and $\\alpha_{n}, \\kappa_{n}$ of OPUC. To this end, the following basic facts are required.\n\\begin{lem}\nLet $\\Phi_{n}$ be the monic orthogonal polynomial on the unit circle of order $n$ with respect to $\\mu$, and $\\Phi^{*}_{n}$ be the reversed polynomial of $\\Phi_{n}$, then\n\\begin{equation}\n\\langle 1,z\\Phi_{n}^{*}\\rangle_{\\mathbb{R}}=\\int_{\\partial \\mathbb{D}}\\tau\\Phi_{n}^{*}(\\tau)d\\mu(\\tau)=-a_{n+1,n}\n\\end{equation}\nand\n\\begin{equation}\n\\langle 1,z^{2}\\Phi_{n}\\rangle_{\\mathbb{R}}=\\int_{\\partial \\mathbb{D}}\\tau^{2}\\Phi_{n}(\\tau)d\\mu(\\tau)=\\alpha_{n}^{-1}\\Big(a_{n+2,n+1}-a_{n+1,n}\\Big),\n\\end{equation}\nwhere the Verblunsky coefficient $\\alpha_{n}$ is restricted to $\\mathbb{D}\\setminus\\{0\\}$, and $a_{n+1,n}$ is given by\n\\begin{equation}\n\\Phi_{n+1}(z)=z^{n+1}+a_{n+1,n}z^{n}+\\mbox{lower order}.\n\\end{equation}\n\\end{lem}\n\\begin{proof}\nBy (2.6), we have\n\\begin{align}\n\\int_{\\partial \\mathbb{D}}\\tau\\Phi_{n}^{*}(\\tau)d\\mu(\\tau)=\\int_{\\partial \\mathbb{D}}\\tau^{n+1}\\overline{\\Phi_{n}(\\tau)}d\\mu(\\tau)=\\langle z^{n+1},\\Phi_{n}\\rangle_{\\mathbb{C}}.\n\\end{align}\nThus (5.1) immediately follows from (5.3) by the orthogonality. 
For $\\alpha_{n}\\in\\mathbb{D}\\setminus\\{0\\}$, by Szeg\\\"o recurrence,\n\\begin{align}\n\\int_{\\partial \\mathbb{D}}\\tau^{2}\\Phi_{n}(\\tau)d\\mu(\\tau)=\\alpha_{n}^{-1}\\Big[\\int_{\\partial \\mathbb{D}}\\tau\\Phi_{n}^{*}(\\tau)d\\mu(\\tau)-\\int_{\\partial \\mathbb{D}}\\tau\\Phi_{n+1}^{*}(\\tau)d\\mu(\\tau)\\Big].\n\\end{align}\nSo (5.2) holds by applying (5.1).\n\\end{proof}\n\n\\begin{thm} Let $a_{n}, b_{n},\\beta_{n},\\imath_{n},\\jmath_{n},\\varsigma_{n},\\zeta_{n},\\alpha_{n},\\kappa_{n}$ be given in Section 2, then\n\\begin{align}\n\\imath_{n+1}=&\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}+1\\Big)a_{2n,2n-1}i-\\frac{1}{4}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i\\nonumber\\\\\n&+a_{n}^{-2}(1-\\beta_{n}i)\\Big]a_{2n+1,2n}\n+\\frac{1}{4}a_{n}^{-2}(1-\\beta_{n}i)\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big),\n\\end{align}\n\\begin{align}\n\\jmath_{n+1}=&\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}+1\\Big)a_{2n,2n-1}-\\frac{1}{4}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}\\nonumber\\\\\n&+b_{n}^{-2}i\\Big]a_{2n+1,2n}\n+\\frac{1}{4}b_{n}^{-2}\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big)i,\n\\end{align}\n\\begin{align}\n\\varsigma_{n+1}=&\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}-1\\Big)a_{2n,2n-1}-\\frac{1}{4i}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i\\nonumber\\\\\n&+a_{n}^{-2}(1-\\beta_{n}i)\\Big]a_{2n+1,2n}\n-\\frac{1}{4i}a_{n}^{-2}(1-\\beta_{n}i)\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big)\n\\end{align}\nand\n\\begin{align}\n\\zeta_{n+1}=&\\frac{1}{4i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}-1\\Big)a_{2n,2n-1}-\\frac{1}{4i}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}\\nonumber\\\\\n&+b_{n}^{-2}i\\Big]a_
{2n+1,2n}\n-\\frac{1}{4}b_{n}^{-2}\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big)\n\\end{align}\nfor $n\\in \\mathbb{N}\\cup\\{0\\}$, where $\\alpha_{2n-1},\\alpha_{2n}\\in \\mathbb{D}\\setminus\\{0\\}$ and $a_{2n+1,2n}, a_{2n,2n-1}$ are given by (5.3).\n\\end{thm}\n\\begin{proof}\nIt is enough to prove (5.6) and (5.7); the proofs of (5.8) and (5.9) are similar.\nBy Theorem 2.2, Lemmas 5.1 and 2.8 and the Szeg\\"o recurrence, when $\\alpha_{2n-1},\\alpha_{2n}\\in \\mathbb{D}\\setminus\\{0\\}$, we have\n\\begin{align}\n\\langle z^{n+1},a_{n}\\sigma_{n}\\rangle_{\\mathbb{R}}=&\\langle z^{n+1},-\\frac{1}{2}z^{-n}[\\Lambda_{n}^{-1}b_{n}^{-2}iz\\Phi_{2n-1}-(1-\\beta_{n}i)\\Phi_{2n}^{*}]\\rangle_{\\mathbb{R}}\\nonumber\\\\\n=&-\\frac{1}{2}\\Lambda_{n}^{-1}b_{n}^{-2}i\\langle 1,z^{2}\\Phi_{2n-1}\\rangle_{\\mathbb{R}}+\\frac{1}{2}(1-\\beta_{n}i)\\langle 1,z\\Phi_{2n}^{*}\\rangle_{\\mathbb{R}}\\nonumber\\\\\n=&\\frac{1}{2}\\Lambda_{n}^{-1}b_{n}^{-2}\\alpha_{2n-1}^{-1}a_{2n,2n-1}i-\\frac{1}{2}\\Big[\\Lambda_{n}^{-1}b_{n}^{-2}\\alpha_{2n-1}^{-1}i\\nonumber\\\\\n&+(1-\\beta_{n}i)\\Big]a_{2n+1,2n},\n\\end{align}\n\\begin{align}\n\\langle z^{-(n+1)},a_{n}\\sigma_{n}\\rangle_{\\mathbb{R}}=&\\langle z^{-(n+1)},-\\frac{1}{2}z^{-n}[\\Lambda_{n}^{-1}b_{n}^{-2}iz\\Phi_{2n-1}-(1-\\beta_{n}i)\\Phi_{2n}^{*}]\\rangle_{\\mathbb{R}}\\nonumber\\\\\n=&-\\frac{1}{2}\\Lambda_{n}^{-1}b_{n}^{-2}i\\langle 1,z^{-2n}\\Phi_{2n-1}\\rangle_{\\mathbb{R}}+\\frac{1}{2}(1-\\beta_{n}i)\\langle 1,z^{-(2n+1)}\\Phi_{2n}^{*}\\rangle_{\\mathbb{R}}\\nonumber\\\\\n=&-\\frac{1}{2}\\Lambda_{n}^{-1}b_{n}^{-2}i\\langle z^{2n},\\Phi_{2n-1}\\rangle_{\\mathbb{C}}+\\frac{1}{2}(1-\\beta_{n}i)\\overline{\\langle 1,z\\Phi_{2n}\\rangle}_{\\mathbb{R}}\\nonumber\\\\\n=&\\frac{1}{2}\\Lambda_{n}^{-1}b_{n}^{-2}a_{2n,2n-1}i+\\frac{1}{2}(1-\\beta_{n}i)\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big),\n\\end{align}\n\\begin{align}\n\\langle 
z^{n+1},b_{n}\\pi_{n}\\rangle_{\\mathbb{R}}=&\\langle z^{n+1},-\\frac{1}{2}z^{-n}[\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)z\\Phi_{2n-1}(z)-i\\Phi_{2n}^{*}(z)]\\rangle_{\\mathbb{R}}\\nonumber\\\\\n=&-\\frac{1}{2}\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)\\langle 1,z^{2}\\Phi_{2n-1}\\rangle_{\\mathbb{R}}+\\frac{i}{2}\\langle 1,z\\Phi_{2n}^{*}\\rangle_{\\mathbb{R}}\\nonumber\\\\\n=&\\frac{1}{2}\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}a_{2n,2n-1}-\\frac{1}{2}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}\\nonumber\\\\&+i\\Big]a_{2n+1,2n}\n\\end{align}\nand\n\\begin{align}\n\\langle z^{-(n+1)},b_{n}\\pi_{n}\\rangle_{\\mathbb{R}}=&\\langle z^{-(n+1)},-\\frac{1}{2}z^{-n}[\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)z\\Phi_{2n-1}(z)-i\\Phi_{2n}^{*}(z)]\\rangle_{\\mathbb{R}}\\nonumber\\\\\n=&-\\frac{1}{2}\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)\\langle 1,z^{-2n}\\Phi_{2n-1}\\rangle_{\\mathbb{R}}+\\frac{i}{2}\\langle 1,z^{-(2n+1)}\\Phi_{2n}^{*}\\rangle_{\\mathbb{R}}\\nonumber\\\\\n=&-\\frac{1}{2}\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)\\langle z^{2n},\\Phi_{2n-1}\\rangle_{\\mathbb{C}}+\\frac{i}{2}\\overline{\\langle 
1,z\\Phi_{2n}\\rangle}_{\\mathbb{R}}\\nonumber\\\\\n=&\\frac{1}{2}\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)a_{2n,2n-1}+\\frac{i}{2}\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big).\n\\end{align}\n\nThus\n\\begin{align}\n\\imath_{n+1}=&\\langle\\frac{z^{n+1}+z^{-(n+1)}}{2},a_{n}^{-1}\\sigma_{n}\\rangle_{\\mathbb{R}}=a_{n}^{-2}\\langle\\frac{z^{n+1}+z^{-(n+1)}}{2},a_{n}\\sigma_{n}\\rangle_{\\mathbb{R}}\\nonumber\\\\\n=&\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}+1\\Big)a_{2n,2n-1}i-\\frac{1}{4}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i\\nonumber\\\\\n&+a_{n}^{-2}(1-\\beta_{n}i)\\Big]a_{2n+1,2n}\n+\\frac{1}{4}a_{n}^{-2}(1-\\beta_{n}i)\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big)\n\\end{align}\nand\n\\begin{align}\n\\jmath_{n+1}=&\\langle\\frac{z^{n+1}+z^{-(n+1)}}{2},b_{n}^{-1}\\pi_{n}\\rangle_{\\mathbb{R}}=b_{n}^{-2}\\langle\\frac{z^{n+1}+z^{-(n+1)}}{2},b_{n}\\pi_{n}\\rangle_{\\mathbb{R}}\\nonumber\\\\\n=&\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}+1\\Big)a_{2n,2n-1}-\\frac{1}{4}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}\\nonumber\\\\\n&+b_{n}^{-2}i\\Big]a_{2n+1,2n}\n+\\frac{1}{4}b_{n}^{-2}\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big)i,\n\\end{align}\nwhere $\\alpha_{2n-1},\\alpha_{2n}\\in \\mathbb{D}\\setminus\\{0\\}$.\n\\end{proof}\n\nDenote\n\\begin{equation}\nA=\\left(\n \\begin{array}{cc}\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}+1\\Big)i & -\\frac{1}{4}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\\beta_{n}i)\\Big] \\vspace{2mm}\\\\\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}+1\\Big) & -\\frac{1}{4}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+b_{n}^{-2}i\\Big] \\\\\n \\end{array}\n 
\\right),\n\\end{equation}\nthen by (2.21) and (2.37),\n\\begin{align}\n|A|=&\\left|\\begin{array}{cc}\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}+1\\Big)i & -\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i\\vspace{2mm}\\\\\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}+1\\Big) & -\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1} \\\\\n \\end{array}\n\\right|\\nonumber\\\\\n&+\\left|\\begin{array}{cc}\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}+1\\Big)i & -\\frac{1}{4}a_{n}^{-2}(1-\\beta_{n}i)\\vspace{2mm}\\\\\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}+1\\Big) & -\\frac{1}{4}b_{n}^{-2}i \\\\\n \\end{array}\n\\right|\\nonumber\\\\\n=&-\\frac{1}{16}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}+1\\Big)\\left|\\begin{array}{cc}\n i & a_{n}^{-2}(1-\\beta_{n}i)\\vspace{2mm}\\\\\n (1+\\beta_{n}i) & b_{n}^{-2}i \\\\\n \\end{array}\n\\right|\\nonumber\\\\\n=&\\frac{1}{16}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}+1\\Big)[a_{n}^{-2}(1+\\beta_{n}^{2})+b_{n}^{-2}]\\nonumber\\\\\n=&\\frac{1}{8}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}+1\\Big)i\\neq 0.\n\\end{align}\nSo $A$ is invertible.\n\nBy Theorem 5.2, we can represent the coefficient $a_{n+1,n}$ of OPUC by the coefficients $a_{n}, b_{n},\\beta_{n},\\imath_{n},\\jmath_{n},\\varsigma_{n},\\zeta_{n}$ of OTP as follows.\n\\begin{thm}\nLet $a_{n}, b_{n},\\beta_{n},\\imath_{n},\\jmath_{n},\\varsigma_{n},\\zeta_{n},\\alpha_{n},\\kappa_{n},a_{n+1,n}$ be as above, then\n\\begin{equation}\n\\left(\n \\begin{array}{c}\n a_{2n,2n-1} \\vspace{1mm}\\\\\n a_{2n+1,2n} \\\\\n \\end{array}\n\\right)=A^{-1}\\left(\n \\begin{array}{c}\n \\imath_{n+1}-\\frac{1}{4}a_{n}^{-2}(1-\\beta_{n}i)\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big) \\vspace{1mm}\\\\\n 
\\jmath_{n+1}-\\frac{1}{4}b_{n}^{-2}\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big)i \\\\\n \\end{array}\n \\right),\n\\end{equation}\nwhere $A$ is given by (5.16).\n\\end{thm}\n\\begin{proof}\nIt follows from (5.6), (5.7) and the invertibility of $A$.\n\\end{proof}\n\\begin{center}\n\\begin{tikzpicture}\n\\node (a) at (0,0) {$\\varsigma_{n+1}$};\n\\node (b) at (0,3) {$\\imath_{n+1}$};\n\\node (c) at (4,0) {$\\zeta_{n+1}$};\n\\node (d) at (4,3) {$\\jmath_{n+1}$};\n\\draw (a) -- node[left]{$C$} (b);\n\\draw (a) -- node[below]{$B$} (c);\n\\draw (b) -- node[above]{$A$} (d);\n\\draw (c) -- node[right]{$D$} (d);\n\\draw [dashed] (a) -- node[above,pos=0.7]{$F$} (d);\n\\draw (b) -- node[above,pos=0.3]{$E$} (c);\n\\end{tikzpicture}\n\\end{center}\n\\begin{center}\nDifferent ways of deriving $a_{n+1,n}$\n\\end{center}\n\n\\begin{rem}\nIn Theorem 5.3, we derive $a_{2n+1,2n}$ and $a_{2n,2n-1}$ in terms of $\\imath_{n+1}$ and $\\jmath_{n+1}$. In fact, the above figure shows several other ways to deduce them. For instance, analogously to $A$, let $B$ be the coefficient matrix for $a_{2n+1,2n}$ and $a_{2n,2n-1}$ in (5.8) and (5.9). Namely,\n\\begin{equation}\nB=\\left(\n \\begin{array}{cc}\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}-1\\Big) & -\\frac{1}{4i}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\\beta_{n}i)\\Big] \\vspace{2mm}\\\\\n \\frac{1}{4i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}-1\\Big) & -\\frac{1}{4i}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+b_{n}^{-2}i\\Big] \\\\\n \\end{array}\n \\right).\n\\end{equation}\nThen we can use $B$ to express $a_{2n+1,2n}$ and $a_{2n,2n-1}$ in terms of $\\varsigma_{n+1}$ and $\\zeta_{n+1}$. The same applies when using $C$, $D$ and $E$ in the solid-line cases shown in the above figure. 
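These observations can also be cross-checked symbolically. The sketch below is a verification aid only, not part of the derivation; it assumes the determinant values (5.17) and (5.20)-(5.24) together with the closed forms of $\Lambda_{n}$, $\kappa_{2n}$ and $\alpha_{2n-1}$ from (5.57), (5.50) and (5.52), and confirms the identity $|A||B|-|C||D|(1+\beta_{n}i)+|E||F|=0$ established in Theorem 5.10 below.

```python
import sympy as sp

# Symbolic check of |A||B| - |C||D|(1+beta*i) + |E||F| = 0 (Theorem 5.10),
# assuming the closed-form determinants (5.17), (5.20)-(5.24) and the
# expressions for Lambda_n and alpha_{2n-1} in (5.57) and (5.52).
a, b, beta = sp.symbols('a b beta', positive=True)
I = sp.I
u, v = a**-2, b**-2                        # shorthand for a_n^{-2}, b_n^{-2}
S = u*(1 + beta**2) + v                    # = 4*kappa_{2n}^2, cf. (5.50)
Lam = -S*I/2                               # Lambda_n, cf. (5.57)
ai = S/(v - u*(1 + beta*I)**2)             # alpha_{2n-1}^{-1}, cf. (5.52)

detA =  sp.Rational(1, 8)*u*v*(ai + 1)*I              # (5.17)
detB = -sp.Rational(1, 8)*u*v*(ai - 1)*I              # (5.20)
detC =  u**2*v*ai*(1 + beta*I)/(8*Lam)                # (5.21)
detD = -u*v**2*ai/(8*Lam)                             # (5.22)
detE =  u*v*ai*(u*(1 + beta*I) + v)/(8*I*Lam)         # (5.23)
detF = -u**2*v*ai*beta*(beta - I)/(8*I*Lam)           # (5.24)

residue = sp.simplify(detA*detB - detC*detD*(1 + beta*I) + detE*detF)
print(residue)
```

The cancellation rests on the relations $\Lambda_{n}=-2i\kappa_{2n}^{2}$ and $\alpha_{2n-1}=[b_{n}^{-2}-a_{n}^{-2}(1+\beta_{n}i)^{2}]/[a_{n}^{-2}(1+\beta_{n}^{2})+b_{n}^{-2}]$; without them the six determinants are independent and the identity fails.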
However, a further condition will be required for $F$ in the dashed-line case. Here $C, D, E, F$ play similar roles to $A$ and $B$. These observations are based on the following evaluations of the determinants of these coefficient matrices.\n\\end{rem}\n\n\\begin{thm} Let $B, C, D, E, F$ be the coefficient matrices stated in the above remark, then\n\\begin{equation}\n|B|=-\\frac{1}{8}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}-1\\Big)i,\n\\end{equation}\n\\begin{equation}\n|C|=\\frac{1}{8}\\Lambda_{n}^{-1}a_{n}^{-4}b_{n}^{-2}\\alpha_{2n-1}^{-1}(1+\\beta_{n}i),\n\\end{equation}\n\\begin{equation}\n|D|=-\\frac{1}{8}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-4}\\alpha_{2n-1}^{-1},\n\\end{equation}\n\\begin{equation}\n|E|=\\frac{1}{8i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}\\Big[a_{n}^{-2}+b_{n}^{-2}+a_{n}^{-2}\\beta_{n}i\\Big]\n\\end{equation}\nand\n\\begin{equation}\n|F|=-\\frac{1}{8i}\\Lambda_{n}^{-1}a_{n}^{-4}b_{n}^{-2}\\alpha_{2n-1}^{-1}\\beta_{n}(\\beta_{n}-i).\n\\end{equation}\n\\end{thm}\n\\begin{proof}\nBy applying (2.21) and (2.37) as well as basic properties of determinants, we have\n\\begin{align}\n|B|=&\\left|\\begin{array}{cc}\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}-1\\Big) & -\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}\\vspace{2mm}\\\\\n \\frac{1}{4i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}-1\\Big) & -\\frac{1}{4i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1} \\\\\n \\end{array}\n\\right|\\nonumber\\\\\n&+\\left|\\begin{array}{cc}\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}-1\\Big) & -\\frac{1}{4i}a_{n}^{-2}(1-\\beta_{n}i)\\vspace{2mm}\\\\\n \\frac{1}{4i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}-1\\Big) & -\\frac{1}{4}b_{n}^{-2}\\\\\n 
\\end{array}\n\\right|\\nonumber\\\\\n=&\\frac{1}{16}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}-1\\Big)\\left|\\begin{array}{cc}\n i & a_{n}^{-2}(1-\\beta_{n}i)\\vspace{2mm}\\\\\n (1+\\beta_{n}i) & b_{n}^{-2}i \\\\\n \\end{array}\n\\right|\\nonumber\\\\\n=&-\\frac{1}{16}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}-1\\Big)[a_{n}^{-2}(1+\\beta_{n}^{2})+b_{n}^{-2}]\\nonumber\\\\\n=&-\\frac{1}{8}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}-1\\Big)i,\\nonumber\n\\end{align}\n\\begin{align}\n|C|=&\\left|\\begin{array}{cc}\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}+1\\Big)i & -\\frac{1}{4}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\\beta_{n}i)\\Big]\\vspace{2mm}\\\\\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}-1\\Big) & -\\frac{1}{4i}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\\beta_{n}i)\\Big] \\\\\n \\end{array}\n\\right|\\nonumber\\\\\n=&-\\frac{1}{16}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\\beta_{n}i)\\Big]\\left|\\begin{array}{cc}\n \\alpha_{2n-1}^{-1}+1 & 1\\vspace{2mm}\\\\\n \\alpha_{2n-1}^{-1}-1 & 1 \\\\\n \\end{array}\n\\right|\\nonumber\\\\\n=&-\\frac{1}{8}\\Lambda_{n}^{-1}a_{n}^{-4}b_{n}^{-2}\\alpha_{2n-1}^{-1}\\Big[\\Lambda_{n}^{-1}b_{n}^{-2}i+\\alpha_{2n-1}(1-\\beta_{n}i)\\Big]\\nonumber\\\\\n=&\\frac{1}{8}\\Lambda_{n}^{-1}a_{n}^{-4}b_{n}^{-2}\\alpha_{2n-1}^{-1}(1+\\beta_{n}i),\\nonumber\n\\end{align}\n\\begin{align}\n|D|=&\\left|\\begin{array}{cc}\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}+1\\Big) & -\\frac{1}{4}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+b_{n}^{-2}i\\Big]\\vspace{2mm}\\\\\n \\frac{1}{4i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}-1\\Big) & 
-\\frac{1}{4i}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+b_{n}^{-2}i\\Big]\\\\\n \\end{array}\n\\right|\\nonumber\\\\\n=&-\\frac{1}{16i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+b_{n}^{-2}i\\Big]\\left|\\begin{array}{cc}\n \\alpha_{2n-1}^{-1}+1 & 1\\vspace{2mm}\\\\\n \\alpha_{2n-1}^{-1}-1 & 1 \\\\\n \\end{array}\n\\right|\\nonumber\\\\\n=&-\\frac{1}{8i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-4}\\alpha_{2n-1}^{-1}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}(1+\\beta_{n}i)+\\alpha_{2n-1}i\\Big]\\nonumber\\\\\n=&-\\frac{1}{8}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-4}\\alpha_{2n-1}^{-1},\\nonumber\n\\end{align}\n\\begin{align}\n|E|=&\\left|\\begin{array}{cc}\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}+1\\Big)i & -\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i\\vspace{2mm}\\\\\n \\frac{1}{4i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}-1\\Big) & -\\frac{1}{4i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1} \\\\\n \\end{array}\n\\right|\\nonumber\\\\\n&+\\left|\\begin{array}{cc}\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}+1\\Big)i & -\\frac{1}{4}a_{n}^{-2}(1-\\beta_{n}i)\\vspace{2mm}\\\\\n \\frac{1}{4i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}-1\\Big) & -\\frac{1}{4}b_{n}^{-2} \\\\\n \\end{array}\n\\right|\\nonumber\n\\end{align}\n\\begin{align}\n=&-\\frac{1}{16}(\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2})^{2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}\\left|\\begin{array}{cc}\n \\alpha_{2n-1}^{-1}+1 & 1\\vspace{2mm}\\\\\n \\alpha_{2n-1}^{-1}-1 & 1 \\\\\n \\end{array}\n\\right|\\nonumber\\\\\n&-\\frac{1}{16i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\left|\\begin{array}{cc}\n (\\alpha_{2n-1}^{-1}+1)i & a_{n}^{-2}(1-\\beta_{n}i)\\vspace{2mm}\\\\\n (\\alpha_{2n-1}^{-1}-1)(1+\\beta_{n}i) & b_{n}^{-2}i \\\\\n 
\\end{array}\n\\right|\\nonumber\\\\\n=&-\\frac{1}{8}(\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2})^{2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+\\frac{1}{16i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}[a_{n}^{-2}(1+\\beta_{n}^{2})+b_{n}^{-2}]\\nonumber\\\\\n&+\\frac{1}{16i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}[b_{n}^{-2}-a_{n}^{-2}(1+\\beta_{n}^{2})]\\nonumber\\\\\n=&\\frac{1}{16i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}\\Big\\{\\kappa_{2n}^{-2}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)+[a_{n}^{-2}(1+\\beta_{n}^{2})+b_{n}^{-2}]\\nonumber\\\\\n&+\\alpha_{2n-1}[b_{n}^{-2}-a_{n}^{-2}(1+\\beta_{n}^{2})]\\Big\\}\\nonumber\\\\\n=&\\frac{1}{8i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}[a_{n}^{-2}(1+\\beta_{n}i)+b_{n}^{-2}]\\nonumber\n\\end{align}\nand\n\\begin{align}\n|F|=&\\left|\\begin{array}{cc}\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}+1\\Big) & -\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}\\vspace{2mm}\\\\\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}-1\\Big) & -\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1} \\\\\n \\end{array}\n\\right|\\nonumber\\\\\n&+\\left|\\begin{array}{cc}\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}+1\\Big) & -\\frac{1}{4}b_{n}^{-2}i\\vspace{2mm}\\\\\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}-1\\Big) & -\\frac{1}{4i}a_{n}^{-2}(1-\\beta_{n}i)\\\\\n \\end{array}\n\\right|\\nonumber\\\\\n=&-\\frac{1}{16}(\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2})^{2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}\\left|\\begin{array}{cc}\n \\alpha_{2n-1}^{-1}+1 & 1\\vspace{2mm}\\\\\n \\alpha_{2n-1}^{-1}-1 & 1 \\\\\n \\end{array}\n\\right|\\nonumber\\\\\n&-\\frac{1}{16i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\left|\\begin{array}{cc}\n (\\alpha_{2n-1}^{-1}+1)(1+\\beta_{n}i) & b_{n}^{-2}i\\vspace{2mm}\\\\\n (\\alpha_{2n-1}^{-1}-1)i & 
a_{n}^{-2}(1-\\beta_{n}i) \\\\\n \\end{array}\n\\right|\\nonumber\\\\\n=&-\\frac{1}{8}(\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2})^{2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}-\\frac{1}{16i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}[a_{n}^{-2}(1+\\beta_{n}^{2})+b_{n}^{-2}]\\nonumber\\\\\n&+\\frac{1}{16i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}[b_{n}^{-2}-a_{n}^{-2}(1+\\beta_{n}^{2})]\\nonumber\\\\\n=&\\frac{1}{16i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}\\Big\\{\\kappa_{2n}^{-2}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)-[a_{n}^{-2}(1+\\beta_{n}^{2})+b_{n}^{-2}]\\nonumber\\\\\n&+\\alpha_{2n-1}[b_{n}^{-2}-a_{n}^{-2}(1+\\beta_{n}^{2})]\\Big\\}\\nonumber\\\\\n=&-\\frac{1}{8i}\\Lambda_{n}^{-1}a_{n}^{-4}b_{n}^{-2}\\alpha_{2n-1}^{-1}\\beta_{n}(\\beta_{n}-i).\\nonumber\n\\end{align}\nThus the proof is complete.\n\\end{proof}\n\n\\begin{cor}\nLet $B, C, D, E, F$ be the coefficient matrices as above, then $B, C, D, E$ are invertible for $\\beta_{n}\\in \\mathbb{R}$, whereas $F$ is invertible for $\\beta_{n}\\in \\mathbb{R}\\setminus\\{0\\}$.\n\\end{cor}\n\nBy Theorems 5.2 and 5.3 as well as Corollary 5.6, we have\n\\begin{thm}\nLet $A, B, C, D, E, F$ be the coefficient matrices as above, then\n\\begin{align}\n&A^{-1}\\left(\n \\begin{array}{c}\n \\imath_{n+1}-\\frac{1}{4}a_{n}^{-2}(1-\\beta_{n}i)\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big) \\vspace{1mm}\\\\\n \\jmath_{n+1}-\\frac{1}{4}b_{n}^{-2}\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big)i \\\\\n \\end{array}\n \\right)\\nonumber\\\\\n =&B^{-1}\\left(\n \\begin{array}{c}\n \\varsigma_{n+1}+\\frac{1}{4i}a_{n}^{-2}(1-\\beta_{n}i)\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big) \\vspace{1mm}\\\\\n \\zeta_{n+1}+\\frac{1}{4}b_{n}^{-2}\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big) \\\\\n \\end{array}\n \\right)\\nonumber\\\\\n =&C^{-1}\\left(\n \\begin{array}{c}\n 
\\imath_{n+1}-\\frac{1}{4}a_{n}^{-2}(1-\\beta_{n}i)\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big) \\vspace{1mm}\\\\\n \\varsigma_{n+1}+\\frac{1}{4i}a_{n}^{-2}(1-\\beta_{n}i)\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big) \\\\\n \\end{array}\n \\right)\\nonumber\\\\\n =&D^{-1}\\left(\n \\begin{array}{c}\n \\jmath_{n+1}-\\frac{1}{4}b_{n}^{-2}\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big)i \\vspace{1mm}\\\\\n \\zeta_{n+1}+\\frac{1}{4}b_{n}^{-2}\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big) \\\\\n \\end{array}\n \\right)\\nonumber\\\\\n =&E^{-1}\\left(\n \\begin{array}{c}\n \\imath_{n+1}-\\frac{1}{4}a_{n}^{-2}(1-\\beta_{n}i)\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big) \\vspace{1mm}\\\\\n \\zeta_{n+1}+\\frac{1}{4}b_{n}^{-2}\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big) \\\\\n \\end{array}\n \\right)\n\\end{align}\nfor $\\beta_{n}\\in \\mathbb{R}$. 
Moreover, any term in the above identities is equal to\n\\begin{equation}\nF^{-1}\\left(\n \\begin{array}{c}\n \\jmath_{n+1}-\\frac{1}{4}b_{n}^{-2}\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big)i \\vspace{1mm}\\\\\n \\varsigma_{n+1}+\\frac{1}{4i}a_{n}^{-2}(1-\\beta_{n}i)\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big) \\\\\n \\end{array}\n \\right)\n\\end{equation}\nas $\\beta_{n}\\neq 0$.\n\\end{thm}\n\nLet\n\\begin{equation}\n\\gamma=\\left(\n \\begin{array}{c}\n \\frac{1}{4i}a_{n}^{-2}(1-\\beta_{n}i)\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big) \\vspace{1mm}\\\\\n \\frac{1}{4}b_{n}^{-2}\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big) \\\\\n \\end{array}\n \\right)\n\\end{equation}\nand\n\\begin{equation}\n\\eta=\\left(\n \\begin{array}{c}\n \\frac{1}{4}a_{n}^{-2}(1-\\beta_{n}i)\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big) \\vspace{1mm}\\\\\n \\frac{1}{4}b_{n}^{-2}\\overline{\\alpha}_{2n}^{-1}\\Big(\\kappa_{2n}^{-2}-\\kappa_{2n+1}^{-2}\\Big)i \\\\\n \\end{array}\n \\right),\n\\end{equation}\nthen by the first identity in (5.25),\n\\begin{equation}\nB\\left(\n \\begin{array}{c}\n \\imath_{n+1} \\\\\n \\jmath_{n+1} \\\\\n \\end{array}\n \\right)-A\\left(\n \\begin{array}{c}\n \\varsigma_{n+1} \\\\\n \\zeta_{n+1} \\\\\n \\end{array}\n \\right)=A\\gamma+B\\eta,\n\\end{equation}\nwhere $A$ and $B$ are given by (5.16) and (5.19) respectively.\n\nBy (2.23), we have\n\n\\begin{align}\n\\left(\n \\begin{array}{c}\n \\alpha_{2n} \\vspace{1mm}\\\\\n \\overline{\\alpha}_{2n} \\\\\n \\end{array}\n\\right)=P\\left(\n \\begin{array}{c}\n \\imath_{n+1} \\vspace{1mm}\\\\\n \\jmath_{n+1} \\\\\n \\end{array}\n \\right)+Q\n \\left(\n \\begin{array}{c}\n \\varsigma_{n+1} \\vspace{1mm}\\\\\n \\zeta_{n+1} \\\\\n \\end{array}\n \\right),\n\\end{align}\nwhere\n\\begin{equation}\nP=\\left(\n \\begin{array}{cc}\n 
\\frac{1+\\beta_{n}i}{2} & -\\frac{i}{2} \\vspace{1mm}\\\\\n \\frac{1-\\beta_{n}i}{2} & \\frac{i}{2} \\\\\n \\end{array}\n \\right)\\,\\,\\,\\mbox{and}\\,\\,\\,Q=\\left(\n \\begin{array}{cc}\n \\frac{\\beta_{n}-i}{2} & -\\frac{1}{2} \\vspace{1mm}\\\\\n \\frac{\\beta_{n}+i}{2} & -\\frac{1}{2} \\\\\n \\end{array}\n \\right).\n\\end{equation}\nDenote\n\\begin{equation}\nD=\\left(\n \\begin{array}{cc}\n P & Q \\\\\n B & -A \\\\\n \\end{array}\n \\right)\\,\\,\\,\\mbox{and}\\,\\,\\,\\alpha=\\left(\n \\begin{array}{c}\n \\alpha_{2n} \\\\\n \\overline{\\alpha}_{2n} \\\\\n \\end{array}\n \\right),\n\\end{equation}\nthen we have the following theorem.\n\\begin{thm}\n\\begin{equation}\n\\left(\n \\begin{array}{c}\n \\imath_{n+1} \\\\\n \\jmath_{n+1} \\\\\n \\varsigma_{n+1} \\\\\n \\zeta_{n+1} \\\\\n \\end{array}\n \\right)=D^{-1}\\left(\n \\begin{array}{c}\n \\alpha \\\\\n A\\gamma+B\\eta \\\\\n \\end{array}\n \\right),\n\\end{equation}\nwhere $D^{-1}$ is the inverse matrix of $D$.\n\\end{thm}\n\n\\begin{proof}\nBy (5.29) and (5.30),\n\\begin{equation}\nD\\left(\n \\begin{array}{c}\n \\imath_{n+1} \\\\\n \\jmath_{n+1} \\\\\n \\varsigma_{n+1} \\\\\n \\zeta_{n+1} \\\\\n \\end{array}\n \\right)=\\left(\n \\begin{array}{cc}\n P & Q \\\\\n B & -A \\\\\n \\end{array}\n\\right)\\left(\n \\begin{array}{c}\n \\imath_{n+1} \\\\\n \\jmath_{n+1} \\\\\n \\varsigma_{n+1} \\\\\n \\zeta_{n+1} \\\\\n \\end{array}\n \\right)=\\left(\n \\begin{array}{c}\n \\alpha \\\\\n A\\gamma+B\\eta \\\\\n \\end{array}\n \\right).\n\\end{equation}\nSo, to obtain (5.33), it is enough to show that $|D|\\neq0$. This follows from the Laplace expansion theorem by simple calculations on some of its subdeterminants. More precisely,\n\\begin{itemize}\n \\item [Case 1.] 
\\begin{equation}\n |P|=\\left|\n \\begin{array}{cc}\n \\frac{1+\\beta_{n}i}{2} & -\\frac{i}{2} \\vspace{1mm}\\\\\n \\frac{1-\\beta_{n}i}{2} & \\frac{i}{2} \\\\\n \\end{array}\n \\right|=\\frac{i}{2}\\frac{1+\\beta_{n}i}{2}+\\frac{i}{2}\\frac{1-\\beta_{n}i}{2}=\\frac{i}{2}\n \\end{equation}\n and\n \\begin{equation}\n (-1)^{1+2+1+2}|-A|=|A|.\n \\end{equation}\n \\item [Case 2.] \\begin{equation}\n |Q|=\\left|\n \\begin{array}{cc}\n \\frac{\\beta_{n}-i}{2} & -\\frac{1}{2} \\vspace{1mm}\\\\\n \\frac{\\beta_{n}+i}{2} & -\\frac{1}{2} \\\\\n \\end{array}\n \\right|=-\\frac{1}{2}\\frac{\\beta_{n}-i}{2}+\\frac{1}{2}\\frac{\\beta_{n}+i}{2}=\\frac{i}{2}\n \\end{equation}\n and\n \\begin{equation}\n (-1)^{1+2+3+4}|B|=|B|.\n \\end{equation}\n \\item [Case 3.] \\begin{equation}\n \\left|\n \\begin{array}{cc}\n \\frac{1+\\beta_{n}i}{2} & \\frac{\\beta_{n}-i}{2} \\vspace{1mm}\\\\\n \\frac{1-\\beta_{n}i}{2} & \\frac{\\beta_{n}+i}{2} \\\\\n \\end{array}\n \\right|=\\frac{1+\\beta_{n}i}{2}\\frac{\\beta_{n}+i}{2}-\\frac{\\beta_{n}-i}{2}\\frac{1-\\beta_{n}i}{2}=\\frac{1+\\beta_{n}^{2}}{2}i\n \\end{equation}\n and\n \\begin{align}\n &(-1)^{1+2+1+3}\\left|\n \\begin{array}{cc}\n -\\frac{1}{4i}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\\beta_{n}i)\\Big] & \\frac{1}{4}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\\beta_{n}i)\\Big] \\vspace{1mm}\\\\\n -\\frac{1}{4i}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+b_{n}^{-2}i\\Big] & \\frac{1}{4}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+b_{n}^{-2}i\\Big] \\\\\n \\end{array}\n \\right|\\nonumber\\\\\n &=0.\n \\end{align}\n\n \\item [Case 4.] 
\\begin{equation}\\left|\n \\begin{array}{cc}\n \\frac{1+\\beta_{n}i}{2} & -\\frac{1}{2} \\vspace{1mm}\\\\\n \\frac{1-\\beta_{n}i}{2} & -\\frac{1}{2} \\\\\n \\end{array}\n \\right|=-\\frac{1}{2}\\frac{1+\\beta_{n}i}{2}+\\frac{1}{2}\\frac{1-\\beta_{n}i}{2}=-\\frac{\\beta_{n}}{2}i\n \\end{equation}\n and\n \\begin{align}\n &(-1)^{1+2+1+4}\\left|\n \\begin{array}{cc}\n -\\frac{1}{4i}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\\beta_{n}i)\\Big] & -\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}+1\\Big)i \\vspace{1mm}\\\\\n -\\frac{1}{4i}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+b_{n}^{-2}i\\Big] & -\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}+1\\Big) \\\\\n \\end{array}\n \\right|\\nonumber\\\\\n =&-|A|i.\n \\end{align}\n \\item [Case 5.] \\begin{equation}\n \\left|\n \\begin{array}{cc}\n -\\frac{i}{2} & \\frac{\\beta_{n}-i}{2} \\vspace{1mm}\\\\\n \\frac{i}{2} & \\frac{\\beta_{n}+i}{2} \\\\\n \\end{array}\n \\right|=-\\frac{i}{2}\\frac{\\beta_{n}+i}{2}-\\frac{\\beta_{n}-i}{2}\\frac{i}{2}=-\\frac{\\beta_{n}}{2}i\n \\end{equation}\n and\n \\begin{align}\n &(-1)^{1+2+2+3}\\left|\n \\begin{array}{cc}\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}-1\\Big) & \\frac{1}{4}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\\beta_{n}i)\\Big] \\vspace{1mm}\\\\\n \\frac{1}{4i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}-1\\Big) & \\frac{1}{4}\\Big[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+b_{n}^{-2}i\\Big] \\\\\n \\end{array}\n \\right|\\nonumber\\\\\n =&-|B|i.\n \\end{align}\n \\item [Case 6.]\n\\end{itemize} \\begin{equation}\n \\left|\n \\begin{array}{cc}\n -\\frac{i}{2} & -\\frac{1}{2} \\vspace{1mm}\\\\\n \\frac{i}{2} & -\\frac{1}{2} \\\\\n \\end{array}\n 
\\right|=\\frac{i}{2}\\frac{1}{2}+\\frac{i}{2}\\frac{1}{2}=\\frac{i}{2}\n \\end{equation}\n and\n \\begin{align}\n &(-1)^{1+2+2+4}\\left|\n \\begin{array}{cc}\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}-1\\Big) & -\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}+1\\Big)i \\vspace{1mm}\\\\\n \\frac{1}{4i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}-1\\Big) & -\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}+1\\Big) \\\\\n \\end{array}\n \\right|=0.\n \\end{align}\nSo\n \\begin{align}\n |D|=&\\frac{i}{2}(|A|+|B|)-\\frac{\\beta_{n}}{2}i(-|A|i-|B|i)=(|A|+|B|)\\frac{i-\\beta_{n}}{2}\\nonumber\\\\\n =&(|A|+|B|)i\\frac{1+\\beta_{n}i}{2}=-\\frac{1}{8}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\neq 0.\n \\end{align}\nThus\n\\begin{equation}\n\\left(\n \\begin{array}{c}\n \\imath_{n+1} \\\\\n \\jmath_{n+1} \\\\\n \\varsigma_{n+1} \\\\\n \\zeta_{n+1} \\\\\n \\end{array}\n \\right)=D^{-1}\\left(\n \\begin{array}{c}\n \\alpha \\\\\n A\\gamma+B\\eta \\\\\n \\end{array}\n \\right).\\nonumber\\vspace{-5mm}\n\\end{equation}\n\\end{proof}\n\n\\begin{rem}\nSimilarly to Remark 2.10, we can directly obtain (5.33) by using (2.21)-(2.24) and (2.37) together.\n\\end{rem}\n\nWith the above preliminaries, we get the following strong Favard theorem.\n\n\\begin{thm} Let\n$\\{(a_{n}^{(0)}, b_{n}^{(0)},\\beta_{n}^{(0)},\\imath_{n}^{(0)},\\jmath_{n}^{(0)},\\varsigma_{n}^{(0)},\\zeta_{n}^{(0)})\\}$ with\n$a_{0}^{(0)}=b_{0}^{(0)}=1$ and $\\beta_{0}^{(0)}=0$ be a system of\nseven-tuples of real numbers satisfying\n\\begin{itemize}\n \\item [(1)] 
\\begin{align}\n&0<(\\imath_{n+1}^{(0)}+\\beta_{n}^{(0)}\\varsigma_{n+1}^{(0)}-\\zeta_{n+1}^{(0)})^{2}+(\\jmath_{n+1}^{(0)}-\\imath_{n+1}^{(0)}\\beta_{n}^{(0)}\n+\\varsigma_{n+1}^{(0)})^{2}\\nonumber\\\\\n=&1-\\frac{1}{4}[(a_{n}^{(0)})^{2}+(b_{n}^{(0)})^{2}(1+(\\beta_{n}^{(0)})^{2})]\n[(a_{n+1}^{(0)})^{2}+(b_{n+1}^{(0)})^{2}(1+(\\beta_{n+1}^{(0)})^{2})]<1;\n\\end{align}\n \\item [(2)]\n \\begin{equation}\n \\beta_{n}^{(0)}\\neq0\\,\\,\\,\\,\\mbox{or}\\,\\,\\,\\,\\frac{(a_{n}^{(0)})^{2}}{(b_{n}^{(0)})^{2}}+(\\beta_{n}^{(0)})^{2}\\neq1\n \\end{equation}\n\\end{itemize}\nfor $n\\in \\mathbb{N}$ with $a_{n}^{(0)},b_{n}^{(0)}>0$ for $n\\in \\mathbb{N}\\cup\\{0\\}$,\nthen there exists a unique nontrivial probability measure $d\\mu$ on\n$\\partial \\mathbb{D}$ such that $a_{n}(d\\mu)=a_{n}^{(0)}$,\n$b_{n}(d\\mu)=b_{n}^{(0)}$, $\\beta_{n}(d\\mu)=\\beta_{n}^{(0)}$, $\\imath_{n}(d\\mu)=\\imath_{n}^{(0)}$, $\\jmath_{n}(d\\mu)=\\jmath_{n}^{(0)}$, $\\varsigma_{n}(d\\mu)=\\varsigma_{n}^{(0)}$ and $\\zeta_{n}(d\\mu)=\\zeta_{n}^{(0)}$,\nwhere $a_{n}(d\\mu),b_{n}(d\\mu),\\beta_{n}(d\\mu),\\imath_{n}(d\\mu),\\jmath_{n}(d\\mu),\\varsigma_{n}(d\\mu),\\zeta_{n}(d\\mu)$ are associated\ncoefficients of $d\\mu$ defined by (2.11)-(2.15).\n\\end{thm}\n\n\\begin{proof}\nLet 
\\begin{equation}\n\\kappa_{2n}^{(0)}=\\frac{1}{2}\\Big[(a_{n}^{(0)})^{-2}\\big(1+(\\beta_{n}^{(0)})^{2}\\big)+(b_{n}^{(0)})^{-2}\\Big]^{\\frac{1}{2}},\n\\end{equation}\n\\begin{equation}\n\\kappa_{2n+1}^{(0)}=\\Big[(a_{n+1}^{(0)})^{2}+(b_{n+1}^{(0)})^{2}\\big(1+(\\beta_{n+1}^{(0)})^{2}\\big)\\Big]^{-\\frac{1}{2}},\n\\end{equation}\n\\begin{equation}\n\\alpha_{2n-1}^{(0)}=\\frac{1}{4}(\\kappa_{2n}^{(0)})^{-2}\\Big[(b_{n}^{(0)})^{-2}-(a_{n}^{(0)})^{-2}\\big(1-(\\beta_{n}^{(0)})^{2}\\big)\\Big]\n-\\frac{1}{2}(\\kappa_{2n}^{(0)})^{-2}(a_{n}^{(0)})^{-2}\n(\\beta_{n}^{(0)})i\n\\end{equation}\nand\n\\begin{equation}\n\\alpha_{2n}^{(0)}=\\frac{1}{2}(\\imath_{n+1}^{(0)}+\\beta_{n}^{(0)}\\varsigma_{n+1}^{(0)}-\\zeta_{n+1}^{(0)})-\\frac{i}{2}(\\jmath_{n+1}^{(0)}-\\imath_{n+1}^{(0)}\\beta_{n}^{(0)}\n+\\varsigma_{n+1}^{(0)})\n\\end{equation}\nfor $n\\in \\mathbb{N}\\cup\\{0\\}$, then by (5.48), Verblunsky theorem and Theorem 3.1, there exists a unique nontrivial probability measure $d\\mu$ on $\\partial \\mathbb{D}$ such that\n\\begin{equation}\\alpha_{n}(d\\mu)=\\alpha_{n}^{(0)}\\,\\,\\,\\mbox{and}\\,\\,\\,\\kappa_{n}(d\\mu)=\\kappa_{n}^{(0)}\\end{equation}\nas well as\n\\begin{equation}a_{n}(d\\mu)=a_{n}^{(0)},\\,\\,b_{n}(d\\mu)=b_{n}^{(0)}\\,\\,\\,\\mbox{and}\\,\\,\\,\\beta_{n}(d\\mu)=\\beta_{n}^{(0)}\\end{equation} for $n\\in \\mathbb{N}\\cup\\{0\\}$.\n\nOn one hand, noting (5.49), by Theorem 5.8, we have\n\\begin{equation}\n\\left(\n \\begin{array}{c}\n \\imath_{n+1}(d\\mu) \\vspace{1mm}\\\\\n \\jmath_{n+1}(d\\mu) \\vspace{1mm}\\\\\n \\varsigma_{n+1}(d\\mu) \\vspace{1mm}\\\\\n \\zeta_{n+1}(d\\mu) \\\\\n \\end{array}\n \\right)=D(d\\mu)^{-1}\\left(\n \\begin{array}{c}\n \\alpha(d\\mu) \\\\\n A(d\\mu)\\gamma(d\\mu)+B(d\\mu)\\eta(d\\mu) \\\\\n \\end{array}\n \\right)\n\\end{equation}\nfor $n\\in \\mathbb{N}\\cup\\{0\\}$, where $A(d\\mu), B(d\\mu), \\gamma(d\\mu), \\eta(d\\mu), D(d\\mu), \\alpha(d\\mu)$ are respectively given by (5.16), (5.19), (5.27), (5.28) and (5.32) with 
$a_{n}(d\\mu), b_{n}(d\\mu),\\beta_{n}(d\\mu), \\alpha_{n}(d\\mu),$ $\\kappa_{n}(d\\mu), \\Lambda_{n}(d\\mu)$ replacing $a_{n}, b_{n},\\beta_{n}, \\alpha_{n}, \\kappa_{n}, \\Lambda_{n}$.\n\nOn the other hand, as in Remark 5.9, by directly invoking (5.50)-(5.53) and\n\\begin{equation}\\Lambda_{n}^{(0)}=-\\frac{1}{2}\\Big[(a_{n}^{(0)})^{-2}\\big(1+(\\beta_{n}^{(0)})^{2}\\big)+(b_{n}^{(0)})^{-2}\\Big]i,\\end{equation}\nwe have\n\\begin{equation}\n\\left(\n \\begin{array}{c}\n \\imath_{n+1}^{(0)} \\vspace{1mm}\\\\\n \\jmath_{n+1}^{(0)} \\vspace{1mm}\\\\\n \\varsigma_{n+1}^{(0)} \\vspace{1mm}\\\\\n \\zeta_{n+1}^{(0)} \\vspace{1mm}\\\\\n \\end{array}\n \\right)=(D^{(0)})^{-1}\\left(\n \\begin{array}{c}\n \\alpha^{(0)} \\\\\n A^{(0)}\\gamma^{(0)}+B^{(0)}\\eta^{(0)} \\\\\n \\end{array}\n \\right)\n\\end{equation}\nfor $n\\in \\mathbb{N}\\cup\\{0\\}$, where $A^{(0)}, B^{(0)}, \\gamma^{(0)}, \\eta^{(0)}, D^{(0)}, \\alpha^{(0)}$ are respectively given by (5.16), (5.19), (5.27), (5.28) and (5.32) with $a_{n}^{(0)}, b_{n}^{(0)},\\beta_{n}^{(0)}, \\alpha_{n}^{(0)},$ $\\kappa_{n}^{(0)}, \\Lambda_{n}^{(0)}$ replacing $a_{n}, b_{n},\\beta_{n}, \\alpha_{n}, \\kappa_{n}, \\Lambda_{n}$.\n\nIn terms of (2.37), (5.16), (5.19), (5.27), (5.28), (5.32), (5.54), (5.55) and (5.57), one can easily find that\n\\begin{equation*}D(d\\mu)=D^{(0)},A(d\\mu)=A^{(0)}, B(d\\mu)=B^{(0)}, \\alpha(d\\mu)=\\alpha^{(0)},\\gamma(d\\mu)=\\gamma^{(0)},\\eta(d\\mu)=\\eta^{(0)}.\\end{equation*}\nThus, by (5.56) and (5.58),\n\\begin{equation*}\\imath_{n}(d\\mu)=\\imath_{n}^{(0)}, \\jmath_{n}(d\\mu)=\\jmath_{n}^{(0)}, \\varsigma_{n}(d\\mu)=\\varsigma_{n}^{(0)}\\,\\,\\,\\mbox{and}\\,\\,\\, \\zeta_{n}(d\\mu)=\\zeta_{n}^{(0)}\\end{equation*}\nfor $n\\in \\mathbb{N}$.\n\\end{proof}\n\nIn addition to the above strong Favard theorem, by Theorem 5.5 and (5.17), we also have the following result on the determinants of the coefficient matrices $A, B, C, D, E$ and $F$.\n\n\\begin{thm}\nLet $A, B, C, D, E, F$ be the coefficient 
matrices as above, then\n\\begin{equation}\n|A||B|-|C||D|(1+\\beta_{n}i)+|E||F|=0.\n\\end{equation}\n\\end{thm}\n\n\\begin{proof}\nNote that\n\\begin{align}\nFE^{-1}=&|E|^{-1}S,\n\\end{align}\nwhere\n\\begin{align}\nS=&\\left(\\begin{array}{cc}\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}+1\\Big) & -\\frac{1}{4}[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+b_{n}^{-2}i]\\vspace{2mm}\\\\\n \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}-1\\Big) & -\\frac{1}{4i}[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\\beta_{n}i)] \\\\\n \\end{array}\n\\right)\\nonumber\\\\\n&\\left(\\begin{array}{cc}\n -\\frac{1}{4i}[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+b_{n}^{-2}i] & \\frac{1}{4}[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\\beta_{n}i)]\\vspace{2mm}\\\\\n -\\frac{1}{4i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}-1\\Big) & \\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}+1\\Big)i\\\\\n \\end{array}\n\\right)\\nonumber\\\\\n\\triangleq&\\left(\n \\begin{array}{cc}\n s_{11} & s_{12} \\\\\n s_{21} & s_{22} \\\\\n \\end{array}\n \\right)\n\\end{align}\nwith\n\\begin{align}\ns_{11}=&\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}+1\\Big)\n\\Big\\{-\\frac{1}{4i}[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+b_{n}^{-2}i]\\Big\\}\\nonumber\\\\\n&-\\frac{1}{4}[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+b_{n}^{-2}i]\\Big\\{-\\frac{1}{4i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}-1\\Big) 
\\Big\\}\\nonumber\\\\\n=&-\\frac{1}{8i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+b_{n}^{-2}i]\\nonumber\\\\\n=&-\\frac{1}{8}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-4}\\alpha_{2n-1}^{-1}(1+\\beta_{n}i)=|D|(1+\\beta_{n}i),\n\\end{align}\n\n\\begin{align}\ns_{12}=&\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}+1\\Big)\n\\Big\\{\\frac{1}{4}[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\\beta_{n}i)]\\Big\\}\\nonumber\\\\\n&-\\frac{1}{4}[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+b_{n}^{-2}i]\\Big\\{\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}+1\\Big)i\\Big\\}\\nonumber\\\\\n=&\\frac{1}{16}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}[a_{n}^{-2}(1+\\beta_{n}^{2})+b_{n}^{-2}]\\Big(\\alpha_{2n-1}^{-1}+1\\Big)\\nonumber\\\\\n=&\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\kappa_{2n}^{2}\\Big(\\alpha_{2n-1}^{-1}+1\\Big)=|A|,\n\\end{align}\n\n\\begin{align}\ns_{21}=&\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}-1\\Big)\n\\Big\\{-\\frac{1}{4i}[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\alpha_{2n-1}^{-1}+b_{n}^{-2}i]\\Big\\}\\nonumber\\\\\n&-\\frac{1}{4i}[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\\beta_{n}i)]\\Big\\{-\\frac{1}{4i}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\\beta_{n}i)\\Big(\\alpha_{2n-1}^{-1}-1\\Big)\\Big\\}\\nonumber\\\\\n=&-\\frac{1}{16}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}[a_{n}^{-2}(1+\\beta_{n}^{2})+b_{n}^{-2}]\\Big(\\alpha_{2n-1}^{-1}-1\\Big)\\nonumber\\\\\n=&-\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\kappa_{2n}^{2}\\Big(\\alpha_{2n-1}^{-1}-1\\Big)=|B|,\n\\end{align}\nand\n\\begin{align}\ns_{22}=&\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}-1\\Big)\n\\Big\\{\\frac{1}{4}[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\\beta_{n}i)
]\\Big\\}\\nonumber\\\\\n&-\\frac{1}{4i}[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\\beta_{n}i)]\\Big\\{\\frac{1}{4}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\Big(\\alpha_{2n-1}^{-1}+1\\Big)i\\Big\\}\\nonumber\\\\\n=&-\\frac{1}{8}\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}[\\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\\beta_{n}i)]\\nonumber\\\\\n=&\\frac{1}{8}\\Lambda_{n}^{-1}a_{n}^{-4}b_{n}^{-2}\\alpha_{2n-1}^{-1}(1+\\beta_{n}i)=|C|.\n\\end{align}\nThe last steps in the above identities for all the entries of $S$ hold by (5.17) and Theorem 5.5.\nSo\n\\begin{align}\nFE^{-1}=|E|^{-1}\\left(\n \\begin{array}{cc}\n |D|(1+\\beta_{n}i) & |A| \\\\\n |B| & |C| \\\\\n \\end{array}\n \\right).\n\\end{align}\nThus (5.59) follows immediately.\n\\end{proof}\n\n\\begin{rem}\nIn fact, one can get (5.59) by directly invoking (5.20)-(5.24) and (5.17). However, by this means, (5.66) could not be seen.\n\\end{rem}\n\n\n\n\n\\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{0pt}{4pt plus 0pt minus 0pt}{1.5pt plus 0pt minus 0pt}\n\n\n\\usepackage{multirow}\n\\usepackage{listings}\n \\lstdefinelanguage{diff}{\n\tbasicstyle=\\ttfamily\\bfseries\\scriptsize,\n\tmorecomment=[f][\\color{diffstart}]{@},\n\tmorecomment=[f][\\color{diffincl}]{+},\n\tmorecomment=[f][\\color{diffrem}]{-},\n keepspaces=true,\n\tidentifierstyle=\\color{black},\n }\n\n\\usepackage[shortlabels]{enumitem}\n\n\\setlength{\\textfloatsep}{4pt plus 1.0pt minus 2.0pt}\n\\setlength{\\floatsep}{4pt plus 1.0pt minus 2.0pt}\n\\setlength{\\intextsep}{2pt plus 1.0pt minus 2.0pt}\n\\setlength{\\dbltextfloatsep}{4pt plus 1.0pt minus 2.0pt}\n\\setlength{\\dblfloatsep}{4pt plus 1.0pt minus 2.0pt}\n\\captionsetup{belowskip=0pt,aboveskip=1.0pt}\n\n\\copyrightyear{2020} \n\\acmYear{2020} \n\\setcopyright{acmcopyright}\n\\acmConference[ICPC '20]{28th International Conference on Program Comprehension}{October 5--6, 2020}{Seoul, Republic of 
Korea}\n\\acmBooktitle{28th International Conference on Program Comprehension (ICPC '20), October 5--6, 2020, Seoul, Republic of Korea}\n\\acmPrice{15.00}\n\\acmDOI{10.1145\/3387904.3389285}\n\\acmISBN{978-1-4503-7958-8\/20\/05}\n\n\n\n\\begin{document}\n\\title{Automatic Android Deprecated-API Usage Update \\protect\\\\by Learning from Single Updated Example}\n\n\n\n\\author{Stefanus A. Haryono$^*$, Ferdian Thung$^*$, Hong Jin Kang$^*$, Lucas Serrano$^{\\dagger}$, Gilles Muller$^{\\ddagger}$,\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ Julia Lawall$^{\\ddagger}$, David Lo$^*$, Lingxiao Jiang$^*$}\n\\affiliation{%\n \\institution{*School of Information Systems, Singapore Management University, Singapore}\n}\n\\email{{stefanusah,ferdianthung,hjkang.2018,davidlo,lxjiang}@smu.edu.sg}\n\\affiliation{%\n \\institution{$^{\\dagger}$Sorbonne University\/Inria\/LIP6, France}\n}\n\\email{Lucas.Serrano@lip6.fr}\n\\affiliation{%\n \\institution{$^{\\ddagger}$Inria, France}\n}\n\\email{{Gilles.Muller,Julia.Lawall}@inria.fr} \n\n\n\n\n\n\n\n\n\\begin{abstract}\nDue to the deprecation of APIs in the Android operating system, developers have to update usages of the APIs to ensure that their applications work for both the past and current versions of Android.\nSuch updates may be widespread, non-trivial, and time-consuming.\nTherefore, automation of such updates will be of great benefit to developers. \nAppEvolve, which is the state-of-the-art tool for automating such updates, relies on having before- and after-update examples to learn from. In this work, we propose an approach named CocciEvolve\\ that performs such updates using only a single after-update example. \nCocciEvolve\\ learns edits by extracting the relevant update to a block of code from an after-update example. 
From preliminary experiments, we find that CocciEvolve\\ can successfully perform 96 out of 112 updates, with a success rate of 85\\%.\n\\end{abstract}\n\\keywords{API update, program transformation, Android, single example}\n\n\\maketitle\n\\renewcommand{\\shortauthors}{Haryono, et al.}\n\n\\section{Introduction}\nWhen an Android API is deprecated, apps using the API should update their usages of the API to ensure that they still work in the current and future versions of Android.\nFor these updates, developers need to learn the new API(s) that should replace the deprecated API, while maintaining backward compatibility with older versions to address Android fragmentation~\\cite{he2018understanding,li2018cid}. Moreover, the deprecated API may be used in multiple locations in a codebase. \nThus, manually updating the deprecated API may be cumbersome and time consuming. \n\nTo help developers in updating usages of deprecated APIs with their replacement APIs, Fazzini et al.~\\cite{fazzini2019automated} proposed AppEvolve to automate the update task. AppEvolve learns to transform applications that use a deprecated API by learning from before- and after-update examples in GitHub.\nThese updates add usages of replacement APIs around usages of the deprecated API along with conditional checks of Android versions in code. AppEvolve learns a generic patch from such examples and applies the transformation from each generic patch in a certain order. \n\nRecently, Thung et al.~\\cite{thung2020automated} reported that, in order for AppEvolve to perform a successful update, the target code requiring update has to be written syntactically similar to the before- and after-update example. 
They demonstrated that AppEvolve's performance can be improved significantly if the app code is {\\em manually} rewritten to have syntactic similarities to the before- and after-update example.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\scriptsize{\n\\begin{lstlisting}[language=java,numbers=none,sensitive=true,columns=flexible,basicstyle=\\ttfamily]\nif (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.M) {\n hour = picker.getHour();\n} else {\n hour = picker.getCurrentHour();\n}\n\\end{lstlisting}\n\t\t\\caption{An example of an after-update for {\\tt getCurrentHour} deprecated API}\\label{fig:after_update_example}\n\t}\n\\end{figure}\n\n\n\nIn this work, we propose CocciEvolve{}. CocciEvolve{} outperforms AppEvolve in the following aspects:\n\\begin{enumerate}[nosep,leftmargin=*]\n \\item CocciEvolve\\ eliminates the shortcoming of AppEvolve by normalizing both the after-update example and the target app code to update. In this way, CocciEvolve\\ ensures that both of them are written similarly, thereby preventing unsuccessful updates caused by minor differences in the way the code is written.\n \n \\item CocciEvolve\\ requires only a single after-update example for learning how to update an app that uses a deprecated API.\n Consider an after-update example in Figure~\\ref{fig:after_update_example}. The {\\tt if} and {\\tt else} blocks correspond to the code using the replacement and deprecated API methods, respectively. The code in the {\\tt if} block runs only on versions of Android after the deprecation. Thus, the code in {\\tt else} block can be considered as the code that uses the deprecated method before the update. \n \\item Transformations made by CocciEvolve\\ are expressed in the form of a semantic patch by leveraging Coccinelle~\\cite{lawall2018coccinelle}. Semantic patch has a syntax similar to a \\textit{diff} that is familiar to software developers. 
Therefore, the modification process is more readable and understandable to developers.\n\\end{enumerate} \n\n\\noindent\nThe contributions of our work are:\n\\begin{itemize}[nosep,leftmargin=*]\n \\item We propose CocciEvolve, an approach for updating Android deprecated-API usages using only a single after-update example. \n \\item We perform {\\em automatic} code normalization of both the update example and the target code to be updated, addressing the challenge of updating code that is semantically equivalent but syntactically different, which was a limitation of prior work.\n \\item \nWe have evaluated CocciEvolve\\ with a dataset of 112 target files to update that we obtained from Github. The 112 files use 10 deprecated APIs used in the original evaluation of AppEvolve.\nWe show that CocciEvolve\\ can successfully update 96 target files. \n\n\\end{itemize}\n\n\\noindent\nThe remainder of this paper is structured as follows. Section~\\ref{sec:prelim} provides some preliminaries. Section~\\ref{sec:approach} details our proposed approach. \nSection~\\ref{sec:exp} describes our preliminary experiments and results. \nSection~\\ref{sec:related} presents related work. Finally, we conclude in Section~\\ref{sec:conclusion}.\n\n\\section{Preliminaries}\\label{sec:prelim}\n{\\bf AppEvolve.}\nAppEvolve is the state-of-the-art tool for automating API-usage updates for deprecated Android APIs. As input, it takes an app to update and a mapping from a deprecated method to its replacement method(s). It has four phases: {\\em API-Usage Analysis}, {\\em Update Example Search}, {\\em Update Example Analysis}, and {\\em API-Usage Update}. \n\nIn the {\\em API-Usage Analysis} phase, AppEvolve finds uses of the deprecated method inside the app. \nIn the {\\em Update Example Search} phase, AppEvolve searches GitHub for apps that use both the deprecated and replacement methods in the same file. 
\nFor each of these apps, AppEvolve searches the app commit history for a change that adds the replacement method(s) without removing the deprecated method. These changes are used to learn how to update deprecated method usages. In the {\em Update Example Analysis} phase, AppEvolve produces a generic patch from each example. AppEvolve then computes the\ncommon core from the produced generic patches. The common core is the longest subsequence of edits across the patches. In the {\em API-Usage Update} phase, AppEvolve applies generic patches in ascending order of their distance to the common core. To apply a generic patch, AppEvolve tries to match context variables to variables in the app. If matches are found, AppEvolve applies edits in the generic patch and returns the updated app if the edits are successful.\n\n\vspace{0.2cm}\noindent{\bf Coccinelle4J.}\nCoccinelle4J~\cite{kang2019semantic} is a recent Java port of Coccinelle, \nwhich is a program matching and transformation tool \cite{lawall2018coccinelle,padioleau2008documenting}.\nGiven the source code of a program and a {\em semantic patch} describing the desired transformations, Coccinelle4J transforms the parts of the source code that match the semantic patch. \n\nWritten in the Semantic Patch Language (SmPL), \na semantic patch has two parts: (1) context information, including the declaration of metavariables; and (2) changes to be made to the source code. \nA metavariable can match program elements of the type indicated in its declaration. Modifications are expressed using fragments of Java as follows: (1) code that should be removed is marked by annotating the start of its lines with $-$; and (2) code that should be added is marked by annotating the start of its lines with $+$. Unannotated lines add context to the semantic patch. \nFigure~\ref{fig:semantic_patch_example} shows an example of a semantic patch. 
The line surrounded by {\tt @@} markers declares the metavariable.\n\n\begin{figure}[h]\n\t\centering\n\t\scriptsize{\n\begin{lstlisting}[language=diff,numbers=none]\n@@\nexpression timepicker;\n@@\n- timepicker.getCurrentHour();\n+ timepicker.getHour();\n\end{lstlisting}\n\t\t\caption{A simplified example of a semantic patch that transforms uses of the deprecated method, {\tt getCurrentHour} }\label{fig:semantic_patch_example}\n\t}\n\end{figure}\n\n\section{Approach}\label{sec:approach}\n\n\begin{figure}[t]\n\t\centering\n\t\includegraphics[width=0.95\linewidth]{Architecture_Diagram.png}\n\t\caption{System overview of CocciEvolve}\n\t\label{fig:framework}\n\end{figure}\n\n\nAn overview of our proposed system and its pipelines is shown in Figure~\ref{fig:framework}. CocciEvolve\ is built on three main components: (1) Source file normalization, (2) Updated API block detection, and (3) API-update semantic patch creation. These components are the building blocks of the CocciEvolve\ pipelines: a pipeline to create the update semantic patch, and a pipeline to apply the update to a target file. \nThe create update patch pipeline takes as inputs the API usage change and a source file containing an updated API call. \nThe apply update pipeline takes as inputs the API usage change, the target source file, and the update semantic patch file. In the following subsections, we explain each of the system components in detail.\n\n\n\n\n\subsection{Source File Normalization}\nDifferent software developers may have different programming styles; thus, semantically equivalent code may be expressed in different syntactic forms. As a result, equivalent usages of one API may vary in their expression. Figure~\ref{fig:syntax_different_same_semantic} shows an example of such cases in which \texttt{getCurrentHour()} is expressed differently. \nTherefore, it is necessary to perform source code normalization. 
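After the normalization steps described below, the first fragment of Figure~\ref{fig:syntax_different_same_semantic} would take a form along the lines of the following sketch (the temporary variable name is illustrative, matching the style of the later extraction examples):\n{\scriptsize\n\begin{lstlisting}[language=Java,numbers=none,sensitive=true,columns=flexible,basicstyle=\ttfamily]\nint tempFunctionReturnValue;\ntempFunctionReturnValue = timePicker.getCurrentHour();\nif (tempFunctionReturnValue > 11)\n    itsNoon();\n\end{lstlisting}\n}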
\n\n\begin{figure}[t]\n\t\centering\n\t\scriptsize{\n\begin{lstlisting}[language=Java,numbers=none,sensitive=true,columns=flexible,basicstyle=\ttfamily]\nif (timePicker.getCurrentHour() > 11)\n itsNoon();\n \nint currentHour = timePicker.getCurrentHour();\nif (10 < currentHour)\n itsNoon();\n\end{lstlisting}\n\t\t\caption{Two fragments of semantically-equivalent code expressed differently}\label{fig:syntax_different_same_semantic}\n\t}\n\end{figure}\n\nFocusing on source code that is related to the API usage, we perform the following code normalization steps:\n\begin{itemize}[nosep,leftmargin=*]\n \item An API call contained in a compound expression or statement (e.g., if, loop, return) is extracted into a variable assignment. \n \item Arguments of an API call are extracted into variable assignments.\n \item The object receiving an API call is extracted into a variable assignment.\n\end{itemize}\nTwo components are responsible for normalization in CocciEvolve{}:\n\n\subsubsection{Statement Extractor}\nFor calls to API methods that return a value, the Statement Extractor extracts API calls that are part of compound expressions or statements. \nBefore each such compound expression or statement, a new simple statement is inserted \nthat initializes a new temporary variable with the return value of the API call.\nThis extraction is performed using Coccinelle4J, with a semantic patch that is built from the type signature of the input API. 
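Such an extraction semantic patch might be sketched as follows; the metavariable declarations and the exact shape of the rule here are a simplified assumption for illustration, not the patch that CocciEvolve{} actually generates:\n{\scriptsize\n\begin{lstlisting}[language=diff,numbers=none]\n@@\nTelephonyManager tm;\nidentifier v;\nexpression e;\n@@\n+ String tempFunctionReturnValue;\n+ tempFunctionReturnValue = tm.getDeviceId();\n- v = e + tm.getDeviceId();\n+ v = e + tempFunctionReturnValue;\n\end{lstlisting}\n}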
\nAn example of statement extraction can be seen in Figure~\\ref{fig:statement_extraction}.\n\n\n\\begin{figure}[t]\n\t\\centering\n\t\\scriptsize{\n\\begin{lstlisting}[language=diff,numbers=none]\nfinal String tmDevice;\n+ String tempFunctionReturnValue;\n+ tempFunctionReturnValue = tm.getDeviceId();\n- tmDevice = \"\" + tm.getDeviceId();\n+ tmDevice = \"\" + tempFunctionReturnValue;\n\\end{lstlisting}\n\t\t\\caption{Example of statement extraction for {\\tt getDeviceId} API call in a compound expression.}\\label{fig:statement_extraction}\n\t}\n\\end{figure}\n\n\n\\subsubsection{Variable Extractor}\nThe Variable Extractor is used to extract the arguments and object that the API call is invoked on. \nThese arguments and object are assigned to temporary variables. \nAs before, this extraction is done through Coccinelle4J using a semantic patch.\nAn example of this extraction is shown in Figure~\\ref{fig:variable_extraction}.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\scriptsize{\n\\begin{lstlisting}[language=diff,numbers=none]\n+ Context paramVar0 = context;\n+ int paramVar1 = android.R.style.TextAppearance_Large;\n+ TextView classNameVar = tvTitle;\n- tvTitle.setTextAppearance(context, \n- android.R.style.TextAppearance_Large);\n+ classNameVar.setTextAppearance(paramVar0, paramVar1);\n\\end{lstlisting}\n\t\t\\caption{Example of variable extraction for call to method {\\tt setTextAppearance}}\\label{fig:variable_extraction}\n\t}\n\\end{figure}\n\n\\subsection{Updated API Block Extractor}\nThe Updated API Block Extractor identifies the relevant block of code containing the updated API call. \nThis block is extracted and used as the source of the update patch. 
\nIn order to prevent false positives due to irrelevant code blocks, we use the following rules to identify a valid updated API block:\n\begin{enumerate}[nosep,leftmargin=*]\n \item A valid updated block contains the updated API call in the {\tt if} branch and the old API call in the {\tt else} branch, or vice versa. \n \item A valid updated block has an Android version check as the {\tt if} condition.\n\end{enumerate}\nThese classification rules are implemented in two components:\n\n\subsubsection{Version Statement Normalization}\nOne of the important criteria for a valid updated block is the presence of\nan {\tt if} statement that checks the Android version. \nHowever, this check may take different forms in different projects (e.g., the Android version may first be assigned to a local variable). \nTo alleviate this problem, CocciEvolve{} normalizes the conditions involving the Android version. \nThis normalization is done automatically by first detecting any assignment of an Android version constant to a variable, and then replacing usages of the local variable in a condition with the relevant Android version constant. \nAn example can be seen in Figure~\ref{fig:version_normalization}.\n\n\begin{figure}[h]\n\t\centering\n\t\scriptsize{\n\begin{lstlisting}[language=diff,numbers=none]\nint currentBuildVersion = Build.VERSION.SDK_INT;\nint marshmallowVersion = Build.VERSION_CODES.M;\n- if (currentBuildVersion >= marshmallowVersion) {\n+ if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {\n timePicker.setHour(1);\n}\n\end{lstlisting}\n\t\t\caption{Sample Android version statement normalization}\label{fig:version_normalization}\n\t}\n\end{figure}\n\n\subsubsection{Update Block Extractor}\nThe Update Block Extractor takes as input the normalized source file, detects a valid block based on the aforementioned criteria, and extracts it from the file. 
This block is input to the API Update Semantic Patch Creation component.\n\n\\subsection{API Update Semantic Patch Creation}\nUsing the normalized update block as an input, this component creates a Coccinelle4J semantic patch that can be used to update a normalized target file. \nThis component will replace variables and expressions with metavariables. \nMetavariables will bind to program elements in the input code passed into Coccinelle4J. \nThe patch works by detecting the location of the old API call and then adding the new code which consists of a surrounding if block, the updated API call, and new variables introduced by the new API. \n\nTo increase the robustness of the system, for APIs that return a value, two different update rules are created. \nOne rule is for cases where the return value is assigned into a variable, while the other is for cases where the return value is not used. \nAn example of the update semantic patch can be seen in Figure~\\ref{fig:cocci_example}.\n\n\\begin{figure}[h]\n\t\\centering\n\t\\scriptsize{\n\\begin{lstlisting}[language=diff,numbers=none]\n@bottomupper@\nexpression exp0;\nidentifier classIden;\n@@\nTimePicker classIden = exp0;\n...\n+ if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {\n+ classIden.getMinute();\n+ } else {\n classIden.getCurrentMinute();\n+ }\n\n@bottomupper_assignment@\nexpression exp0;\nidentifier classIden;\n@@\nTimePicker classIden = exp0;\n...\n+ if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {\n+ tempFunctionReturnValue = classIden.getMinute();\n+ } else {\ntempFunctionReturnValue = classIden.getCurrentMinute();\n+ }\n\\end{lstlisting}\n\t\t\\caption{Example of update patch for {\\tt getCurrentMinute} API}\\label{fig:cocci_example}\n\t}\n\\end{figure}\n\n\n\n\\section{Experiments}\\label{sec:exp}\n\n\\subsection{Dataset}\nTo assess the performance of CocciEvolve\\ for practical usage, \nwe use a dataset of real-world Android projects retrieved from public Github repositories. 
\nFor this purpose, we use AUSearch~\\cite{asyrofi2020ausearch}, a tool for searching Github repositories, to find Android API usage. \nFor the update semantic patch creation, we use the existing after-update examples provided in the AppEvolve replication package. \nFor each API, only a single after-update example is used.\nWe obtained a total of 112 target source files from Github for \nthe 10 most commonly used APIs that were used in the original evaluation of AppEvolve. These target files are disjoint from the target files used by AppEvolve and thus are used to evaluate AppEvolve's generalizability in updating other target files.\nDetailed statistics for this dataset can be seen in Table~\\ref{table:data_statistic}. \nThis dataset is published with our replication package.\\footnote{\\label{refnote}\\url{https:\/\/sites.google.com\/view\/cocci-evolve\/}}\n\n\\begin{table}[t]\n\\caption{Number of targets in our evaluation dataset }\n\\begin{center}\n\\begin{tabular}{ |p{20em}|p{4.5em}| }\n\\hline\n \\textbf{API Description} & \\textbf{\\# Targets}\n\\\\ \\hline\n\ngetAllNetworkInfo() & 8\n\\\\ \\hline\ngetCurrentHour() & 9\n\\\\ \\hline\ngetCurrentMinute() & 12\n\\\\ \\hline\nsetCurrentHour(Integer) & 12\n\\\\ \\hline\nsetCurrentMinute(Integer) & 10\n\\\\ \\hline\nsetTextAppearance(...) & 12\n\\\\ \\hline\nrelease() & 14\n\\\\ \\hline\ngetDeviceId() & 12\n\\\\ \\hline\nrequestAudioFocus(...) & 8\n\\\\ \\hline\nsaveLayer(...) & 15\n\\\\ \\hline\n\n\\end{tabular}\n\\end{center}\n\\label{table:data_statistic}\n\\end{table}\n\n\\subsection{Experimental Settings}\nOur experiments are done by comparing the performance of CocciEvolve\\ against AppEvolve based on the number of applicable updates produced. 
\nTo generate\nthe update patch, we utilize a single update example for each API from the available AppEvolve examples.\n\nThe target files for the experiments come from the public Android project dataset that has been collected from Github through the use of AUSearch~\cite{asyrofi2020ausearch}. CocciEvolve\ is applied to every target file using the relevant API update patch that was created. \nFor experiments involving AppEvolve, we need to configure each target project as an Eclipse project and create an additional XML file that contains the deprecated API descriptions and their locations in the file. \nDue to this limitation, our experiments on AppEvolve are focused on the first instance of each API call for each target project. \n\n\subsection{Results}\nIn our experiments, CocciEvolve\ attains better performance than AppEvolve. \nFor most APIs, CocciEvolve\ achieves a near-perfect result. \nWe asked a software engineer with three years of experience in Android, who was not part of this project, to validate the correctness of each update by verifying that the update introduces no semantic changes. Our experimental results are also included in our replication package.\n\nIn most cases, AppEvolve does not produce any code update. Thung et al.~\cite{thung2020automated} note that AppEvolve requires some manual code refactoring and modifications to be able to perform the automated update. 
Table~\ref{result_statistic} shows the statistics of our evaluation.\n\n\n\begin{table}[t]\n \centering\n \caption{Statistics of updating target files per API}\n \label{result_statistic}\n \begin{tabular}{|l|c|c|c|c|}\n \n \hline\n\multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{API}}} & \multicolumn{2}{c|}{\textbf{CocciEvolve}} & \multicolumn{2}{c|}{\textbf{AppEvolve}} \\ \n\cline{2-5} \n\multicolumn{1}{|c|}{} & \n\textbf{Success} & \textbf{Fail} & \textbf{Success} & \textbf{Fail} \\ \hline\n \n \n getAllNetworkInfo() & 0 & 8 & 0 & 8 \n\\ \hline\ngetCurrentHour() & 9 & 0 & 1 & 8\n\\ \hline\ngetCurrentMinute() & 12 & 0 & 1 & 11\n\\ \hline\nsetCurrentHour(Integer) & 12 & 0 & 10 & 2\n\\ \hline\nsetCurrentMinute(Integer) & 10 & 0 & 6 & 4\n\\ \hline\nsetTextAppearance(...) & 12 & 0 & 1 & 11\n\\ \hline\nrelease() & 14 & 0 & 0 & 14\n\\ \hline\ngetDeviceId() & 12 & 0 & 1 & 11\n\\ \hline\nsaveLayer(...) & 15 & 0 & 0 & 15\n\\ \hline\nrequestAudioFocus(...) & 0 & 8 & 0 & 8\n\\ \hline\n\textbf{Total} & \textbf{96} & \textbf{16} & \textbf{20} & \textbf{92}\n\\ \hline \n \end{tabular}\n \n\end{table}\n\n\nBased on the experimental results, we can see that CocciEvolve\ mainly failed for two APIs: {\tt getAllNetworkInfo()} and {\tt requestAudioFocus(...)}. \nUpdating these two APIs requires the creation of new objects for arguments to the replacement APIs.\nThese objects are frequently created outside of the updated API block, \nand a data-flow analysis is required to detect them and construct the update correctly. 
The current version of CocciEvolve{} does not support sophisticated data-flow analysis.\n\nCompared to AppEvolve, CocciEvolve{} has several advantages:\n\n\\begin{itemize}[nosep,leftmargin=*]\n \\item CocciEvolve\\ does not need extensive setup or configuration;\n \\item CocciEvolve\\ is capable of updating multiple API calls in the same file without additional configuration;\n \\item CocciEvolve\\ provides an easily readable and understandable semantic patch as a by-product;\n \\item CocciEvolve\\ only needs a single updated example.\n\\end{itemize}\n\n\\section{Related Work}\\label{sec:related}\nThere are many studies on API deprecation~\\cite{kapur2010refactoring,zhou2016api,brito2016developers,sawant2018features,li2018characterising,CiD,ACRyL,PIVOT,ELEGANT,understandingfic,tamingandroid}. \nKapur et al.~\\cite{kapur2010refactoring} discovered that APIs can be removed from a library without warning. \nZhou and Walker~\\cite{zhou2016api} proposed a tool to mark deprecated API usages in StackOverflow posts. \nSome studies propose \napproaches~\\cite{CiD,ACRyL, PIVOT,ELEGANT,understandingfic,tamingandroid} to detect API compatibility issues.\nBrito et al.~\\cite{brito2016developers} showed that not all APIs are annotated with replacement messages.\nSawant et al.~\\cite{sawant2018features} found 12 reasons for deprecation.\nLi et al.~\\cite{li2018characterising} \ncharacterized Android APIs and \nfound inconsistencies between their annotation and documentation. 
Unlike these studies, our work aims to automatically update usages of deprecated Android APIs.\n\nThere are many studies on program transformation inference~\cite{Meng:2013:LLA:2486788.2486855,Rolim:2017:LSP:3097368.3097417,Rolim:arxiv,jiang2019inferring,fazzini2019automated}.\nLASE~\cite{Meng:2013:LLA:2486788.2486855} creates edit scripts by finding common changes from a set of change examples.\nREFAZER~\cite{Rolim:2017:LSP:3097368.3097417} employs a programming-by-example methodology to infer transformations from a set of change examples. REVISAR~\cite{Rolim:arxiv} finds common Java edit patterns from code repositories by clustering the edit patterns. Jiang et al.~\cite{jiang2019inferring} proposed GenPat, which builds source code hypergraphs to infer transformations. \nFazzini et al.~\cite{fazzini2019automated} proposed AppEvolve to transform apps with Android deprecated-API usages into ones that are backward compatible.\nOur work shares the same goal. However, while AppEvolve learns from a before- and after-update example, our approach requires only one after-update example.\n\n\section{Conclusion and Future Work}\label{sec:conclusion}\n\nIn this work, we propose CocciEvolve{}, which can learn an Android API update from a single after-update example. \nCocciEvolve\ performs code normalization to standardize the code and writes the update in the Semantic Patch Language (SmPL) to make the transformation transparent and readable to developers.\nOur experiments on 112 target files covering 10 deprecated Android APIs show that CocciEvolve\ successfully updates usages in 96 target files. On the other hand, AppEvolve can only update deprecated-API usages in 20 of these target files. For future work, we plan to perform code slicing to find code relevant to the API update, including code beyond method boundaries. 
We also plan to perform code denormalization to restore the original coding style.\n\n\n\n\\noindent{\\bf Acknowledgement.} This research is supported by the Singapore NRF (award number: NRF2016-NRF-ANR003) and the ANR ITrans project.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\subsection{Notation}\nThe set of positive integers up to and including $n$ is denoted by $\\mathbb{N}_n$. For a set $K$, its cardinality is denoted by $|K|$, and its complement is denoted by $K^c$. The Boolean domain $\\{True,False\\}$ is denoted by $\\mathbb{B}$. Given a set $X$ of Boolean variables, the set of all valuations is denoted by $V_x = \\mathbb{B}^{|X|}$ and called the domain of $X$, and a specific valuation is denoted by $x\\in V_x$. \n \n \n \\subsection{Graph theory}\n\nA \\emph{graph} is a pair $\\mathcal{G}=(\\mathcal{V},\\mathcal{E})$, where $\\mathcal{V}$ is a set of nodes (vertices) and $\\mathcal{E}\\subset (\\mathcal{V}\\times \\mathcal{V})$ is a set of edges. \n\nThe graph $\\mathcal{G}$ is called \\emph{directed} (digraph) when edges have a direction. We say that edge $e = (u,v) \\in \\mathcal{E}$ points from $u$ to $v$. A \\emph{path} $\\sigma$ on a digraph is a sequence $\\sigma= \\{e_1,\\ldots,e_k\\}$ of consecutive edges $e_i = (v_s^i, v_d^i)$, i.e., with $v_d^{i} = v_s^{i+1}$ for all $1\\leq i < k$. The path $\\sigma$ is said to be a path from node $v_s^1$ to node $v_d^k$. A path with identical start and end nodes (i.e., $v_s^1 = v_d^k$) is called a \\emph{cycle}. If directed graph $\\mathcal{G}$ has no cycles, it is called a \\emph{directed acyclic graph}.\nFurthermore a directed acyclic graph is called a \\emph{tree} if any two vertices are connected with at most one path. Any disjoint union of trees is called a \\emph{forest}. 
A node without any outgoing edge is called a \emph{leaf node}.\n\nA graph $\mathcal{G}'=(\mathcal{V}',\mathcal{E}')$ is called an \emph{induced subgraph} of $\mathcal{G}$ if $\mathcal{V}'\subseteq \mathcal{V}$ and $\mathcal{E}' \subset ( (\mathcal{V}'\times\mathcal{V}') \cap \mathcal{E})$. All the subgraphs we consider in this paper are induced subgraphs, so we refer to them simply as subgraphs. A digraph is called \emph{strongly connected} if there is a path from each node to every other node. The \emph{strongly connected components} of a directed graph $\mathcal{G}$ are its maximal strongly connected subgraphs. Here, maximal is used in the sense that no strongly connected component of $\mathcal{G}$ is a subgraph of some other strongly connected subgraph of $\mathcal{G}$.\n \n\n\n\n\subsection{Boolean systems and networks}\n\nA \emph{Boolean system} $S$ is a tuple of the form $\left\langle U, E, Y, f \right\rangle$ where\n\begin{itemize}\n\item $U = \{u^{(1)},\ldots,u^{(n_u)}\}$ is the set of Boolean control inputs with domain $V_u \doteq \mathbb{B}^{n_u}$,\n\item $E= \{e^{(1)},\ldots,e^{(n_e)}\}$ is the set of Boolean environment (uncontrolled) inputs, disjoint from $U$, with domain $V_{e}\doteq \mathbb{B}^{n_e}$,\n\item $Y = \{y^{(1)},\ldots,y^{(n_y)}\}$ is the set of Boolean outputs with domain $V_y\doteq \mathbb{B}^{n_y}$, and\n\item $f:V_u \times V_e \rightarrow V_y$ is the system function, which maps the inputs to the outputs.\n\end{itemize}\nThe $j^{th}$ component of the system function is denoted by $f^{(j)} : V_u \times V_e \rightarrow V_{y^{(j)}}$. Note that this system definition corresponds to a memoryless or stateless system, that is,\n\begin{equation} \label{eq:y=fue}\ny = f(u,e)\n\end{equation}\nwhere $y\in V_y$, $u\in V_u$ and $e \in V_e$ are valuations of the outputs, control inputs and environment inputs, respectively. 
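As a simple illustration of \eqref{eq:y=fue}, consider a system with a single control input, a single environment input and a single output, i.e., $U=\{u^{(1)}\}$, $E=\{e^{(1)}\}$, $Y=\{y^{(1)}\}$, with the system function\n\begin{equation*}\nf(u,e) = u^{(1)} \wedge \neg e^{(1)}.\n\end{equation*}\nThis system outputs $True$ exactly when the control input is $True$ and the environment input is $False$.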
\n\n\nGiven two systems $S_1=\left\langle U_1, E_1, Y_1, f_1\right\rangle$ and $S_2=\left\langle U_2, E_2, Y_2, f_2\right\rangle$, a \emph{serial interconnection} from $S_1$ to $S_2$ is formed by equating a set of outputs of $S_1$ to a set of environment inputs of $S_2$. We denote the set of shared variables in a serial interconnection as \n\begin{equation}\label{eq:shared}\n{I_{1,2}} = \{(k,l)\mid y_1^{(k)} = e_2^{(l)}\},\n\end{equation}\nand say $S_1$ is connected to $S_2$ when the set of shared variables is nonempty, i.e., ${I_{1,2}}\neq\emptyset$. \n\n\nA \emph{Boolean network} is a tuple $S = \langle \{S_i\}_{i=1}^n, \mathcal I \rangle$ that consists of a collection of subsystems $S_i=\left\langle U_i, E_i, Y_i, f_i\right\rangle$, for $i\in\mathbb{N}_n$, and an interconnection structure $\mathcal I =\{I_{i,j}\}_{i,j\in\mathbb{N}_n}$, where $I_{i,j}$ is defined as in \eqref{eq:shared}. The interconnection structure represents which subsystems are connected to which others through which variables. The interconnection structure induces a digraph $\mathcal G_S = (\mathcal{V}_S,\mathcal{E}_S)$ called the \emph{system graph}, where $\mathcal{V}_S = \{S_1,\ldots, S_n\}$ and $\mathcal{E}_S=\{(S_i,S_j)\mid {I_{i,j}}\neq\emptyset\}$. The set $E^{int}_i \subseteq E_i$ of internal inputs for a system $S_i$ is defined as $E^{int}_i = \{e_i^{(l)} \in E_i \mid \exists j\in \mathbb{N}_n, k\in \mathbb{N}_{n_{y_j}}, (k,l)\in I_{j,i}\}$. The remaining inputs of $S_i$ are called external inputs and defined as $E_i^{ext} = E_i\setminus E^{int}_i$. \n\n\nFor well-posedness of the network, we assume the following: (i) If two systems are connected to a third one, they are connected through different environment inputs. That is, if there exist some $i,j\in\mathbb{N}_n$, $i\neq j$ with $(k_1,l) \in I_{i,j}$, then there does not exist any $k_2$ with $(k_2,l) \in I_{i',j}$ for any $i'\neq i$. (ii) The system graph $\mathcal G_S$ is a directed acyclic graph. 
Note that in the memoryless Boolean setting, cycles in the interconnection structure generally lead to algebraic loops that are not well-defined; therefore, such interconnections are not considered. In fact, the Boolean network $S$ itself is a Boolean system whose inputs, outputs and system function can be derived from those of its subsystems, and the interconnection structure.\n\n\n\subsection{Specifications}\n\nWe consider specifications in the form of \emph{assumption-guarantee} pairs given in terms of Boolean-valued functions. This section introduces some terminology regarding Boolean-valued functions and assume-guarantee specifications.\n\nLet $\varphi: V_x \rightarrow \mathbb{B}$ be a Boolean-valued function of variables in $X$. Each $\varphi$ can be equivalently represented by its satisfying set $\llbracket \varphi \rrbracket \subseteq V_x$ defined as follows:\n\begin{equation}\n\llbracket \varphi\rrbracket \doteq \{x \in V_{x} \mid \varphi(x)=True \}.\n\end{equation}\nPropositional operations on Boolean-valued functions are defined in the usual way. 
The symbols $\\neg,\\land$ and $\\lor$ are used for the logical operations \\emph{NOT} (negation), \\emph{AND} (conjunction) and \\emph{OR} (disjunction), respectively, and they act on satisfying sets as follows:\n$\\llbracket \\neg \\psi \\rrbracket \\doteq \\llbracket \\psi \\rrbracket^c$, $\\llbracket \\psi_1 \\wedge \\psi_2 \\rrbracket \\doteq \\ws{\\psi_1} \\cap \\ws{\\psi_2}$, and $\\llbracket \\psi_1 \\lor \\psi_2 \\rrbracket \\doteq \\ws{\\psi_1} \\cup \\ws{\\psi_2}$.\n\nAdditional operations, such as \\emph{XOR} (exclusive-or) denoted by $\\oplus$, can be defined using the main operators above: $ \\psi_1 \\oplus \\psi_2 \\doteq (\\psi_1 \\land \\neg\\psi_2) \\lor (\\neg\\psi_1 \\land \\psi_2)$.\n\n\\begin{defn}\t\n\tLet $\\varphi: V \\rightarrow \\mathbb{B}$ be a Boolean-valued function and assume $V = \\prod_{i \\in J} V_i$.\n\tThe \\emph{projection} of $\\varphi$ onto a set $I\\subseteq J$ of variables is another Boolean-valued function $\\varphi |_{I} : \\prod_{i \\in I} V_{i} \\rightarrow \\mathbb{B}$ whose satisfying set is defined as \n\t$$\\llbracket \\varphi |_{I}\\rrbracket \\doteq \\left\\{ x \\in \\prod_{i \\in I} V_{i} \\mid \\exists y \\in \\prod_{j \\notin I} V_{j} \\text{ such that } (x,y)\\in \\llbracket \\varphi \\rrbracket \\right\\}.$$ \n\\end{defn}\n\nWhen the identity of the variables is clear from the context, the order of the tuple is ignored. In other words, saying $(x,y)\\in \\llbracket \\varphi \\rrbracket$ is equivalent to saying $(y,x)\\in \\llbracket \\varphi \\rrbracket$ and vice versa. \n\nAssumptions and guarantees are Boolean-valued functions that capture the \\emph{a priori} knowledge about the uncontrolled inputs and the desired safe behavior of the system outputs, respectively. \n\n\n\n\\begin{defn} An \\emph{assumption} $A: V_{e} \\rightarrow \\mathbb{B}$ for a Boolean system $S$ is a Boolean-valued function of its uncontrolled inputs.
\\end{defn}\nWe say that $(e_1^{ext},\\dots,e_i^{ext},\\dots,e_{n}^{ext}) \\in V_e$ is \\emph{admissible} if it is in the satisfying set of $A$. With a slight abuse of terminology, we also say that $e_i^{ext}$ is admissible, for the sake of convenience.\n\n\\begin{defn} A \\emph{guarantee} $G: V_{y} \\rightarrow \\mathbb{B}$ for a Boolean system $S$ is a Boolean-valued function of its outputs.\n\\end{defn}\n\nFor a Boolean network $S$ with subsystems $S_1,\\ldots, S_{n}$, by convention, $A$ denotes an assumption with domain $V_{e} = \\prod_{j=1}^{n} V_{e_j^{ext}}$. Furthermore, \n$A^{\\downarrow(i)}$\ndenotes an assumption associated with subsystem $S_i$ with domain $V_{e_i^{ext}}$. Note that for a given assumption $A$, its projection $A|_{E^{ext}_i}$ can always be denoted by $A^{\\downarrow(i)}$ as it only contains variables from $E_i^{ext}$. Similarly, $G$ denotes a guarantee with domain $V_{y} = \\prod_{j=1}^{n} V_{y_j}$ and $G^{\\downarrow(i)}$ denotes a guarantee associated with subsystem $S_i$ with domain $V_{y_i}$. \n\n\\begin{defn}\n\tA formula of the form\n\t\\begin{equation}\\label{eq:formula_global}\n\t\\varphi\\doteq \\bigwedge_{k=1}^{n_c}(\\asmp{k}{}\\to\\gar{k}{})\n\t\\end{equation}\n\tis called a \\emph{global contract} and denoted by $\\mathbb{C} \\doteq \\{[A_k,G_k]\\}_{k=1}^{n_c}$. \n\\end{defn}\n\n \\begin{defn}\n \tA formula of the form\n\t\\begin{equation}\n\t\\varphi^{(i)}\\doteq \\bigwedge_{k=1}^{n_c}(\\asmp{k}{\\downarrow(i)}\\to\\gar{k}{\\downarrow(i)})\n\t\\end{equation}\n \tis called a \\emph{local contract} for $S_i$ and denoted by $\\mathbb{C}_i \\doteq \\{[\\asmp{k}{\\downarrow(i)},\\gar{k}{\\downarrow(i)}]\\}_{k=1}^{n_c}$. \n \\end{defn}\n \n\\subsection{Control protocol synthesis}\nGiven a system $S$, a \\emph{control protocol} $\\pi: V_{e} \\rightarrow V_u$ maps the environment inputs to control inputs.
When a control protocol $\\pi$ is implemented on a system $S$, the \\emph{controlled system} is governed by the following input-output relation:\n\\begin{equation} \\label{eq:sys_sr}\ny = f(\\pi(e),e),\n\\end{equation}\nsince $u= \\pi(e)$.\n\nGiven an assumption-guarantee pair, the control synthesis problem aims to find a protocol $\\pi$ that sets the control inputs such that $A \\rightarrow G$ is satisfied. For any $e \\in \\llbracket A \\rrbracket$, determining whether there exists a $u \\in V_u$ such that $G$ evaluates to $True$ is a quantified Boolean satisfiability (QSAT) problem: \n\\begin{equation}\\label{eq:qsat}\n\\forall e \\in V_e: \\exists u \\in V_u : A(e) \\to G(y)\n\\end{equation}\nwhere $y$ is given as in Equation \\eqref{eq:y=fue}.\nIf the quantified SAT problem above can be solved, then the set of all solutions can be used as the control protocol. In this case, we say $A \\rightarrow G$ is \\emph{realizable}.\n\n\n\\subsection{Finding local assumptions}\nLocal assumptions are simply found by projection, as shown in line 3 of Algorithm \\ref{alg:main}. By definition of the projection operator, $A^{\\downarrow(i)}$ depends only on local external variables. Moreover, projection ensures that local assumptions do not restrict the environment more than the global assumption does, and it does so in the ``best" possible way. In particular, we have the following property, which will be useful in proving the soundness of the algorithm.\n\\begin{prop}\\label{theo:as}\n\tLet $A : \\prod_{i=1}^{n} V_{e^{ext}_i} \\rightarrow \\mathbb{B}$ be the global assumption. \n\tDefine $S_i$'s local assumption as $A^{\\downarrow(i)} \\doteq A|_{V_{e_i^{ext}}}$ for $i \\in \\mathbb{N}_n$.
Then the local assumptions are less restrictive than the global assumption, i.e.,\n\t\\begin{equation}\\label{eq:asmp}\n\t\\llbracket A \\rrbracket \\subseteq \\prod_{i=1}^n \\llbracket A^{\\downarrow(i)}\\rrbracket.\n\t\\end{equation}\n\tMoreover, \\eqref{eq:asmp} fails to hold if we replace any ${A}^{\\downarrow(i)}$ with $\\bar{A}^{\\downarrow(i)}$ where $\\llbracket \\bar{A}^{\\downarrow(i)}\\rrbracket \\subset \\llbracket A^{\\downarrow(i)}\\rrbracket$.\n\\end{prop}\n\n\n\n\n\\begin{proof}\n\tAssume that \n\t$(e_1^{ext},\\dots,e_n^{ext}) \\in \\llbracket A \\rrbracket $.\n\tBy construction, \n\t$e_i^{ext} \\in \\llbracket A^{\\downarrow(i)} \\rrbracket \\text { for all } i \\in \\mathbb{N}_n$.\n\tThen \n\t$\n\t(e_1^{ext},\\dots,e_n^{ext}) \\in \\prod_{i=1}^n \\llbracket A^{\\downarrow(i)} \\rrbracket.\n\t$\n\tThis implies \\eqref{eq:asmp}.\n\t\n\tFor an arbitrary $i \\in \\mathbb{N}_n$, let $\\bar{A}^{\\downarrow(i)}$ be a Boolean-valued function satisfying $\\llbracket \\bar{A}^{\\downarrow(i)}\\rrbracket \\subset \\llbracket A^{\\downarrow(i)}\\rrbracket$ and $\\llbracket \\bar{A}^{\\downarrow(j)}\\rrbracket = \\llbracket A^{\\downarrow(j)}\\rrbracket$ for every other $j\\neq i$. Now let $e^{ext}_i \\in \\llbracket A^{\\downarrow(i)}\\rrbracket \\setminus \\llbracket \\bar{A}^{\\downarrow(i)}\\rrbracket$. By construction of $A^{\\downarrow(i)}$, there exists an environment valuation $(e_1^{ext},\\dots,e^{ext}_i,\\dots,e_n^{ext}) \\in \\llbracket A \\rrbracket $. However, $(e_1^{ext},\\dots,e^{ext}_i,\\dots,e_n^{ext}) \\notin \\prod_{j=1}^n \\llbracket \\bar{A}^{\\downarrow(j)} \\rrbracket $. Thus \\eqref{eq:asmp} is no longer true.\n\\end{proof}\n\n\n\\subsection{Finding local guarantees}\nAs opposed to the local assumptions, local guarantees $G^{\\downarrow(i)}$ cannot be less restrictive than the global guarantee.
No communication is assumed between subsystems, hence any possible combination of outputs allowed by the local guarantees should be in the satisfying set of the global guarantee, i.e.,\n\\begin{equation}\\label{eq:dist}\n\\forall i \\in \\mathbb{N}_n: \\forall y_i \\in \\ws{G^{\\downarrow(i)}}: (y_1,\\dots,y_n) \\in \\ws{G}.\n\\end{equation}\n\n \nDue to this interdependence, local guarantees cannot be computed independently. Our algorithm proceeds by selecting a local guarantee for a subsystem at each iteration, using the notion of a distribution, which is introduced next.\n\n\\begin{defn}\nLet $G : \\prod_{j=1}^n V_{y_j} \\rightarrow \\mathbb{B} $ be the global guarantee. A \\emph{distribution} of $G$ between $S_i$ and the rest of the subsystems is a pair of Boolean-valued functions $ G^{\\downarrow(i)}: V_{y_i} \\to \\mathbb{B}$ and $ G^{\\uparrow(i)}:\\prod_{j \\neq i} V_{y_j} \\to \\mathbb{B}$ where\n\\begin{equation}\\label{eq:dist2}\n\\forall y_i \\in \\ws{G^{\\downarrow(i)}}: \\forall y \\in \\ws{G^{\\uparrow(i)}}: (y_i,y) \\in \\ws{G}.\n\\end{equation}\nWe denote a distribution as $\\gamma = \\{G^{\\downarrow(i)},G^{\\uparrow(i)}\\}$.\n\\end{defn}\n\n\n\n\n\n\n\n\n \n\nUnfortunately, in general there is no unique ``best" way of generating local guarantees; therefore, distributions are not unique. Note that \n$\n \\llbracket G^{\\downarrow(i)}\\rrbracket \\times \\ws{G^{\\uparrow(i)}}\\subseteq \\llbracket G\\rrbracket\n $ \nfollows directly from Eq.~\\eqref{eq:dist2}. This implies that local guarantees are conservative and under-approximate $\\ws{G}$.\nHowever, we do not want to restrict the system more than necessary. That is why we compute only the \\emph{maximal distributions}.
A distribution is called \\emph{maximal} when there is no other distribution $\\bar{\\gamma} = \\{\\bar{G}^{\\downarrow(i)},\\bar{G}^{\\uparrow(i)}\\}$ that satisfies $\\ws{G^{\\downarrow(i)}}\\subset \\ws{\\Bar{G}^{\\downarrow(i)}}$ and\/or $\\ws{G^{\\uparrow(i)}}\\subset \\ws{\\Bar{G}^{\\uparrow(i)}}$.\nWe denote the set of all maximal distributions by $\\Gamma = \\{\\gamma_k\\}_k$, which is computed in line 4 of the algorithm. We refer the reader to the appendix for more details on how to compute these distributions.\n\n\n\n\n\n\n\n\t\n\n\n\\subsection{Least restrictive assumptions and controller synthesis}\\label{sec:32}\n\n Controller synthesis for any subsystem\n is essentially a quantified SAT problem, as stated in \\eqref{eq:qsat}.\nHowever, if we let the internal inputs take any possible value, then it might not be possible to find an input $u_i$ that renders $G^{\\downarrow(i)} = True$.\nWhen this is the case, we restrict the internal inputs, which are controlled by $S_i$'s ancestors, to a certain set in order to achieve $G^{\\downarrow(i)}$. While doing so, we would like to be as permissive as possible.\n\n \\begin{defn}\n \tGiven a local contract $\\mathbb{C}_i = [A^{\\downarrow(i)} , G^{\\downarrow(i)}]$, the set of all internal inputs that make the contract realizable is called the \\emph{least restrictive assumption}. It is denoted by $\\lambda^{(i)}_{lra}$ where \n \t\\begin{equation}\n \t\\llbracket \\lambda^{(i)}_{lra} \\rrbracket \\doteq \\{ e_i^{int} \\in V_{e_i^{int}} \\mid A^{\\downarrow(i)} \\to G^{\\downarrow(i)} \\text{ is realizable}\\}.\n \t\\end{equation} \n\\end{defn}\n\\vspace{2mm}\n \n In other words, the least restrictive assumption gives the set of internal inputs for which the guarantee is realizable. Any internal input outside of this set makes the guarantee unsatisfiable.
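As a concrete illustration, the satisfying set of $\lambda^{(i)}_{lra}$ can be computed by brute force over Boolean valuations, quantifying exactly as in the realizability check of \eqref{eq:qsat}: an internal-input valuation is kept if, for every admissible external input, some control input makes the local guarantee true. The Python sketch below is illustrative only; the function names and the encoding of valuations as tuples are our own assumptions, not the paper's implementation.

```python
from itertools import product

def least_restrictive_assumption(f, n_ext, n_int, n_u, A_i, G_i):
    """Brute-force satisfying set of lambda_lra^(i).

    f(u, e_ext, e_int) -> y is the subsystem's output function; A_i and
    G_i are Boolean-valued predicates on external-input and output
    valuations, respectively. An internal-input valuation e_int belongs
    to the set iff A^(i) -> G^(i) is realizable under it.
    """
    B = [False, True]
    lra = set()
    for e_int in product(B, repeat=n_int):
        realizable = all(
            not A_i(e_ext) or
            any(G_i(f(u, e_ext, e_int)) for u in product(B, repeat=n_u))
            for e_ext in product(B, repeat=n_ext))
        if realizable:
            lra.add(e_int)
    return lra
```

For instance, for a single-output subsystem with $y = u \wedge e^{int}$, an always-true assumption, and the guarantee $y = True$, only the internal input $True$ makes the contract realizable.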
\n\nAfter computing the least restrictive assumption, we update the local contract as\n\\begin{equation}\n\\mathbb{C}_i = \\left[ \\left(A^{\\downarrow(i)} \\wedge \\lambda^{(i)}_{lra}\\right),G^{\\downarrow(i)} \\right].\n\\end{equation}\n\nNote that, by definition of the least restrictive assumption, $\\mathbb{C}_i$ is realizable. Having a realizable local contract, the control protocol is synthesized by solving the respective QSAT problem. Line 6 in Algorithm \\ref{alg:main} performs these two operations simultaneously.\n\nOn the other hand, the least restrictive assumption $\\lambda^{(i)}_{lra}$ imposes new guarantees on the ancestors of $S_i$. We replace internal inputs with their corresponding outputs by examining the interconnection structure $\\mathcal{I}$. Finally, we update the global contract for the remaining subsystems as\n \n\\begin{equation}\n\\mathbb{C}' = [A ,( G^{\\uparrow(i)} \\wedge \\lambda^{(i)}_{lra})].\n\\end{equation}\n \n \\subsection{Algorithm Analysis}\n \n In this section we show that the solutions returned by Alg.~\\ref{alg:main} are correct. Then we introduce conditions on the system graph and the specifications that render the algorithm complete.
Finally, we discuss the complexity of the proposed method.\n \n \\begin{Theorem}[Soundness]\n \tFor an arbitrary Boolean network $S$, if the specification is given by $\\mathbb{C} =[A , G]$, then Alg.~\\ref{alg:main} is sound.\n \\end{Theorem}\n \\begin{proof}\n \tIt is enough to show that the local contracts result in weaker assumptions on environment inputs and stronger restrictions on system outputs.\n\t\tIt can be shown using Boolean algebra that \n\t\t\\begin{equation}\\label{eq:sound0}\n\t\t\\left(\\bigwedge_{i=1}^n \\left(A^{\\downarrow(i)} \\to G^{\\downarrow(i)}\\right)\\right) \\to \\left(\\bigwedge_{i=1}^n A^{\\downarrow(i)} \\to \\bigwedge_{i=1}^n G^{\\downarrow(i)}\\right)\n\t\t\\end{equation} \n\t\tis a tautology.\n\t\tAlso, from Eq.~\\eqref{eq:asmp} and Eq.~\\eqref{eq:dist}, we can show that $A \\to \\bigwedge_{i=1}^n A^{\\downarrow(i)}$ and $\\bigwedge_{i=1}^n G^{\\downarrow(i)} \\to G$ are tautologies. Therefore,\n\t\t\\begin{equation}\n\t\t\t\t\\left(\\bigwedge_{i=1}^n A^{\\downarrow(i)} \\to \\bigwedge_{i=1}^n G^{\\downarrow(i)} \\right) \\to \\left(A\\to G\\right)\n \t\t\\end{equation}\nis trivially true.\n\t\t\n\t\t\n \t\n \t\t\n Also note that the least restrictive assumptions computed for each subsystem are not restrictions on the environment. In fact, they restrict the outputs, which only strengthens the result presented above.
Hence, if the algorithm returns a controller that satisfies the local contracts, the global contract is satisfied.\n \t\t\\end{proof}\n \n Completeness requires additional assumptions on the system graph and the specification.\n \\begin{Theorem}[Completeness]\\label{theo:comp} \n Let the specification be given by the contract $\\mathbb{C}=[A,G]$ and let $\\mathcal{G}_S$ be the system graph induced by the Boolean network $S$.\n Alg.~\\ref{alg:main} is complete if\n \\begin{enumerate}\n \t\\item $A = \\bigwedge_i A^{\\downarrow(i)}$,%\n \t\\item $G = \\bigwedge_i G^{\\downarrow(i)}$, and\t\\item $\\mathcal{G}_S$ is a forest.\n \t\\end{enumerate} \n \\end{Theorem}\n In other words, the specification is given as\n\\begin{equation} \\label{eq:conj}\n\\mathbb{C} = \\left[\\bigwedge_i A^{\\downarrow(i)} , \\bigwedge_i G^{\\downarrow(i)}\\right].\n\\end{equation}\n\n\n \n \\begin{proof}\nWe prove by induction that if Alg.~\\ref{alg:main} fails to return a protocol, then none exists.\n \nWhen the number of subsystems is one, the centralized and distributed algorithms are identical. If the respective QSAT problem is not satisfiable, then there does not exist any control protocol to achieve the task.
\n\nAssume that Alg.~\\ref{alg:main} is complete for an arbitrary forest $\\bar{\\mathcal{G}}_S$ with $n$ nodes and any specification $\\bar{\\varphi}$ given in the form of Eq.~\\eqref{eq:conj}.\n\nAlso let ${\\mathcal{G}}_S$ be an arbitrary forest with $(n+1)$ nodes and let \n\\begin{equation}\n{\\mathbb{C}} = \\left[ \\bigwedge_{i=1}^{n+1} {A}^{\\downarrow(i)} , \\bigwedge_{i=1}^{n+1} {G}^{\\downarrow(i)}\\right]\n\\end{equation}\nbe its specification.\nWithout loss of generality, assume that $S_{(n+1)}$ is a leaf node and that the local contract computed according to Alg.~\\ref{alg:main} is given as\n$$\\mathbb{C}_{(n+1)} = \\left[\\left({A}|_{V_{e_{n+1}^{ext}}} \\wedge {\\lambda}^{(n+1)}_{lra}\\right), {G}^{\\downarrow(n+1)} \\right].$$\n\n\nFirst assume that ${\\lambda}^{(n+1)}_{lra}$ is $False$ and the local contract $\\mathbb{C}_{(n+1)}$ is not satisfiable. This means that there exists at least one admissible environment valuation $e_{(n+1)}^{ext}$ such that no matter what the internal inputs are, ${G}^{\\downarrow(n+1)}$ is unsatisfiable. Since the environment inputs are uncontrolled, no control protocol can overcome this problem. Thus no distributed (or central) control protocol exists for the given system and specifications. \n\nNow assume that ${\\lambda}^{(n+1)}_{lra}$ is not $False$ and the local contract $\\mathbb{C}_{(n+1)}$ is satisfiable. Then the global contract is updated as \n\n\\begin{equation}\\label{eq:update}\n{\\mathbb{C}}' = \\left[{A} , \\left({G}^{\\uparrow(n+1)}\\wedge {\\lambda}^{(n+1)}_{lra}\\right)\\right].\n\\end{equation}\n\nNote that the distribution is unique and ${G}^{\\uparrow(n+1)} = \\bigwedge_{i \\neq n+1} {G}^{\\downarrow(i)}$.\nLet $S_j$ denote the parent of $S_{n+1}$.
Then ${\\lambda}^{(n+1)}_{lra}$ is an additional guarantee that involves variables only from $S_j$.\nNow define $({G}')^{\\downarrow(i)} = {G}^{\\downarrow(i)}$ for $i \\neq j$ and $({G}')^{\\downarrow(j)} = {G}^{\\downarrow(j)} \\wedge {\\lambda}^{(n+1)}_{lra}$. Then we can write\n\n\\begin{equation}\\label{eq:comp2}\n\\mathbb{C}' = \\left[\\bigwedge_{i=1}^{n} ({A}')^{\\downarrow(i)} , \\bigwedge_{i=1}^{n} ({G}')^{\\downarrow(i)}\\right]\n\\end{equation}\nwhere $A' = A|_{\\prod_{j \\neq n+1} V_{e_j^{ext}}}$.\nNow we are left with an arbitrary forest with $n$ nodes and a specification given in the form of Eq.~\\eqref{eq:conj}. Thus Alg.~\\ref{alg:main} is complete.\n \\end{proof}\n \\vspace{2mm}\n \n \n \n\\begin{rem}\n\tA few remarks on the complexity of the algorithm are in order. In the worst case, the proposed algorithm requires solving $\\mathcal{O}(n 2^{\\Sigma_i|Y_i|+\\max_i{|E_i|}})$ SAT problems, each with complexity $\\mathcal{O}(2^{\\max_i |U_i|})$, where $n$ is the number of subsystems. Under the assumptions of Theorem \\ref{theo:comp}, this reduces to $\\mathcal{O}(n 2^{\\max_i{|E_i|}})$ since the distribution can simply be computed using projection and is unique. Recall that the complexity of the general problem is $\\mathcal{O}(2^{\\sum_{i\\in{\\mathbb N}_n}|U_i|2^{ |E_i|}})$; therefore, with the assumed structure, one of the exponents is eliminated.\n\\end{rem}\n \n\n\n\n\n\\subsection*{Computation of local guarantees}\n\nThe Boolean distribution operation is performed by representing the Boolean formulas as graphs and using graph properties. This has two advantages: (i) graphs provide a canonical representation for the satisfying set of the formulas (whereas there could be multiple formulas with the same satisfying set), and (ii) once the problem is converted to a graph problem, we can leverage well-established algorithms from graph theory to find distributions. \n\nLet us start with some additional graph terminology.
Given a graph, a set of nodes no two of which are connected by an edge is called an \\emph{independent set}. \nA graph $\\mathcal{H} =(\\mathcal{V}, \\mathcal{E})$ is called \\emph{bipartite} if its nodes can be partitioned into two independent sets, i.e. $\\mathcal{V} = \\mathcal{V}_1 \\cup \\mathcal{V}_2$ and $\\mathcal{E} \\subset \\mathcal{V}_1 \\times \\mathcal{V}_2$. \n$\\mathcal{V}_1$ and $\\mathcal{V}_2$ are also called \\emph{parts}.\nIf every node in one part is connected by an edge to every node in the other part, the bipartite graph is called \\emph{complete}, i.e., $\\mathcal{E} = \\mathcal{V}_1 \\times \\mathcal{V}_2$.\n\nLet $H_c=(\\mathcal{V}',\\mathcal{E}')$ be a complete bipartite subgraph of a bipartite graph $H=(\\mathcal{V},\\mathcal{E})$. Then $H_c$ is called \\emph{maximal} if the addition of any node to $\\mathcal{V}'$ breaks completeness. Put differently, $H_c$ is not a subgraph of any other $H'_c$ which is also a complete bipartite subgraph of $H$.\n\nLet $S$ be a Boolean system and $\\mathbb{C}=[A,G]$ be its specification. Now assume that $S_i$ is a leaf node in the system graph and that we want to compute its local guarantee.\nIn order to perform the Boolean distribution, we construct a bipartite graph $H= \\left( \\left(\\prod_{j\\neq i} V_{y_j} \\bigcup V_{y_i}\\right), \\mathcal{E} \\right)$. In this graph, every node $y_i \\in V_{y_i}$ represents a different valuation of $S_i$'s outputs. Similarly, each node $y \\in \\prod_{j\\neq i} V_{y_j}$ represents a valuation of all the other outputs. Edges are created between nodes that form an admissible output, i.e.,\n$\\mathcal{E} = \\{ (y_i,y) \\mid (y_i,y) \\in \\ws{G} \\}.$\n\n\n\\begin{prop}\\label{prop:biclique}\n\tLet $H_c = \\left( \\left( \\prod_{j\\neq i} \\bar{V}_{y_j}\\bigcup\\bar{V}_{y_i} \\right), \\bar{\\mathcal{E}} \\right) $ be a subgraph of $H$ where $\\bar{V}_{y_k} \\subseteq {V}_{y_k}$ for all $k$.
\n\tNow define $G^{\\downarrow(i)} : V_{y_i} \\to \\mathbb{B}$ and $G^{\\uparrow(i)} : \\prod_{j\\neq i} V_{y_j} \\to \\mathbb{B}$ with satisfying sets $\\ws{G^{\\downarrow(i)}} = \\bar{V}_{y_i}$ and $\\ws{G^{\\uparrow(i)}} = \\prod_{j\\neq i} \\bar{V}_{y_j}$. Then $\\gamma=\\{G^{\\downarrow(i)},G^{\\uparrow(i)}\\}$ is a maximal distribution if and only if $H_c$ is a maximal complete bipartite subgraph.\n\\end{prop} \n\n\\begin{proof}\n\tLet $\\gamma$ be a distribution.\n\tNow define $H_c = \\left( \\left( \\prod_{j\\neq i} \\bar{V}_{y_j}\\bigcup\\bar{V}_{y_i} \\right), \\bar{\\mathcal{E}} \\right)$ where $ \\bar{V}_{y_i}=\\ws{G^{\\downarrow(i)}}, \\prod_{j\\neq i} \\bar{V}_{y_j}= \\ws{G^{\\uparrow(i)}}$ and $\\bar{\\mathcal{E}} = \\mathcal{E} \\cap \\left( \\prod_{j\\neq i} \\bar{V}_{y_j}\\times\\bar{V}_{y_i} \\right)$. By the definition of a distribution, $\\forall y_i \\in \\ws{G^{\\downarrow(i)}} : \\forall y \\in \\ws{G^{\\uparrow(i)}}: (y_i,y) \\in \\ws{G}$. This implies $\\forall y_i \\in \\bar{V}_{y_i} : \\forall y \\in \\prod_{j\\neq i} \\bar{V}_{y_j}: (y_i,y) \\in \\bar{\\mathcal{E}}$. Thus $H_c$ is a complete bipartite graph.\n\t\n\tConversely, let $H_c = \\left( \\left( \\prod_{j\\neq i} \\bar{V}_{y_j}\\bigcup\\bar{V}_{y_i} \\right), \\bar{\\mathcal{E}} \\right) $ be a complete bipartite subgraph of $H$. By the definition of a complete bipartite graph, $\\forall y_i \\in \\bar{V}_{y_i} : \\forall y \\in \\prod_{j\\neq i} \\bar{V}_{y_j}: (y_i,y) \\in \\bar{\\mathcal{E}} \\subset \\mathcal{E}$. Now define $G^{\\downarrow(i)} : V_{y_i} \\to \\mathbb{B}$ and $G^{\\uparrow(i)} : \\prod_{j\\neq i} V_{y_j} \\to \\mathbb{B}$ with satisfying sets $\\ws{G^{\\downarrow(i)}} = \\bar{V}_{y_i}$ and $\\ws{G^{\\uparrow(i)}} = \\prod_{j\\neq i} \\bar{V}_{y_j}$. Then $\\forall y_i \\in \\ws{G^{\\downarrow(i)}} : \\forall y \\in \\ws{G^{\\uparrow(i)}}: (y_i,y) \\in \\ws{G}$ by construction of $H$.
This implies that $\\gamma=\\{G^{\\downarrow(i)},G^{\\uparrow(i)}\\}$ is a distribution.\n\t\n\tFinally, let the distribution $\\gamma=\\{G^{\\downarrow(i)},G^{\\uparrow(i)}\\}$ be maximal. This implies that there does not exist another complete bipartite subgraph of $H$ whose parts contain $\\prod_{j\\neq i} \\bar{V}_{y_j}$ and $\\bar{V}_{y_i}$ with at least one containment strict. In other words, there does not exist any other complete bipartite subgraph $H'_c$ of $H$ such that $H_c$ is a proper subgraph of $H'_c$. Thus $H_c$ is maximal. The converse direction is similar: if $H_c$ is maximal, so is $\\gamma=\\{G^{\\downarrow(i)},G^{\\uparrow(i)}\\}$.\n\t\\end{proof}\n\nNote that, by Proposition~\\ref{prop:biclique}, the problem of finding a distribution reduces to the problem of finding maximal complete bipartite subgraphs. The latter can be done using standard results from the graph theory literature \\cite{alexe2004consensus}. In particular, in our implementation, we have used a variant of the Bron-Kerbosch algorithm \\cite{bron1973algorithm} to find all maximal distributions.\n\n\n\t\n\n\\section{Introduction}\n\n\\input{1_introduction}\n\n\\section{Preliminaries}\\label{sec:prelim}\n\\input{2_prelim}\n\n\\input{3_framework}\n\n\\section{Problem Statement}\n\\input{4_problem_statement}\n\n\\section{Algorithm}\n\\input{5_algorithm}\n\n\\section{Illustrative Examples}\\label{sec:ex}\n\\input{6_examples}\n\n\\section{Case Study}\n\\input{7_eps_desc}\n\n\\section{Conclusion}\n\\input{8_conclusion}\n\n\n\n\\section{Introduction}\n\nAnalogue filters have been a source of inspiration for digital filter design since the early days of the discipline of digital signal \nprocessing.
This addresses one of the great conundrums of the area, particularly for musical applications, which is how to provide\nuseful filters that are punchy in character (that is, have a powerful sound transformation effect, particularly with regard to\ntheir amplitude responses), have low computational demands, and are easy to design or control. The first two aspects are\neasily met by infinite impulse response (IIR), or feedback, filters, while the third is, strictly, only possible with finite impulse\nresponse filters. However, by recourse to the various configurations that were developed for analogue filter circuits, it is possible\nto bridge that gap and have IIR digital filters that can be deployed in many scenarios.\n\nTherefore it is common practice to look for inspiration in analogue signal processing \\cite{Rossum}, particularly in the case of celebrated \ndesigns such as the various ladder filter implementations and their variations \\cite{Stilson, Huovilainen, Fontana3, Dangelo1, Dangelo2}. In this paper, we look at another very interesting\nanalogue filter case, the state variable filter. This has been employed in many significant musical instrument applications.\nNot surprisingly, this was one of the filter configurations which very early on received a digital treatment in the pioneering work\nof \\cite{Chamberlin}, which has been a source of inspiration for many computer music practitioners, and deployed in very successful\ninstruments such as the Kurzweil synthesisers \\cite{Dattorro}. However, this design has some\npractical issues that may turn out to be problematic in some applications. In this paper, we explore the question of improving the design from a spectral perspective.
The filter developed here can then be used as a drop-in replacement for the Chamberlin design, or employed to implement\na variety of responses such as the Octave Cat and Moog ladder lowpass filters as described by \\cite{WernerMcClellan}.\n\nThe text is organised as follows. We first introduce the original analogue state variable configuration and\nexamine its characteristics. Next, we turn to the Chamberlin implementation, deriving an equivalent biquadratic transfer function to\ndescribe its spectrum. This leads into an analysis of the issues stemming from it. We carry this spectral approach further and propose an improved frequency response, which then informs some modifications to the digital filter update equations. Finally,\nwe discuss the method and contrasting approaches leading to similar results. Csound language code is provided as ready-to-deploy\nexamples of the filter designs discussed in the paper.\n\n\\section{The State Variable Filter}\n\nThe \\emph{state variable filter} \\cite{Kerwin1967StateVariableSF, Colin1971ElectricalDA, Hutchins2} is a classic analogue filter configuration, which has been employed in numerous \nmusical applications. A typical example is found in the second-order lowpass filter of the Oberheim OB-Xa synthesiser, which uses\na CEM3320 integrated circuit to realise it \\cite{ElectricDruid}. The state variable filter can be described as an integrator-based design, which sets it apart from the leaky-integrator forms found in typical first-order lowpass sections of ladder filters. Another special aspect of the filter, which was somehow ignored in the Oberheim implementation, is that it can provide a whole variety of frequency responses (lowpass, highpass, bandpass, band-reject, and allpass) simultaneously.
In addition, Hutchins \\cite{Hutchins143} demonstrated that two state variable filters in series can be used to implement a frequency response similar to the ladder filter, as was again shown in \\cite{WernerMcClellan}.\n\nThe block diagram of the state variable filter is shown in Fig.~\\ref{fig:statevar}, where we observe that it is composed of two\nintegrators in series, both of which are fed back to be summed with the filter input. Two parameters, $Q$ and $K$, control the frequency\nresponse. The latter is proportional to the filter frequency, whereas the former determines the amount of resonance\nor peaking around that frequency. To get a highpass output, we tap the filter after the input summing stage. The bandpass signal\ncomes from the first-order integrator output, and the second-order integrator output gives the lowpass response. A\nsystem of filter equations can then be defined as\n\n\\begin{equation}\n\\begin{split}\n&y_{hp}(t) = x(t) - (1\/Q) y_{bp}(t) - y_{lp}(t) \\\\\n&y_{bp}(t) = K\\int y_{hp}(t) dt \\\\\n&y_{lp}(t) = K\\int y_{bp}(t) dt \n\\end{split}\n\\end{equation}\n\\smallskip\n\nDue to the two integrators in series, the highpass and lowpass outputs have a phase difference of $\\pi$ radians at the cutoff frequency; therefore, by mixing them together, we can obtain a band-reject response. Summing the three outputs yields an allpass response. As far as analogue filters are concerned, the state variable filter has a very straightforward block diagram, with just\nthree black-box components: the integrator, a variable gain element, and a summing unit.
While there are various\nways to implement it, the circuits often only require three or four operational amplifiers, plus a few resistors and capacitors.\n\n\\begin{figure}[htp]\n\\begin{center}\n\\includegraphics[width=.4\\textwidth]{figs\/statevar.png}\n\\caption{State variable filter block diagram.}\n\\label{fig:statevar} \n\\end{center}\n\\end{figure}\n\nThe transfer functions for each output can be derived as follows. First we define the general lines\nof the highpass transfer function as\n\n\\begin{equation}\\label{eq:svar_an}\nH_{hp}(s) = 1 - (1\/Q) H_{bp}(s) - H_{lp}(s).\n\\end{equation}\n\\smallskip\n\n\\noindent Then, using the integrator transfer function $H(s) = 1\/s$, we have\n\n\\begin{equation}\n\\begin{split}\n&H_{bp}(s) = \\frac {K H_{hp}(s)}s \\\\\n&H_{lp}(s) = \\frac {K H_{bp}(s)}s = \\frac {K^2 H_{hp}(s)} {s^2}\\\\\n\\end{split}\n\\end{equation}\n\\smallskip\n\nReplacing back in Eq.~\\ref{eq:svar_an} gives us the highpass transfer function\nas\n\n\\begin{equation}\\label{eq:svar_anhp2}\nH_{hp}(s) = \\frac {s^2} {s^2 + (K\/Q)s + K^2},\n\\end{equation}\n\\smallskip\n\n\\noindent followed by the bandpass\n\n\\begin{equation}\\label{eq:svar_anbp2}\nH_{bp}(s) = \\frac {Ks} {s^2 + (K\/Q)s + K^2},\n\\end{equation}\n\\smallskip\n\n\\noindent and lowpass response\n\n\\begin{equation}\\label{eq:svar_anlp2}\nH_{lp}(s) = \\frac {K^2} {s^2 + (K\/Q)s + K^2}.\n\\end{equation}\n\\smallskip\n\nThese three equations allow us to note that the state variable filter is a second-order, or two-pole, \nfilter with a biquadratic transfer function. The different frequency responses share the same \ndenominator, defining the filter poles, and diverge in the position of the zeros.
These are \ntwo at $s = 0$ for the highpass filter; one at $s = 0$ and another at $s = \\infty$ for the bandpass;\nand finally two zeros at $s = \\infty$ in the lowpass case.\n\n\\section{Chamberlin's Digital State Variable Filter} \n\nIn his book, Chamberlin \\cite{Chamberlin} proposes that the state variable filter may be a very economical model for the implementation\nof a digital filter for musical applications. The idea is that with only a few operations we would have a processor that is\ncapable of several different frequency responses, controlled by only two parameters. Moreover, as we noted above,\nthe filter is of a fairly straightforward design, which also simplifies implementation. The main component is the\nintegrator, for which a discrete-time version is given in Fig.~\\ref{fig:integrator}. This is simply an Euler method discretisation of continuous-time integration, which yields an all-pole integrator. Its transfer function is\n\n\\begin{equation}\\label{eq:integtf}\nH(z) = \\frac 1 {1 - z^{-1}}\n\\end{equation}\n\\smallskip\n\n\\begin{figure}[htp]\n\\begin{center}\n\\includegraphics[width=.2\\textwidth]{figs\/integrator.png}\n\\caption{The digital integrator.}\n\\label{fig:integrator} \n\\end{center}\n\\end{figure}\n\nChamberlin's design employs the two digital integrators in such a way that it yields a slightly modified\nblock diagram. The change is made at the first integrator stage. Instead of taking the integrator \noutput as the input to the next stage, the integrator state (its delay) is tapped to provide this signal. This\ninserts a 1-sample delay in the middle of the block diagram (Fig.~\\ref{fig:chamberlin}). The idea behind this is to keep \nthe phase difference between the highpass and lowpass outputs as in the original analogue\nmodel, so that we can obtain a band-reject output.
This also requires us to re-order the \ncomputation so that the lowpass output is computed first, using the following equations \n\n\\begin{equation}\n\\begin{split}\n&y_{lp}(n) = Ky_{bp}(n-1) + y_{lp}(n-1) \\\\\n&y_{hp}(n) = x(n) - (1\/Q)y_{bp}(n-1) - y_{lp}(n) \\\\\n&y_{bp}(n) = Ky_{hp}(n) + y_{bp}(n-1)\n\\end{split}\n\\end{equation}\n\\smallskip\n\n\\noindent which are implemented in Listing~\\ref{code:svar1}.\\\\\n\n\\begin{figure}[htp]\n\\begin{center}\n\\includegraphics[width=.4\\textwidth]{figs\/chamberlin.png}\n\\caption{Chamberlin digital state variable filter block diagram.}\n\\label{fig:chamberlin} \n\\end{center}\n\\end{figure}\n\n\\begin{lstlisting}[caption={Chamberlin digital state variable filter},label={code:svar1}]\nopcode Svar,aaaa,akk\n setksmps 1\n abp,alp init 0,0\n as,kK,kQ xin\n alp = abp*kK + alp \n ahp = as - alp - (1\/kQ)*abp\n abp = ahp*kK + abp\n xout ahp,alp,abp,ahp+alp\nendop\n\\end{lstlisting}\n\nTo begin an analysis of this implementation, we might first want to re-arrange this design\ninto a filter which resembles the original block diagram more closely (Fig.~\\ref{fig:statevarV}). \nWhile we could derive a transfer function for the current arrangement, as indeed \\cite{Dattorro} did\nfor the lowpass response, a configuration that is closer to the analogue filter will allow us\nto develop the proposed improvements in a more straightforward way. For this, all that is needed\nis to move the 1-sample delay to both feedback paths and restore the signal path\nthrough the filter. 
This is equivalent to tapping the integrator states to get the inputs \nto the highpass filter, as shown by\n\n\\begin{equation}\n\\begin{split}\n&y_{hp}(n) = x(n) - (1\/Q)y_{bp}(n-1) - y_{lp}(n-1) \\\\\n&y_{bp}(n) = Ky_{hp}(n) + y_{bp}(n-1) \\\\\n&y_{lp}(n) = Ky_{bp}(n) + y_{lp}(n-1).\n\\end{split}\n\\end{equation}\n\\smallskip\n\n\\begin{figure}[htp]\n\\begin{center}\n\\includegraphics[width=.4\\textwidth]{figs\/statevarV.png}\n\\caption{Re-arranged Chamberlin digital state variable filter block diagram.}\n\\label{fig:statevarV} \n\\end{center}\n\\end{figure}\n\nThis form produces highpass and bandpass outputs that are sample-by-sample \nequivalent to those of the Chamberlin filter, and a lowpass output that is also \nidentical when delayed by one sample. The main difference between the two \narrangements is that now there is a 1-sample delay between the lowpass and bandpass \nsignals and the highpass output. Therefore, the phase \nrelationship between the lowpass and highpass needed to obtain a band-reject \nresponse is lost. However, an equivalent relationship can be used for this purpose,\nwhich happens between the input and the inverted bandpass feedback path.\nTo get the band-reject output we split the sum that produces the highpass signal\ninto two separate stages, the first of which is equivalent to the desired output. An\nexamination of the original block diagram shows that this is exactly equivalent to\nthe sum of the highpass and lowpass signals. This rearrangement is shown in \nListing~\\ref{code:svar2}.\\\\\n\n\\begin{lstlisting}[caption={Re-arranged Chamberlin digital state variable filter},label={code:svar2}]\nopcode Svar,aaaa,akk\n setksmps 1\n abp,alp init 0,0\n as,kK,kQ xin\n abr = as - (1\/kQ)*abp \n ahp = abr - alp\n abp = ahp*kK + abp\n alp = abp*kK + alp\n xout ahp,alp,abp,abr\nendop\n\\end{lstlisting}\n\n\\subsection{Transfer Functions}\n\nThis puts the filter in a form for which we can derive transfer functions as we did in the\nanalogue case. 
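Before doing so, the sample-level equivalence between the two arrangements can be checked numerically. The sketch below (in Python rather than Csound, purely as an illustration, with arbitrary but stable $K$ and $Q$ test values) runs an impulse through both sets of update equations; the highpass and bandpass outputs match exactly, and the re-arranged lowpass, delayed by one sample, matches the Chamberlin lowpass:

```python
# Illustrative Python port of the two update schemes (not the original
# Csound code). K and Q are arbitrary test settings within stable range.

def chamberlin(x, K, Q):
    bp = lp = 0.0
    hps, bps, lps = [], [], []
    for s in x:
        lp = K * bp + lp        # lowpass first, using bp(n-1)
        hp = s - bp / Q - lp    # highpass from the delayed bandpass state
        bp = K * hp + bp        # bandpass last
        hps.append(hp); bps.append(bp); lps.append(lp)
    return hps, bps, lps

def rearranged(x, K, Q):
    bp = lp = 0.0
    hps, bps, lps = [], [], []
    for s in x:
        hp = s - bp / Q - lp    # highpass first, from both delayed states
        bp = K * hp + bp
        lp = K * bp + lp
        hps.append(hp); bps.append(bp); lps.append(lp)
    return hps, bps, lps

impulse = [1.0] + [0.0] * 63
h1, b1, l1 = chamberlin(impulse, K=0.3, Q=2.0)
h2, b2, l2 = rearranged(impulse, K=0.3, Q=2.0)
```

The highpass and bandpass sequences are bit-exact between the two versions, while the lowpass differs only by the one-sample shift described above.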
Starting again with an outline of the highpass transfer function, we have\n\n\\begin{equation}\\label{eq:svar_hp}\nH_{hp}(z) = 1 - \\frac {z^{-1}} Q H_{bp}(z) - z^{-1}H_{lp}(z) \n\\end{equation}\n\\smallskip\n\n\\noindent where $H_{bp}(z)$ and $H_{lp}(z)$ are the bandpass and lowpass responses, respectively. Note that the $z^{-1}$ factor needs to be incorporated to take account of the 1-sample delay in the feedback path, which does not exist in the analogue filter. From this, as before, we can derive the bandpass and lowpass responses in turn, as they correspond to first- and second-order integration of the highpass output signal, scaled by a $K$ factor,\n\n\\begin{equation}\n\\begin{split}\n&H_{bp}(z) = \\frac {K H_{hp}(z)} {1 - z^{-1}} \\\\\n&H_{lp}(z) = \\frac {K H_{bp}(z)} {1 - z^{-1}} = \\frac {K^2 H_{hp}(z)} {(1 - z^{-1})^2}\\\\\n\\end{split}\n\\end{equation}\n\\smallskip\n\nNow we can replace these back into Eq.~\\ref{eq:svar_hp} to obtain the correct frequency response for the highpass output\n\n\\begin{equation}\\label{eq:svar_hp2}\nH_{hp}(z) = \\frac {(1 - z^{-1})^2} {1 - (2 - K\/Q - K^2) z^{-1} + (1 - K\/Q) z^{-2}}\n\\end{equation}\\smallskip\n\n\\noindent and from $H_{hp}(z)$ we get \n\n\\begin{equation}\\label{eq:svar_bp2}\nH_{bp}(z) = \\frac {K(1 - z^{-1})} {1 - (2 - K\/Q - K^2) z^{-1} + (1 - K\/Q) z^{-2}} \n\\end{equation}\n\\smallskip\n\n\\noindent and\n\n\\begin{equation}\\label{eq:svar_lp2}\nH_{lp}(z) = \\frac {K^2} {1 - (2 - K\/Q - K^2) z^{-1} + (1 - K\/Q) z^{-2}}.\n\\end{equation}\n\\smallskip\n\nThese equations do indeed give us highpass, bandpass, and lowpass filters, with resonance controlled\nby the $Q$ parameter, which, at first glance, is very similar here to its usual interpretation as the ratio \nbetween frequency and bandwidth. Now we need to determine how to compute $K$. 
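These derived responses can be sanity-checked numerically, by comparing the truncated z-transform of an impulse response of the re-arranged update equations against Eq.~\ref{eq:svar_lp2} evaluated on the unit circle. A short Python sketch (an illustration with arbitrarily chosen, stable $K$ and $Q$ values):

```python
import cmath

def svar_lp_impulse(n, K, Q):
    """Lowpass impulse response of the re-arranged Chamberlin recursion."""
    bp = lp = 0.0
    out = []
    for i in range(n):
        x = 1.0 if i == 0 else 0.0
        hp = x - bp / Q - lp
        bp = K * hp + bp
        lp = K * bp + lp
        out.append(lp)
    return out

def svar_lp_formula(w, K, Q):
    """H_lp(z) from the derivation, evaluated at z = exp(jw)."""
    z1 = cmath.exp(-1j * w)
    return K * K / (1 - (2 - K / Q - K * K) * z1 + (1 - K / Q) * z1 * z1)

K, Q = 0.1423, 5.0   # K roughly corresponds to f = 1 kHz at fs = 44.1 kHz
N = 8192             # long enough for the impulse response to decay
h = svar_lp_impulse(N, K, Q)
w = 0.3              # arbitrary test frequency (radians/sample)
dft = sum(h[i] * cmath.exp(-1j * w * i) for i in range(N))
```

The truncated sum agrees with the closed-form response to within rounding error, confirming the algebra above.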
We have noted that in\nthe analogue filter, $Q$ is its quality factor, related to resonance, and $K$ is proportional to \nthe cutoff\/centre frequency. In the analogue case, we can set $K = 2\\pi f$, but in this digital model, an\nequivalent expression such as $K = 2\\pi f\/f_s$ will not be accurate. This is mainly because the digital \nintegrators introduce a certain amount of error, particularly as the frequency increases.\nA correction factor can be applied, yielding the expression $K = 2\\sin(\\pi f\/f_s)$, \nwhich gives a more accurate tuning of the filter frequency \\cite{Chamberlin}.\n\n\\subsection{Issues}\n\nHowever, some difficulties still remain. If we examine the transfer functions, we will note\nthat at high frequencies (particularly $> f_s\/4$) the pole frequency will drift higher than expected, \ndepending on the value of $Q$. In order to keep the filter more or less in tune at these frequencies, \nwe need to increase $Q$. The coupling of the two parameters is evident from\nthe fact that the pole radius is dependent on both $Q$ and $K$. As $K$ gets\nlarger, for low values of $Q$, it will make the pole move slightly away from the unit circle. \nMoreover, the pole frequency will also shift upwards relative to where we would want it to be, \nand the filter may also need some means of output scaling to prevent it from exploding. \nThe limits of stability and tuning for the Chamberlin filter, which depend both on $K$ and $Q$, \nhave been determined by Dattorro \\cite{Dattorro}, as\n\n\\begin{equation}\n0 < K < \\textrm{min}\\left(Q, 2 - \\frac 1 Q, 2Q -\\frac 1 Q, \\frac {-1\/Q + \\sqrt{8 + (1\/Q)^2}} 2 \\right)\n\\end{equation}\n\n\\noindent from which we can surmise that the filter will not behave very well \nat high frequencies.\n\nFigure~\\ref{fig:svarerr} shows these discrepancies in the expected filter frequency and the \nresulting amplitude responses. 
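The drift can also be quantified directly from the denominator derived above, by solving for the pole angle and converting it to Hz. A small Python sketch (settings chosen to match the figure: $Q=5$, $f_s=44.1$ KHz):

```python
import cmath, math

def pole_frequency(f, fs=44100.0, Q=5.0):
    """Realised pole frequency (Hz) of the Chamberlin filter for a requested f."""
    K = 2.0 * math.sin(math.pi * f / fs)
    # denominator 1 + a1*z^-1 + a2*z^-2, as in the derived transfer functions
    a1 = -(2.0 - K / Q - K * K)
    a2 = 1.0 - K / Q
    # upper pole of the complex-conjugate pair
    z = (-a1 + cmath.sqrt(a1 * a1 - 4.0 * a2)) / 2.0
    return abs(cmath.phase(z)) / (2.0 * math.pi) * fs

drift = {f: pole_frequency(f) - f for f in (5000.0, 10000.0, 15000.0)}
```

With these settings the realised pole frequency sits above the requested one, and the error grows sharply as $f$ approaches and exceeds $f_s\/4$.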
Notice that these problems are perhaps not an inherent problem \nwith the filter itself, but rather a difficulty in finding correct values for $Q$ and $K$ to yield the correct filter \nfrequency and stability. Running the filter at higher sampling rates will improve the filter \ntuning as the onset of errors is pushed higher in the spectrum.\n\n\\begin{figure}[htp]\n\\begin{center}\n\\includegraphics[width=.5\\textwidth]{figs\/svarerr.png}\n\\caption{Amplitude responses for the state variable lowpass filter, using $Q=5$ and $f=5$ KHz (solid), 10 KHz (dashes), and 15 KHz (dots), with $K = 2\\sin(\\pi f\/f_s)$ and $f_s = 44.1$ KHz.}\n\\label{fig:svarerr} \n\\end{center}\n\\end{figure}\n\nPerhaps the main reason why the direct translation of the \nstate-variable flowchart fails at high frequencies, at least using normal sampling rates, \nis that we had to sneak in a one-sample delay somewhere in the block diagram, because\nit is not possible to have instantaneous feedback as in the analogue circuit.\nIf we follow the filter structure, we will notice that the bandpass and lowpass \noutputs are supposed to be exactly the same as the inputs to the highpass stage. \nThat, of course, cannot be computed, motivating the use of an extra 1-sample delay. \nThe problem is that this addition modifies the filter topology somewhat, and the digital version\nno longer fully matches the original analogue block diagram. \nWe could of course adopt a different route by reverting to the analogue transfer functions \nin the s-domain, and then apply the bilinear transformation \\cite{STEIGLITZ1965} directly to them. \nThis will give us coefficients for typical highpass, lowpass, and bandpass second-order \ndigital filter sections. 
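As a sketch of that conventional route, the lowpass coefficients follow from the standard substitution $s = (1 - z^{-1})\/(1 + z^{-1})$ into the analogue prototype, with the prewarped frequency parameter $K = \tan(\pi f\/f_s)$ required by the bilinear method (illustrative Python, not the Csound implementation):

```python
import math

def lowpass_biquad(f, fs, Q):
    """Bilinear-transform lowpass section from H(s) = K^2/(s^2 + (K/Q)s + K^2)."""
    K = math.tan(math.pi * f / fs)   # prewarped frequency parameter
    norm = 1.0 + K / Q + K * K
    b = [K * K / norm, 2.0 * K * K / norm, K * K / norm]
    a = [1.0, 2.0 * (K * K - 1.0) / norm, (1.0 - K / Q + K * K) / norm]
    return b, a

b, a = lowpass_biquad(1000.0, 44100.0, 0.7071)
```

By construction this section has unity gain at DC and a double zero at the Nyquist frequency, as expected of a bilinear-transformed lowpass.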
This is a useful approach if we want to implement some derived analogue\ndesigns such as the Sallen-Key \\cite{Hutchins3} and Steiner-Parker \\cite{Steiner} filters, which we have done in \\cite{CMJ}.\nHowever, this somehow defeats the purpose of trying to model a \nstate variable filter and reap the benefits of this configuration, particularly if we want to\nexpand it into implementations that include nonlinear elements, as we also show in \\cite{CMJ}.\n\n\\section{An Improved Digital State Variable Filter}\n\nWhile we cannot avoid the fact that a 1-sample delay needs to figure somewhere in\nthe block diagram, we can improve things by placing it in an optimal position. We can\ndetermine this by looking at the transfer functions and trying to establish why they \nare not ideal. Since the problem appeared to be most apparent in\nthe lowpass case, we have a good place to start. Examining Eq.~\\ref{eq:svar_lp2},\nwe notice that if we moved the (theoretical) zeros, which here sit at $z=\\infty$ just as they \ndid at $s=\\infty$ in the analogue case, to $z=-1$, we would improve the lowpass response somewhat. This is not\na complete solution, but gives us a route towards it.\n\nOne of the problems of unilaterally fixing the lowpass transfer function is that, if\nwe are to preserve the state variable structure, this will have to be compensated for \nby changes in the other transfer functions. We need to find a way to change the\ncomplete filter so that we end up with two zeros at $z=-1$ in the lowpass filter\nfrequency response. It is becoming clear that the problem is the complete\nabsence of zeros at $z=-1$ in both the lowpass and bandpass responses. \nIf we look closely at Eq.~\\ref{eq:svar_hp}, we will notice that we in fact have two \nactual poles at $z=0$. These are the result of the 1-sample delays we had to \ninflict on the block diagram. We can now try to swap these for zeros in a position where their \neffect can be used to solve the issue we identified in the lowpass response. 
\n\nThis requires us to replace the pure delays $z^{-1}$ by one-zero lowpass filters $1+z^{-1}$ \nin the Chamberlin highpass transfer function (Eq.~\\ref{eq:svar_hp}),\n\n\\begin{equation}\\label{eq:svar_hp_fix}\nH_{hp}(z) = 1 - \\frac {(1 + z^{-1})} Q H_{bp}(z) - (1 + z^{-1}) H_{lp}(z), \n\\end{equation}\n\\smallskip\n\n\\noindent which places a first-order lowpass finite impulse response filter in each one\nof the feedback paths. The updated transfer functions are then\n\n\\begin{equation}\\label{eq:svar_equiv}\nH_{hp}(z) = \\frac {1}{1 + K\/Q + K^2} \\left[\\frac {(1 - z^{-1})^2} {1 - \\frac {2(1 - K^2) z^{-1} - (1 - K\/Q + K^2) z^{-2}}{1 + K\/Q + K^2}}\\right],\n\\end{equation}\n\\smallskip\n\n\\noindent for the highpass output,\n\n\\begin{equation}\nH_{bp}(z) = \\frac {K}{1 + K\/Q + K^2} \\left[\\frac {1 - z^{-2}} {1 - \\frac {2(1 - K^2) z^{-1} - (1 - K\/Q + K^2) z^{-2}}{1 + K\/Q + K^2}}\\right],\n\\end{equation}\n\\smallskip\n\n\\noindent for the bandpass output, and\n\n\\begin{equation}\\label{eq:svar_fix_lp}\nH_{lp}(z) = \\frac {K^2}{1 + K\/Q + K^2} \\left[\\frac {(1 + z^{-1})^2} {1 - \\frac {2(1 - K^2) z^{-1} - (1 - K\/Q + K^2) z^{-2}}{1 + K\/Q + K^2}}\\right],\n\\end{equation}\n\\smallskip\n\n\\noindent for the lowpass output. \n\n\\subsection{Equivalence to Bilinear Transformation}\n\nA cursory look at the numerator of these transfer functions indicates that we have zeros at $z = 1$ and $z = -1$, for the\nbandpass case, and at $z=-1$ in the lowpass frequency response. The highpass transfer function keeps its two zeros\nat $z=1$ as we should have expected. In fact, these frequency responses are what we would expect if we were applying a bilinear transformation to the analogue filter transfer functions. 
We can demonstrate this by setting $Q = 1\/\\sqrt{2}$, which should \ngive the filter a Butterworth response, and comparing the result to the bilinear transform applied to a filter such as \n\n\\begin{equation}\\label{eq:butt_lp}\nH(s)H(-s) = \\frac 1 {1 + (-s^2)^{N}}\n\\end{equation}\n\\smallskip\n\n\\noindent which has a Butterworth response with a cutoff radian frequency $\\Omega = 1$ \\cite{Oppenheim:1999:DSP}. \nFor a second-order filter we set $N=2$. The poles of this filter are given by \n\n\\begin{equation}\\label{eq:bilinear_transform1}\n1 + (-s^2)^{2} = 0\n\\end{equation}\n\\smallskip\n\n\\noindent and there are four of these on the unit circle in the s-plane, at the \nangles $\\{3\\pi\/4, -3\\pi\/4, -\\pi\/4, \\pi\/4\\}$, of which only the first two, \n$s_p = e^{\\pm j 3\\pi\/4}$, with negative real parts, are stable. The location of the poles in the z-plane is found using\nthe bilinear transformation,\n\n\\begin{equation}\\label{eq:bilinear_transform_zeros}\nz_p = \\frac {1 + e^{\\pm j 3\\pi\/4}} {1 - e^{\\pm j 3\\pi\/4}}\n\\end{equation}\n\\smallskip\n\n\\noindent which uses the conformal mapping $z = (1 + s)\/(1 - s)$. We now use the\nbilinear transformation in the other direction to obtain the \ntransfer function of the digital filter,\n\n\\begin{equation}\\label{eq:butt_lp_tf}\nH(z)H(-z) = \\frac 1 {1 + \\left[-\\left(\\frac {z - 1} {z + 1}\\right)^2\\right]^N}\n\\end{equation}\n\\smallskip\n\nFor $N=2$, we have\n\n\\begin{equation}\\label{eq:butt_lp_tf2}\nH(z)H(-z) = \\frac 1 {1 + \\left(\\frac {z - 1} {z + 1}\\right)^4} = \\frac {(z + 1)^4} {(z + 1)^4 + (z - 1)^4}\n\\end{equation}\n\\smallskip\n\n\\noindent which shows that we also have four zeros, in addition to the four poles\nshown above. These are all located at $z=-1$, which makes sense for a lowpass\nfilter. Since only the first two poles are stable and we want a second-order\nfilter, we will use two of these zeros in the final filter. 
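The pole positions can be checked numerically: with the Butterworth damping (a $\sqrt{2}K$ coefficient in the denominator, i.e. $K\/Q = \sqrt{2}$ for $K = 1$), mapping the stable s-plane pair through $z = (1 + s)\/(1 - s)$ lands exactly on the roots of the state variable denominator in Eq.~\ref{eq:svar_equiv}. A Python sketch of this check:

```python
import cmath, math

# stable analogue Butterworth pair, s_p = exp(+/- j*3*pi/4)
sp = [cmath.exp(1j * 3 * math.pi / 4), cmath.exp(-1j * 3 * math.pi / 4)]
zp = [(1 + s) / (1 - s) for s in sp]   # conformal map z = (1+s)/(1-s)

# denominator of the digital state variable filter with K = 1, Q = 1/sqrt(2)
K, Q = 1.0, 1.0 / math.sqrt(2.0)
norm = 1.0 + K / Q + K * K
a1 = 2.0 * (K * K - 1.0) / norm        # z^-1 coefficient
a2 = (1.0 - K / Q + K * K) / norm      # z^-2 coefficient
disc = cmath.sqrt(a1 * a1 - 4.0 * a2)
roots = [(-a1 + disc) / 2.0, (-a1 - disc) / 2.0]
```

Both routes yield the conjugate pair at $\pm j(\sqrt{2} - 1)$, i.e. on the imaginary axis at radius $\tan(\pi\/8)$.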
\n\nFrom the digital transfer function, we can get the filter power spectrum,\n\n\\begin{equation}\\label{eq:butt_lp_ps}\n|H(\\omega)|^2 = \\frac 1 {1 + \\left[-\\left(\\frac {e^{j\\omega} - 1} {e^{j\\omega} + 1}\\right)^2\\right]^N } = \\frac 1 {1 + \\tan^{2N}(\\omega\/2)}.\n\\end{equation}\n\\smallskip\n\n\\noindent This filter has its cutoff frequency where $\\tan^{2N}(\\omega\/2) = 1$, that is at $\\omega = \\pi\/2$, and so should be equivalent to\nthe digital state variable lowpass with $K=1$ and $Q = 1\/\\sqrt{2}$. Replacing these parameters in\nEq.~\\ref{eq:svar_fix_lp} demonstrates that this is indeed the case. In fact, we can now also see that\nif we make the replacement $s = \\frac {z - 1} {z + 1}$ in Eq.~\\ref{eq:svar_an}, we will arrive\nat a similar result to Eq.~\\ref{eq:svar_hp_fix}. Another way to look at this is to say that \nwhat we have done is also equivalent to applying the bilinear transform to the analogue integrator \ntransfer function $s^{-1}$. Therefore we have indirectly derived three\nbilinear transformation digital filters, one for each of the three analogue state variable responses.\nAs we noted earlier, we could of course use them to implement three separate filters using a digital \nbiquadratic structure, but that is not our objective.\n\n\\subsection{Filter Equations}\n\nWe now need to apply the modifications from the transfer functions to implement an improved\nstate variable filter. This should follow from the recognition that each integrator should be\nchanged to include a zero at $z=-1$, \n\n\\begin{equation}\\label{eq:integtf2}\nH(z) = \\frac {1 + z^{-1}} {1 - z^{-1}}\n\\end{equation}\n\\smallskip\n\nThis now requires us to modify the integrator equation slightly.\nSince it is important for us to continue to tap the filter state so as not to introduce an\nextra 1-sample delay, we should implement Eq.~\\ref{eq:integtf2} in a way\nthat allows us to preserve the feedback paths in the re-arranged Chamberlin \ndesign. 
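As a quick check on the target, Eq.~\ref{eq:integtf2} expands to the recursion $y(n) = x(n) + x(n-1) + y(n-1)$, whose impulse response is $1, 2, 2, 2, \ldots$; a direct Python transcription (for illustration only) confirms this:

```python
def bilinear_integrator(x):
    """Direct form of H(z) = (1 + z^-1)/(1 - z^-1)."""
    y_prev = 0.0
    x_prev = 0.0
    out = []
    for s in x:
        y = s + x_prev + y_prev  # y(n) = x(n) + x(n-1) + y(n-1)
        out.append(y)
        x_prev, y_prev = s, y
    return out

h = bilinear_integrator([1.0] + [0.0] * 7)
```
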
As shown in Fig.~\\ref{fig:integrator2}, we only need to add a feedforward path\nto the allpole integrator to turn it into a 1-pole 1-zero configuration, as required by\nthe transfer function. For this we need to use a system of update equations,\n\n\\begin{equation}\\label{eq:integtf3}\n\\begin{split}\n&y(n) = x(n) + s(n-1) \\\\\n&s(n) = y(n) + x(n)\n\\end{split}\n\\end{equation}\n\\smallskip\n\n\\noindent where $s(n)$ now represents the filter delay (its state).\n\n\\begin{figure}[htp]\n\\begin{center}\n\\includegraphics[width=.4\\textwidth]{figs\/integrator2.png}\n\\caption{The digital integrator with an added feedforward path.}\n\\label{fig:integrator2} \n\\end{center}\n\\end{figure}\n\nFrom this, we can define the filter update equations as \n\n\\begin{equation} \\label{eq:svar_new}\n\\begin{split}\n&y_{hp}(n) = x(n) - (1\/Q)s_{bp}(n-1) - s_{lp}(n-1) \\\\\n&y_{bp}(n) = Ky_{hp}(n) + s_{bp}(n-1) \\\\\n&s_{bp}(n) = y_{bp}(n) + Ky_{hp}(n) \\\\\n&y_{lp}(n) = Ky_{bp}(n) + s_{lp}(n-1) \\\\\n&s_{lp}(n) = y_{lp}(n) + Ky_{bp}(n) \n\\end{split}\n\\end{equation}\n\\smallskip\n\n\\subsection{Filter Stability}\n\nWe are almost finished, except for one aspect, which is filter stability. The filter remains as\nprone to instability as the original, particularly at higher frequencies. For this reason, the filter needs to be corrected \nso that it can be made stable within a wide range of values for $Q$ and $K$, or at least as stable as \nthe derived transfer functions. Therefore we can derive the required adjustments by making the \nstate variable filter frequency response equivalent to the derived biquadratic transfer function. \nAt the moment, this is not the case. The main differences reside in the feedback paths to\nthe highpass output. While the state variable filter uses the previous first- and second-order \nintegrator states, the biquadratic transfer function expects that the actual integrator outputs,\n$H_{bp}(z)$ and $H_{lp}(z)$, are used. 
\n\nWe first need to recognise that as an input signal recirculates through the integrator state, it has \na factor of two scaling with respect to the signal from the integrator output. We can use this \nas the basis for the derivation of a solution. First we will represent\nthe highpass signal output as $Y_{hp}$ and its input as $X$. The feedback signals in this case \nare formed by ${(2KY_{hp} + S_{bp})}$ and ${(2K(KY_{hp} + S_{bp}) + S_{lp})}$. The states $S_{bp}$ \nand $S_{lp}$ are associated with the first- and second-order integrators (which provide the \nbandpass and lowpass outputs). Replacing these with the actual integrator outputs, \n${(KY_{hp} + S_{bp})}$ and ${(K(KY_{hp} + S_{bp}) + S_{lp})}$, we have for the state variable filter\n\n\\begin{equation}\n\\begin{split}\n&Y_{hp} = X - KY_{hp}\/Q - S_{bp}\/Q - K^2Y_{hp} - KS_{bp} - S_{lp} \\\\\n&Y_{hp} = \\frac {X - (1\/Q + K)S_{bp} - S_{lp}} {1 + K\/Q + K^2}\n\\end{split}\n\\end{equation}\n\\smallskip\n\nWith this result we have derived the corrections to make the filter behave in the same way\nas the biquadratic form for which we have a transfer function. This is because we eliminated\nthe factors of two involved in the feedback signals, which was the difference between the filters. \nFrom this result, we conclude that to stabilise the filter, we need to do two things: \\\\\n\n\\begin{enumerate}\n\\item Scale the highpass output by $(1 + K\/Q + K^2)^{-1}$; and \n\\item Include the extra $-KS_{bp}$ term, which amounts to offsetting the first-order\nfeedback path gain $-1\/Q$ by $-K$. \n\\end{enumerate}\n\\bigskip\n\nWithout these corrections the feedback signals can render the filter numerically unstable. Note that \nthese errors only become significant as $K$ gets larger, which is the case as the frequency \nincreases. The $Q$ factor also plays a part in this, particularly if it is small. 
\nThe final expression for the highpass output is then\n\n\\begin{equation}\\label{eq:svar_hp_correct}\ny_{hp}(n) = \\frac {x(n) - (\\frac 1 Q + K) s_{bp}(n-1) - s_{lp}(n-1)} {1 + \\frac K Q + K^2},\n\\end{equation}\n\\smallskip\n\n\\noindent which we can now replace in Eq.~\\ref{eq:svar_new} to give the corrected filter equations.\nThe block diagram of this re-designed filter is shown in Fig.~\\ref{fig:statevar3}.\n\n\\begin{figure}[htp]\n\\begin{center}\n\\includegraphics[width=.4\\textwidth]{figs\/statevar3.png}\n\\caption{Re-designed digital state variable filter block diagram.}\n\\label{fig:statevar3} \n\\end{center}\n\\end{figure}\n\nWith these modifications, the transfer functions of the three digital biquadratic filters and the state variable\nfrequency responses describe exactly the same amplitude spectrum. In Fig.~\\ref{fig:svardigi}, \nwe plot the biquadratic and state variable amplitude responses for the four basic filter outputs.\nThe first set was obtained by evaluating the transfer functions directly, and the second from the\ndiscrete Fourier transform of the state variable filter impulse responses.\n\n\\begin{figure}[htp]\n\\begin{center}\n\\includegraphics[width=.5\\textwidth]{figs\/svardigi.png}\n\\caption{Amplitude responses for the lowpass (dashes), bandpass (solid), band-reject (solid) and\nhighpass (dots) outputs of the improved digital state variable filter (lower) and its equivalent biquadratic\nfilter transfer function (top).}\n\\label{fig:svardigi} \n\\end{center}\n\\end{figure}\n\n\\subsection{Filter Frequency}\n\nThe digital state variable form developed here solves the difficulties with tuning we had\nexperienced with the Chamberlin model. However, we now need to find\na different way to compute $K$ so that a filter frequency parameter can be applied. Indeed, \none of the added bonuses of the method developed here is that \nnow we have a filter whose transfer function has been warped correctly to fit within the digital \nbaseband. 
This maps frequencies in such a way that the radian frequency $\\Omega = \\infty$ \nin the original analogue filter is now $\\omega = \\pi$ in this digital version. This can be easily\ndemonstrated by the fact that any zeros at $s = \\infty$ are now placed at $z = -1$,\nwhich is equivalent to the Nyquist frequency.\n\nThus all we need to do is to warp the filter frequency in the same way by applying a \ntangent map,\n\n\\begin{equation}\nK = \\tan(\\pi f\/f_s),\n\\end{equation}\\smallskip\n\n\\noindent and the filter is then good to go. Note that this is consistent with the fact that\nthe digital filter is now equivalent to one obtained through the application of a \nbilinear transformation to the analogue state variable design.\n\n\\subsection{Band-Reject and Allpass Responses}\n\nThis filter now allows us to get the band-reject response as the sum $Y_{hp} + Y_{lp}$, \nor, alternatively, as $X - (1\/Q)Y_{bp}$, since these two expressions are equivalent.\nA closer look at the filter equation will confirm that, as in the original analogue filter, \nthe highpass and lowpass responses are correctly offset by $\\pi$ radians at their cutoff \nfrequencies. \n\nThe bandpass output, in the current form, does not have unity gain at the centre\nfrequency. However, it is a simple matter of scaling it by a $1\/Q$ factor in order to\nrectify this. As we can see, this is particularly useful if we want to make sure that\nthe output of the filter does not increase as we employ a sharper resonance.\n\nFinally, since we have both phase-aligned band-reject and normalised bandpass outputs,\nwe can now obtain the allpass response that was missing from the Chamberlin filter. Due to\nthe extra delay between the three filter outputs, this was not possible to obtain directly. 
In\nthe current design, since we have restored the phase alignment of the original analogue\nfilter, we can also get the allpass output as $y_{hp} + y_{lp} - y_{bp}\/Q$, which combines \nthe band-reject and the opposing normalised bandpass responses. The complete filter with\nthe five outputs, highpass, bandpass, lowpass, band-reject, and allpass, is shown in\nListing~\\ref{code:svar3}.\n\n\\begin{lstlisting}[caption={Improved digital state variable filter},label={code:svar3}]\nopcode Svar3,aaaaa,akk\n setksmps 1\n as1,as2 init 0,0\n as,kK,kQ xin\n kdiv = 1+kK\/kQ+kK*kK\n ahp = (as - (1\/kQ+kK)*as1 - as2)\/kdiv\n au = ahp*kK\n abp = au + as1\n as1 = au + abp\n au = abp*kK\n alp = au + as2\n as2 = au + alp\n xout ahp,abp,alp,\n ahp+alp,ahp+alp-(1\/kQ)*abp\nendop\n\\end{lstlisting}\n\n\\section{Discussion}\n\nAs expected, the improved digital state variable filter has a much better high-frequency behaviour than\nthe Chamberlin design. Amplitude responses for the lowpass output are given in Fig.~\\ref{fig:svarfix}.\nThese are now correct for a digital filter, with no high-frequency\nissues at normal sampling rates. We can clearly see the beneficial effect of the zeros\nwe added to the integrators, as the high end of each curve is well anchored at the\nNyquist frequency (unlike in the previous case of Fig.~\\ref{fig:svarerr}). 
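This behaviour can also be verified in code. Below is a direct Python transcription of Listing~\ref{code:svar3} (an illustration; the opcode remains the reference), checking the expected boundary behaviour of the lowpass output even at a high filter frequency setting: unity gain at DC and a null at the Nyquist frequency, with matching nulls for the highpass and bandpass outputs at DC:

```python
import math

def svar3_impulse(n, f, Q, fs=44100.0):
    """Impulse responses (hp, bp, lp) of the improved state variable filter."""
    K = math.tan(math.pi * f / fs)
    div = 1.0 + K / Q + K * K
    s1 = s2 = 0.0
    hps, bps, lps = [], [], []
    for i in range(n):
        x = 1.0 if i == 0 else 0.0
        hp = (x - (1.0 / Q + K) * s1 - s2) / div
        u = hp * K
        bp = u + s1
        s1 = u + bp
        u = bp * K
        lp = u + s2
        s2 = u + lp
        hps.append(hp); bps.append(bp); lps.append(lp)
    return hps, bps, lps

hp, bp, lp = svar3_impulse(8192, f=15000.0, Q=5.0)
dc_gain = sum(lp)                                          # H_lp(z) at z = 1
nyq_gain = sum(v * (-1.0) ** i for i, v in enumerate(lp))  # H_lp(z) at z = -1
```
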
Moreover,\nthe rearrangement of the filter equation also had a separate effect on the position of\nthe poles, which can be surmised by looking at the differences between the denominators of the\ntransfer functions for the Chamberlin state variable filter and the improved version developed here.\n\n\\begin{figure}[htp]\n\\begin{center}\n\\includegraphics[width=.5\\textwidth]{figs\/svarfix.png}\n\\caption{Amplitude responses for the revised state variable lowpass filter, \nusing $Q=5$ and $f=5$ KHz (solid), 10 KHz (dashes), and 15 KHz (dots), with $K = \\tan(\\pi f\/f_s)$ and $f_s = 44.1$ KHz.}\n\\label{fig:svarfix} \n\\end{center}\n\\end{figure}\n\nSuch changes can be explained by the two different methodologies of discretisation that \nunderlie the digital filter models. In the case of the Chamberlin design, the pole frequencies \napproximate the frequency of the analogue filter poles, with an error that is inversely proportional to\nthe ratio of the sample rate and filter frequency. By increasing the sampling rate, we can minimise\nthe error, at the cost of extra computation. With extremely short sampling periods, the Chamberlin \nmodel will approximate the actual analogue filter fairly well. This is also the case for the \nimproved design, but with the usual sampling rates of $f_s=44.1$ to $48$ KHz, we have\na more reasonable warping of the frequency response. While both discretisation methods inevitably lead \nto some distortion of the analogue filter transfer function, the one in the improved design \nis of a more benign nature.\n\n\\subsection{Contrasting Approaches}\n\nThe method used to derive an improved filter was purely based on an analysis of the filter transfer functions,\nwhich led to the development of a modified set of filter equations. However, it is possible to approach the \nproblem from an alternative perspective, leading to exactly the same results. 
This starts by recognising that\nthe simultaneous nature of the three outputs in the analogue filter is incompatible with a digital filter implementation.\nAs we already noted, there is something in the original filter that cannot be computed in a sequence of steps,\nwhich is sometimes described as a \\emph{delay-free loop}. The only way to deal with such a problem is\nto introduce a 1-sample delay somewhere in the signal path, and we have shown that it matters where this\nis placed. In some places, a design such as the one derived here is called a zero-delay filter, but that is a \ncomplete misnomer and such a term should be discouraged. While it is true that we placed the three outputs\nin phase alignment, and as such any delays between them have been removed, it is impossible to completely \nremove feedback delays, and have instantaneous signals everywhere in the filter. Instead, what we can do \nis move the 1-sample delays around, which gives the equations better resilience to errors.\n\nWe can demonstrate, however, that using a well-known algorithm for tackling delay-free loops \\cite{Harma, Fontana2, Fontana1}, further developed by Fontana \\cite{Fontana4} and D'Angelo \\cite{Dangelo2} for non-linear cases, can lead to the exact same result we have obtained before. This approach involves no spectral domain considerations; it is purely focused on the rearrangement of the filter update equations. In order to develop the idea, we first show how this can be applied to a leaky-integrator lowpass analogue design, whose block diagram is shown in Fig.~\\ref{fig:linearlowpass}. An example of such a filter is given by the first-order sections in the Moog ladder filter as\nmodelled for instance by Huovilainen \\cite{Huovilainen}, but excluding the hyperbolic tangent non-linear mapping. 
The digital filter equation for this\nis given as\n\n\\begin{equation}\ny(n) = g(x(n) - y(n-1)) + y(n-1) \n\\end{equation}\\smallskip\n\nWe first note that the $-gy(n-1)$ term on the right-hand side is, in this case, where a one-sample delay was inserted \nin order for the filter to work as a straight discretisation of the continuous-time differential equation. This eliminates a\ndelay-free loop in the analogue filter flowchart and the digital implementation follows from it. However, we can do better by being bold and defining what looks like a more correct model as\n\n\\begin{equation}\ny(n) = g(x(n) - y(n)) + y(n-1). \n\\end{equation}\\smallskip\n\n\\begin{figure}[htp]\n\\begin{center}\n\\includegraphics[width=.33\\textwidth]{figs\/mooglinear.png}\n\\caption{First-order linear lowpass filter.}\n\\label{fig:linearlowpass} \n\\end{center}\n\\end{figure}\n\nThe remaining $y(n-1)$ term is the integration state, which we need to preserve. Note that, in this form, we have no\nhope of computing its output, but we can proceed with the algorithm defined by H\\\"{a}rm\\\"{a} \\cite{Harma} to get a usable set of\nfilter equations. Rewriting $s(n) = y(n-1)$ and re-arranging, we have\n\n\\begin{equation}\n\\begin{split}\n&y(n) + gy(n) = gx(n) + s(n) \\\\\n&y(n)(1 + g) = gx(n) + s(n) \\\\\n&y(n) = \\frac {gx(n) + s(n)} {1 + g}\n\\end{split}\n\\end{equation}\\smallskip\n\nThe next step is to define a tap containing the signal before the integration stage, $u(n) = g(x(n) - y(n))$,\nand do the replacement \n\n\\begin{equation}\n\\begin{split}\n&u(n) = g\\left(x(n) - \\frac {gx(n) + s(n)} {1 + g}\\right) \\\\\n&u(n) = g \\frac {x(n) - s(n)} {1 + g} \n\\end{split}\n\\end{equation}\\smallskip\n\nWhat is left to do now is to order the operations carefully so that the filter output can be computed\ncorrectly. 
Starting with $u(n)$, we need to obtain $y(n)$ first, then update the integration state,\n\n\\begin{equation}\n\\begin{split}\n&u(n) = g \\frac {x(n) - s(n-1)} {1 + g} \\\\\n&y(n) = u(n) + s(n-1)\\\\ \n&s(n) = y(n) + u(n)\n\\end{split}\n\\end{equation}\\smallskip\n\nThe relevance of this approach to our state variable problem can be demonstrated by showing\nthat the structure derived for the leaky integrator can be applied directly to the allpole integrator.\nBy applying the algorithm, the update equations for each integrator are rearranged as \n\n\\begin{equation}\n\\begin{split}\n&u(n) = Kx(n) \\\\\n&y(n) = u(n) + s(n-1) \\\\ \n&s(n) = y(n) + u(n),\n\\end{split}\n\\end{equation}\\smallskip\n\n\\noindent which can be instantly recognised as the same as those employed in the improved design.\n\nWhile this approach is a well-established method to obtain better filters, particularly with regard\nto tuning and high-frequency behaviour, it obscures the fact that the solution is leveraged \nby the anchoring of the transfer function at the Nyquist frequency.\n Our original motivation for such a modification to the integrators was, on the other hand, \na purely spectral one, which followed directly from the recognition that the theoretical zeros \nin the lowpass case were not at an optimal location. This led to the incorporation of the \none-sample feedforward path into the integrator, which was done in line with the aims of \nH\\\"{a}rm\\\"{a}'s algorithm, that is, to avoid the introduction of an extra delay in the filter \nupdate equations. \n\n\\section{Conclusions}\n\nIn this article, we have looked at the state variable filter and its typical digital implementation given by Chamberlin \\cite{Chamberlin}, and proposed some modifications leading to an improved frequency response at the usual sampling rates (e.g. 44.1 or 48 KHz). 
We have shown that the solution has an equivalent spectrum to the bilinear transformation of its analogue biquadratic transfer function. With it, it is possible to preserve the original filter block diagram, which allows us to\ncompute four simultaneous frequency responses with a small number of operations. We have also noted that the \nspectral method developed here effectively targeted the transformation of two poles at the origin of the z-plane \ninto zeros at the Nyquist frequency point, which played an important part in correcting the amplitude responses,\nparticularly for the bandpass and lowpass cases at high filter frequencies. \n\nThis method was compared with the well-known approach of re-arranging filter equations to tackle the issue \nof delay-free loops, and we concluded that the two alternative approaches can lead to the same results \nin the present case. Finally, it is important to note that the\noriginal design by Chamberlin is correct, and will work well if the sampling frequency is sufficiently high, since,\nwith a relatively small unit delay, the original analogue filter will be well approximated. In contrast, the filters \nderived here have a warped frequency response that is not equivalent to the analogue case, and are thus less\ncorrect from that perspective. On the other hand, they can produce better results at lower sampling rates.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}