diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzipsj" "b/data_all_eng_slimpj/shuffled/split2/finalzzipsj" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzipsj" @@ -0,0 +1,5 @@ +{"text":"\\section{Motivation} \nThe signature of a path was first studied by K.T. Chen (\\cite{chen1957integration}, \\cite{chen1977iterated}). It can be understood as a collection of non-commutative iterated integrals, and has always been an interesting and essential topic in rough paths theory.\\newline\n\\newline\nThe signature provides a characteristic description of a path. Chen \\cite{chen1958integration} first showed that the non-commutative iterated integrals of a piecewise regular continuous path give a unqiue representation of the path up to some null modifications. Hambly and Lyons \\cite{hambly2010uniqueness} furthered the result and showed that this non-commutative transform is faithful for paths of bounded variation up to tree-like pieces.\\newline\n\\newline\nGiven the fact that the signature of a path is unique up to tree-like pieces \\cite{hambly2010uniqueness}, it is an important and natural topic to reconstruct the path from its signature for the completeness of the theory. Lyons and Xu (\\cite{lyons2017hyperbolic} and \\cite{lyons2014inverting}) developed theories about inverting the signature of a $C^1$ path. Geng investigated more complicated cases and developed a method of inverting the signature of a rough path \\cite{geng2017reconstruction}. Pfeffer, Seigal and Sturmfels \\cite{pfeffer2018learning} demonstrated a method of computing the shortest path with a given signature level.\\newline\n\\newline\nThe main aim of this article is therefore to provide practical algorithms for signature inversion for some classes of paths, and hopefully shed light on signature inversion in more complicated cases. We develop a new method of inverting the signature of the path by trying to approximate a level of the signature by a lower level of signature. We justify the motivation of the method by considering the signatures of simple paths, and then we illustrate how we can approximate a path by solving an optimisation problem and demonstrate the method for a particular set of paths.\n\\section{Introductory examples}\nWe first introduce the definition of the signature of a path.\n\\begin{defn}[Signature of a path]\\label{signaturedefn}\nLet $J$ denote a compact interval, $E$ a Banach space. Let $X:J\\to E$ be a continuous path of finite $p$-variation for some $p<2$. The \\emph{signature of $X$} is \n\\begin{align*}\n\\mathbf{S}_J(X)=(1,S^1_J(X),S^2_J(X),\\cdots),\n\\end{align*} \nwhere for each $n\\geq 1$, $S^n_J(X)=\\int_{u_1<\\cdots0$, $\\ell^l$ norm satisfies the properties in Definition \\ref{norm}.\n\\end{lem}\nThe proof of Lemma \\ref{satifiednorm} is straightforward.\\newline\n\\begin{defn}[Normalised signature]\nAssume $\\gamma$ is a continuous path with bounded variation over the interval $[s,t]$ parametrised at unit speed. 
Define the \\emph{normalised signature} of $\\gamma$ over $[s,t]$ as\n\\begin{align*}\n\\left(1,\\bar{S}^1_{s,t}(\\gamma),\\bar{S}^2_{s,t}(\\gamma),\\cdots,\\bar{S}^m_{s,t}(\\gamma),\\cdots\\right),\n\\end{align*}\nwhere for all $m\\geq 1$, \n\\begin{equation}\\label{normalisedsig}\n\\bar{S}_{s,t}^m(\\gamma):=\\frac{m!\\int_{s0$,\n\\begin{align*}\n\\mathbb{P}\\left(S_n-\\mathbb{E}[S_n]\\geq t\\right)\\leq\\exp\\left(-\\frac{2t^2}{\\sum_{i=1}^n(b_i-a_i)^2}\\right),\n\\end{align*}\n\\begin{align*}\n\\mathbb{P}\\left(\\left|S_n-\\mathbb{E}[S_n]\\right|\\geq t\\right)\\leq 2\\exp\\left(-\\frac{2t^2}{\\sum_{i=1}^n(b_i-a_i)^2}\\right).\n\\end{align*}\n\\end{thm}\nNotice since a binomial variable is a sum of independent Bernoulli variables, Hoeffding's inequality applies to binomial variables. We note the following example.\n\\begin{eg}\\label{convergeuperboundanaeg}\nAssume $\\tilde{\\gamma}:[0,1]\\to\\mathbb{R}^d$ is a linear path with derivative $g:(0,1)\\to\\mathbb{R}^d$ such that $\\left\\lVert g(t)\\right\\rVert=1$ for all $t\\in(0,1)$. Then for any $\\theta\\in(0,1)$ and $q=\\left\\lfloor\\theta(n+2)\\right\\rfloor$, \n\\begin{align*}\n\\left\\lVert I_{q,n}(g(\\theta))-\\bar{S}_{n+1}\\right\\rVert=\\left\\lVert n!\\frac{\\left(g(\\theta)\\right)^{\\otimes (n+1)}}{n!}-(n+1)!\\frac{\\left(g(\\theta)\\right)^{\\otimes (n+1)}}{(n+1)!}\\right\\rVert=0,\n\\end{align*}\ntherefore\n\\begin{align*}\n\\left\\lVert I_{q,n}(g(\\theta))-\\bar{S}_{n+1}\\right\\rVert\\to 0\\quad\\text{as}\\quad n\\to\\infty.\n\\end{align*}\nNow let us consider a slightly more complicated case. Assume $\\{e_1,e_2\\}$ is a basis of $\\mathbb{R}^2$ and $\\gamma:[0,1]\\to\\mathbb{R}^2$ is a piecewise linear path such that \n\\begin{align*}\n\\gamma(t)= \\begin{cases} \n te_2 & t\\in[0,\\frac{2}{3}] \\\\\n (t-\\frac{2}{3})e_1+\\frac{2}{3}e_2 & t\\in(\\frac{2}{3},1].\n \\end{cases}\n\\end{align*}\nNote that the derivative $f:(0,1)\\to\\mathbb{R}^2$ of $\\gamma$ satisfies $\\lVert f(t)\\rVert=1$ for all $t\\in (0,1)$ where $f$ is defined. 
Note in this case,\n\\begin{align*}\n\\bar{S}_n=n!\\sum_{k=0}^n\\left(\\frac{2}{3}\\right)^k\\frac{e_2^{\\otimes k}}{k!}\\otimes\\left(\\frac{1}{3}\\right)^{n-k}\\frac{e_1^{\\otimes (n-k)}}{(n-k)!}.\n\\end{align*}\nNote that if we choose $\\theta=1\/2$ and $p=\\left\\lfloor\\theta(n+2)\\right\\rfloor$, then $f(\\frac{1}{2})=e_2$, and we can write \n\\begin{align*}\nI_{p,n}\\left(f\\left(\\frac{1}{2}\\right)\\right)=&n!\\sum_{k=p-1}^n\\left(\\frac{2}{3}\\right)^k\\frac{e_2^{\\otimes (k+1)}}{k!}\\left(\\frac{1}{3}\\right)^{n-k}\\frac{e_1^{\\otimes (n-k)}}{(n-k)!}\\\\\n&+n!\\sum_{k=0}^{p-2}\\left(\\frac{2}{3}\\right)^k\\frac{e_2^{\\otimes k}}{k!}\\left(\\frac{1}{3}\\right)^{n-k}\\frac{e_1^{\\otimes (p-1-k)}\\otimes e_2\\otimes e_1^{\\otimes (n-p+1)}}{(n-k)!},\n\\end{align*}\nthen\n\\begin{align*}\n&I_{p,n}\\left(f\\left(\\frac{1}{2}\\right)\\right)-\\bar{S}_{n+1}\\\\\n=&\\sum_{k=p-1}^n\\left(\\frac{2}{3}\\right)^{k+1}\\left(\\frac{1}{3}\\right)^{n-k}\\frac{(n+1)!}{(k+1)!(n-k)!}\\left(\\frac{3}{2}\\frac{k+1}{n+1}-1\\right)e_2^{\\otimes (k+1)}e_1^{\\otimes (n-k)}\\\\\n&+\\sum_{k=0}^{p-2}\\left(\\frac{2}{3}\\right)^k\\left(\\frac{1}{3}\\right)^{n-k}\\frac{n!}{k!(n-k)!}e_2^{\\otimes k}e_1^{\\otimes (p-1-k)}\\otimes e_2\\otimes e_1^{\\otimes (n-p+1)}\\\\\n&-\\sum_{k=0}^{p-1}\\left(\\frac{2}{3}\\right)^k\\left(\\frac{1}{3}\\right)^{n+1-k}\\frac{(n+1)!}{k!(n+1-k)!}e_2^{\\otimes k}e_1^{\\otimes (n+1-k)}.\n\\end{align*}\nHence\n\\begin{align}\\label{exampleconvergenceupperboundinsertionmiddle}\n\\left\\lVert I_{p,n}\\left(f\\left(\\frac{1}{2}\\right)\\right)-\\bar{S}_{n+1}\\right\\rVert\n\\leq&\\sum_{k=p}^{n+1}\\left(\\frac{2}{3}\\right)^k\\left(\\frac{1}{3}\\right)^{n+1-k}\\frac{(n+1)!}{k!(n+1-k)!}\\left|\\frac{3}{2}\\frac{k}{n+1}-1\\right|\\nonumber\\\\\n&+\\sum_{k=0}^{p-2}\\left(\\frac{2}{3}\\right)^k\\left(\\frac{1}{3}\\right)^{n-k}\\frac{n!}{k!(n-k)!}\\nonumber\\\\\n&+\\sum_{k=0}^{p-1}\\left(\\frac{2}{3}\\right)^k\\left(\\frac{1}{3}\\right)^{n+1-k}\\frac{(n+1)!}{k!(n+1-k)!}.\n\\end{align}\nLet us investigate the binomial sums on the right-hand side of (\\ref{exampleconvergenceupperboundinsertionmiddle}) respectively. Note \n\\begin{align}\\label{binomialeg}\n&\\sum_{k=p}^{n+1}\\left(\\frac{2}{3}\\right)^k\\left(\\frac{1}{3}\\right)^{n+1-k}\\frac{(n+1)!}{k!(n+1-k)!}\\left|\\frac{3}{2}\\frac{k}{n+1}-1\\right|\\nonumber\\\\\n\\leq&\\sum_{k=0}^{n+1}\\left(\\frac{2}{3}\\right)^k\\left(\\frac{1}{3}\\right)^{n+1-k}\\frac{(n+1)!}{k!(n+1-k)!}\\left|\\frac{3}{2}\\frac{k}{n+1}-1\\right|\\nonumber\\\\\n=&\\sum_{\\substack{t=k\/(n+1)\\\\k=0,\\cdots,n+1}}\\left(\\frac{2}{3}\\right)^{(n+1)t}\\left(\\frac{1}{3}\\right)^{(n+1)(1-t)}\\frac{(n+1)!}{\\left((n+1)t\\right)!\\left((n+1)(1-t)\\right)!}\\left|\\frac{3}{2}t-1\\right|.\n\\end{align}\nAssume $X\\sim$Binomial$(n+1,2\/3)$. Note that (\\ref{binomialeg}) is the expectation of a function of the random variable $Y:=X\/(n+1)$, and\n\\begin{align*}\n\\mathbb{E}[Y]=\\frac{1}{n+1}\\mathbb{E}[X]=\\frac{2}{3},\\quad \\text{Var}[Y]=\\frac{1}{(n+1)^2}\\text{Var}[X]=\\frac{2}{9}\\frac{1}{n+1},\n\\end{align*}\nhence we can see that as $n$ increases, the distribution of $Y$ will be mostly concentrated around $\\mathbb{E}[Y]$, and since $\\left|\\frac{3}{2}Y-1\\right|=0$ at $Y=\\mathbb{E}[Y]$, the value of the sum of (\\ref{binomialeg}) would converge to zero as $n$ increases. 
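This concentration heuristic is easy to check numerically. The following sketch (an illustration only, not part of the argument) evaluates the binomial expectation of $\left|\frac{3}{2}Y-1\right|$ directly for increasing $n$ and exhibits the decay.
\begin{verbatim}
# Numerical check of the concentration heuristic: for X ~ Binomial(n+1, 2/3)
# and Y = X/(n+1), the expectation E|3/2*Y - 1| shrinks as n grows.
from math import comb

def expected_deviation(n):
    m = n + 1
    return sum(comb(m, k) * (2/3)**k * (1/3)**(m - k) * abs(1.5 * k / m - 1)
               for k in range(m + 1))

for n in [10, 50, 100, 500, 1000]:
    print(n, expected_deviation(n))
# The values decay roughly like 1/sqrt(n), consistent with the bound derived next.
\end{verbatim}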
For a more formal argument, we have for any $\\lambda>0$, \n\\begin{align*}\n&\\mathbb{P}\\left(\\left|Y-\\frac{2}{3}\\right|\\geq\\frac{\\lambda}{3}\\sqrt{\\frac{2}{n+1}}\\right)\\\\\n=&\\mathbb{P}\\left(\\left|X-\\frac{2}{3}(n+1)\\right|\\geq\\frac{\\lambda}{3}\\sqrt{2(n+1)}\\right)\\\\\n\\leq&2\\exp\\left(-\\frac{4}{9}\\lambda^2\\right),\n\\end{align*}\nwhere the last inequality comes from Hoeffding's inequality.\\newline\nThen for any $\\lambda>0$,\n\\begin{align*}\n&\\sum_{k=p}^{n+1}\\left(\\frac{2}{3}\\right)^k\\left(\\frac{1}{3}\\right)^{n+1-k}\\frac{(n+1)!}{k!(n+1-k)!}\\left|\\frac{3}{2}\\frac{k}{n+1}-1\\right|\\\\\n\\leq&\\sum_{\\substack{t=k\/(n+1)\\\\k=0,\\cdots,n+1\\\\\\left|t-\\frac{2}{3}\\right|<\\frac{\\lambda}{3}\\sqrt{\\frac{2}{n+1}}}}\\left(\\frac{2}{3}\\right)^{(n+1)t}\\left(\\frac{1}{3}\\right)^{(n+1)(1-t)}\\frac{(n+1)!}{((n+1)t)!((n+1)(1-t))!}\\left|\\frac{3}{2}t-1\\right|\\\\\n&+\\sum_{\\substack{t=k\/(n+1)\\\\k=0,\\cdots,n+1\\\\\\left|t-\\frac{2}{3}\\right|\\geq\\frac{\\lambda}{3}\\sqrt{\\frac{2}{n+1}}}}\\left(\\frac{2}{3}\\right)^{(n+1)t}\\left(\\frac{1}{3}\\right)^{(n+1)(1-t)}\\frac{(n+1)!}{((n+1)t)!((n+1)(1-t))!}\\left|\\frac{3}{2}t-1\\right|\\\\\n\\leq&\\frac{\\lambda}{\\sqrt{2(n+1)}}+2\\mathbb{P}\\left(\\left|Y-\\frac{2}{3}\\right|\\geq\\frac{\\lambda}{3}\\sqrt{\\frac{2}{n+1}}\\right)\\\\\n\\leq&\\frac{\\lambda}{\\sqrt{2(n+1)}}+4\\exp\\left(-\\frac{4}{9}\\lambda^2\\right).\n\\end{align*}\nNote that for a given $n>0$, $\\lambda\/\\sqrt{2(n+1)}$ is a strictly increasing linear function of $\\lambda$ from $(0,\\infty)$ to $(0,\\infty)$, and $4\\exp\\left(-\\frac{4}{9}\\lambda^2\\right)$ is a strictly decreasing function of $\\lambda$ from $(0,\\infty)$ to $(0,4)$. So for each $n$, there exists $\\lambda_n>0$ such that \n\\begin{align}\\label{satisfyinglambda}\n\\frac{\\lambda_n}{\\sqrt{2(n+1)}}=4\\exp\\left(-\\frac{4}{9}\\lambda_n^2\\right).\n\\end{align}\nDifferentiating (\\ref{satisfyinglambda}) with respect to $n$ gives\n\\begin{align*}\n\\frac{\\partial \\lambda_n}{\\partial n}(2(n+1))^{\\frac{1}{2}}-\\lambda_n(2(n+1))^{-\\frac{3}{2}}=-\\frac{32}{9}\\frac{\\partial \\lambda_n}{\\partial n}\\lambda_n\\exp\\left(-\\frac{4}{9}\\lambda_n^2\\right),\n\\end{align*}\n\\begin{align*}\n\\frac{\\partial\\lambda_n}{\\partial n}\\left(\\left(2(n+1)\\right)^{\\frac{1}{2}}+\\frac{32}{9}\\lambda_n\\exp\\left(-\\frac{4}{9}\\lambda_n^2\\right)\\right)=\\lambda_n\\left(2(n+1)\\right)^{-\\frac{3}{2}},\n\\end{align*}\ntherefore $\\frac{\\partial \\lambda_n}{\\partial n}>0$, and $\\lambda_n$ is a strictly increasing function in $n$ which tends to infinity, so $4\\exp\\left(-4\/9\\lambda^2_n\\right)$ is a strictly decreasing function in $n$, and\n\\begin{align*}\n\\sum_{k=p}^{n+1}\\left(\\frac{2}{3}\\right)^k\\left(\\frac{1}{3}\\right)^{n+1-k}\\frac{(n+1)!}{k!(n+1-k)!}\\left|\\frac{3}{2}\\frac{k}{n+1}-1\\right|\\leq&\\frac{2\\lambda_n}{\\sqrt{2(n+1)}}\\\\\n=&8\\exp\\left(-\\frac{4}{9}\\lambda^2_n\\right)\\\\\n\\to& 0\n\\end{align*}\nas $n$ goes to infinity.\\newline\nFor the other two binomial sums in (\\ref{exampleconvergenceupperboundinsertionmiddle}), by Hoeffding's inequality, \n\\begin{align*}\n&\\sum_{k=0}^{p-2}\\left(\\frac{2}{3}\\right)^k\\left(\\frac{1}{3}\\right)^{n-k}\\frac{n!}{k!(n+1-k)!}+\\sum_{k=0}^{p-1}\\left(\\frac{2}{3}\\right)^{k}\\left(\\frac{1}{3}\\right)^{n+1-k}\\frac{(n+1)!}{k!(n+1-k)!}\\\\\n\\leq&\\exp\\left(-C_1n+B_1\\right)+\\exp\\left(-C_2n+B_2\\right),\n\\end{align*}\nwhere $C_1>0$, $C_2>0$, $B_1$ and $B_2$ are some constants. 
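Before combining the three estimates, it may help to see how the implicitly defined $\lambda_n$ behaves in practice. The sketch below (an illustration, not part of the proof) solves (\ref{satisfyinglambda}) by bisection and prints the resulting bound $2\lambda_n/\sqrt{2(n+1)}$, which indeed decreases slowly with $n$; combining this with the two exponential bounds above gives the overall estimate stated next.
\begin{verbatim}
# Solve lambda/sqrt(2*(n+1)) = 4*exp(-(4/9)*lambda^2) by bisection; the left-hand
# side is increasing and the right-hand side decreasing in lambda, so the root is unique.
from math import sqrt, exp

def solve_lambda(n, lo=1e-9, hi=50.0, iters=200):
    g = lambda lam: lam / sqrt(2 * (n + 1)) - 4 * exp(-(4/9) * lam * lam)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if g(mid) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

for n in [10, 100, 1000, 10000]:
    lam = solve_lambda(n)
    print(n, lam, 2 * lam / sqrt(2 * (n + 1)))
\end{verbatim}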
Therefore\n\\begin{align*}\n\\left\\lVert I_{p,n}\\left(\\frac{1}{2}\\right)-\\bar{S}_{n+1}\\right\\rVert\\leq\\frac{2\\lambda_n}{\\sqrt{2(n+1)}}+\\exp\\left(-C_1n+B_1\\right)+\\exp\\left(-C_2n+B_2\\right)\\to 0\n\\end{align*}\nas $n\\to\\infty$. Note since $2\\lambda_n\/\\sqrt{2(n+1)}$ is decreasing in $n$, and $\\lambda_n$ is increasing in $n$, the rate of increasing of $\\lambda_n$ must be smaller than the increasing rate of $\\sqrt{2(n+1)}$, therefore the upper bound obtained for $\\left\\lVert I_{p,n}\\left(\\frac{1}{2}\\right)-\\bar{S}_{n+1}\\right\\rVert$ decreases at a rate slower than $O(1\/\\sqrt{n})$.\n\\end{eg}\nExample \\ref{convergeuperboundanaeg} has inspired us that we can use the tail behaviour of the binomial distribution to prove that $\\left\\lVert I_{p,n}\\left(f(\\theta)\\right)-\\bar{S}_{n+1}\\right\\rVert$ converges to $0$.\n\\begin{thm}\\label{thmconvergetozeroupperbound}\nSuppose $\\gamma:[0,1]\\to\\mathbb{R}^d$ is a tree-reduced continuous bounded-variation path, and the derivative of $\\gamma$ $f:(0,1)\\to\\mathbb{R}^d$ is defined almost everywhere, and $\\lVert f(t)\\rVert=1$ for all $t\\in(0,1)$ if defined. Assume $\\gamma$ is linear on $[s,t]$ for $0\\leq s0$, \n\\begin{align*}\n&\\mathbb{P}\\left(\\left|Y-(t-s)\\right|\\geq\\lambda\\sqrt{\\frac{(t-s)(1-t+s)}{n+1}}\\right)\\\\\n=&\\mathbb{P}\\left(\\left|X-(t-s)(n+1)\\right|\\geq\\lambda\\sqrt{(t-s)(1-t+s)(n+1)}\\right)\\\\\n\\leq&2\\exp\\left(-2(t-s)(1-t+s)\\lambda^2\\right),\n\\end{align*}\ntherefore by considering the cases when $\\left|Y-(t-s)\\right|<\\lambda\\sqrt{(t-s)(1-t+s)\/(n+1)}$ and $\\left|Y-(t-s)\\right|\\geq\\lambda\\sqrt{(t-s)(1-t+s)\/(n+1)}$ respectively, we have\n\\begin{align*}\n&\\sum_{0\\leq k\\leq n+1}\\frac{(n+1)!}{k!(n+1-k)!}(t-s)^{k}(1-t+s)^{n+1-k}\\left|\\frac{k}{n+1}\\frac{1}{t-s}-1\\right|\\\\\n\\leq&\\lambda\\sqrt{\\frac{1-t+s}{(t-s)(n+1)}}\\\\\n+&\\max\\left(1,\\left(\\frac{1}{t-s}-1\\right)\\right)\\mathbb{P}\\left(\\left| Y-(t-s)\\right|\\geq \\lambda\\sqrt{\\frac{(t-s)(1-t+s)}{n+1}}\\right)\\\\\n\\leq&\\lambda\\sqrt{\\frac{1-t+s}{(t-s)(n+1)}}+2\\max\\left(1,\\left(\\frac{1}{t-s}-1\\right)\\right)\\exp\\left(-2(t-s)(1-t+s)\\lambda^2\\right).\n\\end{align*}\nBy similar arguments as in Example \\ref{convergeuperboundanaeg}, there exists a strictly increasing sequence $\\left(\\lambda_n\\right)_n$ such that for each $n\\in\\mathbb{N}$, $\\lambda_n>0$, and \n\\begin{equation}\\label{limitofparameter}\n\\lambda_n\\sqrt{\\frac{1-t+s}{(t-s)(n+1)}}=2\\max\\left(1,\\left(\\frac{1}{t-s}-1\\right)\\right)\\exp\\left(-2(t-s)(1-t+s)\\lambda_n^2\\right).\n\\end{equation}\nSuppose $\\lambda_n\\to\\lambda^*<\\infty$ as $n\\to\\infty$. Then taking limits on both sides of Equation (\\ref{limitofparameter}) gives\n\\begin{align*}\n0=2\\max\\left(1,\\left(\\frac{1}{t-s}-1\\right)\\right)\\exp\\left(-2(t-s)(1-t+s){\\lambda^*}^2\\right),\n\\end{align*}\nwhich does not hold if $\\lambda^*$ is finite. Hence we must have $\\lambda^*=\\infty$. Therefore\n\\begin{align*}\n&\\sum_{0\\leq k\\leq n+1}\\frac{(n+1)!}{k!(n+1-k)!}(t-s)^{k}(1-t+s)^{n+1-k}\\left|\\frac{k}{n+1}\\frac{1}{t-s}-1\\right|\\\\\n\\leq &2\\lambda_n\\sqrt{\\frac{1-t+s}{(t-s)(n+1)}}\\\\\n=&4\\max\\left(1,\\left(\\frac{1}{t-s}-1\\right)\\right)\\exp\\left(-2(t-s)(1-t+s)\\lambda_n^2\\right)\\to 0\n\\end{align*}\nas $n\\to\\infty$. 
If we define $X_1\\sim$Binomial$(n,s)$, $X_2\\sim$Binomial$(n,1-t)$, $X_3\\sim$Binomial$(n+1,s)$, and $X_4\\sim$Binomial$(n+1,1-t)$, by applying Hoeffding's inequality to the other binomial sums on the right-hand side of (\\ref{upperboundforbinomialsums}), there exists $M\\in\\mathbb{N}$ such that for all $n\\geq M$, \n\\begin{align*}\n&\\left\\lVert I_{p,n}\\left(f(\\theta)\\right)-\\bar{S}_{n+1}\\right\\rVert\\\\\n\\leq & 2\\lambda_n\\sqrt{\\frac{1-t+s}{(t-s)(n+1)}}+\\mathbb{P}\\left(X_1\\geq p\\right)\\\\\n&+\\mathbb{P}\\left(X_2\\geq n-p+2\\right)+\\mathbb{P}\\left(X_3\\geq p\\right)+\\mathbb{P}\\left(X_4\\geq n-p+2\\right)\\\\\n\\leq &2\\lambda_n\\sqrt{\\frac{1-t+s}{(t-s)(n+1)}}+\\exp\\left(-C_1n+B_1\\right)\\\\\n&+\\exp\\left(-C_2n+B_2\\right)+\\exp\\left(-C_3n+B_3\\right)+\\exp\\left(-C_4n+B_4\\right),\n\\end{align*}\nwhere we have used Hoeffding's inequality and $C_1>0, C_2>0, C_3>0, C_4>0$, $B_1,B_2,B_3$ and $B_4$ are some constants. Therefore\n\\begin{align*}\n\\left\\lVert I_{p,n}\\left(f(\\theta)\\right)-\\bar{S}_{n+1}\\right\\rVert\\to 0\\quad\\text{as}\\quad n\\to\\infty.\n\\end{align*}\nMoreover, the rate of convergence of the upper bound obtained for $\\left\\lVert I_{p,n}\\left(f(\\theta)\\right)-\\bar{S}_{n+1}\\right\\rVert$ is slower than $O\\left(1\/\\sqrt{(t-s)(n+1)}\\right)$.\n\\end{proof}\nWe also have the following example as a numerical demonstration of our claim. \n\\begin{eg}\nAssume $\\gamma\\in\\mathbb{R}^2$ is a piecewise linear path which is an approximation to the quadratic path over the unit interval $[0,1]$ parametrised at unit speed, i.e. $\\gamma^{(2)}(t)=(\\gamma^{(1)}(t))^2$ for all $t\\in[0,1]$. Fixing $\\theta=0.3$, if we compute the difference $\\left\\rVert I_{p,n}(f(\\theta))-\\bar{S}_{n+1}\\right\\rVert$ under the $\\ell^1$ norm and $\\ell^2$ norm, we obtain Figure \\ref{nextleveldiffl1} and \\ref{nextleveldiffl2}. From the figures we can see that under both the $\\ell^1$ norm and $\\ell^2$ norm, the difference between the $n$-th term in the normalised signature with $f(\\theta)$ inserted at the $p$-th position and the $(n+1)$-th term in the normalised signature decreases as $n$ increases, although not monotonically. The reason for the non-monotonicity is that given $p=\\left\\lfloor\\theta(n+2)\\right\\rfloor$, $I_{p,n}(f(p\/(n+2)))$ is a good approximation of $\\bar{S}_{n+1}$, while $I_{p,n}(f(\\theta))$ may not be as a good approximation as $I_{p,n}(f(p\/(n+2)))$ if $\\theta(n+2)$ is not an integer, hence we may observe small increases when $p\/(n+2)$ is a bit far from $\\theta$.\\newline\nAs a justification, we now choose $\\theta=0.5$ and $p=\\left\\lfloor0.5(n+2)\\right\\rfloor$ for $n=4,6,8,10,12$, and plot $\\left\\lVert I_{p,n}(f(\\theta))-\\bar{S}_{n+1}\\right\\rVert_1$ in Figure \\ref{nextleveldiffl10.5}. In this case since the signature level $n$ used is even, $0.5(n+2)$ is an integer, and from the figure we can see that we get monotone convergence as $n$ increases in this case. 
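The computation behind these figures can be sketched in a few lines. The code below is not the implementation used for the figures; it is a self-contained Python illustration in which the path (a piecewise linear approximation of the quadratic path, rescaled to unit length), the time $\theta=0.3$ and the level $n$ are assumptions. The truncated signature is built from Chen's identity, and the $\ell^1$ and $\ell^2$ differences between $I_{p,n}(f(\theta))$ and $\bar{S}_{n+1}$ are printed; looping over $n$ reproduces the qualitative decay seen in the figures.
\begin{verbatim}
# Illustration (not the code used for the figures): signatures of a piecewise
# linear path in R^2 via Chen's identity, and the difference between the
# insertion I_{p,n}(f(theta)) and the next normalised level.
import numpy as np
from math import factorial, floor

def segment_signature(v, depth):
    # signature of a straight segment with increment v: level k is v^{(x)k}/k!
    levels = [np.array(1.0)]
    for k in range(1, depth + 1):
        levels.append(np.multiply.outer(levels[-1], v) / k)
    return levels

def chen(sig1, sig2, depth):
    # Chen's identity for the concatenation of two paths
    return [sum(np.multiply.outer(sig1[i], sig2[k - i]) for i in range(k + 1))
            for k in range(depth + 1)]

def path_signature(points, depth):
    sig = segment_signature(points[1] - points[0], depth)
    for a, b in zip(points[1:-1], points[2:]):
        sig = chen(sig, segment_signature(b - a, depth), depth)
    return sig

def insertion(level_n, x, p):
    # insert the vector x at (1-based) position p of a degree-n tensor
    return np.moveaxis(np.multiply.outer(level_n, x), -1, p - 1)

# assumed data: polygonal approximation of t -> (t, t^2), rescaled to length 1
ts = np.linspace(0.0, 1.0, 50)
pts = np.stack([ts, ts ** 2], axis=1)
lengths = np.linalg.norm(np.diff(pts, axis=0), axis=1)
pts = pts / lengths.sum()
theta, n = 0.3, 6
p = floor(theta * (n + 2))
sig = path_signature(pts, n + 1)
S_bar_n = factorial(n) * sig[n]            # normalised level n (interval [0,1])
S_bar_np1 = factorial(n + 1) * sig[n + 1]  # normalised level n+1
seg = int(np.searchsorted(np.cumsum(lengths) / lengths.sum(), theta))
f_theta = pts[seg + 1] - pts[seg]
f_theta = f_theta / np.linalg.norm(f_theta)   # unit-speed derivative at theta
diff = insertion(S_bar_n, f_theta, p) - S_bar_np1
print(np.abs(diff).sum(), np.sqrt((diff ** 2).sum()))  # l1 and l2 differences
\end{verbatim}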
\n\\end{eg}\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[trim={2cm 8cm 3cm 6cm},clip,width=0.8\\textwidth]{insertionmapl1norm.pdf}\n\\caption{$\\left\\lVert I_{p,n}(f(\\theta))-\\bar{S}_{n+1}\\right\\rVert_1$ for $p=\\left\\lfloor 0.3(n+2)\\right\\rfloor$, $n=3,\\cdots,10$.}\n\\label{nextleveldiffl1}\n\\end{figure}\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[trim={2cm 8cm 3cm 6cm},clip,width=0.8\\textwidth]{insertionmapl2norm.pdf}\n\\caption{$\\left\\rVert I_{p,n}(f(\\theta))-\\bar{S}_{n+1}\\right\\rVert_2$ for $p=\\left\\lfloor 0.3(n+2)\\right\\rfloor$, $n=3,\\cdots,10$.}\n\\label{nextleveldiffl2}\n\\end{figure}\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[trim={2cm 8cm 3cm 6cm},clip,width=0.8\\textwidth]{insertionmapl1normdiffatmiddle.pdf}\n\\caption{$\\left\\rVert I_{p,n}(f(\\theta))-\\bar{S}_{n+1}\\right\\rVert_1$ for $p=\\left\\lfloor 0.5(n+2)\\right\\rfloor$, $n=4,6,8,10,12$.}\n\\label{nextleveldiffl10.5}\n\\end{figure}\n\\section{A lower bound for the signature of a path}\\label{sectionlowerbound}\nWe have so far discussed finding an upper bound on $\\left\\lVert I_{p,n}(f(\\theta))-\\bar{S}_{n+1}\\right\\rVert$ for a path which is differentiable at $\\theta$. In the light of Lemma \\ref{normrelation}, we know that for $x,y\\in\\mathbb{R}^d$, \n\\begin{align*}\n\\left\\lVert\\bar{S}_n\\right\\rVert\\left\\lVert x-y\\right\\rVert=\\left\\lVert I_{p,n}(x)-I_{p,n}(y)\\right\\rVert\\leq\\left\\lVert I_{p,n}(x)-\\bar{S}_{n+1}\\right\\rVert+\\left\\lVert I_{p,n}(y)-\\bar{S}_{n+1}\\right\\rVert.\n\\end{align*}\nGiven an upper bound on the right-hand side of the inequality, if we can obtain a lower bound on $\\left\\lVert\\bar{S}_n\\right\\rVert$, we can get an upper bound on $\\lVert x-y\\rVert$. In fact, finding a lower bound for the signature is itself an interesting topic. We will see in the following example that the rate of decay of the signature depends on the path as well as the norm we choose. \n\\begin{eg}\\label{strongdecaycounter}\nIf we consider a monotone lattice path $\\gamma$ of consisting of two pieces, and each piece is of length $\\frac{1}{2}$, the $\\ell^1$ norm of the signature at level $n$ is\n\\begin{align*}\n\\left\\lVert n!S^n_{0,T}(\\gamma)\\right\\rVert_{1}=\\sum_{k=0}^n\\binom{n}{k}\\left(\\frac{1}{2}\\right)^k\\left(\\frac{1}{2}\\right)^{n-k}=1,\n\\end{align*}\nhence the signature is clearly bounded below. 
If we consider the norm of the signature under the Hilbert-Schmidt norm, then\n\\begin{align*}\n\\left\\lVert n!S^n_{0,T}(\\gamma)\\right\\rVert_{\\text{HS}}=\\sqrt{\\sum_{k=0}^n\\binom{n}{k}^2\\left(\\frac{1}{2}\\right)^{2k}\\left(\\frac{1}{2}\\right)^{2(n-k)}}.\n\\end{align*}\nAs we can see from Figure \\ref{lowerboundeg}, the $\\left\\lVert S^n_{0,T}(\\gamma)\\right\\rVert_{\\text{HS}}$ decreases in such a way that there is no obvious constant non-zero lower bound for the signature.\n\\end{eg}\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{lowerboundl2}\n\\caption{The Hilbert-Schmidt norm of the normalised signature of a monotone lattice path at level $n$}\n\\label{lowerboundeg}\n\\end{figure}\nTherefore it is important to take into account the effects of the norm when we look for a lower bound for the signature.\\newline\nWe first recall the norm in defined by Hambly and Lyons \\cite{hambly2010uniqueness}.\n\\begin{defn}\nIf $V$ is a Banach space, $A$ is a Banach algebra, and $F_1,\\cdots,F_k\\in \\text{Hom}(V,A)$, then the canonical linear extension $F_1\\otimes\\cdots\\otimes F_k$ from $V^{\\otimes k}$ to $A$ is defined as\n\\begin{align*}\n(v_1,\\cdots,v_k)\\to F_1(v_1)\\cdots F_k(v_k).\n\\end{align*} \nDefine the norm \n\\begin{align*}\n\\lVert x\\rVert_{\\to A}:=\\sup_{F_i\\in \\text{Hom}(V,A),\\lVert F_i\\rVert_{\\text{Hom}(V,A)}=1}\\left\\lVert F_1\\otimes\\cdots\\otimes F_k(x)\\right\\rVert_A.\n\\end{align*}\n\\end{defn}\nAs stated by Hambly and Lyons \\cite{hambly2010uniqueness}, the norm $\\lVert\\cdot\\rVert_{\\to A}$ is smaller than the projective tensor norm. We give a proof of this claim in the following lemma.\n\\begin{lem}\\label{arrowandprojnorm}\nFor all $x\\in V^{\\otimes k}$, $\\lVert x\\rVert_{\\to A}\\leq \\lVert x\\rVert_{\\pi}$, where $\\lVert\\cdot\\rVert_{\\pi}$ is the projective tensor norm.\n\\end{lem}\n\\begin{proof}\nFor all $x\\in V^{\\otimes k}$, if $x=\\sum_{i\\in I}v_{i_1}\\otimes\\cdots\\otimes v_{i_k}$ is an representation of $x$ for some indexing set $I$, then for any $F_i\\in \\text{Hom}(V,A)$ such that $\\lVert F_i\\rVert_{\\text{Hom}(V,A)}=1$, $i=1,\\cdots,k$, we have\n\\begin{align*}\n\\left\\lVert(F_1\\otimes\\cdots\\otimes F_k)(x)\\right\\rVert_A&=\\left\\lVert\\sum_{i\\in I}F_1(v_1)\\cdots F_k(v_k)\\right\\rVert_A\\\\\n&\\leq\\sum_{i\\in I}\\left\\lVert F(v_{i_1})\\rVert_{A}\\cdots\\right\\lVert F(v_{i_k})\\rVert_A\\\\\n&\\leq\\sum_{i\\in I}\\left\\lVert v_{i_1}\\right\\rVert\\cdots\\left\\lVert v_{i_k}\\right\\rVert,\n\\end{align*}\nfor an abitrary representation of $x$. 
Then by the definition of the projective tensor norm, for any $F_i\\in \\text{Hom}(V,A)$ such that $\\lVert F_i\\rVert_{\\text{Hom}(V,A)}=1$, $i=1,\\cdots,k$,\n\\begin{align*}\n\\left\\lVert(F_1\\otimes\\cdots\\otimes F_k)(x)\\right\\rVert_A\\leq\\lVert x\\rVert_{\\pi},\n\\end{align*}\nhence\n\\begin{align*}\n\\lVert x\\rVert_{\\to A}\\leq\\lVert x\\rVert_{\\pi}.\n\\end{align*}\n\\end{proof}\nWe also introduce a mapping function defined by Hambly and Lyons \\cite{hambly2010uniqueness}.\n\\begin{defn}\\label{hypermatrix}\nDefine the mapping $F:\\mathbb{R}^{d}\\to \\text{Hom}\\left(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1}\\right)$ such that for $x=(x_1,\\cdots,x_d)\\in\\mathbb{R}^d$,\n\\begin{align*}\nF:x\\mapsto\\begin{pmatrix}\n0 & \\cdots & 0 & x_1\\\\\n\\vdots &\\ddots &\\vdots &\\vdots\\\\\n0 &\\cdots & 0 & x_d\\\\\nx_1 &\\cdots & x_d & 0\n\\end{pmatrix},\n\\end{align*}\nwhere $\\text{Hom}(V,A)$ denotes the set of bounded linear operators from $V$ to $A$.\n\\end{defn}\nWe also note the following useful lemma which gives a bound on the tail behaviour of the Poisson distribution by Canonne \\cite{canonne2017}.\n\\begin{lem}[Canonne \\cite{canonne2017}]\\label{poissontail}\nLet $X\\sim$ Poisson$(\\lambda)$ for some parameter $\\lambda>0$. Then for any $h>0$, we have\n\\begin{align*}\n\\mathbb{P}\\left(|X-\\lambda|\\geq h\\right)\\leq2\\exp\\left(-\\frac{h^2}{2(\\lambda+h)}\\right).\n\\end{align*} \n\\end{lem}\nWe extend the argument by Hambly and Lyons (Theorem 13, \\cite{hambly2010uniqueness}) and prove in the following theorem that a non-zero lower bound exists for more than one level of the signature of a piecewise linear path.\n\\begin{thm}\\label{lowerbound}\nLet $\\gamma:[0,1]\\rightarrow\\mathbb{R}^d$ be a non-degenerate piecewise linear path consisting of $M>0$ linear pieces. Suppose $2\\Omega>0$ is the smallest angle between two adjacent edges. Equip $\\mathbb{R}^d$ and $\\mathbb{R}^{d+1}$ with the Euclidean norm. Then for any $c\\in(0,1)$, there exists at least an increasing subsequence $(n_k)_{k\\geq 1}\\in\\mathbb{N}$ such that\n\\begin{align*}\n\\left\\lVert\\bar{S}_{n_k}\\right\\rVert_{\\to\\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})}\\geq c\\exp\\left(-(M-1)K(\\Omega)\\right)\\quad\\forall\\, k\\geq 1,\n\\end{align*}\nwhere $K(\\Omega):=\\log\\left(\\frac{2}{1-\\cos|\\Omega|}\\right)$.\n\\end{thm}\n\\begin{proof}\nWithout loss of generality, we can assume $\\gamma$ is of length $1$. Suppose $D>0$ is the length of the shortest edge of $\\gamma$. For $\\alpha>0$, write the path $\\alpha\\gamma$ as $\\gamma_\\alpha$. Then for all $\\alpha$ such that $\\alpha\\geq\\frac{K(\\Omega)}{D}$, the shortest path of $\\gamma_\\alpha$ is at least of length $K(\\Omega)$. 
Then by Lemma 3.7 of \\cite{hambly2010uniqueness}, the Cartan development $\\Gamma_\\alpha$ of $\\gamma_\\alpha$ satisfies\n\\begin{align*}\nd(o, \\Gamma_{\\alpha}o)\\geq\\alpha-(M-1)K(\\Omega).\n\\end{align*}\nAlso by Proposition 3.13 of \\cite{hambly2010uniqueness}, we know that \n\\begin{align*}\n\\lVert\\Gamma_\\alpha\\rVert_{\\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})}\\geq\\exp(d(o,\\Gamma_{\\alpha}o)).\n\\end{align*}\nThen if we recall the definition of the map $F$ as in Definition \\ref{hypermatrix}, for all $\\alpha$ such that $\\alpha>\\frac{K(\\Omega)}{D}$, we have\n\\begin{align}\\label{poissonineqderive}\n&\\exp\\left(\\alpha-(M-1)K(\\Omega)\\right)\\\\\n\\leq&\\left\\lVert\\Gamma_\\alpha\\right\\rVert_{\\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})}\\nonumber\\\\\n\\leq&\\sum_{n=0}^\\infty\\alpha^n\\left\\lVert\\int_{00$, \n\\begin{align*}\n\\mathbb{P}\\left(|X-\\alpha|\\geq h\\right)\\leq2\\exp\\left(-\\frac{h^2}{2(\\alpha+h)}\\right).\n\\end{align*}\nIn particular, \n\\begin{equation}\\label{poissontailspecial}\n\\mathbb{P}\\left(|X-\\alpha|\\geq\\alpha^{3\/4}\\right)\\leq 2\\exp\\left(-\\frac{\\alpha^{3\/2}}{2(\\alpha+\\alpha^{3\/4})}\\right).\n\\end{equation}\nNote the right-hand side of (\\ref{poissontailspecial}) is a decreasing function in $\\alpha$, hence there exists $\\alpha^*$ such that for all $\\alpha>\\alpha^*$, \n\\begin{align*}\n\\mathbb{P}(|X-\\alpha|\\geq\\alpha^{3\/4})&\\leq 2\\exp\\left(-\\frac{\\alpha^{3\/2}}{2(\\alpha+\\alpha^{3\/4})}\\right)\\\\\n&<(1-c)\\exp\\left(-(M-1)K(\\Omega)\\right)\\\\\n&\\leq\\mathbb{P}\\left(n\\, \\text{such that}\\, \\left\\lVert\\bar{S}_n\\right\\rVert_{\\to\\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})}\\geq c\\exp(-(M-1)K(\\Omega))\\right).\n\\end{align*}\nThen there must be some $n$ near the mean $\\alpha$ such that\n\\begin{align*}\n\\left\\lVert\\bar{S}_n\\right\\rVert_{\\to\\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})}\\geq c\\exp(-(M-1)K(\\Omega)),\n\\end{align*}\ni.e. for $\\alpha>\\alpha^*$,\n\\begin{align*}\n&\\mathbb{P}\\left(X=n\\,\\text{where}\\,|n-\\alpha|<\\alpha^{3\/4}\\,\\text{and}\\,\\lVert\\bar{S}_n\\rVert_{\\to\\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})}\\geq c\\exp(-(M-1)K(\\Omega))\\right)\\\\\n\\geq&(1-c)\\exp\\left(-(M-1)K(\\Omega)\\right)-2\\exp\\left(-\\frac{\\alpha^{3\/2}}{2(\\alpha+\\alpha^{3\/4})}\\right)\\\\\n>&0.\n\\end{align*}\nHence for large enough $\\alpha$, there exists at least one $n\\in(\\alpha-\\alpha^{3\/4},\\alpha+\\alpha^{3\/4})$ such that $\\left\\lVert\\bar{S}_n\\right\\rVert_{\\to\\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})}\\geq c\\exp(-(M-1)K(\\Omega))$. Note that $\\alpha$ grows faster than $\\alpha^{3\/4}$, so as $\\alpha$ increases, the interval $(\\alpha-\\alpha^{3\/4},\\alpha+\\alpha^{3\/4})$ moves rightwards. Hence there exists a strictly increasing subsequence $(n_k)_{k\\geq 1}\\in\\mathbb{N}$ such that \n\\begin{align*}\n\\left\\lVert\\bar{S}_{n_k}\\right\\rVert_{\\to\\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})}\\geq c\\exp(-(M-1)K(\\Omega))\\quad\\forall\\, k\\geq 1.\n\\end{align*}\n\\end{proof}\n\\begin{cor}\\label{pinormlowerbound}\nLet $\\gamma:[0,1]\\rightarrow\\mathbb{R}^d$ be a non-degenerate piecewise linear path consisting of $M>0$ linear pieces. Suppose $2\\Omega>0$ is the smallest angle between two adjacent edges. Equip $\\mathbb{R}^d$ and $\\mathbb{R}^{d+1}$ with the Euclidean norm. 
Then for any $c\\in(0,1)$, there exists at least a subsequence $(n_k)_{k\\geq 1}\\in\\mathbb{N}$ such that\n\\begin{align*}\n\\left\\lVert\\bar{S}_{n_k}\\right\\rVert_{\\pi}\\geq c\\exp(-(M-1)K(\\Omega))\\quad\\forall k\\geq 1,\n\\end{align*}\nwhere $K(\\Omega):=\\log\\left(\\frac{2}{1-\\cos|\\Omega|}\\right)$ and $\\lVert\\cdot\\rVert_{\\pi}$ is the projective tensor norm induced from the Euclidean space $\\mathbb{R}^d$.\n\\end{cor}\n\\begin{proof}\nWithout loss of generality, we assume $\\gamma$ is of length $1$. There are two ways to justify this result.\\newline\nBy Lemma \\ref{arrowandprojnorm}, we know that $\\lVert\\cdot\\rVert_{\\pi}$ is a bigger norm than $\\lVert\\cdot\\rVert_{\\to \\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})}$, hence by Theorem \\ref{lowerbound}, for any $c\\in (0,1)$, there exists a subsequence $(n_k)_{k\\geq 1}\\in\\mathbb{N}$ such that \n\\begin{align*}\nc\\exp(-(M-1)K(\\Omega))\\leq\\left\\lVert\\bar{S}_{n_k}\\right\\rVert_{\\to \\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})}\\leq \\left\\lVert\\bar{S}_{n_k}\\right\\rVert_{\\pi}.\n\\end{align*}\n\\newline\nAn alternative proof directly applies the argument in the proof of Theorem \\ref{lowerbound} to the norm $\\lVert\\cdot\\rVert_{\\pi}$. Note that for any $x\\in\\mathbb{R}^{d^{\\otimes k}}$, and $F:\\mathbb{R}^d\\to \\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})$ as defined in Definition \\ref{hypermatrix}, if $x=\\sum_{i\\in I}v_{i_1}\\otimes\\cdots\\otimes v_{i_k}$ for an indexing set $I$, we have\n\\begin{align*}\n&\\left\\lVert(F\\otimes\\cdots\\otimes F)(x)\\right\\rVert_{\\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})}\\\\\n=&\\left\\lVert\\sum_{i\\in I}F(v_{i_1})\\cdots F(v_{i_k})\\right\\rVert_{\\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})}\\\\\n\\leq&\\sum_{i\\in I}\\left\\lVert F(v_{i_1})\\right\\rVert_{\\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})}\\cdots\\lVert F(v_{i_k})\\rVert_{\\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})}\\\\\n\\leq&\\left\\lVert F\\right\\rVert^k_{\\text{Hom}(\\mathbb{R}^d,\\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1}))}\\sum_{i\\in I}\\left\\lVert v_{i_1}\\right\\rVert\\cdots\\left\\lVert v_{i_k}\\right\\rVert,\n\\end{align*}\nfor an abritrary representation of $x$. Hence\n\\begin{align*}\n\\left\\lVert (F\\otimes\\cdots\\otimes F)(x)\\right\\rVert_{\\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})}\\leq\\left\\lVert F\\right\\rVert^k_{\\text{Hom}(\\mathbb{R}^d,\\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1}))}\\lVert x\\rVert_{\\pi}\\leq \\lVert x\\rVert_{\\pi}. \n\\end{align*}\nTherefore when we equip $(\\mathbb{R}^d)^{\\otimes k}$ with $\\lVert\\cdot\\rVert_{\\pi}$, we have\n\\begin{align*}\n\\lVert (F\\otimes\\cdots\\otimes F)\\rVert_{\\text{Hom}\\left(\\left(\\mathbb{R}^d\\right)^{\\otimes k}, \\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})\\right)}\\leq 1.\n\\end{align*}\nAs in the proof of Theorem \\ref{lowerbound}, and if we use $\\lVert\\cdot\\rVert_{\\pi}$ instead of $\\lVert\\cdot\\rVert_{\\to \\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})}$, (\\ref{poissonineqderive}) becomes\n\\begin{align*}\n&\\exp(\\alpha-(M-1)K(\\Omega))\\\\\n\\leq&\\left\\lVert\\Gamma_{\\alpha}\\right\\rVert_{\\text{Hom}(\\mathbb{R}^{d+1},\\mathbb{R}^{d+1})}\\\\\n\\leq&\\sum_{n=0}^\\infty\\alpha^n\\left\\lVert\\int_{00$ with the modulus of continuity of its derivative $\\delta(\\epsilon)=o(\\epsilon^{3\/4})$, then\n\\begin{align*}\nL^{-k}k!\\left\\lVert\\int_{00$ is the number of linear pieces of $\\gamma$. 
Then by Lemma \\ref{normrelation}, \n\\begin{align*}\n&\\left\\lVert x_{\\theta,n_k}-y_{\\theta,n_k}\\right\\rVert_{2}\\\\\n=&\\frac{\\left\\lVert I_{p,n_k}(x_{\\theta,n_k})-I_{p,n_k}(y_{\\theta,n_k})\\right\\rVert_{\\pi}}{\\left\\lVert\\bar{S}_{n_k}\\right\\rVert_{\\pi}}\\\\\n\\leq&\\frac{4\\epsilon^\\gamma_{\\theta,n_k}}{\\exp(-(M-1)K(\\Omega))}.\n\\end{align*}\nSince $\\epsilon_{\\theta,n_k}\\rightarrow0$ as $k\\rightarrow\\infty$, we have $\\lVert x_{\\theta,n_k}-y_{\\theta,n_k}\\rVert_{2}\\rightarrow0$ as $k\\rightarrow\\infty$.\n\\end{proof}\nWe are then able to derive a corollary which is more useful for computation.\n\\begin{cor}\\label{derapprox}\nAssume $\\gamma:[0,1]\\rightarrow\\mathbb{R}^d$ is a non-degenerate piecewise linear path with derivative $f:(0,1)\\rightarrow\\mathbb{R}^d$ such that $\\lVert f(t)\\rVert_2=1$ for all $t\\in(0,1)$ if defined. Assume $\\gamma$ is differentiable at $\\theta\\in(0,1)$. For $n\\geq1$, choose $p\\in\\{1,\\cdots,n+1\\}$ such that $p=\\left\\lfloor\\theta(n+2)\\right\\rfloor$. Define\n\\begin{equation}\\label{optimisationsln}\nx^*_{\\theta,n}:=argmin_{x\\in\\mathbb{R}^d,\\left\\lVert x\\right\\rVert_2=1}\\left\\lVert I_{p,n}(x)-\\bar{S}_{n+1}\\right\\rVert_{\\pi}.\n\\end{equation}\nThen there exists at least a subsequence $(n_k)_{k\\geq 1}\\in\\mathbb{N}$ such that $x^*_{n_k}$ converges to $f(\\theta)$ as $k$ increases. \n\\end{cor}\n\\begin{proof}\nBy Theorem \\ref{setconverge}, we know that there exists a subsequence $(n_k)_{k\\geq 1}\\in\\mathbb{N}$ such that for all $k\\geq 1$, for all $x_{\\theta,n_k}, y_{\\theta,n_k}\\in A^\\gamma_{\\theta, n_k}$, $\\lVert x_{\\theta,n_k}-y_{\\theta,n_k}\\rVert_{2}\\rightarrow0$ as $k\\rightarrow\\infty$. We know that $f(\\theta)\\in A^\\gamma_{\\theta, n_k}$, and $\\left\\lVert I_{p,n_k}(x^*_{\\theta,n_k})-\\bar{S}_{n_k+1}\\right\\rVert_{\\pi}\\leq\\epsilon^\\gamma_{\\theta,n_k}$ due to the fact that $x^*_{\\theta,n_k}$ gives the shortest distance between $I_{p,n_k}(x)$ and $\\bar{S}_{n_k+1}$ among all $x\\in\\mathbb{R}^d$ such that $\\lVert x\\rVert_2=1$. Therefore $x^*_{\\theta,n_k}\\in A^\\gamma_{\\theta,n_k}$, hence $\\left\\lVert x^*_{\\theta,n_k}-f(\\theta)\\right\\rVert_{2}\\rightarrow0$ as $k\\rightarrow\\infty$.\n\\end{proof}\nWe can also develop such an algorithm for another set of paths. First we recall the following theorem by Hambly and Lyons \\cite{hambly2010uniqueness}.\n\\begin{thm}[Hambly and Lyons, Theorem 9 \\cite{hambly2010uniqueness}]\\label{hamblylyonslimit}\nLet $J$ be a closed and bounded interval. Let $\\gamma:J\\to\\mathbb{R}^d$ be a continuous path of finite length $\\ell>0$. Recall that the modulus of continuity of the derivative is defined as $\\delta(h):=\\sup_{|u-v|\\leq h}\\left\\lVert\\dot{\\gamma}(u)-\\dot{\\gamma}(v)\\right\\rVert_2$. If $\\delta(h)=o\\left(h^{3\/4}\\right)$, then\n\\begin{align*}\n\\ell^kk!\\left\\lVert\\int_{00$ and differentiable at $\\theta\\in(a,b)$, we can slightly change (\\ref{optimisationsln}) and obtain an approximation to the derivative of $\\gamma$ when it is parametrised at unit speed when we choose the position of insertion $p$ appropriately, even if the original speed of parametrisation is unknown. Moreover the same changes apply to the result of Theorem \\ref{differentiablepathreconstruction}.\n\\end{lem}\n\\begin{proof}\nLet the function $\\phi:[a,b]\\to[u,v]$ be such that the path $\\tilde{\\gamma}:=\\gamma\\circ\\phi$ is parametrised at unit speed.\\newline\nWe first try to determine what value $p$ should be, i.e. 
the position at which the element shall be inserted into the $n$-th level of the normalised signature of $\\tilde{\\gamma}$. For any norm which satisfies properties stated in Definition \\ref{norm}, if we insert $x\\in\\mathbb{R}^d$ into the $n$-th level of the signature of $\\tilde{\\gamma}$, then \n\\begin{align*}\n&L^{-(n+1)}\\bigg\\lVert\\int_{u 0$, we have $\\lambda> 0$, then from above we can see that $\\left(\\frac{y_i}{\\sqrt{\\sum_{j=1}^d y_j^2}}\\right)_{i=1,\\cdots,d}$ is the global minimum. Hence the solution to (\\ref{l2minimisation}) is unique by the way our optimisation problem is proposed. Finally we get the minimum $x^*$ by $x=VV^Tx$\n\\end{proof}\n\\begin{cor}\\label{l2uniquenessnotunitspeed}\nAssume a tree-reduced continuous bounded-variation path $\\gamma:[a,b]\\to\\mathbb{R}^d$ is of length $L>0$ and differentiable at $\\theta\\in(a,b)$, and suppose $\\gamma$ is parametrised at an unknown speed. Then there exists a unique solution to problem\n\\begin{equation}\\label{notunitspeedmatrix}\n\\min_{x\\in\\mathbb{R}^d, \\lVert x\\rVert_2=1}\\left\\lVert LAx-b\\right\\rVert_2,\n\\end{equation}\nwhere $A$ and $b$ are as described in (\\ref{l2minimisation}).\n\\end{cor}\n\\begin{proof}\nWe have seen from Lemma \\ref{differentscale} that if we do not know the speed of parametrisation of the path, we can solve the optimisation problem (\\ref{notunitspeedopt}) to get a approximation of the derivative of the path. If we use $\\ell^2$ norm, then (\\ref{notunitspeedopt}) becomes (\\ref{notunitspeedmatrix}). Note the only difference between (\\ref{notunitspeedmatrix}) and (\\ref{l2minimisation}) is the constant $L$ in front of the matrix $A$, therefore a similar analysis as in Proposition \\ref{l2uniqueness} applies, and (\\ref{notunitspeedmatrix}) admits a unique solution.\n\\end{proof}\nWe now demonstrate some examples of inverting the signature of a path by solving (\\ref{l2minimisation}). All of the following computation is done in C++, and the graphs are plotted in MATLAB. The computation of signatures used is done via the C++ library \\emph{Libalgebra} \\cite{libalgebra}. The matrix computation algorithms used are from \\emph{LAPACK} \\cite{Anderson:1999:LUG:323215}, and the version used is provided by Intel Math Kernel Library.\n\\begin{eg}[Semicircle]\nLet $\\gamma:[0,1]\\rightarrow\\mathbb{R}^2$ be the path of a semicircle, i.e. $\\gamma^{(1)}_t=\\frac{1}{\\pi}\\cos(\\pi t)$, $\\gamma^{(2)}_t=\\frac{1}{\\pi}\\sin(\\pi t)$ for $t\\in [0,1]$. If we use $\\ell^2$ norm, we can use the formulae obtained in Proposition \\ref{l2uniqueness} to get an approximation to the derivative of the path at different time points. Thus we are able to approximate the increments over subintervals by Mean Value Theorem, as shown in Figure \\ref{cpp_lagrange_semicircle}. We can see that using higher levels of signature gives better approximations to the true path.\n\\end{eg}\n\\begin{figure}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=\\textwidth]{cpp_lagrange_semicircle.pdf}\n\\caption{Reconstruction of a semicircle under $\\ell^2$ norm, where $n$ is the level of signature used}\\label{cpp_lagrange_semicircle}\n\\end{figure}\n\\begin{eg}[Circle]\nAssume in this example $\\gamma:[0,1]\\rightarrow\\mathbb{R}^2$ is the path of a circle such that $\\gamma_t=(\\frac{1}{2\\pi}\\cos(2\\pi t),\\frac{1}{2\\pi}\\sin(2\\pi t))$ for $t\\in[0,1]$. 
Again we used the formulae obtained in Proposition \\ref{l2uniqueness} to get an approximation to the derivative at different times in $\\ell^2$ norm, therefore an approximation of the increments over the sub-intervals, as shown in Figure \\ref{cpp_lagrange_circle}.\n\\end{eg}\n\\begin{figure}\n\\centering \n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=\\textwidth]{cpp_lagrange_circle.pdf}\n\\caption{Reconstruction of a circle under $\\ell^2$ norm, where $n$ is the level of signature used}\\label{cpp_lagrange_circle}\n\\end{figure}\n\\begin{eg}[Digit `8']\\label{digit8egch5}\nOne interesting case to consider is when the path is self-crossing. A good example of this kind is digits.\\newline\nThe dataset we use is \\emph{Pen-Based Recognition of Handwritten Digits Data Set} from UC Irvine Machine Learning Repository \\cite{Dua:2017}, which is a digit database by collecting 250 handwritten digit samples from 44 writers. The dataset records the $(x,y)$ coordinates on the $2$-dimensional plane as the participants write. The raw data captured consists of integer values between $0$ and $500$, and then a resampling algorithm is applied so that the points are regularly spaced in arc length.\\newline\nWe have taken one sample of the digit `8' from the training data, and normalise the input vectors so that they consist of values in $[0,1]$. Note that the path now is not necessarily parametrised at unit speed. In this case, we can solve a slightly altered optimisation problem by the result of Lemma \\ref{differentscale}, therefore we need an approximation of the length of the path. Due to the conjecture in \\cite{chang2018super}, we can approximate the length of the path by taking the $n$-th root of the $n$-th level of the signature multiplied by $n!$.\\newline\nWe then reconstruct the underlying path using the method of Lagrange multipliers to approximate the derivative of the path at different points by the results of Proposition \\ref{l2uniqueness} and Corollary \\ref{l2uniquenessnotunitspeed}, and use splines to smooth the derivatives, and then integrate over $[1\/(n+2),(n+1)\/(n+2)]$ in MATLAB to approximate the underlying path, where $n$ is the level of the lower level signature used. Compared to the underlying path in Figure \\ref{eightrue}, we can see from Figure \\ref{eight4}, \\ref{eight5}, \\ref{eight6}, \\ref{eight7}, \\ref{eight8}, \\ref{eight9}, and \\ref{eight10} that overall we get better approximations when we use higher levels of the signature of the path. Note that the paths reconstructed are at different scales from the underlying path. This is because we have reconstructed the path parametrised at unit speed, as shown in Lemma \\ref{differentscale}. Also note that if $\\gamma:[u,u+L]\\to\\mathbb{R}^d$ is a path parametrised at unit speed and of length $L$, then\n\\begin{align*}\n&\\int_{u}^{u+L}\\dot{\\gamma}(t)\\mathrm{d}t\\\\\n=&L\\int_0^1\\dot{\\gamma}(Ls+u)\\mathrm{d}s,\n\\end{align*}\ntherefore the path obtained is the underlying path parametrised at unit speed and scaled by $1\/L$. 
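Before closing this example, we sketch the constrained minimisation that underlies all of these reconstructions. The snippet below is an illustration rather than the C++/Libalgebra implementation used for the figures: the matrix $A$ and vector $b$, which in practice are the flattened insertion map $x\mapsto I_{p,n}(x)$ and the flattened level $\bar{S}_{n+1}$, are replaced by random placeholders, the length $L$ is assumed known (or estimated as described above), and a simple parametrisation of the unit circle in $\mathbb{R}^2$ replaces the Lagrange-multiplier formulae of Proposition \ref{l2uniqueness}.
\begin{verbatim}
# Illustration of min_{|x|_2 = 1} || L*A x - b ||_2 by a grid search over the
# unit circle (d = 2); A, b and L are placeholders, not data from the digit sample.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2 ** 7, 2))   # stands for the flattened map x -> I_{p,n}(x)
b = rng.standard_normal(2 ** 7)        # stands for the flattened level S_bar_{n+1}
L = 1.0                                # assumed (or estimated) length of the path

phis = np.linspace(0.0, 2 * np.pi, 20001)
X = np.stack([np.cos(phis), np.sin(phis)])             # candidate unit vectors
residuals = np.linalg.norm(L * (A @ X) - b[:, None], axis=0)
x_star = X[:, residuals.argmin()]
print(x_star, residuals.min())
\end{verbatim}
For $d=2$ this brute-force search is already adequate; in higher dimensions the eigenvalue formulation of Proposition \ref{l2uniqueness} is preferable.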
The shapes of the reconstructed paths are not affected even though we have a different speed of parametrisation.\n\\end{eg}\n\\begin{figure}\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{eight_truepath.pdf}\n\\caption{Underlying path of the digit `8'}\n\\label{eightrue}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{cpp_lagrange_digit8n=4.pdf}\n\\caption{Reconstruction using signature level 4}\n\\label{eight4}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{cpp_lagrange_digit8n=5.pdf}\n\\caption{Reconstruction using signature level 5}\n\\label{eight5}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{cpp_lagrange_digit8n=6.pdf}\n\\caption{Reconstruction using signature level 6}\n\\label{eight6}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{cpp_lagrange_digit8n=7.pdf}\n\\caption{Reconstruction using signature level 7}\n\\label{eight7}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{cpp_lagrange_digit8n=8.pdf}\n\\caption{Reconstruction using signature level 8}\n\\label{eight8}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{cpp_lagrange_digit8n=9.pdf}\n\\caption{Reconstruction using signature level 9}\n\\label{eight9}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{cpp_lagrange_digit8n=10.pdf}\n\\caption{Reconstruction using signature level 10}\n\\label{eight10}\n\\end{subfigure}\n\\caption{Reconstruction of the digit `8' using the insertion method}\n\\end{figure}\n\n\\begin{eg}[Robustness of the insertion method]\nIn this example, we show that we can build a pipeline to invert the signatures of paths by the insertion method. We arbitrarily choose 20 samples from the training set (consisting of handwritten digits by 30 writers) of the Pen-Based Recognition of Handwritten Digits Data Set \\cite{Dua:2017} and normalise the data as described in Example \\ref{digit8egch5}. Then we reconstruct the underlying path using signature level $9$ and $10$ using the method of Lagrange multipliers as described in Proposition \\ref{l2uniqueness} and Corollary \\ref{l2uniquenessnotunitspeed}, and obtain Figure \\ref{robustdataset-1}, \\ref{robustdataset-2}, \\ref{robustdataset-3}, \\ref{robustdataset-4} and \\ref{robustdataset-5}. Note we export the derivatives computed in C++ into MATLAB, and use the splines to approximate the derivatives, and then unlike Example \\ref{digit8egch5}, we integrate the splines over $[0,1]$. This is because signature level $9$ is relatively higher than most of the signature levels used in Example \\ref{digit8egch5}, so the splines are supposed to behave better at extrapolation. 
We can see that the insertion method is in general quite robust, however it may not be able to give an accurate approximation at the corner of the path.\n\\end{eg}\n\n\\begin{figure}[t]\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample9UnderlyingDigit.pdf}\n\\caption{Sample 9, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample9C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 9, reconstructed digit}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample12UnderlyingDigit.pdf}\n\\caption{Sample 12, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample12C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 12, reconstructed digit}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample17UnderlyingDigit.pdf}\n\\caption{Sample 17, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample17C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 17, reconstructed digit}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering \n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample38UnderlyingDigit.pdf}\n\\caption{Sample 38, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample38C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 38, reconstructed digit}\n\\end{subfigure}\n\\caption{Reconstruction of digits from the data set \\cite{Dua:2017} using signature level $9$ and $10$}\n\\label{robustdataset-1}\n\\end{figure}\n\\clearpage\n\n\\begin{figure}[t]\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample41UnderlyingDigit.pdf}\n\\caption{Sample 41, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample41C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 41, reconstructed digit}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample51UnderlyingDigit.pdf}\n\\caption{Sample 51, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample51C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 51, reconstructed digit}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample55UnderlyingDigit.pdf}\n\\caption{Sample 55, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample55C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 55, reconstructed digit}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample62UnderlyingDigit.pdf}\n\\caption{Sample 62, underlying 
digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample62C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 62, reconstructed digit}\n\\end{subfigure}\n\\caption{Reconstruction of digits from the data set \\cite{Dua:2017} using signature level $9$ and $10$}\n\\label{robustdataset-2}\n\\end{figure}\n\\clearpage\n\n\\begin{figure}[t]\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample64UnderlyingDigit.pdf}\n\\caption{Sample 64, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample64C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 64, reconstructed digit}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample70UnderlyingDigit.pdf}\n\\caption{Sample 70, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample70C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 70, reconstructed digit}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample71UnderlyingDigit.pdf}\n\\caption{Sample 71, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample71C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 71, reconstructed digit}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample77UnderlyingDigit.pdf}\n\\caption{Sample 77, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample77C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 77, reconstructed digit}\n\\end{subfigure}\n\\caption{Reconstruction of digits from the data set \\cite{Dua:2017} using signature level $9$ and $10$}\n\\label{robustdataset-3}\n\\end{figure}\n\\clearpage\n\n\\begin{figure}[t]\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample81UnderlyingDigit.pdf}\n\\caption{Sample 81, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample81C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 81, reconstructed digit}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample86UnderlyingDigit.pdf}\n\\caption{Sample 86, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample86C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 86, reconstructed digit}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample91UnderlyingDigit.pdf}\n\\caption{Sample 91, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 
10cm},clip,width=0.9\\linewidth]{Sample91C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 91, reconstructed digit}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample100UnderlyingDigit.pdf}\n\\caption{Sample 100, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 4cm 10cm},clip,width=0.9\\linewidth]{Sample100C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 100, reconstructed digit}\n\\end{subfigure}\n\\caption{Reconstruction of digits from the data set \\cite{Dua:2017} using signature level $9$ and $10$}\n\\label{robustdataset-4}\n\\end{figure}\n\\clearpage\n\n\\begin{figure}[t]\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample107UnderlyingDigit.pdf}\n\\caption{Sample 107, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample107C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 107, reconstructed digit}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample112UnderlyingDigit.pdf}\n\\caption{Sample 112, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample112C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 112, reconstructed digit}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample125UnderlyingDigit.pdf}\n\\caption{Sample 125, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample125C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 125, reconstructed digit}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample127UnderlyingDigit.pdf}\n\\caption{Sample 127, underlying digit}\n\\end{subfigure}%\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[trim={4cm 10cm 3cm 10cm},clip,width=0.9\\linewidth]{Sample127C++LagrangePathDigitDern=9.pdf}\n\\caption{Sample 127, reconstructed digit}\n\\end{subfigure}\n\\caption{Reconstruction of digits from the data set \\cite{Dua:2017} using signature level $9$ and $10$}\n\\label{robustdataset-5}\n\\end{figure}\n\n\\begin{rmk}\nFrom a computational point of view, in general if we want to use the insertion method to invert the signature, we need a nonlinear optimisation solver. However most of such solvers require a good initial guess. 
Hence when doing computation, we need to keep in mind that such factors may affect the results.\\newline\nIf we compare the insertion method with the symmetrisation method described in \\cite{chang2017signature}, from computation we saw that the insertion method is better in terms of efficiency, but the symmetrisation method gives more accurate approximation results.\n\\end{rmk}\n\\section{Conclusions}\nIn this article we have developed a practical algorithm for inverting the signature of a path by inserting elements into terms of the signature and comparing with other terms in the signature, and we have demonstrated computational results for inverting the signature of a piecewise linear path. In essence, the insertion algorithm depends on the relation\n\\begin{align*}\n\\left\\lVert x-y\\right\\rVert=\\frac{\\left\\lVert I_{p,n}(x)-I_{p,n}(y)\\right\\rVert}{\\left\\lVert \\bar{S}_n\\right\\rVert}.\n\\end{align*}\nTherefore, there is a possibility that the insertion method can be extended for inverting the signature of a more complicated path if $\\left\\lVert I_{p,n}(x)-I_{p,n}(y)\\right\\rVert$ decays faster than the norm of the normalised signature, $\\left\\lVert \\bar{S}_n\\right\\rVert$.\\newline\nMoreover, we can see from the analysis that understanding the decay of the signature can be very helpful for signature inversion, therefore finding a lower bound for the terms in the signature of a path has its impacts on inverting the signature.\\newline\nIn conclusion, the insertion method described in this article has potential in inverting the signature of a more general path, which is an interesting topic to study.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn this paper, we are interested in some rigidity results on the holomorphic automorphism ${f}\\colon X\\to X$ of a projective hyperk\\\"ahler manifold $X$. By a hyperk\\\"ahler manifold $X$ we mean a compact K\\\"ahler manifold $X$ which is simply connected and the group $H^{2,0}(X,\\C)$ is generated by an everywhere nondegenerate holomorphic 2-form $\\Omega$ \\cite{oguiso2009}. Hyperk\\\"ahler manifolds are of interest from the classification result of Beauville \\cite[Th\\'eor\\`eme 1]{Beauville83}\\cite[Proposition 1]{Beauville85} for compact K\\\"ahler manifolds with zero first Chern class.\n\nOne of the first examples of hyperk\\\"ahler manifolds \\cite[\\S{}7]{Beauville83} is constructed from complex tori, now known as generalized Kummer varieties \\cite[\\S{}21.2]{ghj01}. Likewise, one can define a holomorphic automorphism $(X,{f})$ on a hyperk\\\"ahler manifold which originates from that on a complex torus. Extending this idea, we use the term \\emph{Kummer example}, following \\cite[Definition 1.3]{CantatDupont2020}:\n\\begin{definition}[Kummer Example]\n Let $X$ be a hyperk\\\"ahler manifold and let ${f}$ be a holomorphic automorphism of $X$. 
The pair $(X,{f})$ is a (hyperk\\\"ahler) \\emph{Kummer example} if there exist\n\\begin{itemize}\n\\item a birational morphism $\\phi\\colon X\\to Y$ onto an orbifold $Y$,\n\\item a finite orbifold cover $\\pi\\colon \\mathbb{T}\\to Y$ by a complex torus $\\mathbb{T}$, and\n\\item an automorphism $\\widetilde{f}$ of $Y$ and an automorphism $A$ of $\\mathbb{T}$ such that\n\\[\\widetilde{f}\\circ\\phi = \\phi\\circ {f}\\quad\\text{ and }\\quad \\widetilde{f}\\circ\\pi=\\pi\\circ A.\\]\n\\end{itemize}\n\\end{definition}\n\nAlthough the meaning of the words `orbifold' and `orbifold cover' may vary across the literature, we use the terms as in \\cite[\\S{13.2}]{Thurston}. By ibid., $Y$ is good in the sense that it is given by a quotient $\\mathbb{T}\/\\Gamma$ by a finite group $\\Gamma$. The description of $Y$ in Theorem \\ref{lem:00} below is thus general enough to cover the orbifolds of interest.\n\nIn case $X$ is a projective complex surface or a K3 surface, results like \\cite{CantatDupont2020} or \\cite{FT18} imply that, if the volume measure on $X$ is a measure of maximal entropy for the dynamics $(X,{f})$, then the pair $(X,{f})$ is necessarily a Kummer example. The aim of this paper is to generalize this to the projective hyperk\\\"ahler case:\n\n\\begin{theorem}\n \\label{lem:00}\n Let $X$ be a projective hyperk\\\"ahler manifold. Let ${f}\\colon X\\to X$ be a holomorphic automorphism that has positive topological entropy.\n Suppose the volume form is an ${f}$-invariant measure of maximal entropy.\n Then the underlying hyperk\\\"ahler manifold $X$ is a normalization of a torus\n quotient, and ${f}$ is induced from a hyperbolic affine-linear transformation on that\n torus quotient.\n\n That is, if $\\dim_\\C X=2n$, there exist a complex torus $\\mathbb{T}=\\C^{2n}\/\\Lambda$ and a \n finite group $\\Gamma$ of toral automorphisms \n such that $X$ is the normalization of $\\mathbb{T}\/\\Gamma$, and ${f}$ is induced from a \n hyperbolic affine-linear transformation \n $A\\colon\\mathbb{T}\/\\Gamma\\to\\mathbb{T}\/\\Gamma$.\n\\end{theorem}\n\n\\subsection{Examples of Hyperk\\\"ahler Kummer Examples}\n\nBefore going through the proof of the main theorem, we list some examples that a reader may have in mind. These examples are taken from \\cite[\\S{}3.3-3.4]{LoB17}, and we examine whether each example\n\\begin{itemize}\n \\item is actually a (hyperk\\\"ahler) Kummer example, and\n \\item has the volume as a measure of maximal entropy.\n\\end{itemize}\n\nThroughout this section, following \\cite[\\S{}3]{LoB17}, we denote by $T$ a 2-dimensional complex torus and by $f_T\\colon T\\to T$ a hyperbolic automorphism of it; by hyperbolic we mean that $h:=h_{\\mathrm{top}}(f_T)>0$ (cf. \\cite[Corollary 1.23]{LoB17}). Let $(f_T,\\cdots,f_T)\\colon T^n\\to T^n$ be the product of $n$ copies of $f_T$. From \\cite[Lemma 3.1]{LoB17}, it is known that $(f_T,\\cdots,f_T)$ has unstable and stable foliations $\\mathcal{F}^+$ and $\\mathcal{F}^-$, obtained by taking the $n$-fold product of those for $(T,f_T)$.\nThe topological entropy of $(T^n,(f_T,\\cdots,f_T))$ is $nh$; cf. \\cite[Proposition 3.1.7(4)]{KatokHasselblatt}.
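\n\nFor a concrete model of such a pair $(T,f_T)$ (a standard illustrative example, not taken from \\cite{LoB17}), one may take $T=\\C^2\/\\Lambda$ with $\\Lambda=(\\Z+\\sqrt{-1}\\,\\Z)^2$, and let $f_T$ be induced by the $\\C$-linear map with matrix\n\\[M=\\begin{pmatrix}2 & 1\\\\ 1 & 1\\end{pmatrix},\\]\nwhich preserves $\\Lambda$. The eigenvalues of $M$ are $(3\\pm\\sqrt{5})\/2$, each occurring twice among the eigenvalues of the underlying real-linear map on $\\R^4$, so that\n\\[h=h_{\\mathrm{top}}(f_T)=2\\log\\frac{3+\\sqrt{5}}{2}\\approx 1.92>0,\\]\nand $f_T$ dilates the two complex eigenlines by exactly $e^{h\/2}$ and $e^{-h\/2}$. The following snippet is only a numerical sanity check of these values, not part of any argument:\n\\begin{verbatim}\nimport numpy as np\nM = np.array([[2.0, 1.0], [1.0, 1.0]])\nlam = np.linalg.eigvals(M).max()  # (3 + sqrt(5))\/2, about 2.618\nh = 2.0 * np.log(lam)             # h_top(f_T), about 1.925\nprint(lam, np.exp(h \/ 2.0))       # e^{h\/2} coincides with lam\n\\end{verbatim}\n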
\nMoreover, combining \\cite[Proposition 1.12]{LoB17} and a theorem of Gromov--Yomdin \\cite[Theorem 1.10]{LoB17}, a Kummer example $(X,f)$ with associated toral automorphism $(\\mathbb{T},A)$ has the same topological entropy as the toral automorphism: $h_{\\mathrm{top}}(X,f)=h_{\\mathrm{top}}(\\mathbb{T},A)$.\n\n\\subsubsection{Generalized Kummer Variety}\n\\label{subsec:gen-kummer-var}\n\nDenote by $K_n(T)$ the $2n$-dimensional generalized Kummer variety, with the notation following \\cite[\\S{}21.2]{ghj01}.\n\nFollowing \\cite[\\S{}3.4]{LoB17}, $f_T$ is known to induce an automorphism $K_n(f_T)\\colon K_n(T)\\to K_n(T)$. By \\cite[Lemma 3.12]{LoB17}, we have $(X,f)=(K_n(T),K_n(f_T))$, with\n\\begin{itemize}\n \\item $(Y,\\widetilde{f})=(T^n\/\\mathfrak{S}_{n+1},(f_T,\\cdots,f_T)\/\\mathfrak{S}_{n+1})$,\n \\item $(\\mathbb{T},A)=(T^n,(f_T,\\cdots,f_T))$, and\n \\item the quotient map $q\\colon\\mathbb{T}\\to Y$, which is birationally equivalent to a generically finite meromorphic map $\\pi\\colon\\mathbb{T}\\dashrightarrow X$. (That birational equivalence $\\phi\\colon X\\to Y$ may be defined on all of $X$.)\n\\end{itemize}\nHere, $\\mathfrak{S}_{n+1}$ is the symmetric group on $(n+1)$ letters, acting on $T^n$ by restricting the natural action $\\mathfrak{S}_{n+1}\\ACTS T^{n+1}$ to\n\\[T^n=\\{(t_0,t_1,\\cdots,t_n)\\in T^{n+1}\\mid t_0+t_1+\\cdots+t_n=0\\}.\\]\nTherefore $(X,f)$ is a Kummer example, with associated toral automorphism $(T^n,(f_T,\\cdots,f_T))$.\n\nBy loc. cit., the foliations $\\mathcal{F}^+$ and $\\mathcal{F}^-$ on $T^n$ induce foliations $\\mathcal{F}^+_X$ and $\\mathcal{F}^-_X$ on $X$, called the unstable and stable foliations respectively. Under the action of $(f_T,\\cdots,f_T)$, each vector tangent to $\\mathcal{F}^+$ or $\\mathcal{F}^-$ on $T^n$ is dilated by $e^{h\/2}$ or $e^{-h\/2}$ respectively. The same rates apply for $\\mathcal{F}^+_X$ and $\\mathcal{F}^-_X$ on $X\\setminus\\mathrm{Sing}(\\mathcal{F}^\\pm_X)$.\n\nThis gives that the Lyapunov exponents of $(X,f)$ under the volume are $\\pm h\/2$, each with multiplicity $2n$ on the real tangent bundle. The Ledrappier--Young formula \\cite[Corollary G]{LY85II} then yields $h_{\\mathrm{vol}}(X,f)=nh$. Now since $(X,f)$ is a Kummer example, its topological entropy is also $nh$, as that of $(T^n,(f_T,\\cdots,f_T))$ is $nh$. Hence the volume measure is a measure of maximal entropy.\n\n\\subsubsection{Hilbert Scheme of a Kummer Surface}\n\nDenote by $K_1(T)$ the Kummer surface of the 2-dimensional complex torus $T$. Then one has the Hilbert scheme $X:=K_1(T)^{[n]}$ of $n$ points and the induced map $f:=K_1(f_T)^{[n]}\\colon X\\to X$ coming from $f_T\\colon T\\to T$ (cf. \\cite[Proposition 3.10]{LoB17}).\n\nThe hyperk\\\"ahler manifold $X=K_1(T)^{[n]}$ is obtained by normalizing $T^n\/\\Gamma$, where $\\Gamma$ is generated by\n\\begin{itemize}\n \\item[-] an involution $\\theta\\colon T^n\\to T^n$, $\\theta(t_1,t_2,\\cdots,t_n)=(-t_1,t_2,\\cdots,t_n)$, and\n \\item[-] the symmetric group $\\mathfrak{S}_n\\ACTS T^n$ permuting the coordinates.\n\\end{itemize}\n(The group $\\Gamma$, generated by involutions, forms the Weyl group of type $B_n$.) The map $(f_T,\\cdots,f_T)$ commutes with $\\Gamma$, and thus induces a map $\\widetilde{f}\\colon T^n\/\\Gamma\\to T^n\/\\Gamma$. The map $f=K_1(f_T)^{[n]}$ then satisfies, with the normalization map $\\phi\\colon X\\to T^n\/\\Gamma$, $\\widetilde{f}\\circ\\phi=\\phi\\circ f$. 
Thus $(X,f)$ is a Kummer example, with associated toral automorphism $(T^n,(f_T,\\cdots,f_T))$.\n\nThe foliations $\\mathcal{F}^\\pm$ on $T^n$ are $\\Gamma$-invariant, hence carrying this to the regular locus of $T^n\/\\Gamma$ and inducing (singular) foliations on $X$, we obtain unstable and stable foliations $\\mathcal{F}^+_X$ and $\\mathcal{F}^-_X$. Arguing as in \\S{\\ref{subsec:gen-kummer-var}}, we see that for $(X,f)$, the volume measure is a measure of maximal entropy.\n\n\\subsubsection{Remark}\n\nSo far we have seen that both of the examples required some extra structures (stable\/unstable holomorphic foliations) to make sure that the volume is an invariant measure of maximal entropy. A partial converse of this will be shown in this paper (\\S{\\ref{sec:stable-unstable-distributions}}): if the volume is an invariant measure of maximal entropy, the unstable and stable holomorphic foliations are defined, and they dilate by some factors $e^{h\/2}$ and $e^{-h\/2}$. (This converse is partial because the singularities are of codimension $\\geq 1$.) We note that this converse is irrelevant to the projectivity.\n\nFor a general hyperk\\\"ahler Kummer example $(X,{f})$, we have a restriction on the eigenvalues of its associated toral automorphism $(\\mathbb{T},A)$. That is, the matrix part of $A$ has $n$ eigenvalues of modulus $e^{h\/2}$ and $n$ eigenvalues of modulus $e^{-h\/2}$, counted with multiplicities. The proof of this fact is based on comparing the dynamical degrees of $(X,{f})$ found by two distinct means, \\cite[Proposition 1.12]{LoB17} and \\cite[Theorem 1.1]{oguiso2009}.\n\n\\subsection{Outline of the Proof}\n\nThe proof starts with the results of \\cite{Yomdin}\\cite{Gromov}, which give that the topological entropy is realized as the spectral radius of the induced map ${f}^\\ast\\colon H^\\bullet(X)\\to H^\\bullet(X)$ on the cohomology ring. More specifically, \\cite[Theorem 1]{oguiso2009} yields that the topological entropy $h_{\\mathrm{top}}({f})$ of ${f}$ equals $nh$, where $e^h$ is the spectral radius of ${f}^\\ast\\colon H^{1,1}(X)\\to H^{1,1}(X)$ on the $(1,1)$-classes. By this we obtain the eigenvectors $[\\eta_+],[\\eta_-]\\in H^{1,1}(X)$ of ${f}^\\ast$ with eigenvalues $e^h,e^{-h}$ respectively.\n\nThe sum of these classes, $[\\eta_+]+[\\eta_-]$ is at best big and nef, and thus represented by an incomplete, Ricci-flat K\\\"ahler metric $\\omega_0$, approximated by complete Ricci-flat K\\\"ahler metrics \\cite[Theorem 1.6]{CT15}. Thanks to the assumption on the measure of maximal entropy, we can use Jensen's inequality to obtain a \\emph{constant} expansion and contraction rate along the unstable and stable directions. The same rate applies backwards in time, and by this we obtain holomorphic unstable and stable distributions.\n\nThe holomorphic unstable and stable distributions gives a product structure on the incomplete metric $\\omega_0$, and some flatness result of \\cite[Lemme 2.2.3(b)]{BFL92} yields that the space $X$ is mostly flat, except on some locus $E\\subset X$.\n\nIt remains to contract the locus $E\\subset X$ and obtain a normal space $Y$ with canonical singularities, flat on its regular locus. 
Applying \\cite[Theorem D]{claudon2020kahler} to $Y$, we establish the Kummer example in question.\n\n\\subsection{Content of each Section}\n\nSection \\ref{sec:prelim} is placed to introduce various notations and structures that we will recall from the hyperk\\\"ahler dynamics $(X,{f})$ of interest.\n\nSections \\ref{sec:analysis-on-approximate-metrics} to \\ref{sec:jensen-inequality-constant-log-singular-values} discuss some properties of the approximate metrics (cf. Lemma \\ref{lem:01}), to derive the uniform expansion and contraction observed for $\\omega_0$ (cf. Corollary \\ref{lem:14})\n\nSections \\ref{sec:stable-unstable-distributions} and \\ref{sec:holomorphicity-stable-distribution} are devoted for the stable and unstable distributions; on how they are defined, and how do we show that they are holomorphic.\n\nSection \\ref{sec:flatness} shows the flat nature of the manifold $X$ under the metric $\\omega_0$. Section \\ref{sec:upshots-of-flatness} explains how flatness readily gives a proof of Theorem \\ref{lem:00}, the main theorem.\n\n\n\\subsection*{Acknowledgements}\nThe author would like to thank Simion Filip, Alex Eskin, Sang-hyun Kim, Seok Hyeong Lee, Junekey Jeon, and Hongtaek Jung for useful discussions, Valentino Tosatti for significant comments on an earlier draft.\nThis material is based on a work supported by the University of Chicago.\n\n\\section{Preliminaries}\n\\label{sec:prelim}\n\nHyperk\\\"ahler manifolds are rich in structures, which leads us to introduce various notations and list basic facts that are required to study them. This section is devoted to that purpose.\n\n\\subsection{Hyperk\\\"ahler Structures}\n\nLet $X$ be the (projective) hyperk\\\"ahler manifold, with $\\dim_\\C X=2n$. Let ${f}\\colon X\\to X$ be a holomorphic automorphism that has positive topological entropy $h_{\\mathrm{top}}({f})>0$. We denote by $\\omega$ a hyperk\\\"ahler metric on $X$, and by $\\Omega$ an everywhere nondegenerate holomorphic 2-form that $X$ should have. We will call $\\Omega$ as a \\emph{holomorphic symplectic form} on $X$.\n\nThis $\\Omega$ is a generator of the $(2,0)$-Hodge group: $H^{2,0}(X,\\C)=\\C.\\Omega$ \\cite[Proposition 23.3]{ghj01}. Moreover, we declare the \\emph{volume form} $\\mathrm{vol}=(\\Omega\\wedge\\overline{\\Omega})^n$ associated to the holomorphic symplectic form; we normalize $\\Omega$ so that $\\mathrm{vol}(X)=1$. This volume form is same as that of the Riemannian geometry on $X$, in the following sense: $\\omega^{2n}=c\\cdot\\mathrm{vol}$ for some constant $c>0$.\n\nInduced from this is the Beauville--Bogomolov--Fujiki quadratic form $q$ \\cite[Definition 22.8]{ghj01}, defined as follows. Suppose $\\alpha\\in H^2(X,\\C)=H^{2,0}(X)\\oplus H^{1,1}(X)\\oplus H^{0,2}(X)$ is decomposed as $\\alpha=c_1\\Omega+\\beta+c_2\\overline{\\Omega}$, where $c_1,c_2\\in\\C$ and $\\beta\\in H^{1,1}(X)$. 
Then, the number $q(\\alpha)$ is defined as\n\\begin{equation}\n \\label{eqn:1.1}\n q(\\alpha):=c_1c_2+\\frac{n}{2}\\int_X\\beta^2(\\Omega\\wedge\\overline{\\Omega})^{n-1}.\n\\end{equation}\nThe Beauville--Bogomolov--Fujiki quadratic form $q$, together with the Beauville--Fujiki relation \\cite[Proposition 23.14]{ghj01}, gives the following formula: given a Ricci-flat K\\\"ahler metric $\\omega'$,\n\\begin{equation}\n \\label{eqn:1.2}\n (\\omega')^{2n}=q(\\omega')^n\\binom{2n}{n}\\cdot(\\Omega\\wedge \\overline{\\Omega})^n,\n\\end{equation}\nas differential forms.
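\n\nAs a quick sanity check of the normalization in \\eqref{eqn:1.1} (not used later), consider the case $n=1$, when $X$ is a K3 surface. Then $\\Omega\\wedge\\Omega=0$ and $\\Omega\\wedge\\beta=0$ (similarly for their conjugates) for degree reasons, so\n\\[\\int_X\\alpha^2=2c_1c_2\\int_X\\Omega\\wedge\\overline{\\Omega}+\\int_X\\beta^2=2c_1c_2+\\int_X\\beta^2=2q(\\alpha),\\]\nusing $\\int_X\\Omega\\wedge\\overline{\\Omega}=\\mathrm{vol}(X)=1$. That is, for $n=1$ the form $q$ is the cup-product pairing of the K3 surface, up to the factor $1\/2$ imposed by our volume normalization.\n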
\n\\subsection{The Eigenclasses}\n\\label{subsec:eigenclass}\n\nAccording to \\cite[Theorem 1.1]{oguiso2009}, one can relate the first dynamical degree $d_1({f})$ of ${f}$ with the topological entropy $h_{\\mathrm{top}}({f})$ in the following manner. The number $d_1({f})$ is defined as the spectral radius of $f^\\ast\\colon H^{1,1}(X,\\R)\\to H^{1,1}(X,\\R)$, and we have\n\\begin{equation}\n \\label{eqn:1.3}\n h_{\\mathrm{top}}({f}) = n\\log d_1({f}).\n\\end{equation}\nFor ease of notation, we denote $\\lambda:= d_1({f})$ and $h:=\\log d_1({f})$. That our automorphism has positive entropy $h_{\\mathrm{top}}({f})>0$ is equivalent to $\\lambda>1$.\n\nFurthermore, as a special property that hyperk\\\"ahler automorphisms enjoy, one has $d_1({f}^{-1})=d_1({f})(=\\lambda)$, as argued in \\cite[\\S{}3]{oguiso2009}. Hence, $({f}^{-1})^\\ast\\colon H^{1,1}(X,\\R)\\to H^{1,1}(X,\\R)$ also has spectral radius $\\lambda$, and its inverse ${f}^\\ast$ has an eigenvalue of modulus $\\lambda^{-1}$.\n\n\\cite[Theorem 1]{oguiso2007} yields that $\\lambda,\\lambda^{-1}$ are not just moduli of eigenvalues, but are themselves eigenvalues of ${f}^\\ast$. Therefore, one has eigenvectors $[\\eta_+],[\\eta_-]\\in H^{1,1}(X,\\R)$ of ${f}^\\ast$ that correspond to the eigenvalues $\\lambda,\\lambda^{-1}$, respectively.\n\nMoreover, for a generic (in fact, any; see Corollary \\ref{lem:spectral-sequence}) K\\\"ahler class $[\\omega]\\in H^{1,1}(X,\\R)$, the sequences $\\lambda^{-n}({f}^n)^\\ast[\\omega]$ and $\\lambda^{-n}({f}^{-n})^\\ast[\\omega]$ respectively converge to some multiple of $[\\eta_+]$ and $[\\eta_-]$. Rescaling $[\\eta_\\pm]$ if necessary, we may thus assume that $[\\eta_\\pm]$ are nef classes. It will be shown later (Corollary \\ref{lem:04}) that their sum, $[\\eta_+]+[\\eta_-]$, is a big class, i.e., $\\int_X(\\eta_++\\eta_-)^{2n}>0$. Based on that knowledge, we normalize $[\\eta_\\pm]$ so that $\\int_X(\\eta_++\\eta_-)^{2n}=1$.\n\n\\subsection{Null Locus and Metric Approximations}\n\nEven if we know that the class $[\\eta_+]+[\\eta_-]$ is big and nef, according to \\cite[Theorem 0.5]{DP04}, at best the class contains a K\\\"ahler current $C$, and there is no guarantee that $[\\eta_+]+[\\eta_-]$ contains a K\\\"ahler metric.\n\nThere are a number of facts studied in \\cite{CT15} about the obstructions to having a genuine K\\\"ahler metric in a big and nef class. First, one introduces the \n\\emph{null locus} $E\\subset X$ of the class $[\\eta_+]+[\\eta_-]$, which is the union of all \nsubvarieties $V\\subset X$ such that $\\int_{V}(\\eta_++\\eta_-)^{\\dim V}=0$ \\cite{CT15}.\n\nAlthough the following is an immediate application of \\cite[Theorem 1.6]{CT15}, we state it as a lemma in order to introduce some notation for later use.\n\\begin{lemma}\n \\label{lem:01}\n Suppose $[\\eta_+]+[\\eta_-]$ is big. \n There exists a smooth, incomplete Ricci-flat K\\\"ahler metric $\\omega_0$ on $X\\setminus E$, and a sequence $\\left(\\omega_k\\right)$ of complete, smooth Ricci-flat \\emph{hyperk\\\"ahler} metrics on $X$ that converges to $\\omega_0$ in the following senses: (i) $[\\omega_k]\\to[\\eta_+]+[\\eta_-]$ in $H^{1,1}(X,\\R)$, and (ii) $\\omega_k\\to\\omega_0$ in $C^\\infty_{\\mathrm{loc}}(X\\setminus E)$.\n\n Moreover, the $\\omega_k$ may be chosen to have unit volume. That is, $\\omega_k^{2n}=\\mathrm{vol}$.\n\\end{lemma}\n\\begin{proof}\n \\cite[Theorem 1.6]{CT15} gives the existence of a smooth, incomplete Ricci-flat K\\\"ahler metric $\\omega_0$ on $X\\setminus E$ such that, for any sequence of K\\\"ahler classes $[\\alpha_k]\\to[\\eta_+]+[\\eta_-]$ in $H^{1,1}(X)$ and the Ricci-flat metrics $\\omega_k\\in[\\alpha_k]$, the $\\omega_k$ converge to $\\omega_0$ in the $C^\\infty_{\\mathrm{loc}}(X\\setminus E)$ topology.\n\n For hyperk\\\"ahler $X$, each K\\\"ahler class $[\\alpha_k]$ contains a unique hyperk\\\"ahler metric \\cite[Theorem 23.5]{ghj01}. Such a metric is necessarily Ricci-flat \\cite[Proposition 4.5]{ghj01}. By the uniqueness of the Ricci-flat metric in a K\\\"ahler class, $\\omega_k$ must be hyperk\\\"ahler as well.\n\n Because the $[\\alpha_k]$ converge to $[\\eta_+]+[\\eta_-]$, a big and nef class, \n the volume $[\\alpha_k]^{2n}$, computed with $\\omega_k$, also converges to a \n positive number. Thus one can normalize the $\\omega_k$ so that they \n have unit volume, i.e., $[\\alpha_k]^{2n}=1$ for all $k$. \n Consequently, as each $\\omega_k$ is Ricci-flat, we have \n $\\omega_k^{2n}=(\\Omega\\wedge\\overline{\\Omega})^{n}=\\mathrm{vol}$, due to \n \\eqref{eqn:1.2}.\n (Note that this normalization is compatible with $\\int_X(\\eta_++\\eta_-)^{2n}=1$ assumed above.)\n\\end{proof}\n\n\\subsection{Invariant Structures}\n\nSo far we have collected various structures already attached to $X$, together with further structures that emerge from the given automorphism ${f}$. Some of these structures are invariant under ${f}$, and they aid the study of the rigidity in question. We list such structures in this section, with some notes explaining why.\n\nThe first in the list is the complex structure $I$. Indeed, ${f}$ is holomorphic\nif and only if $D_x{f}\\circ I_x = I_{{f}(x)}\\circ D_x{f}$ for all $x$, and we \ndevise a shorthand notation\n\\begin{equation}\n \\label{eqn:2.1}\n [D{f},I]_x := D_x{f}\\circ I_x - I_{{f}(x)}\\circ D_x{f},\n\\end{equation}\nto measure `how holomorphic' ${f}$ is---so $[D{f},I]=0$ iff \n${f}$ is holomorphic.\n\nThe second in the list is the holomorphic symplectic form $\\Omega$.\n\n\\begin{lemma}\n \\label{lem:02}\n There exists a constant $k_{f}$, with modulus 1, such that \n ${f}^\\ast\\Omega=k_{f}\\Omega$.\n\\end{lemma}\n\\begin{proof}\n View the group $H^{2,0}(X)=\\C.\\Omega$ as $H^0(X,\\Omega_X^2)=\\C.\\Omega$, \n through the Dolbeault isomorphism. This tells us that any holomorphic section of \n the vector bundle $\\Omega_X^2$ on $X$ is proportional to \n $\\Omega\\colon X\\to\\Omega_X^2$.\n\n Now ${f}$ induces another holomorphic section of $\\Omega_X^2$, by \n ${f}^\\ast\\Omega\\colon X\\xrightarrow{{f}}X\\xrightarrow{\\Omega}\\Omega_X^2$. 
\n This has to be proportional to $\\Omega\\colon X\\to\\Omega_X^2$, thus we have \n ${f}^\\ast\\Omega=k_{f}\\Omega$ for some $k_{f}\\in\\C$.\n Because ${f}$ preserves the volume $(\\Omega\\wedge\\overline{\\Omega})^n$,\n $|k_{f}|=1$ follows.\n\\end{proof}\n\nOne corollary to the above lemma is \n${f}^\\ast(\\Omega\\overline{\\Omega})=\\Omega\\overline{\\Omega}$. \nThus ${f}^\\ast$ acts on $H^{4n}(X)=\\C.(\\Omega\\overline{\\Omega})^n$ trivially.\nSo $\\int_Xf^\\ast\\gamma=\\int_X\\gamma$ for any (closed) $4n$-form $\\gamma$.\n\nThe third in the list is the Beauville--Bogomolov--Fujiki form $q$.\n\n\\begin{lemma}\n \\label{lem:03}\n The Beauville--Bogomolov--Fujiki form $q$ is preserved under ${f}$. That is, \n $q({f}^\\ast\\alpha)=q(\\alpha)$ for any closed 2-form $\\alpha$.\n\\end{lemma}\n\\begin{proof}\n Write $\\alpha=c_1\\Omega+\\beta+c_2\\overline{\\Omega}$, where $c_1,c_2\\in\\C$ \n and $\\beta\\in H^{1,1}(X)$. Then\n ${f}^\\ast\\alpha=c_1({f}^\\ast\\Omega)+({f}^\\ast\\beta)+c_2({f}^\\ast\\overline{\\Omega})=c_1 k_f\\Omega+({f}^\\ast\\beta)+c_2\\overline{k}_f\\overline{\\Omega}$. By \\eqref{eqn:1.1},\n \\begin{align*}\n q({f}^\\ast\\alpha) &= (c_1k_f)(c_2\\overline{k}_f)\n +\\frac{n}{2}\\int_X({f}^\\ast\\beta)^2(\\Omega\\overline{\\Omega})^{n-1} \\\\\n &= c_1c_2|k_f|^2 \n +\\frac{n}{2}\\int_X({f}^\\ast\\beta)^2({f}^\\ast(\\Omega\\overline{\\Omega}))^{n-1} \\\\\n &= c_1c_2\n +\\frac{n}{2}\\int_X{f}^\\ast[\\beta^2(\\Omega\\overline{\\Omega})^{n-1}] \\\\\n &= c_1c_2+\\frac{n}{2}\\int_X\\beta^2(\\Omega\\overline{\\Omega})^{n-1}=q(\\alpha),\n \\end{align*}\n the equation demanded.\n\\end{proof}\n\n\\begin{corollary}\n \\label{lem:04}\n The eigenvectors $[\\eta_+]$ and $[\\eta_-]$ of ${f}^\\ast|H^2(X,\\C)$ satisfy \n $q([\\eta_+])=q([\\eta_-])=0$, and $[\\eta_+]^{n+1}=[\\eta_-]^{n+1}=0$. Moreover, $[\\eta_+]+[\\eta_-]$ is big (and nef).\n\\end{corollary}\n\\begin{proof}\n Applying Beauville--Bogomolov--Fujiki form on both sides of \n ${f}^\\ast[\\eta_+]=\\lambda[\\eta_+]$, we get \n $q([\\eta_+])=q({f}^\\ast[\\eta_+])=q(\\lambda[\\eta_+])=\\lambda^2q([\\eta_+])$. \n Because $\\lambda>1$, $q([\\eta_+])=0$ follows. \n Likewise, from ${f}^\\ast[\\eta_-]=\\lambda^{-1}[\\eta_-]$, we get \n $q([\\eta_-])=\\lambda^{-2}q([\\eta_-])$ and $q([\\eta_-])=0$ follows.\n\n To show that $[\\eta_+]^{n+1}=[\\eta_-]^{n+1}=0$, we refer to \n \\cite[Proposition 24.1]{ghj01}\\cite[Theorem 15.1]{verbitsky1995} \n for the following fact: $\\alpha^{n+1}=0$ if $q(\\alpha)=0$. \n (The `only if' is true and is an easy consequence of the\n Beauville--Fujiki relation $q(\\alpha)^n=c_n\\int_X\\alpha^{2n}$ \n \\cite[Proposition 23.14]{ghj01}.)\n \n To show bigness, it suffices to show that $q([\\eta_+]+[\\eta_-])>0$, thanks to the Beauville--Fujiki relation. Because $[\\eta_+]$ and $[\\eta_-]$ are approximated by K\\\"ahler forms (in $H^{1,1}$), so is $[\\eta_+]+[\\eta_-]$, hence $q([\\eta_+]+[\\eta_-])\\geq 0$, as $q$ has positive values on K\\\"ahler classes \\cite[Corollary 23.11]{ghj01}.\n \n Now suppose $q([\\eta_+]+[\\eta_-])=0$. Then this suffices to show that $q(c_0[\\eta_+]+c_1[\\eta_-])=0$, whenever $c_0,c_1\\in\\R$. But due to the fact that $q$ is a real quadratic form on $H^{1,1}(X,\\R)$ with signature $(1,h^{1,1}-1)$ \\cite[Corollary 23.11]{ghj01}, the set $\\{q=0\\}$ cannot have a linear space with dimension $>1$. 
Since $[\\eta_+]$ and $[\\eta_-]$ are linearly independent, this is a contradiction.\n\n Hence $q([\\eta_+]+[\\eta_-])>0$ follows, and therefore $[\\eta_+]+[\\eta_-]$ is big and nef.\n\\end{proof}\n\nAnother corollary to the invariance of the Beauville--Bogomolov--Fujiki form is that the convergence $\\lambda^{-n}({f}^{\\pm n})^\\ast[\\omega]\\to c\\cdot[\\eta_\\pm]$ holds for \\emph{any} K\\\"ahler class $[\\omega]$, not just generic classes (cf. \\S{\\ref{subsec:eigenclass}}).\n\n\\begin{corollary}\n \\label{lem:spectral-sequence}\n Let $[\\omega]$ be any K\\\"ahler class on $X$. Then as $n\\to\\infty$,\n \\begin{align*}\n \\lambda^{-n}({f}^n)^\\ast[\\omega] &\\to 2q([\\omega],[\\eta_-])[\\eta_+], \\\\\n \\lambda^{-n}({f}^{-n})^\\ast[\\omega] &\\to 2q([\\omega],[\\eta_+])[\\eta_-],\n \\end{align*} in $H^{1,1}(X,\\R)$. Here, the numbers $q([\\omega],[\\eta_\\pm])$ are positive.\n\\end{corollary}\n\\begin{proof}\n We show the first convergence only; the other one is shown similarly.\n\n We first show that $q([\\omega],[\\eta_-])>0$. By the Hodge index theorem as in \\cite[Exercise 23.1]{ghj01}, and approximating $[\\eta_-]$ by K\\\"ahler classes, we first have $q([\\omega],[\\eta_-])\\geq 0$. Now if $q([\\omega],[\\eta_-])=0$, then again by the Hodge index theorem we have $[\\eta_-]=0$ or $q([\\eta_-])<0$. Neither possibility is feasible, due to Corollary \\ref{lem:04} above.\n\n We now show the convergence. Since $\\lambda$ is the spectral radius of ${f}^\\ast|H^{1,1}(X,\\R)$, we see that $\\lambda^{-n}({f}^n)^\\ast[\\omega]$ converges to a multiple of $[\\eta_+]$, possibly zero. Write the multiple as $\\lambda^{-n}({f}^n)^\\ast[\\omega]\\to c\\cdot[\\eta_+]$. This gives, as $n\\to\\infty$,\n \\[q(\\lambda^{-n}({f}^n)^\\ast[\\omega],[\\eta_-])\\to q(c\\cdot[\\eta_+],[\\eta_-])=c\/2.\\]\n On the other hand, by the invariance of $q$ under ${f}$, we also have\n \\begin{align*}\n q(\\lambda^{-n}({f}^n)^\\ast[\\omega],[\\eta_-])&=q([\\omega],\\lambda^{-n}({f}^{-n})^\\ast[\\eta_-]) \\\\\n &= q([\\omega],\\lambda^{-n}\\cdot\\lambda^n[\\eta_-]) = q([\\omega],[\\eta_-])>0.\n \\end{align*}\n Thus $c=2q([\\omega],[\\eta_-])>0$, and the desired convergence follows.\n\\end{proof}\n\nLast in the list is the exceptional locus $E$. We check that it satisfies ${f}(E)\\subset E$. \nRecall that $E$ is defined as the union of the subvarieties $V$ for which \n$\\int_V(\\eta_++\\eta_-)^{\\dim V}=0$. Because $[\\eta_+]$ and $[\\eta_-]$ are nef classes, thanks to \\cite[Theorem 4.5]{DP04} and approximation of $[\\eta_\\pm]$ by K\\\"ahler classes,\nwe have $\\int_V(\\eta_+)^a(\\eta_-)^b\\geq 0$ whenever $a+b=\\dim V$. Hence if $\\int_V(\\eta_++\\eta_-)^{\\dim V}=0$, we have $\\int_V(\\eta_+)^a(\\eta_-)^b=0$.\nBecause ${f}^\\ast[\\eta_\\pm]=\\lambda^{\\pm1}[\\eta_\\pm]$, this implies \n$\\int_{{f}(V)}(\\eta_+)^a(\\eta_-)^b=0$ as well, now integrating over $f(V)$. \nCollecting these, we have $\\int_{{f}(V)}(\\eta_++\\eta_-)^{\\dim V}=0$, \nverifying ${f}(V)\\subset E$.\n\n\\subsection{Ergodicity}\n\\label{subsec:ergodicity}\n\nA corollary to the bigness of $[\\eta_+]+[\\eta_-]$ (Corollary \\ref{lem:04}) is that the volume measure is ergodic and is the unique measure of maximal entropy. This fact is essentially due to \\cite[Theorem 1.2]{DTD12}, and will be proven below.\n\nTo prove this, we first remark that the proof of \\cite[Theorem 1.2]{DTD12} only requires ${f}$ to be `simple' at the subring $\\bigoplus_{p=0}^{2n}H^{p,p}(X,\\C)$. 
That is, this subring admits a unique eigenvalue (of multiplicity 1) of modulus $\\lambda'$, the spectral radius of ${f}$ on the cohomology ring. By \\cite[Theorem 1.1]{oguiso2009}, this $\\lambda'$ equals $\\lambda^n$; it is an eigenvalue of ${f}^\\ast|H^{n,n}(X,\\C)$, and no other subgroup $H^{p,p}(X,\\C)$ has an eigenvalue of modulus $\\geq\\lambda'$.\n\nTherefore, if the spectral radius of ${f}$ on $H^{n,n}(X,\\C)$ has multiplicity 1, then we obtain that ${f}$ is `simple' at the subring $\\bigoplus_{p=0}^{2n}H^{p,p}(X,\\C)$. This assumption may be checked via item 1 of \\cite[Proposition 4.4.1]{DS10}. That is, it suffices to show that there exist Green $(n,n)$-currents \\cite[\\S{}4.2]{DS10} $T^+$ and $T^-$ of ${f}$ and ${f}^{-1}$ respectively, such that $T^+\\wedge T^-$ is a positive nonzero measure.\n\nApply \\cite[Theorem 4.2.1]{DS10} to the strictly dominant space $\\R.[\\eta_+]\\subset H^{1,1}(X,\\R)$. Then we obtain a closed $(1,1)$-current $S^+$ with H\\\"older potentials, which is an eigencurrent: $f^\\ast S^+=\\lambda S^+$.\nSince $[\\eta_+]$ is a nef class, $S^+$ may be taken to be positive.\nLikewise, applying the theorem to $\\R.[\\eta_-]\\subset H^{1,1}(X,\\R)$ with ${f}^{-1}$, we obtain a closed positive $(1,1)$-current $S^-\\in[\\eta_-]$ with $f^\\ast S^-=\\lambda^{-1}S^-$.\n\nThe currents $S^\\pm$ moreover have continuous potentials, so the products $(S^+)^n$ and $(S^-)^n$ make sense (cf. \\cite[\\S{}III.3]{Dem12}), lying in the classes $[\\eta_+]^n$ and $[\\eta_-]^n$ respectively. Writing $T^+=(S^+)^n$ and $T^-=(S^-)^n$, we see that $T^\\pm$ are Green $(n,n)$-currents of ${f}$ and ${f}^{-1}$ respectively. The computation $\\int_XT^+\\wedge T^-=\\int_X\\eta_+^n\\eta_-^n=\\binom{2n}{n}^{-1}\\int_X(\\eta_++\\eta_-)^{2n}=\\binom{2n}{n}^{-1}>0$ thus shows that item 1 of \\cite[Proposition 4.4.1]{DS10} is met.\n\n\\section{Analysis on Approximate Metrics}\n\\label{sec:analysis-on-approximate-metrics}\n\nIn this section, motivated by the dynamical degrees (cf. \\cite[\\S{}2]{oguiso2009}), we study the forms $\\omega_k^{2n-1}\\wedge({f}^N)^\\ast\\omega_k$ from the following two viewpoints: (i) we give local expressions for them as differential forms, and (ii) we describe their cohomology classes.\n\n\\subsection{Local Setups}\n\\label{subsec:local-setups}\n\nAs claimed in Lemma \\ref{lem:01}, \nwe pick hyperk\\\"ahler metrics $\\omega_k$ that converge to $\\omega_0$ in the \n$C^\\infty_{\\mathrm{loc}}(X\\setminus E)$ topology.\n\nFix $N\\in\\Z_{>0}$. Along the dynamics, we define the quantities \n$\\left(\\sigma_i^{(k)}(x,Nh)\\right)_{i=1}^{2n}$, called the \n\\emph{log-singular values of $({f}^N)^\\ast\\omega_k$ relative to $\\omega_k$}, \nas follows. At a point $x\\in X$, one can write \n\\begin{equation}\n \\label{eqn:3.2}\n \\omega_k=\\frac{\\sqrt{-1}}{2}\\sum_{i=1}^{2n}dz_i\\wedge d\\overline{z}_i,\n\\end{equation}\nwith an appropriate holomorphic coordinate $(z_1,\\cdots,z_{2n})$ of $X$ at $x$. \nMoreover, for\n\\begin{equation}\n \\label{eqn:3.3}\n h_{ij}^{(k)}=({f}^N)^\\ast\\omega_k(\\frac{\\partial}{\\partial z_i},\\frac{\\partial}{\\partial z_j}),\n\\end{equation}\nwe have a self-adjoint matrix $\\left(h_{ij}^{(k)}\\right)$, \nthus one can adjust the holomorphic basis vectors\n$\\dfrac{\\partial}{\\partial z_1},\\cdots,\\dfrac{\\partial}{\\partial z_{2n}}$ \nunitarily, to keep the form \\eqref{eqn:3.2} of $\\omega_k$, yet make the matrix\n$\\left(h_{ij}^{(k)}\\right)$ diagonal. 
\nThis enables us to write $({f}^N)^\\ast\\omega_k$ at $x\\in X$ as\n\\begin{equation}\n \\label{eqn:3.4}\n ({f}^N)^\\ast\\omega_k=\\frac{\\sqrt{-1}}{2}\\sum_{i=1}^{2n}\\exp\\left(\\sigma_i^{(k)}(x,Nh)\\right)\\, dz_i\\wedge d\\overline{z}_i.\n\\end{equation}\nWe moreover choose the base index $i$ so that the numbers \n$\\sigma_1^{(k)},\\cdots,\\sigma_{2n}^{(k)}$ are in the decreasing order, i.e.,\n\\begin{equation}\n \\label{eqn:3.5}\n \\sigma_1^{(k)}(x,Nh)\\geq\\sigma_2^{(k)}(x,Nh)\\geq\\cdots\\geq\\sigma_{2n}^{(k)}(x,Nh).\n\\end{equation}\nThese quantities exhibit the following symmetry.\nThe assumption that $\\omega_k$ is hyperk\\\"ahler, is crucial here.\n\n\\begin{lemma}\n \\label{lem:05}\n Log-singular values $\\sigma_i^{(k)}$'s exhibit symmetry at 0. \n That is, for all $x$ and $i=1,2,\\cdots,n$,\n \\[\\sigma_i^{(k)}(x,Nh)+\\sigma_{2n+1-i}^{(k)}(x,Nh)=0.\\]\n\\end{lemma}\n\\begin{proof}\n Replacing ${f}$ to ${f}^N$ if necessary, we may assume that $N=1$. \n Also, for notational simplicity, denote $\\omega:=\\omega_k$.\n \n As $\\omega$ is a hyperk\\\"ahler metric, for each point $x\\in X$ one has a \n holomorphic coordinate $(z_1,\\cdots,z_{2n})$ that enables representations (valid only at $x$)\n \\begin{align*}\n \\omega &= \\sum_{i=1}^{2n}dz_i\\wedge d\\overline{z}_i, \\\\\n \\Omega &= \\sum_{\\mu=1}^n dz_\\mu\\wedge dz_{n+\\mu}.\n \\end{align*}\n \n Denote $(w_1,\\cdots,w_{2n})$ for another holomorphic coordinate near ${f}(x)$ \n with analogous expressions of $\\omega$ and $\\Omega$, at ${f}(x)$. \n By this coordinate, describe the map $D_xf\\colon T_xX\\to T_{{f}(x)}X$ as a matrix \n $A=\\left(a_{ij}\\right)$, where $a_{ij}$'s are determined by the relation\n \\[D_x{f}\\left(\\frac{\\partial}{\\partial z_i}\\right)=\\sum_{j=1}^{2n}a_{ij}\\frac{\\partial}{\\partial w_j}.\\]\n\n Now Lemma \\ref{lem:02}, ${f}^\\ast\\Omega=k_{f}\\Omega$, implies\n \\[\\sum_{k,\\ell=1}^{2n}a_{ik}\\Omega_{k\\ell}a_{j\\ell}=k_{f}\\Omega_{ij}\\]\n (where $\\Omega_{ij}=\\Omega\\left(\\dfrac{\\partial}{\\partial z_i},\\dfrac{\\partial}{\\partial z_j}\\right)$;\n in other words, \n $(\\Omega_{ij})=\\begin{bmatrix} 0 & I_n \\\\ -I_n & 0 \\end{bmatrix}$), \n which entails $A\\Omega A^\\top=k_{f}\\Omega$. \n This gives $k_{f}^{-1\/2}A^\\top\\in\\mathrm{Sp}(2n,\\C)$, \n which implies $AA^\\dagger\\in\\mathrm{Sp}(2n,\\C)$.\n\n We then describe how ${f}^\\ast\\omega$ is represented at $x$. Since\n \\begin{align*}\n {f}^\\ast\\omega\\left(\\frac{\\partial}{\\partial z_i},\\frac{\\partial}{\\partial z_j}\\right) &= \n \\sum_{k,\\ell=1}^{2n}\\omega\\left(a_{ik}\\frac{\\partial}{\\partial w_k},a_{j\\ell}\\frac{\\partial}{\\partial w_\\ell}\\right) \\\\\n &= \\sum_{k,\\ell=1}a_{ik}\\overline{a_{j\\ell}}\\omega\\left(\\frac{\\partial}{\\partial w_k},\\frac{\\partial}{\\partial w_\\ell}\\right) \\\\\n &= \\sum_{k=1}^{2n}a_{ik}\\overline{a_{jk}} = (AA^\\dagger)_{ij},\n \\end{align*}\n we have\n ${f}^\\ast\\omega=\\frac{\\sqrt{-1}}{2}\\sum_{i,j=1}^{2n}(AA^\\dagger)_{ij}\\, dz_i\\wedge d\\overline{z}_j$\n at $x$. Thus $\\sigma_i^{(k)}$ consists of log of eigenvalues of $AA^\\dagger$. 
\n An exercise, say \\cite[Problem 22]{dGM2011}, yields that a self-adjoint positive-definite symplectic matrix like $AA^\\dagger$ has eigenvalues that are\n (multiplicatively) symmetric at 1.\n If this fact is translated to the list $\\sigma_1^{(k)},\\cdots,\\sigma^{(k)}_{2n}$, we get the desired symmetry.\n\\end{proof}\n\n\\subsection{Local computations}\n\nBased on the setups established in the previous section \\ref{subsec:local-setups}, we now compute the forms $\\omega_k^{2n}$ and $\\omega_k^{2n-1}\\wedge({f}^N)^\\ast\\omega_k$, at $x$, and compare them.\n\\begin{align} \n \\omega_k^{2n}&=\\left(\\frac{\\sqrt{-1}}{2}\\right)^{2n}(2n)!(dz_1\\wedge d\\overline{z}_1\\wedge\\cdots\\wedge dz_{2n}\\wedge d\\overline{z}_{2n}),\\label{eqn:3.6} \\\\\n \\omega_k^{2n-1}\\wedge({f}^N)^\\ast\\omega_k &= \\left(\\frac{\\sqrt{-1}}{2}\\right)^{2n}\\left(\\sum_{i=1}^{2n}dz_i\\wedge d\\overline{z}_i\\right)^{2n-1}\\left(\\sum_{j=1}^{2n}e^{\\sigma_j^{(k)}(x,Nh)}dz_j\\wedge d\\overline{z}_j\\right) \\nonumber \\\\\n &= \\left(\\frac{\\sqrt{-1}}{2}\\right)^{2n}\\left[\\sum_{j=1}^{2n}e^{\\sigma_j^{(k)}(x,Nh)}\\right](2n-1)!(dz_1\\wedge d\\overline{z}_1\\wedge\\cdots\\wedge dz_{2n}\\wedge d\\overline{z}_{2n}). \\label{eqn:3.7}\n \\intertext{Now applying \\eqref{eqn:3.6},}\n &= \\left[\\sum_{j=1}^{2n}e^{\\sigma_j^{(k)}(x,Nh)}\\right]\\frac{(2n-1)!}{(2n)!}\\omega_k^{2n}, \\label{eqn:3.8}\n \\intertext{and applying Lemma \\ref{lem:05}, we can `fold' the sum of \n exponentials as}\n &=\\left[\\sum_{j=1}^n2\\cosh(\\sigma_j^{(k)}(x,Nh))\\right]\\frac{\\omega_k^{2n}}{2n}, \\nonumber\n\\end{align}\nand obtain the following\n\\begin{proposition}\n \\label{lem:06}\n As differential forms,\n \\[\\omega_k^{2n-1}\\wedge({f}^N)^\\ast\\omega_k=\\left[\\frac{1}{n}\\sum_{j=1}^n\\cosh\\left(\\sigma_j^{(k)}(x,Nh)\\right)\\right]\\omega_k^{2n}.\\]\n\\end{proposition}\n\n\\subsection{Cohomological Analysis}\n\nThe form $\\omega_k^{2n-1}\\wedge({f}^N)^\\ast\\omega_k$ can also be understood \ncohomologically, but with some approximations. Think of the integral of the \nform, represented as a cup product of cohomology classes:\n\\[\\int_X\\omega_k^{2n-1}\\wedge({f}^N)^\\ast\\omega_k = [\\omega_k]^{2n-1}.({f}^N)^\\ast[\\omega_k].\\]\nDenote $[\\omega_0]:=[\\eta_+]+[\\eta_-]$. \nThen, by Lemma \\ref{lem:01}, \nwe have that $[\\omega_k]\\to[\\omega_0]$ as $k\\to\\infty$. Thus it leads us to \nconsider the product $[\\omega_0]^{2n-1}.({f}^N)^\\ast[\\omega_0]$, \nwhich is evaluated as in the following\n\\begin{proposition}\n \\label{lem:07}\n We have the following equation in cohomology:\n \\[[\\omega_0]^{2n-1}.({f}^N)^\\ast[\\omega_0]=\\cosh(Nh)\\cdot[\\omega_0]^{2n}.\\]\n\\end{proposition}\n\\begin{proof}\n The proof is a manual computation with \n $[\\omega_0]=[\\eta_+]+[\\eta_-]$ and Corollary \\ref{lem:04}.\n\\begin{align}\n [\\omega_0]^{2n} &= ([\\eta_+]+[\\eta_-])^{2n} = \\sum_{k=0}^{2n}\\binom{2n}{k}[\\eta_+]^k[\\eta_-]^{2n-k} \\nonumber \\\\\n &= \\binom{2n}{n}[\\eta_+]^n[\\eta_-]^n. 
\\label{eqn:4.1} \\\\\n [\\omega_0]^{2n-1}.({f}^N)^\\ast[\\omega_0] &= ([\\eta_+]+[\\eta_-])^{2n-1}.(e^{Nh}[\\eta_+]+e^{-Nh}[\\eta_-]) \\nonumber \\\\\n &=\\left[\\sum_{k=0}^{2n-1}\\binom{2n}{k}[\\eta_+]^k[\\eta_-]^{2n-1-k}\\right].(e^{Nh}[\\eta_+]+e^{-Nh}[\\eta_-]) \\nonumber \\\\\n &=\\left[\\binom{2n-1}{n-1}[\\eta_+]^{n-1}[\\eta_-]^n+\\binom{2n-1}{n}[\\eta_+]^n[\\eta_-]^{n-1}\\right] \\nonumber\\\\\n &\\hspace{.5\\textwidth}.(e^{Nh}[\\eta_+]+e^{-Nh}[\\eta_-]) \\nonumber \\\\\n &=\\binom{2n-1}{n-1}e^{Nh}[\\eta_+]^n[\\eta_-]^n+\\binom{2n-1}{n}e^{-Nh}[\\eta_+]^n[\\eta_-]^n, \\label{eqn:4.2}\n \\intertext{and because\n $\\binom{2n-1}{n-1}=\\binom{2n-1}{n}=\\frac{1}{2}\\binom{2n}{n}$, we have}\n &=\\frac{1}{2}\\binom{2n}{n}(e^{Nh}+e^{-Nh})[\\eta_+]^n[\\eta_-]^n. \\label{eqn:4.3}\n \\intertext{\\eqref{eqn:4.1} then applies to give}\n &=\\frac{1}{2}(e^{Nh}+e^{-Nh})[\\omega_0]^{2n}=\\cosh(Nh)[\\omega_0]^{2n}. \\label{eqn:4.4}\n\\end{align}\n\\end{proof}\n\nWe continue our discussion, now for $[\\omega_k]$'s. Because $[\\omega_k]$'s converge to $[\\omega_0]$ in cohomology, we see that $[\\omega_k]^{2n}$ and $[\\omega_k]^{2n-1}.({f}^N)^\\ast[\\omega_k]$ respectively converge to $[\\omega_0]^{2n}$ and $[\\omega_0]^{2n-1}.({f}^N)^\\ast[\\omega_0]$.\nThat is, we have the following limit:\n\\begin{align*}\n \\lim_{k\\to\\infty}[\\omega_k]^{2n-1}.({f}^N)^\\ast[\\omega_k] - \\cosh(Nh)[\\omega_k]^{2n} &= [\\omega_0]^{2n-1}.({f}^N)^\\ast[\\omega_0]-\\cosh(Nh)[\\omega_0]^{2n} \\\\\n &=0,\n\\end{align*}\nwhere the last zero is the result of Proposition \\ref{lem:07}. If this is rewritten in the integral form, and combined \nwith the local computation in Proposition \\ref{lem:06}, we have\n\\begin{equation}\n \\label{eqn:4.5}\n \\int_X\\left[\\frac{1}{n}\\sum_{j=1}^n\\cosh\\left(\\sigma_j^{(k)}(x,Nh)\\right)\\right]\\omega_k^{2n} - \\cosh(Nh)\\int_X\\omega_k^{2n} \\xrightarrow{k\\to\\infty} 0.\n\\end{equation}\nBy Lemma \\ref{lem:01}, as we have $\\omega_k^{2n}=\\mathrm{vol}$, \n\\eqref{eqn:4.5} is equivalently written as\n\\begin{equation}\n \\label{eqn:4.6}\n \\int_X\\left[\\frac{1}{n}\\sum_{j=1}^n\\cosh\\left(\\sigma_j^{(k)}(x,Nh)\\right)\\right]\\mathrm{vol}(dx) \\xrightarrow{k\\to\\infty}\\cosh(Nh).\n\\end{equation}\n\n\\section{Relation with Lyapunov Exponents}\n\\label{sec:relation-with-lyapunov-exponents}\n\nProposition \\ref{lem:08} below gives an estimate on the `average' of log-singular values $\\sigma^{(k)}_i(x,Nh)$. Such an estimate is obtained by a relation of log-singular values with the Lyapunov exponents. The goal of this section is to reveal that relation and prove the Proposition \\ref{lem:08}.\n\n\\begin{proposition}\n \\label{lem:08}\n The log-singular values $\\sigma_i^{(k)}(x,Nh)$'s in \\eqref{eqn:3.4} satisfy\n\\begin{equation}\n \\label{eqn:5.1}\n\\frac{1}{N}\\sum_{j=1}^n\\int_X\\sigma_j^{(k)}(x,Nh)\\mathrm{vol}(dx)\\geq nh.\n\\end{equation}\n\\end{proposition}\nEquivalently,\n\\begin{equation}\n \\label{eqn:5.2}\n \\int_X\\left[\\frac{1}{n}\\sum_{j=1}^n\\sigma_j^{(k)}(x,Nh)\\right]\\mathrm{vol}(dx)\\geq Nh.\n\\end{equation}\n\nOne note before the details, is that $nh$ in \\eqref{eqn:5.1} is coming from \nthe topological entropy of $(X,{f})$ \\cite[Theorem 1.1]{oguiso2009}. 
\nAs assumed in Theorem \\ref{lem:00}, this is exactly the same as the entropy \nof the volume form $\\mathrm{vol}$.\n\n\\subsection{Lyapunov Exponents}\n\nDenote by $\\mu_1\\geq\\cdots\\geq\\mu_{4n}$ the Lyapunov exponents of \n$({f},\\mathrm{vol})$, coming from the cocycle $D{f}\\colon T_\\R X\\to T_\\R X$ on the \nreal tangent bundle \\cite[Theorem 2.2.6]{Fil17}\\cite[Theorem 1.6]{ruelle1979}.\nNote that they are constant, as $\\mathrm{vol}$ is ergodic (cf. \\S{\\ref{subsec:ergodicity}}).\n\nNote first that the $\\mu_i$ exhibit a symmetry \\cite[\\S2.2.10]{Fil17}\\cite[\\S3]{ruelle1979}: \n$\\mu_i+\\mu_{4n+1-i}=0$. \nThus, the first $2n$ Lyapunov exponents $\\mu_1,\\cdots,\\mu_{2n}$ are nonnegative, \nand the rest are nonpositive. \nMoreover, due to the invariant complex structure that $(X,{f})$ has, \neach number $\\mu_i$ appears in a pair: that is, $\\mu_{2i-1}=\\mu_{2i}$.\n\nWe now claim the following\n\n\\begin{lemma}\n \\label{lem:09}\n The Lyapunov exponents are $\\pm h\/2$ with multiplicity $2n$ each. That is, $\\mu_1=\\cdots=\\mu_{2n}=h\/2$, and $\\mu_{2n+1}=\\cdots=\\mu_{4n}=-h\/2$.\n\\end{lemma}\n\\begin{proof}\n By the Ledrappier--Young formula \\cite[Corollary G]{LY85II}, the entropy of the volume $h_{\\mathrm{vol}}({f})$ equals\n \\begin{equation}\\label{eqn:lem:09-1}h_{\\mathrm{vol}}({f})=\\mu_1+\\cdots+\\mu_{2n}.\\end{equation}\n \n Denote by $d_p({f})$ the $p$-th dynamical degree of $(X,{f})$. Then by \\cite[Theorem 1.1]{oguiso2009}, we have the increasing-decreasing relation\\[1=d_0({f})<d_1({f})<\\cdots<d_n({f})>d_{n+1}({f})>\\cdots>d_{2n}({f})=1.\\] The bounds on Lyapunov exponents by Th\\'elin \\cite[Corollaire 3]{The08} then yield\n \\begin{equation}\\label{eqn:lem:09-2}\\mu_1\\geq\\cdots\\geq\\mu_{2n}\\geq\\frac{1}{2}\\log\\frac{d_n({f})}{d_{n-1}({f})}=\\frac{h}{2}(>0).\\end{equation}\n\n Because $\\mathrm{vol}$ is the measure of maximal entropy, we have $h_{\\mathrm{vol}}({f})=h_{\\mathrm{top}}({f})=nh$. Hence combining \\eqref{eqn:lem:09-1} and \\eqref{eqn:lem:09-2}, we get\n \\[nh=\\mu_1+\\cdots+\\mu_{2n}\\geq 2n\\cdot\\frac{h}{2}=nh.\\]\n This tells us that the inequalities in \\eqref{eqn:lem:09-2} are in fact equalities. That $\\mu_{2n+1}=\\cdots=\\mu_{4n}=-h\/2$ follows from the symmetry $\\mu_i+\\mu_{4n+1-i}=0$.\n\\end{proof}\n\n\n\\subsection{Proof of Proposition \\ref{lem:08}}\n\nFor this section, we reuse the coordinate $(z_1,\\cdots,z_{2n})$ from the equations\n\\eqref{eqn:3.2} and \\eqref{eqn:3.4}. Write $\\partial_i$ and \n$\\overline{\\partial}_i$ for\n\\begin{equation}\n \\label{eqn:5.4}\n \\partial_i:=\\frac{\\partial}{\\partial z_i},\\quad\\overline{\\partial}_i:=\\frac{\\partial}{\\partial\\overline{z}_i}.\n\\end{equation}\nDenote by $\\|\\cdot\\|_k$ and $\\|\\cdot\\|_{k,N}$ the norms corresponding to $\\omega_k$ and $(f^N)^\\ast\\omega_k$ respectively.\nWe first establish an invariant formula for the log-singular values \n$\\sigma_i^{(k)}(x,Nh)$.\n\\begin{proposition}\n \\label{lem:12}\n Endow the tangent bundle $TX$ with the metric $\\omega_k$. Then,\n \\[ \\log\\left\\|(D_x{f}^N)^{\\wedge 2n}\\right\\|_{op}=\\sigma_1^{(k)}(x,Nh)+\\cdots+\\sigma_n^{(k)}(x,Nh). 
\\]\n Here, $(D_x{f}^N)^{\\wedge 2n}$ is the $2n$-fold exterior power of \n $D_x{f}^N\\colon T_{\\R,x}X\\to T_{\\R,{f}^N(x)}X$, viewed \n as a real linear map.\n\\end{proposition}\n\\begin{proof}\n Notice first that $T_{\\R,x}X$ has an $\\R$-basis\n \\begin{equation}\n \\label{eqn:5.5}\n \\left\\{\\frac{1}{\\sqrt{2}}(\\partial_i+\\overline{\\partial}_i),\\frac{1}{\\sqrt{-2}}(\\partial_i-\\overline{\\partial}_i){\\,\\bigg\\vert\\,} i=1,2,\\cdots,2n\\right\\}.\n \\end{equation}\n \n This set \\eqref{eqn:5.5} is an orthogonal set with respect to both \n $\\omega_k$ and $(f^N)^\\ast\\omega_{k}$, and is an orthonormal basis for $\\omega_k$. \n For $(f^N)^\\ast\\omega_{k}$, each basis element has the norm\n \\begin{equation}\n \\label{eqn:5.6}\n \\left\\|\\frac{1}{\\sqrt{2}}(\\partial_i+\\overline{\\partial}_i)\\right\\|_{{k,N}}=\\left\\|\\frac{1}{\\sqrt{-2}}(\\partial_i-\\overline{\\partial}_i)\\right\\|_{{k,N}}=e^{\\sigma_i^{(k)}(x,Nh)\/2}.\n \\end{equation}\n \n Thus, we have that\n \\begin{equation}\n \\label{eqn:5.7}\n \\|(D_x{f}^N)^{\\wedge 2n}\\|_{op} = \\sup_{\\substack{v\\in \\bigwedge^{2n}T_{\\R,x}X \\\\ v\\neq0}}\\frac{\\|(D_x{f}^N)^{\\wedge 2n}(v)\\|_{k}}{\\|v\\|_{k}}=\\sup_{\\substack{v\\in \\bigwedge^{2n}T_{\\R,x}X \\\\ v\\neq 0}}\\frac{\\|v\\|_{{k,N}}}{\\|v\\|_{k}}.\n \\end{equation}\n \n Now the unit vector $v\\in\\bigwedge^{2n}T_{\\R,x}X$ that maximizes \n \\eqref{eqn:5.7} is, by the relation \\eqref{eqn:3.5},\n \\begin{equation}\n \\label{eqn:5.8}\n \\left(\\frac{1}{\\sqrt{2}}(\\partial_1+\\overline{\\partial}_1)\\wedge\\frac{1}{\\sqrt{-2}}(\\partial_1-\\overline{\\partial}_1)\\right)\\wedge\\cdots\\wedge \\left(\\frac{1}{\\sqrt{2}}(\\partial_n+\\overline{\\partial}_n)\\wedge\\frac{1}{\\sqrt{-2}}(\\partial_n-\\overline{\\partial}_n)\\right).\n \\end{equation}\n The proposition then follows from \\eqref{eqn:5.6} and the orthogonality of \n \\eqref{eqn:5.5}.\n\\end{proof}\n\nFrom the theory of Lyapunov exponents and Lemma \\ref{lem:09}, the following is known:\n\\begin{equation}\n \\label{eqn:5.9}\n \\lim_{N\\to\\infty}\\frac{1}{N}\\log\\|(D_x{f}^N)^{\\wedge 2n}\\|_{op}=\\mu_1+\\cdots+\\mu_{2n}=nh,\n\\end{equation}\nfor $\\mathrm{vol}$-a.e. $x$. Now setting\n\\begin{align}\n I_N&:=\\int_X\\log\\|(D_x{f}^N)^{\\wedge 2n}\\|_{op}\\,\\mathrm{vol}(dx), \\label{eqn:5.10}\n \\intertext{and via Proposition \\ref{lem:12}, we have}\n I_N&=\\int_X(\\sigma_1^{(k)}(x,Nh)+\\cdots+\\sigma_n^{(k)}(x,Nh))\\,\\mathrm{vol}(dx). \\label{eqn:5.11}\n\\end{align}\nThe integrals $\\frac{1}{N}I_N$ have nonnegative integrands. \nFatou's lemma, $\\int\\liminf f_N\\leq\\liminf\\int f_N$, thus applies, \nand from \\eqref{eqn:5.9} it yields the following:\n\\begin{align}\n \\lim_{N\\to\\infty}\\frac{1}{N}I_N &\\geq\\int_X\\lim_{N\\to\\infty}\\frac{1}{N}\\log\\|(D_x{f}^N)^{\\wedge 2n}\\|_{op}\\,\\mathrm{vol}(dx) \\nonumber \\\\\n &=nh. 
\\label{eqn:5.12}\n\\end{align}\n\nNow the inequality\n$\\|(D_x{f}^{N+M})^{\\wedge 2n}\\|_{op}\\leq\\|(D_x{f}^N)^{\\wedge 2n}\\|_{op}\\cdot\\|(D_{{f}^N(x)}{f}^M)^{\\wedge 2n}\\|_{op}$\ninduces subadditivity $I_{N+M}\\leq I_N+I_M$; by Fekete's Lemma \\cite{Fekete1923},\n\\begin{equation}\n \\label{eqn:5.13}\n \\inf_{N\\geq 1}\\frac{1}{N}I_N=\\lim_{N\\to\\infty}\\frac{1}{N}I_N,\n\\end{equation}\nthus for any $N$,\n\\begin{equation}\n \\label{eqn:5.14}\n \\frac{1}{N}I_N\\geq\\lim_{N\\to\\infty}\\frac{1}{N}I_N\\geq nh.\n\\end{equation}\nThis proves Proposition \\ref{lem:08}.\n\n\\section{Jensen's Inequality and Constant Log-singular Values}\n\\label{sec:jensen-inequality-constant-log-singular-values}\n\nNow we demonstrate how to combine \\eqref{eqn:4.6} and Proposition \\ref{lem:08}. \nThe trick is to use Jensen's inequality, combined with the (strong) convexity of the\n$\\cosh$ function.\n\nThe upshot of this combination is a result on the log-singular values of the `limit metrics' $({f}^N)^\\ast\\omega_0$, relative to $\\omega_0$ (Corollary \\ref{lem:14}). By this, we get a simple local representations of these metrics.\n\nLet $B$ be a probability space, whose underlying space is \n$(X\\setminus E)\\times\\{1,\\cdots,n\\}$, and whose probability measure is \n$\\mathrm{vol}\\times(\\frac{1}{n}\\#)$, where $\\#$ is the counting measure. \nFor $(x,j)\\in B$, define a random variable $\\Sigma^{(k)}$ as \n$\\Sigma^{(k)}(x,j)=\\sigma_j^{(k)}(x,Nh)$.\n\nThen, \\eqref{eqn:4.6} can be rewritten as\n\\begin{equation}\n \\label{eqn:6.1}\n \\mathbb{E}[\\cosh(\\Sigma^{(k)})]-\\cosh(Nh)\\xrightarrow{k\\to\\infty}0,\n\\end{equation}\nand \\eqref{eqn:5.2} following Proposition \\ref{lem:08} can be rewritten as\n\\begin{equation}\n \\label{eqn:6.2}\n \\mathbb{E}[\\Sigma^{(k)}]\\geq Nh.\n\\end{equation}\n\nTo motivate what follows, we note that Jensen's inequality applied to the \nconvex function $\\cosh$ gives: \n$\\mathbb{E}[\\cosh(\\Sigma^{(k)})]\\geq\\cosh(\\mathbb{E}[\\Sigma^{(k)}])\\geq\\cosh(Nh)$.\nThen \\eqref{eqn:6.1} implies that the inequality asymptotically collapses \nas $k\\to\\infty$. \nThis observation is a root of the following\n\\begin{proposition}\n \\label{lem:13}\n As $k\\to\\infty$,\n the variance of $\\Sigma^{(k)}$ is converging to 0,\n and the expected value of $\\Sigma^{(k)}$ is converging to $Nh$. \n That is,\n \\[\\int_X\\frac{1}{n}\\sum_{j=1}^n\\left(\\sigma^{(k)}_j(x,Nh)-Nh\\right)^2\\,\\mathrm{vol}(dx)\\xrightarrow{k\\to\\infty}0.\\]\n\\end{proposition}\n\\begin{proof}\n We start with an elementary inequality, \n which holds for any $x,a\\in\\R$:\n \\begin{equation}\n \\label{eqn:6.3}\n \\cosh(x) \\geq \\cosh(a) + \\sinh(a)\\cdot(x-a) + \\frac{1}{2}(x-a)^2.\n \\end{equation}\n Apply $x=\\Sigma^{(k)}$ and $a=Nh$ into \\eqref{eqn:6.3}. \n Taking average, we then get\n \\begin{align*}\n \\mathbb{E}[\\cosh(\\Sigma^{(k)})] &\\geq \\cosh(Nh) + \\sinh(Nh)\\cdot\\underbrace{\\left(\\mathbb{E}[\\Sigma^{(k)}]-Nh\\right)}_{\\geq 0\\text{ by \\eqref{eqn:6.2}}} + \\frac{1}{2}\\mathbb{E}[(\\Sigma^{(k)}-Nh)^2] \\\\\n &\\geq \\cosh(Nh) + \\frac{1}{2}\\mathbb{E}[(\\Sigma^{(k)}-Nh)^2].\n \\end{align*}\n This entails\n \\[0\\leq\\mathbb{E}[(\\Sigma^{(k)}-Nh)^2]\\leq 2\\cdot\\left( \\mathbb{E}[\\cosh(\\Sigma^{(k)})]-\\cosh(Nh)\\right)\\xrightarrow{k\\to\\infty}0,\\]\n where we have used the limit fact \\eqref{eqn:6.1}. This implies \n $\\mathbb{E}[(\\Sigma^{(k)}-Nh)^2]\\to 0$. 
\n Our proposition restates this limit fact.\n\\end{proof}\n\nPassing to a subsequence of $\\left(\\omega_k\\right)$ if necessary, we further have that \n$\\sigma^{(k)}_j(x,Nh)\\to Nh$, for $\\mathrm{vol}$-a.e. $x$.\n\nThe implication of Proposition \\ref{lem:13} to the metric $\\omega_0$, the \nK\\\"ahler metric on $X\\setminus E$ introduced in Lemma \\ref{lem:01}, is the following\n\n\\begin{corollary}\n \\label{lem:14}\n For each $x\\in X\\setminus E$, the log-singular values of \n $({f}^N)^\\ast\\omega_0$ relative to $\\omega_0$ are $Nh$ and $-Nh$, counted\n $n$ times respectively. \n\n That is, for each $x\\in X\\setminus E$, one can find a holomorphic coordinate \n $(z_1,\\cdots,z_{2n})$ in which the following expressions hold in the tangent space at $x$.\n \\begin{align*}\n \\omega_0 &= \\frac{\\sqrt{-1}}{2}\\sum_{i=1}^{2n}dz_i\\wedge d\\overline{z}_i, \\\\\n ({f}^N)^\\ast\\omega_0 &= \\frac{\\sqrt{-1}}{2}\\sum_{\\mu=1}^n e^{Nh}\\, dz_\\mu\\wedge d\\overline{z}_\\mu + e^{-Nh}\\, dz_{n+\\mu}\\wedge d\\overline{z}_{n+\\mu}.\n \\end{align*}\n\\end{corollary}\n\\begin{proof}\n First, fix $x\\in X\\setminus E$ in which $\\sigma^{(k)}_j(x,Nh)\\to Nh$ as \n $k\\to\\infty$.\n\n As $\\left(\\omega_k\\right)\\to\\omega_0$ in \n $C^\\infty_{\\mathrm{loc}}(X\\setminus E)$ topology, focusing on the \n compact set $\\{x,{f}^N(x)\\}$, we see that the matrices \n $\\left(h^{(k)}_{ij}\\right)$ \\eqref{eqn:3.2} at $x$ converge to the analogous matrix $\\left(h_{ij}\\right)$ for $\\omega_0$, at $x$. \n As eigenvalues behave continuously with perturbing matrix \\cite[Theorem II.5.1]{kato2013}, \n this implies that the numbers $\\exp\\left(\\sigma^{(k)}_j(x,Nh)\\right)$'s approximate \n eigenvalues of $\\left(h_{ij}\\right)$ as well. Thanks to our assumption on $x$,\n this implies that $\\left(h_{ij}\\right)$ has eigenvalues \n $e^{Nh}$ and $e^{-Nh}$, counted $n$ times for each.\n\n Now to claim this for \\emph{all} $x\\in X\\setminus E$, we note that $\\omega_0$ \n is smooth; thus the matrices $\\left(h_{ij}\\right)$ also vary smoothly \n with respect to $x$. As eigenvalues thus behaves continuously, \n we still have the constant eigenvalues for all $x\\in X\\setminus E$.\n\\end{proof}\n\n\\section{Stable and Unstable Distributions}\n\\label{sec:stable-unstable-distributions}\n\nThe local expressions of $({f}^N)^\\ast\\omega_0$ and $\\omega_0$ (Corollary \\ref{lem:14}) vividly shows that ${f}^N$ is expanding and contracting along certain directions in a uniform rate. At glance, this seems to define stable and unstable directions right away, but such definitions are dependent on the time $N$. Proposition \\ref{lem:15} in this sections shows that the stable and unstable directions `at time $N$' are the same. The upshots of this observation are the smooth nature of the stable and unstable foliations, and the uniform hyperbolicity of $(X\\setminus E,{f})$.\n\nIn this section, we denote by $\\|\\cdot\\|_{N}$ and $\\|\\cdot\\|_0$ for the metric norm induced by \n$({f}^N)^\\ast\\omega_{0}$ and $\\omega_0$ respectively.\n\n\\begin{proposition}\n \\label{lem:15}\n For any $x\\in X\\setminus E$, define the following \n subsets of $T_{\\R,x}X$, $E^+_x,E^-_x,F^+_x$, and $F^-_x$, for $N>0$:\n \\begin{align}\n E^{\\pm N}_x & := \\{v\\in T_{\\R,x}X\\mid ({f}^N)^\\ast\\omega_{0}(v,w)=e^{\\pm Nh}\\omega_0(v,w)\\ \\forall w\\in T_{\\R,x}X\\}, \\label{eqn:7.1} \\\\\n F^{\\pm N}_x & := \\{v\\in T_{\\R,x}X\\mid \\|v\\|_{N}=e^{\\pm Nh\/2}\\|v\\|_{0}\\}. 
\\label{eqn:7.2}\n \\end{align}\n Then $E^{+N}_x=F^{+N}_x$, $E^{-N}_x=F^{-N}_x$, $E^{+N}_x=E^{+1}_x$, and \n $E^{-N}_x=E^{-1}_x$ holds. \n The distributions $E^{\\pm 1}$ defined in that fashion, which will be denoted \n $E^\\pm$, are ${f}$-invariant and have dimensions $2n$.\n\\end{proposition}\n\\begin{proof}\n We start with a holomorphic coordinate $(z_1,\\cdots,z_{2n})$ at $x$ that \n appears in the conclusion of the Corollary \\ref{lem:14}.\n \n For ease of computations, we denote $z_i=x_i+\\sqrt{-1}y_i$ for real coordinates \n in $(z_1,\\cdots,z_{2n})$. We denote $\\partial_{x,i}$ and $\\partial_{y,i}$ for \n the vectors $\\dfrac{\\partial}{\\partial x_i}$ and $\\dfrac{\\partial}{\\partial y_i}$ \n respectively. \n The local expressions in Corollary \\ref{lem:14} then turns into,\n \\begin{align}\n \\omega_0 &= \\sum_{i=1}^{2n}dx_i\\wedge dy_i, \\label{eqn:7.3} \\\\\n ({f}^N)^\\ast\\omega_{0} &= \\sum_{i=1}^ne^{Nh}dx_i\\wedge dy_i+\\sum_{j=n+1}^{2n}e^{-Nh}dx_j\\wedge dy_j. \\label{eqn:7.4}\n \\end{align}\n \n Note that vectors $\\partial_{x,i}$'s and $\\partial_{y,i}$'s are orthogonal \n to each other, under $\\omega_0$ or $({f}^N)^\\ast\\omega_{0}$, which are even orthonormal under $\\omega_0$. \n Moreover, we have the length formulas\n \\[\\|\\partial_{x,i}\\|_{N}=\\|\\partial_{y,i}\\|_{N}=\\begin{cases} e^{Nh\/2} & \\text{ if }i=1,\\cdots,n, \\\\ e^{-Nh\/2} & \\text{ if }i=n+1,\\cdots,2n.\\end{cases}\\]\n \n Based on the above setups, we first show that the set $F^{+N}_x$ is a linear subspace, with an explicit list of basis vectors. \n Manual computation of norm of a general vector\n \\[v = \\sum_{i=1}^{2n}v_i\\partial_{x,i}+w_i\\partial_{y,i}\\]\n yields\n \\begin{equation}\n \\label{eqn:7.5}\n \\log\\frac{\\|v\\|_{N}}{\\|v\\|_0}=\\frac{1}{2}\\log\\frac{e^{Nh}(\\sum_{i=1}^n|v_i|^2+|w_i|^2)+e^{-Nh}(\\sum_{j=n+1}^{2n}|v_j|^2+|w_j|^2)}{\\sum_{i=1}^{2n}(|v_i|^2+|w_i|^2)}.\n \\end{equation}\n The quantity \\eqref{eqn:7.5} is maximized and attains the value $Nh\/2$, \n if and only if $v$ is generated by \n $\\partial_{x,1},\\partial_{y,1},\\cdots,\\partial_{x,n},\\partial_{y,n}$. \n By this we characterize the set $F^{+N}_x$.\n \n Next, we show that $F^{+N}_x=E^{+N}_x$. \n Let $v\\in F^{+N}_x$. Then $v$ can be expressed as \n $v=\\sum_{i=1}^nv_i\\partial_{x,i}+w_i\\partial_{y,i}$, \n and by \\eqref{eqn:7.3} and \\eqref{eqn:7.4}, for any vector \n $\\theta=\\sum_{i=1}^{2n}\\theta_i\\partial_{x,i}+\\eta_i\\partial_{y,i}$,\n we have\n \\begin{equation}\n \\label{eqn:7.6}\n ({f}^N)^\\ast\\omega_{0}(v,\\theta)=\\sum_{i=1}^n(v_i\\eta_i-w_i\\theta_i)e^{Nh}=e^{Nh}\\omega_0(v,\\theta).\n \\end{equation}\n Thus $v\\in E^{+N}_x$.\n \n Conversely, let $v\\in E^{+N}_x$. Then again through local expressions \n \\eqref{eqn:7.3} and \\eqref{eqn:7.4},\n \\begin{align}\n ({f}^N)^\\ast\\omega_{0}(v,\\theta) &= \\sum_{i=1}^n(v_i\\eta_i-w_i\\theta_i)e^{Nh}+\\sum_{j=n+1}^{2n}(v_j\\eta_j-w_j\\theta_j)e^{-Nh}, \\label{eqn:7.7} \\\\\n e^{Nh}\\omega_0(v,\\theta) &= \\sum_{i=1}^n(v_i\\eta_i-w_i\\theta_i)e^{Nh}+\\sum_{j=n+1}^{2n}(v_j\\eta_j-w_j\\theta_j)e^{Nh}. \\label{eqn:7.8}\n \\end{align}\n Subtracting \\eqref{eqn:7.7} to \\eqref{eqn:7.8}, we see that\n \\[\\sum_{j=n+1}^{2n}(v_j\\eta_j-w_j\\theta_j)=0\\]\n for any $\\eta_j$ and $\\theta_j$. So $v$ is generated by \n $\\partial_{x,1},\\partial_{y,1},\\cdots,\\partial_{x,n},\\partial_{y,n}$, \n and $v\\in F^{+N}_x$ follows.\n \n Similar argument applies for showing that $F^{-N}_x=E^{-N}_x$. 
\n In this case, the vectors $v\\in F^{-N}_x=E^{-N}_x$ are precisely those generated by \n $\\partial_{x,n+1},\\partial_{y,n+1},\\cdots,\\partial_{x,2n},\\partial_{y,2n}$, \n and they are characterized by minimizing the quantity \\eqref{eqn:7.5}, attaining the value $-Nh\/2$.\n \n As a side note, we remark that the explicit bases of $E^{\\pm N}_x$ listed above have $2n$ vectors each. \n Thus we conclude that each $E^{\\pm N}_x$ has dimension $2n$.\n \n Now it remains to show that $E^{+N}_x=E^{+1}_x$, for $N>1$. \n Let $v\\in E^{+N}_x=F^{+N}_x$. Then,\n \\begin{align}\n \\frac{Nh}{2}=\\log\\frac{\\|v\\|_{N}}{\\|v\\|_0}\n &=\\log\\frac{\\|v\\|_{N}}{\\|v\\|_{1}}+\\log\\frac{\\|v\\|_{1}}{\\|v\\|_0} \\label{eqn:7.9}\\\\\n &=\\log\\frac{\\|{f}_\\ast v\\|_{N-1}}{\\|{f}_\\ast v\\|_0}+\\log\\frac{\\|v\\|_1}{\\|v\\|_0} \\label{eqn:7.10}\\\\\n &\\leq\\frac{(N-1)h}{2}+\\frac{h}{2}=\\frac{Nh}{2}. \\label{eqn:7.11}\n \\end{align}\n In \\eqref{eqn:7.11}, we have used the fact that the quantity \\eqref{eqn:7.5} has \n maximum value $Nh\/2$, applied with $N-1$ and with $1$ in place of $N$.\n \n The equality condition for \\eqref{eqn:7.11} also tells us that \n $v\\in F^{+1}_x=E^{+1}_x$ necessarily holds. \n Therefore $E^{+N}_x\\subset E^{+1}_x$, and by dimension comparison, \n $E^{+N}_x=E^{+1}_x$ follows.\n \n Moreover, the equality condition for \\eqref{eqn:7.11} shows that \n ${f}_\\ast v\\in E^{+(N-1)}_{{f}(x)}=E^{+1}_{{f}(x)}$ as well. \n Since this holds for every $v\\in E^{+1}_x$, we get ${f}_\\ast E^{+1}_x\\subset E^{+1}_{{f}(x)}$, \n and in fact the two spaces are equal by a dimension count. This shows the ${f}$-invariance.\n \n That $E^{-N}_x=E^{-1}_x$, and the ${f}$-invariance of $E^{-1}$, are shown in a similar way.\n\\end{proof}\n\n\\begin{proposition}\n \\label{lem:16}\n We have the following operator norms, with respect to $\\omega_0$:\n \\[\\|D{f}^N|E^\\pm\\|_{op}=e^{\\pm Nh\/2},\\quad\\text{ and }\\quad\\|D{f}^{-N}|E^\\pm\\|_{op}=e^{\\mp Nh\/2},\\]\n for any \\emph{integer} $N\\in\\Z$. In particular, ${f}$ is uniformly hyperbolic on $X\\setminus E$, and\n $E^+$, $E^-$ are respectively the unstable and stable distributions\n on $X\\setminus E$.\n\\end{proposition}\n\\begin{proof}\n We have shown that $\\|D{f}^N|E^\\pm\\|_{op}=e^{\\pm Nh\/2}$ in Proposition \\ref{lem:15}.\n It remains to show the opposite-direction behavior.\n For $N>0$, we denote by $\\|\\cdot\\|_{-N}$ the norm induced from\n $({f}^{-N})^\\ast\\omega_{0}$.\n\n To estimate $\\|D{f}^{-N}|E^\\pm\\|_{op}$, we pick $v\\in E^\\pm_x$ and estimate \n $\\|v\\|_{-N}\/\\|v\\|_0$. Here, by the definition of $({f}^{-N})^\\ast\\omega_{0}$, \n we have\n \\[\\frac{\\|v\\|_{-N}}{\\|v\\|_0}=\\frac{\\|{f}^{-N}_\\ast v\\|_0}{\\|{f}^{-N}_\\ast v\\|_{N}}=\\frac{1}{e^{\\pm Nh\/2}}=e^{\\mp Nh\/2},\\] \n since ${f}^{-N}_\\ast v\\in E^{\\pm}_{{f}^{-N}(x)}$ as well.\n This shows the claim.\n\\end{proof}\n\nThe expression \\eqref{eqn:7.1} shows that $E^\\pm$ are characterized\nby $C^\\infty$ conditions (essentially because $\\omega_0$ is smooth).\nTherefore they are $C^\\infty$ distributions.\nOne can multiply suitable matrices representing ${f}^\\ast\\omega_0$ and\n$\\omega_0^{-1}$ to find explicit $C^\\infty$ vector fields generating them.\n\n\n\\section{Holomorphicity of the Stable Distribution}\n\\label{sec:holomorphicity-stable-distribution}\n\nSo far we have studied the stable and unstable distributions, and concluded that \nthey are $C^\\infty$. \nWhat can be claimed furthermore is that the stable and unstable foliations \nare actually holomorphic. 
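\n\nFor orientation, recall that in the Kummer examples of \\S{\\ref{subsec:gen-kummer-var}} the unstable and stable foliations come from linear foliations on the torus and are therefore holomorphic from the outset; the point of this section is to obtain holomorphicity from the dynamics alone, without assuming any Kummer structure.\n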
\n\nAs noted in \\cite[Lemma 2.1]{ghys1995}, each leaf of the foliations $W^\\pm$ \ngenerated by $E^\\pm$ (respectively) is a holomorphic manifold. \nIn particular, unstable vector fields are holomorphic along the \nunstable manifolds, and it thus remains to show their holomorphicity \nalong the transverse direction. \n(A similar claim may be made for stable vector fields.)\n\nThe trick is to use the \\emph{Poincar\\'e map}, as described in \n\\cite[\\S{}III.3]{mane1987}. \nFor a generic Poincar\\'e map $\\phi$ between local holomorphic \nunstable manifolds, we consider the commutator $[D\\phi,I]$, a map $T_xX\\to T_{\\phi(x)}X$ \ndefined in a fashion similar to \\eqref{eqn:2.1}:\n\\[[D\\phi,I]_x = D_x\\phi\\circ I_x - I_{\\phi(x)}\\circ D_x\\phi.\\]\nIf $[D\\phi,I]=0$ can be shown, then the Poincar\\'e maps \nare holomorphic, which gives the desired claim.\n\nThis setup is encoded in the following\n\\begin{proposition}\n \\label{lem:17}\n Let $U,U'$ be unstable local manifolds, not intersecting one another, \n and close enough to induce a Poincar\\'e map $\\phi\\colon U\\to U'$. \n Then $[D\\phi,I]_x=0$, for all $x\\in U$.\n\\end{proposition}\n\n\\subsection{On a Foliated Chart}\n\nFix a foliated chart $(V,\\mathbf{x})$ for both $E^\\pm$. \nThis is possible because $E^\\pm$ are both generated by some \n$C^\\infty$ vector fields. \nNow suppose $x$ and $\\phi(x)$ are in $V$. \nThen neighborhoods of $x\\in U$ and $\\phi(x)\\in U'$ are laid along the\ncoordinate directions for $E^-$, \nand hence $\\phi$ near $x$ is described as a coordinate shift.\nIt follows that $D\\phi$ is represented by the identity matrix\non $(V,\\mathbf{x})$.\n\nMoreover, because $I$ is a family of maps $T_{\\R,y}X\\to T_{\\R,y}X$, \nthe foliated coordinates on $V$ give it a matrix representation \n$I_y\\in\\mathrm{GL}(4n,\\R)$ for each $y\\in V$.\nCollecting these remarks, we conclude as follows.\n\n\\begin{lemma}\n \\label{lem:18}\n Let $U,U'$ be unstable local manifolds as in Proposition \\ref{lem:17}, \n and suppose $x\\in U$ is such that $x,\\phi(x)\\in V$.\n For the matrix representation $\\{I_y\\}_{y\\in V}$ of the \n complex structure $I$ on $V$, we then have\n \\begin{equation}\n \\label{eqn:8.1}\n [D\\phi,I]_x = I_x - I_{\\phi(x)}.\n \\end{equation}\n\\end{lemma}\n\nFurthermore, shrinking $V$ if necessary, we assume that the family \n$\\{I_y\\}_{y\\in V}$ satisfies the Lipschitz condition, i.e.,\n\\begin{equation}\n \\label{eqn:8.2}\n \\|I_p-I_q\\|\\leq C\\cdot\\mathrm{dist}(p,q).\n\\end{equation}\n\n\\subsection{From Ergodicity}\n\nSince $(V,\\mathbf{x})$ is placed at an arbitrary location in $X$, \nwe cannot expect $x,\\phi(x)\\in V$ in general. \nNow as discussed in \\S{\\ref{subsec:ergodicity}}, ${f}\\colon X\\to X$ has the volume as an ergodic measure. We claim the following.\n\\begin{lemma}\n \\label{lem:19}\n Let $U,U'$ be unstable local manifolds as in Proposition \\ref{lem:17}. Then for $\\mathrm{vol}$-a.e. $x$, there are infinitely many $N$ such that ${f}^N(x),{f}^N\\phi(x)\\in V$.\n\\end{lemma}\n\\begin{proof}\n Let $V'\\Subset V$ be a nonempty precompact open subset.\n Let $\\epsilon=\\inf_{y\\in V'}\\mathrm{dist}(y,\\partial V)$, a positive number.\n\n By the uniform contraction of $E^-$ along ${f}$, we have\n \\begin{equation}\n \\label{eqn:8.3}\n \\mathrm{dist}_{E^-}({f}^N(x),{f}^N\\phi(x))\\lesssim_{{f},\\omega_0} e^{-Nh\/2}\\mathrm{dist}_{E^-}(x,\\phi(x)),\n \\end{equation}\n where $\\mathrm{dist}_{E^-}$ is the distance measured along the stable leaves. 
\n As $\\mathrm{dist}(p,q)\\leq\\mathrm{dist}_{E^-}(p,q)$, \n this shows that $\\mathrm{dist}({f}^N(x),{f}^N\\phi(x))<\\epsilon$ \n for sufficiently large $N\\geq N_0$. \n In particular, if ${f}^N(x)\\in V'$ for some $N\\geq N_0$, then \n ${f}^N(x),{f}^N\\phi(x)\\in V$.\n \n Now because $V'$ is a nonempty open set, it has a nonzero volume. \n By the Birkhoff ergodic theorem, we thus conclude that there are infinitely many\n $N$ such that ${f}^N(x)\\in V'$, for $\\mathrm{vol}$-a.e. $x$. The lemma then follows.\n\\end{proof}\n\n\\subsection{Future Estimates}\n\nTo show Proposition \\ref{lem:17}, we use the trick of `sending to the future,' \nas commonly seen in \\cite[Theorem 2.2]{ghys1995} and \n\\cite[Theorem III.3.1]{mane1987}.\nThe trick starts from the following split:\n\\begin{equation}\n \\label{eqn:8.4}\n [D\\phi,I]_x=D_{{f}^N\\phi(x)}{f}^{-N}\\circ [D({f}^N\\phi {f}^{-N}),I]_{{f}^N(x)}\\circ D_x{f}^N.\n\\end{equation}\nWe then estimate each factor:\n$(D{f}^{-N}|T{f}^N(U'))=D{f}^{-N}|E^+$, $(D{f}^N|TU)=D{f}^N|E^+$, \nand $[D({f}^N\\phi {f}^{-N}),I]$.\n\n\\begin{itemize}\n \\item For $D{f}^{-N}|E^+$, we recall that $D{f}^{-1}$ is (under $\\omega_0$)\n uniformly contracting $E^+$ with the rate $e^{-h\/2}$. Applying this, we get\n \\begin{equation}\n \\label{eqn:8.5}\n \\|D{f}^{-N}|E^+\\|\\lesssim_{{f},\\omega_0}e^{-Nh\/2}.\n \\end{equation}\n\n \\item For $D{f}^N|E^+$, we recall that $D{f}$ is (under $\\omega_0$)\n uniformly expanding $E^+$ with the rate $e^{h\/2}$. Applying this, we get\n \\begin{equation}\n \\label{eqn:8.6}\n \\|D{f}^N|E^+\\|\\lesssim_{{f},\\omega_0}e^{Nh\/2}.\n \\end{equation}\n\n \\item Finally, for $[D({f}^N\\phi {f}^{-N}),I]$, we pick $N$ such that\n ${f}^N(x),{f}^N\\phi(x)\\in V$, by Lemma \\ref{lem:19}. (Note that this may be done only for $\\mathrm{vol}$-a.e. $x$.)\n \n Note that ${f}^N\\phi {f}^{-N}$ is a Poincar\\'e map ${f}^N(U)\\to {f}^N(U')$.\n Thus Lemma \\ref{lem:18} applies to give,\n \\begin{align}\n \\left\\|[D({f}^N\\phi{f}^{-N}),I]_{{f}^N(x)}\\right\\| &= \\left\\|I_{{f}^N(x)} - I_{{f}^N\\phi(x)}\\right\\| \\nonumber \\\\\n &\\leq C\\cdot\\mathrm{dist}({f}^N(x),{f}^N\\phi(x)). \\nonumber\n \\intertext{Using $\\mathrm{dist}\\leq\\mathrm{dist}_{E^-}$ and \\eqref{eqn:8.3}, we further estimate,}\n &\\leq C\\cdot\\mathrm{dist}_{E^-}({f}^N(x),{f}^N\\phi(x)) \\nonumber \\\\\n &\\leq C\\cdot e^{-Nh\/2}\\mathrm{dist}_{E^-}(x,\\phi(x)). \\label{eqn:8.7}\n \\end{align}\n\\end{itemize}\n\nCombining all three estimates \\eqref{eqn:8.5}, \\eqref{eqn:8.6}, and \n\\eqref{eqn:8.7}, we obtain, in \\eqref{eqn:8.4},\n\\[\\|[D\\phi,I]_x\\|\\leq C_{{f},\\omega_0}\\cdot e^{-Nh\/2}\\mathrm{dist}_{E^-}(x,\\phi(x)),\\]\nwhenever $N$ satisfies ${f}^N(x),{f}^N\\phi(x)\\in V$.\nFor $\\mathrm{vol}$-a.e. $x$, there are infinitely many such $N$'s; sending $N\\to\\infty$, \nwe have $[D\\phi,I]_x=0$ for $\\mathrm{vol}$-a.e. $x$. Appealing to the continuity of $x\\mapsto[D\\phi,I]_x$, we prove Proposition \\ref{lem:17}.\n\n\\section{Flatness}\n\\label{sec:flatness}\n\nThanks to the holomorphicity of the stable and unstable foliations, we have the following flatness result. 
We note here that the proof below is shorter than that in previous work, e.g., \\cite[Proposition 3.2.1]{FT18}.\n\n\\begin{proposition}\n \\label{lem:20}\n The metric $\\omega_0$ on $X\\setminus E$ is flat.\n\\end{proposition}\n\\begin{proof}\n Because $E^+$ and, by an analogous proof with time reversed, $E^-$ \n are both holomorphic, a standard differential geometry argument builds \n a holomorphic coordinate $(w_1,\\cdots,w_{2n})$ such that\n\\begin{itemize}\n \\item $E^+=\\bigcap_{i=1}^n\\ker(dw_i)$, and\n \\item $E^-=\\bigcap_{j=n+1}^{2n}\\ker(dw_j)$.\n\\end{itemize}\n\nMoreover, it is easy to check that $E^-$ and $E^+$\nare orthogonal under $\\omega_0$, as follows. For $v\\in E^-$ and $w\\in E^+$, we get\n\\[e^{-h}\\omega_0(v,w)={f}^\\ast\\omega_{0}(v,w)=-{f}^\\ast\\omega_{0}(w,v)=-e^{h}\\omega_0(w,v)=e^{h}\\omega_0(v,w),\\]\nand thus $\\omega_0(v,w)=0$. Therefore, one can rewrite $\\omega_0$ as\n\\[\\omega_0 = \\frac{\\sqrt{-1}}{2}\\left[\\sum_{i,j=1}^n a_{i\\overline{j}}\\, dw_i\\wedge d\\overline{w}_j + \\sum_{k,\\ell=n+1}^{2n}b_{k\\overline{\\ell}}\\, dw_k\\wedge d\\overline{w}_\\ell\\right],\\]\nwith some positive-definite matrix-valued functions\n$(a_{i\\overline{j}})$ and $(b_{k\\overline{\\ell}})$.\n\nThat $d\\omega_0=0$ then implies that the\n$w_{n+1},\\overline{w}_{n+1},\\cdots,w_{2n},\\overline{w}_{2n}$-derivatives of $a_{i\\overline{j}}$ vanish and the\n$w_1,\\overline{w}_1,\\cdots,w_n,\\overline{w}_n$-derivatives of $b_{k\\overline{\\ell}}$ vanish.\nConsequently, $\\omega_0$ splits completely as\n\\[\\omega_0 = \\frac{\\sqrt{-1}}{2}\\omega_0^-(w_1,\\cdots,w_n) + \\frac{\\sqrt{-1}}{2}\\omega_0^+(w_{n+1},\\cdots,w_{2n}).\\]\n\nIn short, the metric $\\omega_0$ decomposes as $\\omega_0=\\omega_0^-\\times\\omega_0^+$ (locally).\nNow consider the Levi-Civita connection $\\nabla$ of the metric $\\omega_0$.\nThis connection satisfies the following:\n\\begin{enumerate}\n \\item $\\nabla\\Omega=0$.\nAs the $\\omega_k$'s satisfy this, and $\\nabla\\Omega=0$ is expressed in terms of the Christoffel\nsymbols of the metric, the convergence $\\omega_k\\to\\omega_0$ in $C^\\infty_{\\mathrm{loc}}$\ncertifies this for $\\omega_0$ as well.\n \\item $\\nabla E^+\\subset E^+$ and $\\nabla E^-\\subset E^-$.\nThis follows from the local product structure of $\\omega_0=\\omega_0^-\\times \\omega_0^+$, \nwhere each $\\omega^\\pm_0$ is supported on $E^\\pm$, respectively.\n \\item For vector fields $Z^+$ on $E^+$ and $Z^-$ on $E^-$, the following holds:\n \\begin{align*}\n \\nabla_{Z^-}Z^+ &= p^+([Z^-,Z^+]), \\\\\n \\nabla_{Z^+}Z^- &= p^-([Z^+,Z^-]),\n \\end{align*}\n where $p^\\pm$ denote the parallel projections $E^-\\oplus E^+\\to E^\\pm$.\nThis follows from the torsion-free property\n$[Z^-,Z^+]=\\nabla_{Z^-}Z^+-\\nabla_{Z^+}Z^-$,\nas well as $\\nabla E^\\pm\\subset E^\\pm$ verified above.\n\\end{enumerate}\n\nAccording to \\cite[Lemme 3.4.4]{BFL92}, a connection satisfying all three properties above\nis unique. Now by the proof of \\cite[Lemme 2.2.3(b)]{BFL92},\nwe get that $\\nabla$ is a flat connection, i.e., $\\omega_0$ itself is flat on $X\\setminus E$.\n(The cited theorems are all local; thus the non-compactness of\n$X\\setminus E$ does not matter here.)\n\\end{proof}\n\n\\section{Proof of Theorem \\ref{lem:00}}\n\\label{sec:upshots-of-flatness}\n\nNow we are almost ready to prove Theorem \\ref{lem:00}. 
One of the key facts required for the Theorem is the following\n\\begin{theorem}[{\\cite[Theorem D]{claudon2020kahler}}]\n \\label{lem:21}\n Let $Y$ be a complex normal space which is compact and K\\\"ahler.\n Suppose $Y$ has klt singularities.\n If $T|Y_{\\mathrm{reg}}$ is a flat sheaf\n (cf. \\cite[Definition 2.3]{claudon2020kahler}), then\n $Y$ is a quotient of a complex torus\n by a finite group acting freely in codimension one.\n\\end{theorem}\n\nIn what follows, we introduce the normal space $Y$ to which the above\nTheorem \\ref{lem:21} will be applied.\nMorally, it is constructed by contracting $E\\subset X$ via a contraction\n$\\phi\\colon X\\to Y$, and this construction requires\n$X$ to be projective (which is also the only place where we use projectivity).\n\n\\subsection{Construction of the Contraction}\n\nThis section aims to prove the following\n\\begin{proposition}\n \\label{lem:contraction-construct}\n There exists a contraction $\\phi\\colon X\\to Y$ such that $Y$ is a normal projective variety, and its regular locus $Y_{\\mathrm{reg}}$ is the image of $X\\setminus E$. Moreover, $Y$ has canonical singularities, and carries a K\\\"ahler current $\\phi_\\ast\\omega_0$ on $Y_{\\mathrm{reg}}$ that is a flat metric on $Y_{\\mathrm{reg}}$.\n\\end{proposition}\n\nThe proof extensively uses the fact that $X$ is projective. As a preparation, we first show that the eigenclasses $[\\eta_+]$ and $[\\eta_-]$ are in fact $(1,1)$-classes of nef Cartier $\\R$-divisors. Appealing to the projectivity of $X$, fix an ample class $[A]$. Then by Corollary \\ref{lem:spectral-sequence}, as $n\\to\\infty$,\n\\[\\frac{\\lambda^{-n}}{2q([A],[\\eta_-])}({f}^n)^\\ast[A]\\to[\\eta_+];\\quad \\frac{\\lambda^{-n}}{2q([A],[\\eta_+])}({f}^{-n})^\\ast[A]\\to[\\eta_-].\\]\n\nTherefore, $\\alpha=[\\eta_+]+[\\eta_-]$ is also (the class of) a Cartier $\\R$-divisor, which is big and nef. Note that $\\alpha$ is not ample; otherwise its null locus $E$ would be empty, and by Proposition \\ref{lem:20}, $X$ would be a compact flat manifold. The only such manifolds are tori \\cite{BieberbachI}\\cite{BieberbachII}, which contradicts the simple connectedness of $X$.\n\nThe first step of proving Proposition \\ref{lem:contraction-construct} is to construct the contraction $\\phi$. This is essentially done by \\cite[Theorem A]{BCL14}, but generalized to $\\R$-Cartier classes. We present it in the following\n\\begin{lemma}\n \\label{lem:22}\n Let $\\alpha$ be the $(1,1)$-class of a big and nef Cartier $\\R$-divisor, and $E$ be its null locus \\cite{CT15}. Then one can construct a contraction $\\phi\\colon X\\to Y$ such that $X\\setminus E$ is the maximal Zariski open subset that $\\phi$ maps isomorphically onto its image. (That is, $E=\\mathrm{Exc}(\\phi)$.)\n\\end{lemma}\n\\begin{proof}\nDenote by $\\mathrm{Amp}(X)$ and $\\mathrm{Big}(X)$ the cones of $(1,1)$-classes of ample and big Cartier $\\R$-divisors, respectively. By Kawamata's Rational Polyhedral Theorem \\cite[Theorem 5.7]{Kaw88}, the face $F$ of the cone $\\mathrm{Big}(X)\\cap\\overline{\\mathrm{Amp}}(X)$ on which $\\alpha$ lies is represented by a rational linear equation. Consequently, one can write $\\alpha=\\sum_{\\mathrm{finite}}a_i c_1(L_i)$ where each $a_i>0$ and each $L_i$ is a big and nef line bundle with $c_1(L_i)\\in F$.\n\nBecause each $L_i$ is big and nef, by basepoint-free theorems \\cite[Theorem 3.9.1]{BCHM}\\cite[Theorem 1.3]{Kaw88}, it is semiample. 
Because all $L_i$'s lie on the same face of the big and nef cone $\\mathrm{Big}(X)\\cap\\overline{\\mathrm{Amp}}(X)$, the images of the morphisms\n\\[\\Phi_{mL_i}\\colon X\\to\\Pp H^0(X,mL_i)\\]\nare isomorphic to each other for $m\\gg 0$ (cf. \\cite[Definition 3-2-3]{KMM}).\n\nFor each $L_i$, denote its augmented base locus by $E_i$, denote the image of $\\Phi_{mL_i}$ by $Y_i$, and write $\\phi_i:=\\Phi_{mL_i}|Y_i$ for the induced map.\n\nWe claim that $E_i=E_j$. Fix an isomorphism $\\psi\\colon Y_i\\to Y_j$ such that $\\psi\\circ\\phi_i=\\phi_j$. By \\cite[Theorem A]{BCL14}, the complement $X\\setminus E_i$ of the locus is characterized as the maximal Zariski open subset that $\\Phi_{mL_i}$ sends isomorphically onto its image. Composing $\\psi$ with $\\Phi_{mL_i}$, we thus see that $X\\setminus E_i$ is sent isomorphically onto its image via $\\Phi_{mL_j}$. Consequently, $X\\setminus E_i\\subset X\\setminus E_j$. Arguing symmetrically, we have the claim.\n\nFix a bundle $L_i$ and denote $\\phi:=\\phi_i$. An upshot of the above paragraph is that $\\phi$ is a contraction such that $X\\setminus E_i$ is the maximal Zariski open subset that $\\phi$ maps isomorphically onto its image.\n\nWe claim that $E=E_i$. Let $L=\\sum L_i$. Denote by $E'$ the augmented base locus of $L$. Then we have (i) $E'=E_i$, by the same argument as for $E_i=E_j$, and (ii) $E'\\subset E\\subset E_i$, by the following. (By \\cite[Corollary 1.2]{CT15}, it suffices to compare the null loci.)\n\\begin{description}\n \\item[($E\\subset E_i$)] Fix any subvariety $V\\subset X$ with $\\int_V\\alpha^{\\dim V}=0$. Since $\\alpha=\\sum a_i c_1(L_i)$ and the multinomial expansion consists of nonnegative terms, we obtain $\\int_V c_1(L_i)^{\\dim V}=0$. Thus $V\\subset E_i$, and $E\\subset E_i$ follows.\n \\item[($E'\\subset E$)] For any subvariety $V\\subset X$ with $\\int_V\\left(\\sum c_1(L_i)\\right)^{\\dim V}=0$, expanding with the multinomial theorem into nonnegative terms gives $\\int_V \\prod c_1(L_i)^{e_i}=0$ whenever $\\sum e_i=\\dim V$. As $\\alpha=\\sum a_ic_1(L_i)$, again by the multinomial theorem, we have $\\int_V\\alpha^{\\dim V}=0$. This shows $E'\\subset E$.\n\\end{description}\nCombining (i) and (ii) we have $E=E_i$, as required.\n\\end{proof}\n\nDefine the \\emph{exceptional set} $\\mathrm{Exc}(\\phi)$ as the minimal Zariski closed subset $E'\\subset X$ such that $X\\setminus E'$ is mapped isomorphically onto its image by $\\phi$. With this definition, Lemma \\ref{lem:22} states precisely that $E=\\mathrm{Exc}(\\phi)$. This set is, by the inverse function theorem, the same as the set of $x\\in X$ at which $D_x\\phi$ fails to be invertible.\n\n\\begin{proof}[Proof of Proposition \\ref{lem:contraction-construct}]\nConstruct the contraction map $\\phi\\colon X\\to Y$ by Lemma \\ref{lem:22}, for $\\alpha=[\\eta_+]+[\\eta_-]$. Then $Y_{\\mathrm{reg}}=\\phi(X\\setminus\\mathrm{Exc}(\\phi))=\\phi(X\\setminus E)$ follows.\n\nRecall that there is a flat metric $\\omega_0$ on $X\\setminus E$ (Proposition \\ref{lem:20}). Pushing this forward to $Y_{\\mathrm{reg}}$, we obtain a K\\\"ahler current $\\phi_\\ast\\omega_0$ on $Y_{\\mathrm{reg}}$ which is also a flat metric.\n\nTo see why $Y$ has canonical singularities, we use the remark in \\cite[Remark 1]{wierzba}. 
As $\\phi$ is a hyperk\\\"ahler resolution, $\\phi$ is crepant, i.e., $\\phi^\\ast K_Y=K_X$.\n\\end{proof}\n\n\\subsection{Proof of Theorem \\ref{lem:00}}\n\nWhat was claimed about $Y$ in Proposition \\ref{lem:contraction-construct} suffices to apply \n\\cite[Theorem D]{claudon2020kahler}. \nThe final piece is the following\n\n\\begin{lemma}[Flat Connection implies Flat Sheaf]\n \\label{lem:24}\n Let $X$ be a connected complex manifold, and let $E\\to X$ be a \n rank $r$ vector bundle that admits a flat connection $\\nabla$. \n Then $E$ is flat in the sense of \\cite[Definition 2.3]{claudon2020kahler}.\n\\end{lemma}\n\\begin{proof}\n According to \\cite[Definition 2.3]{claudon2020kahler}, \n we set up our goal as follows. \n We need to find a linear representation $\\rho\\colon\\pi_1(X)\\to\\mathrm{GL}(r,\\C)$ \n such that $E\\to X$ is isomorphic to the bundle \n $(\\widetilde{X}\\times\\C^r)\/\\pi_1(X)\\to X$,\n where $\\pi_1(X)$ acts diagonally.\n\n To define $\\rho$, we fix $x$, and set the holonomy map \n $\\mathrm{hol}_x\\colon\\pi_1(X)\\to\\mathrm{Hol}_x(\\nabla)\/\\mathrm{Hol}^0_x(\\nabla)$,\n $\\gamma\\mapsto P_\\gamma$. Here, $P_\\gamma\\colon E_x\\to E_x$ is the parallel transport map along the loop $\\gamma$,\n \\[\\mathrm{Hol}_x(\\nabla)=\\{P_\\gamma\\in\\mathrm{GL}(E_x)\\mid \\gamma\\text{ is a loop based at }x\\},\\]\n and $\\mathrm{Hol}^0_x(\\nabla)$ denotes its identity component.\n But because $\\nabla$ is flat, $\\mathrm{Hol}^0_x(\\nabla)=1$.\n Moreover, as $\\mathrm{Hol}_x(\\nabla)\\leq\\mathrm{GL}(E_x)=\\mathrm{GL}(r,\\C)$, \n the map $\\mathrm{hol}_x$ defines the desired representation \n $\\rho\\colon\\pi_1(X)\\to\\mathrm{GL}(E_x)=\\mathrm{GL}(r,\\C)$.\n\n Pick an open cover $\\{U_\\alpha\\}$ of $X$ such that each open set trivializes $E$\n and is evenly covered by the universal cover $\\widetilde{X}$. \n Fix a lift $\\widetilde{U}_\\alpha\\subset\\widetilde{X}$, \n for each $U_\\alpha\\subset X$. \n Now if $U_\\alpha\\cap U_\\beta\\neq\\varnothing$, one can find a unique \n $\\gamma_{\\alpha\\beta}\\in\\pi_1(X)$ such that any point \n $\\widetilde{y}\\in\\widetilde{U}_\\alpha$\n (that projects into $U_\\alpha\\cap U_\\beta$) is deck transformed to \n $\\widetilde{y}.\\gamma_{\\alpha\\beta}\\in\\widetilde{U}_\\beta$. \n Now by the way the holonomy map is defined, the transition map \n $E|U_\\beta\\to E|U_\\alpha$ is described as \n $\\mathrm{hol}_x(\\gamma_{\\alpha\\beta})$. \n But this is the same as the transition map of \n $(\\widetilde{X}\\times\\C^r)\/\\pi_1(X)$.\n This shows the isomorphism in question.\n\\end{proof}\n\nBecause $\\phi_\\ast\\omega_0$ on $Y$ produces a flat connection on \n$T|Y_{\\mathrm{reg}}$, Lemma \\ref{lem:24} gives the flatness required in \n\\cite[Theorem D]{claudon2020kahler}.\nTherefore $Y$ is a torus quotient; that is,\nthere exist a complex torus $\\mathbb{T}=\\C^{2n}\/\\Lambda$ and a finite group\nof toral isomorphisms $\\Gamma$\nsuch that $Y=\\mathbb{T}\/\\Gamma$.\n\nTo show that ${f}$ is induced from a hyperbolic linear transform, \nrecall that $\\phi$ isomorphically sends an open subset \n$U\\subset X\\setminus E$ to an open subset $V\\subset Y$.\nConjugating ${f}$ via $\\phi$, we then have a map \n$\\widetilde{f}\\colon V\\to Y$. \nSince $V\\supset Y_{\\mathrm{reg}}$, this $\\widetilde{f}$ lifts to a rational map \n$\\mathbb{T}\\dashrightarrow\\mathbb{T}$, defined in codimension 1. \nThe only such map is affine-linear \\cite[Lemma 1.25]{LoB17}, \nand this descends to a morphism $\\widetilde{f}\\colon Y\\to Y$. 
\nThis verifies the desired classification of ${f}$, \nand finishes the proof of Theorem \\ref{lem:00}.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{INTRODUCTION}\n\\IEEEPARstart{L}{egged}\nrobots offer the possibility of negotiating challenging environments and, thus, are versatile platforms for various types of terrains~\\cite{bellicoso2018jfr}. In research and industry, there is an emphasis on replicating nature to improve the hardware design and algorithmic approach of robotic systems~\\cite{eckert2015comparing,nyakatura2019reverse}. Even with extensive research, matching the locomotion skills of conventional legged robots to their natural counterparts remains elusive. In contrast, wheels offer a chance to extend some capabilities, particularly speed, of these legged robotic systems beyond those of their natural counterparts, which can be crucial for any task requiring rapid and long-distance mobility skills in challenging environments. With this motivation, the central contribution of this work involves locomotion planning on a wheeled-legged robot to perform dynamic hybrid\\footnote{In our work, \\emph{hybrid} locomotion denotes simultaneous walking and driving.} walking-driving motions on various terrains, as shown in Fig.~\\ref{fig:anymal_on_wheels}.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\columnwidth]{best_picture_3.png}\n \\caption{The fully torque-controlled quadrupedal robot ANYmal~\\cite{hutter2017anymaljournal} equipped with four non-steerable, torque-controlled wheels. The robot is traversing over a wooden plank (top images), on rough terrain (left middle image). In addition, the robot rapidly maps, navigates and searches dynamic underground environments at the \\ac{DARPA} Subterranean Challenge (lower images), and the robot's wheels are equipped with chains to traverse the muddy terrain (right middle image). A video can be found at \\href{https:\/\/youtu.be\/ukY0vyM-yfY}{https:\/\/youtu.be\/ukY0vyM-yfY}.}\n \\label{fig:anymal_on_wheels}\n\\end{figure}\n\\subsection{Related Work}\nThe online generation of optimal solutions for dynamic motions has been an active research area for conventional legged robots. Methods like \\ac{TO} and \\ac{MPC} are prevalent and recommended in the literature for aiding robots to be reactive against external disturbances and modeling errors. Finding control policies for performing walking motions in an articulated mobile robot is an involved task because of the system's many \\ac{DOF} and its nonlinear dynamics. This demands substantial computational power and introduces the challenge of overcoming local minima, making on-the-fly computations hard.\n\nIn the literature concerning wheeled-legged robots, hybrid walking-driving motions are scarce. The focus is mostly on statically-stable driving motions where the legs are used for active suspension alone~\\cite{reid2016actively,giordano2009kinematic,cordes2014active,giftthaler2017efficient,suzumura2014real,grand2010motion}. These applications do not show any instance of wheel lift-offs. Hence, sophisticated motion planning for the wheels is unnecessary and, therefore, usually skipped.\n\nAgile motions over steps and stairs are demonstrated for the first time in our previous work~\\cite{bjelonic2019keep}, where a hierarchical \\ac{WBC} tracks the motion trajectories that include the rolling conditions associated with the wheels. 
The robot can execute walking and driving motions, but not simultaneously, due to missing wheel trajectories over a receding horizon. As such, the robot needs to stop and switch to a pure walking mode to overcome obstacles. The work in \\cite{viragh2019trajectory} extends the approach by computing base and wheel trajectories in a single optimization framework. This approach, however, decreases the update rate to \\unit[50]{Hz}, and no hybrid walking-driving motions are shown on the real robot.\n\n\\emph{CENTAURO}, a wheeled-legged quadruped with a humanoid upper-body, performs a walking gait with automatic footstep placement using a linear \\ac{MPC} framework~\\cite{laurenzi2018quadrupedal}. The authors, however, only perform walking maneuvers without making use of the wheels. In contrast, the path planner in \\cite{klamt2018planning} shows driving and walking motions in simulation without considering the robot's dynamics. Among the robots that employ hybrid walking-driving motions, Jet Propulsion Laboratory's (JPL) \\emph{Robosimian} uses a \\ac{TO} framework~\\cite{bellegarda2019trajectory}, but with passive wheels, and results are only shown in simulation. \\emph{Skaterbots}~\\cite{geilinger2018skaterbots} provide a generalized approach to motion planning by solving a \\ac{NLP} problem. This approach, however, is impractical to update online in a receding horizon fashion, i.e., in a \\ac{MPC} fashion, due to excessive computational demand.\n\nGiven the state of the art, we notice a research gap in trajectory generation methods for hybrid walking-driving motions on legged robots with actuated wheels, which are both robust on various terrains and can be used on-the-fly. Fortunately, research in traditional legged locomotion offers solutions to bridge this gap. The quadrupedal robot \\emph{ANYmal} (without wheels) performs highly dynamic motions using \\ac{MPC}~\\cite{bellicoso2017dynamic,grandia2019feedback} and \\ac{TO}~\\cite{winkler2018gait,carius2019trajectory} approaches. Impressive results are shown by \\emph{MIT Cheetah}, which performs blind locomotion over stairs~\\cite{di2018dynamic} and jumps onto a desk with a height of \\unit[0.76]{m}~\\cite{nguyen2019optimized}. The quadrupedal robot \\emph{HyQ} shows an online, dynamic foothold adaptation strategy based on visual feedback~\\cite{magana2019fast}. Therefore, we conjecture that extending these approaches to wheeled-legged systems can aid in producing robust motions.\n\\subsection{Contribution}\nIn our work, we present an online \\ac{TO} framework for wheeled-legged robots capable of running in a \\ac{MPC} fashion by breaking the problem down into separate wheel and base \\ac{TO}s. The former takes the rolling constraints of the wheels into account, while the latter accounts for the robot's balance during locomotion using the idea of the \\ac{ZMP}~\\cite{vukobratovic2004zero}. A hierarchical \\ac{WBC}~\\cite{bjelonic2019keep} tracks these motions by computing torque commands for all joints. Our \\emph{hybrid locomotion framework} extends the capabilities of wheeled-legged robots in the following ways:\n\n1) Our framework is versatile across a wide variety of gaits, such as pure driving, statically stable gaits, dynamically stable gaits, and gaits with full-flight phases.\n\n2) We generate wheel and base trajectories for hybrid walking-driving motions on the order of milliseconds. Thanks to these fast update rates, the resulting motions are robust against unpredicted disturbances, making real-world deployment of the robot feasible. 
Likewise, we demonstrate the performance of our system at the \\ac{DARPA} Subterranean Challenge, where the robot autonomously maps, navigates and searches dynamic underground environments.\n\\section{MOTION PLANNING}\n\\label{sec:motionPlanning}\nThe whole-body motion planner is based on a task synergy approach~\\cite{farshidian2017planning}, which decomposes the optimization problem into wheel and base \\ac{TO}s. By breaking down the problem into these two tasks, we hypothesize that the issue of locomotion planning for high-dimensional (wheeled-)legged robots becomes more tractable. The optimization can be solved in real-time in a \\ac{MPC} fashion, and with high update rates, the locomotion can cope with unforeseen disturbances.\n\nThe main idea behind our approach is visualized in Fig.~\\ref{fig:motion_planner_overview}. Given a fixed gait pattern and the reference velocities\\footnote{The reference velocities are generated from an external source, e.g., an operator device, or a navigation planner.} \\ac{w.r.t.} the robot's base frame $B$ as shown in Fig.~\\ref{fig:wheel_trajectory}, i.e., the linear velocity vector of its \\ac{COM} $\\bm{v}_{\\mathrm{ref}}$ and the angular velocity vector $\\bm{\\omega}_{\\mathrm{ref}}=\\begin{bmatrix}0 & 0 & \\omega_{\\mathrm{ref}}\\end{bmatrix}^T$, desired motion plans are generated in two steps, where the wheel \\ac{TO} is followed by a base \\ac{TO} which satisfies the \\ac{ZMP}~\\cite{vukobratovic2004zero} stability criterion. The latter simplifies the system dynamics for motion planning of the \\ac{COM} to enable real-time computations onboard. Finally, a controller tracks these motion plans by generating torque commands which are sent to the robot's motor drives. Due to this decomposition of the locomotion problem, the wheel \\ac{TO}, the base \\ac{TO}, and the tracking controller can run in parallel.\n\nThe following two sections discuss the main contribution of our work and show how the locomotion of the independent wheel and base \\ac{TO}s are synchronized to generate feasible motion plans.\n\\begin{figure}[t!]\n \n \\centering\n \\includegraphics[width=\\columnwidth]{motion_planner_overview}\n \\caption{Overview of the motion planning and control structure. The motion planner is based on a \\ac{ZMP} approach, which takes into account the optimized wheel trajectories and the state of the robot. The hierarchical \\ac{WBC}, which optimizes the whole-body accelerations $\\bm{\\dot{u}^*}$ and contact forces $\\bm{\\dot{\\lambda}^*}$, tracks the operational space references. Finally, torque references $\\bm{\\tau}$ are sent to the robot. The wheel \\ac{TO}, base \\ac{TO}, and \\ac{WBC} can be parallelized due to the hierarchical structure.}\n \\label{fig:motion_planner_overview}\n\\end{figure}\n\\section{WHEEL TRAJECTORY OPTIMIZATION}\n\\label{sec:wheelTrajectoryOptimization}\n\\begin{figure}[t!]\n \n \\centering\n \\includegraphics[width=0.9\\columnwidth]{trajectories}\n \\caption{Timings and coordinate frames. The figure shows a sketch of the wheel and base trajectory. The wheel trajectories are optimized for each of the wheels separately and \\ac{w.r.t.} the coordinate frame $W$ whose $z$-axis is aligned with the estimated terrain normal, and whose $x$-axis is perpendicular to the estimated terrain normal and aligned with the rolling direction of the wheel. The origin of $W$ is at the projection of the wheel's axis center on the terrain. 
We show exemplarily the wheel trajectory of the right front leg over a time horizon of one stride duration, which is composed of four splines. The lift-off time $t_{\\mathrm{lo}}$, the time at maximum swing height $t_{\\mathrm{sh}}$, the touch-down time $t_{\\mathrm{td}}$, and the time horizon $t_\\mathrm{f}$ are specified by a fixed gait pattern. The base trajectories are optimized \\ac{w.r.t.} the coordinate frame $B$ whose origin is located at the robot's \\ac{COM}, and whose orientation is equal to that of the frame $W$.}\n \\label{fig:wheel_trajectory}\n\\end{figure}\nWe formulate the task of finding the wheel trajectories, i.e., the $x$, $y$ and $z$ trajectories \\ac{w.r.t.} a wheel coordinate frame $W$ as illustrated in Fig.~\\ref{fig:wheel_trajectory}, as a separate \\ac{QP} problem for each of the wheels given by\n\\begin{equation}\n\\label{eq:quadraticProgram}\n\\begin{aligned}\n& \\underset{\\bm{\\xi}}{\\text{minimize}}\n& & \\frac{1}{2} \\bm{\\xi}^T \\bm{Q} \\bm{\\xi} + \\bm{c}^T \\bm{\\xi}, \\\\\n& \\text{subject to}\n& & \\bm{A} \\bm{\\xi} = \\bm{b}, \\ \\bm{D}\\bm{\\xi} \\leq \\bm{f},\n\\end{aligned}\n\\end{equation}\nwhere $\\bm{\\xi}$ is the vector of optimization variables. The quadratic objective $\\frac{1}{2} \\bm{\\xi}^T \\bm{Q} \\bm{\\xi} + \\bm{c}^T \\bm{\\xi}$ is minimized while respecting the linear equality $\\bm{A} \\bm{\\xi} = \\bm{b}$ and inequality $\\bm{D}\\bm{\\xi} \\leq \\bm{f}$ constraints. In the following, the parameterization of the optimization variable is presented, and we introduce each of the objectives, equality constraints and inequality constraints which form the optimization problem.\n\\subsection{Parameterization of Optimization Variables}\nWe describe the wheel trajectories as a sequence of connected splines. In our implementation, one spline is allocated for each of the two segments where the wheel is in contact with the ground, and two splines are used for describing the trajectory of the wheels in the air. Therefore, the total number of splines for one gait sequence is $n_{\\mathrm{s}}=4$ (see Fig.~\\ref{fig:wheel_trajectory}). These two types of trajectory segments, i.e., corresponding to leg in the air and contact, are defined by different parameterizations as described next.\n\\subsubsection{Wheel segments in air}\n\\label{sec:wheel_segments_in_air}\nWe parameterize each coordinate of the wheel trajectory in air as quintic splines. Thus, the position vector at spline segment $i$ is described by\n\\begin{equation}\n\\bm{r}(t) = \n\\begin{bmatrix}\n\\bm{\\eta}^T(t)& \\bm{0}_{1\\times6}& \\bm{0}_{1\\times6} \\\\\n\\bm{0}_{1\\times6}& \\bm{\\eta}^T(t)&\\bm{0}_{1\\times6} \\\\\n\\bm{0}_{1\\times6}&\\bm{0}_{1\\times6}&\\bm{\\eta}^T(t)\n\\end{bmatrix}\n\\begin{bmatrix}\n\\bm{\\alpha}_{i,x} \\\\\n\\bm{\\alpha}_{i,y} \\\\\n\\bm{\\alpha}_{i,z}\n\\end{bmatrix} =\n\\bm{T}(t)\\bm{\\xi}_{i},\n\\end{equation}\nwhere $\\bm{\\eta}^T(t)=\\begin{bmatrix}t^5 & t^4 & t^3 & t^2 & t & 1\\end{bmatrix}$ and $\\bm{\\alpha}_{i,*}\\in\\mathbb{R}^6$ contains the polynomial coefficients. Here, $t \\in [\\bar{t}_i,\\bar{t}_i+\\Delta t_i]$ describes the time interval of spline $i$ with a duration of $\\Delta t_i$, where $\\bar{t}_i$ is the sum of all the previous $(i-1)$ splines' durations (see the example of the fourth spline in Fig.~\\ref{fig:wheel_trajectory}). 
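\nAs an illustration of this parameterization, the following minimal Python sketch evaluates the position, velocity, and acceleration of one in-air segment from its $18$ stacked polynomial coefficients (collected in the vector $\\bm{\\xi}_{i}$ introduced below). It is an illustration only; the array layout and the helper names are our own and not part of the implementation described in this work.\n\\begin{verbatim}\nimport numpy as np\n\ndef quintic_basis(t):\n    # basis eta(t) = [t^5, t^4, t^3, t^2, t, 1] and its first\n    # and second time derivatives\n    eta   = np.array([t**5, t**4, t**3, t**2, t, 1.0])\n    deta  = np.array([5*t**4, 4*t**3, 3*t**2, 2*t, 1.0, 0.0])\n    ddeta = np.array([20*t**3, 12*t**2, 6*t, 2.0, 0.0, 0.0])\n    return eta, deta, ddeta\n\ndef eval_air_segment(xi, t):\n    # xi stacks the x, y, z polynomial coefficients of one segment\n    eta, deta, ddeta = quintic_basis(t)\n    coeffs = xi.reshape(3, 6)     # rows correspond to x, y, z\n    r      = coeffs @ eta         # position, equals T(t) xi\n    r_dot  = coeffs @ deta        # velocity\n    r_ddot = coeffs @ ddeta       # acceleration\n    return r, r_dot, r_ddot\n\nxi = np.random.randn(18)          # one in-air segment, for example\nr, v, a = eval_air_segment(xi, t=0.1)\n\\end{verbatim}\nExpressing the evaluation through the block-diagonal time matrix $\\bm{T}(t)$ in this way keeps the objectives quadratic and the constraints linear in the coefficients, which is why the overall problem in (\\ref{eq:quadraticProgram}) remains a \\ac{QP}.\n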
We seek to optimize the polynomial coefficients for all coordinates of spline segment $i$ and hence contain them in the vector $ \\bm{\\xi}_{i} = \\begin{bmatrix} \\bm{\\alpha}_{i,x}^T & \\bm{\\alpha}_{i,y}^T & \\bm{\\alpha}_{i,z}^T \\end{bmatrix}^T \\in \\mathbb{R}^{18}$.\n\\subsubsection{Wheel segments in contact}\nAs shown in our previous work~\\cite{viragh2019trajectory}, we employ a different parameterization for wheel segments in contact, such that they inherently capture the velocity constraints corresponding to the no-lateral-slip of the wheel. For this purpose, we represent the wheel's velocity in the $x$ coordinate of $W$, i.e., the rolling direction, as a quadratic polynomial. In contrast, the velocities of the remaining directions are set to zero. Thus, the velocity vector of the $i$-th spline is\n\\begin{equation} \\label{eq:wheelSegmentVelocity}\n\\dot{\\bm{r}}(t) = \n\\begin{bmatrix}\n1 & t & t^2 \\\\\n0 & 0 & 0 \\\\\n0 & 0 & 0\n\\end{bmatrix}\n\\begin{bmatrix}\n\\alpha_{i,0} \\\\\n\\alpha_{i,1} \\\\\n\\alpha_{i,2}\n\\end{bmatrix},\n\\end{equation}\nand the position vector is obtained by integrating \\ac{w.r.t.} $t$ and adding the initial position $x_{i}(\\bar{t}_i)$ and $y_{i}(\\bar{t}_i)$ of the trajectory as\n\\begin{equation}\\label{eq:wheelSegmentSpline}\n\\bm{r}(t) = \n\\begin{bmatrix}\nx_{i}(\\bar{t}_i) \\\\\ny_{i}(\\bar{t}_i) \\\\\n0\n\\end{bmatrix} + \n\\int\\limits_{\\bar{t}_i}^{\\bar{t}_i+\\Delta t_i} \\bm{R}(t \\bm{\\omega}_{\\mathrm{ref}}) \\dot{\\bm{r}}(t) \\mathrm{d}t = \\bm{T}(\\omega_{\\mathrm{ref}},t) \\bm{\\xi}_i,\n\\end{equation}\nwhere the rotation matrix $\\bm{R}(t \\bm{\\omega}_{\\mathrm{ref}})$ describes the change in the wheel's orientation caused by the reference yaw rate, i.e., the vector $\\bm{\\omega}_{\\mathrm{ref}}=\\begin{bmatrix} 0 & 0 & \\omega_{\\mathrm{ref}} \\end{bmatrix}^T$. By assuming a constant reference yaw rate $\\omega_{\\mathrm{ref}}$ over the optimization horizon, the integration is solved analytically, giving a linear expression $\\bm{r}(t) = \\bm{T}(\\omega_{\\mathrm{ref}},t) \\bm{\\xi}_i$ \\ac{w.r.t.} the coefficients $\\bm{\\xi}_i= \\begin{bmatrix} \\alpha_{i,0} & \\alpha_{i,1} & \\alpha_{i,2} & x_{i}(\\bar{t}_i) & y_{i}(\\bar{t}_i) \\end{bmatrix}^T$. 
Thus, the velocity and acceleration trajectories of spline $i$ are described by $\\dot{\\bm{r}}(t)=\\dot{\\bm{T}}(\\omega_{\\mathrm{ref}},t) \\bm{\\xi}_i$ and $\\ddot{\\bm{r}}(t)=\\ddot{\\bm{T}}(\\omega_{\\mathrm{ref}},t) \\bm{\\xi}_i$, respectively.\n\\subsection{Formulation of Trajectory Optimization}\nTo achieve robust locomotion, we deploy an online \\ac{TO} which is executed in a \\ac{MPC} fashion, i.e., the optimization is continuously re-evaluated providing a motion over a time horizon of $t_{\\mathrm{f}}$ seconds, where $t_{\\mathrm{f}}$ can be chosen as the stride duration of the locomotion gait.\n\nThe complete \\ac{TO} of the wheel trajectories is formulated as a \\ac{QP} problem as follows, \n\\begin{equation}\n\\label{eq:trajectoryOptimization}\n\\begin{aligned}\n& \\underset{\\bm{\\xi}}{\\text{min.}}\n&& \\frac{1}{2} \\bm{\\xi}^T \\bm{Q}_{\\mathrm{acc}} \\bm{\\xi} & \\begin{matrix} \\text{acc-} \\\\ \\text{eleration} \\end{matrix}& \\\\\n&&& \\begin{matrix*}[l] + \\sum\\limits_{k=1}^{N} \\norm{\\bm{r}(t_k) - \\bm{r}_{\\mathrm{pre}}(t_k+t_{\\mathrm{pre}})}^{2}_{\\bm{W}_{\\mathrm{pre}}} \\Delta t \\\\ \\forall t \\in [0,t_{\\mathrm{f}}] \\end{matrix*} & \\begin{matrix} \\text{previous} \\\\ \\text{solution} \\end{matrix}& \\\\\n&&& \\text{if leg in contact:} \\\\ \n&&& \\hspace*{3mm} + \\norm{\\dot{\\bm{r}}(0)-\\bm{v}_{\\mathrm{ref}}}^{2}_{\\bm{W}_{\\mathrm{ref}}} & \\begin{matrix} \\text{reference} \\\\ \\text{velocity} \\end{matrix}& \\\\\n&&& \\hspace*{3mm} \\begin{matrix*}[l]+ \\sum\\limits_{k=1}^{N} \\norm{r_{x}(t_k)- r_{x,\\mathrm{def}}}^{2}_{w_{\\mathrm{def}}} \\Delta t \\\\ \\forall t \\in [\\bar{t}_i,\\bar{t}_i+\\Delta t_i] \\end{matrix*} & \\begin{matrix} \\text{default} \\\\ \\text{position} \\end{matrix}& \\\\\n&&& \\text{if leg in air:} \\\\\n&&& \\hspace*{3mm} + \\norm{\\bm{r}_{xy}(t_{\\mathrm{td}})-\\bm{r}_{xy,\\mathrm{ref}}-\\bm{r}_{xy,\\mathrm{inv}}}^{2}_{\\bm{W}_{\\mathrm{fh}}} & \\begin{matrix} \\text{foothold} \\\\ \\text{projection} \\end{matrix}& \\\\\n&&& \\hspace*{3mm} + \\norm{r_{z}(t_{\\mathrm{sh}}) - z_{\\mathrm{sh}}}^{2}_{w_{\\mathrm{sh}}}, & \\begin{matrix} \\text{swing} \\\\ \\text{height} \\end{matrix}& \\\\\n& \\text{s.t.}\n&& \\bm{r}(0) = \\bm{r}_{\\mathrm{init}}, \\dot{\\bm{r}}(0) = \\dot{\\bm{r}}_{\\mathrm{init}}, \\ddot{\\bm{r}}(0) = \\ddot{\\bm{r}}_{\\mathrm{init}}, & \\begin{matrix} \\text{initial} \\\\ \\text{state} \\end{matrix}& \\\\\n&&& \\begin{matrix*}[l] \\begin{bmatrix*}[l] \\bm{r}_{i}(\\bar{t}_i+\\Delta t_i) \\\\ \\dot{\\bm{r}}_{i}(\\bar{t}_i+\\Delta t_i) \\\\ \\ddot{\\bm{r}}_{i}(\\bar{t}_i+\\Delta t_i) \\end{bmatrix*} = \\begin{bmatrix*}[l] \\bm{r}_{i+1}(\\bar{t}_{i+1}) \\\\ \\dot{\\bm{r}}_{i+1}(\\bar{t}_{i+1}) \\\\ \\ddot{\\bm{r}}_{i+1}(\\bar{t}_{i+1}) \\end{bmatrix*}, \\\\ \\forall i \\in [0,n_\\mathrm{s}-1], \\end{matrix*} & \\begin{matrix} \\text{spline} \\\\ \\text{continuity} \\end{matrix}& \\\\\n&&& \\begin{matrix*}[l] \\begin{bmatrix*}[l] \\abs{r_{x}(t)-r_{x,\\mathrm{def}}} \\\\ \\abs{r_{y}(t)-r_{y,\\mathrm{def}}} \\\\ \\abs{r_{z}(t)-r_{z,\\mathrm{def}}} \\end{bmatrix*} < \\begin{bmatrix*}[l] x_{\\mathrm{kin}} \\\\ y_{\\mathrm{kin}} \\\\ z_{\\mathrm{kin}} \\end{bmatrix*}, \\\\ \\forall t \\in [0,t_{\\mathrm{f}}], \\end{matrix*} & \\begin{matrix} \\text{kinematic} \\\\ \\text{limits} \\end{matrix}& \\\\ \n\\end{aligned}\n\\end{equation}\nwhere each element is described in more detail in the following sections.\n\\subsection{Objectives}\n\\subsubsection{Acceleration minimization}\nThe acceleration $\\ddot{\\bm{r}}$ of the 
entire wheel trajectory is minimized to generate smooth motions and to regularize the optimization problem. The cost term for a wheel in air over the time duration $\\Delta t_i$ of spline $i$ is given by\n\\begin{equation}\n\\frac{1}{2}\\bm{\\xi}_i^T \\bigg(\\underbrace{2\\int_{\\bar{t}_i}^{\\bar{t}_i+\\Delta t_i}\\ddot{\\bm{T}}^T(t)\\bm{W}_{i,\\mathrm{acc}} \\ddot{\\bm{T}}(t) \\mathrm{d}t}_{\\bm{Q}_{i,\\mathrm{acc}}} \\bigg)\\bm{\\xi}_i,\n\\end{equation}\nwhere $\\bm{Q}_{i,\\mathrm{acc}} \\in \\mathbb{R}^{18 \\times 18}$ is the hessian matrix, and $\\bm{W}_{i,\\mathrm{acc}} \\in \\mathbb{R}^{3 \\times 3}$ is the corresponding weight matrix. Here, the linear term of (\\ref{eq:quadraticProgram}) is null, i.e., $\\bm{c}_{i,\\mathrm{acc}}=\\bm{0}_{18\\times1}$. Similar, for a spline segment $i$ in contact, the hessian matrix, $\\bm{Q}_{i,\\mathrm{acc}} \\in \\mathbb{R}^{5 \\times 5}$, is obtained by squaring and integrating the acceleration of the wheel trajectory over the time duration $\\Delta t_i$. The time matrix $\\bm{T}(\\omega_{\\mathrm{ref}},t)$, and hence, $\\bm{Q}_{i,\\mathrm{acc}}$ is dependent on the reference yaw rate as discussed in (\\ref{eq:wheelSegmentSpline}).\n\\subsubsection{Minimize deviations from previous solution}\nFor a \\ac{TO} with high update rates, large deviations between successive solutions can produce quivering motions. To avoid this, we add a cost term that penalizes deviations of kinematic states between consecutive solutions. We penalize the position deviations between the optimization variables from the current solution $\\bm{\\xi}$ and the previous solution $\\bm{\\xi}_{\\mathrm{pre}}$ as\n\\begin{equation} \\label{eq:minPrevSolution}\n\\sum\\limits_{k=1}^{N} \\norm{\\bm{r}(t_k) - \\bm{r}_{\\mathrm{pre}}(t_k+t_{\\mathrm{pre}})}^{2}_{\\bm{W}_{\\mathrm{pre}}} \\Delta t, \\hspace*{2mm} \\forall t \\in [0,t_{\\mathrm{f}}],\n\\end{equation}\nwhere $\\bm{r}_{\\mathrm{pre}}(t_k+t_{\\mathrm{pre}})$ is the position vector of the wheel from the previous solution shifted by the elapsed time $t_{\\mathrm{pre}}$ since computing the last solution, and $\\bm{W}_{\\mathrm{pre}} \\in \\mathbb{R}^{3 \\times 3}$ is the corresponding weight matrix. This cost is penalized over the time horizon $t_{\\mathrm{f}}$ with $N$ sampling points, where $t_k$ is the time at time step $k$ and $\\Delta t = t_{k} - t_{k-1}$. Objectives for minimizing velocity and acceleration deviations are added in a similar formulation.\n\\subsubsection{Track reference velocity of wheels in contact} \\label{sec:trackReferenceVel}\nAs shown in (\\ref{eq:wheelSegmentVelocity}), the velocity along the rolling direction of the wheel trajectory is described by a quadratic polynomial which inherently satisfies the no-slip constraint. 
To track the reference velocity $\\bm{v}_{\\mathrm{ref}}$, we minimize the norm $\\norm{\\dot{r}_x(0)-v_{x,\\mathrm{ref}}}^{2}_{w_{\\mathrm{ref}}}$ which gives\n\\begin{equation}\n\\frac{1}{2} \\bm{\\xi}_i^T \\underbrace{(2w_{\\mathrm{ref}}\\bm{\\Gamma}^T \\bm{\\Gamma})}_{\\bm{Q}_{i,\\mathrm{ref}}} \\bm{\\xi}_i + \\underbrace{(-2w_{\\mathrm{ref}}v_{x,\\mathrm{ref}}\\bm{\\Gamma})}_{\\bm{c}^T_{i,\\mathrm{ref}}}\\bm{\\xi}_{i}, \n\\end{equation}\nwhere $\\bm{\\Gamma} = \\begin{bmatrix} 1 & 0 & 0 \\end{bmatrix}\\dot{\\bm{T}}(\\omega_{\\mathrm{ref}},0)$.\n\\subsubsection{Minimize deviations from default wheel positions}\nWhen a wheel is in contact, differences in heading velocities of the wheels and the base can lead to configurations where the corresponding leg can get extended in the forward or backward direction. To guide the optimizer towards solutions within a desired leg configuration, we minimize the distance of the wheel from a default position $r_{x,\\mathrm{def}}$ along the rolling direction $x$ as\n\\begin{equation} \\label{eq:minDefWheelPos}\n\\sum\\limits_{k=1}^{N} \\norm{r_{x}(t_k)- r_{x,\\mathrm{def}}}^{2}_{w_{\\mathrm{def}}} \\Delta t, \\hspace*{2mm} \\forall t \\in [\\bar{t}_i,\\bar{t}_i+\\Delta t_i],\n\\end{equation}\nwhere $w_{\\mathrm{def}}$ is the corresponding weight, and the sampling over the $i$-th contact segment's time duration $\\Delta t_i$ is the same as shown in the paragraph below (\\ref{eq:minPrevSolution}). \n\\subsubsection{Foothold projection}\nThe placement of the wheel after a swing phase is crucial for hybrid locomotion (and for legged locomotion in general) because it contributes to maintaining balance and reacting to external disturbances. As shown in (\\ref{eq:trajectoryOptimization}), the cost term to guide the foothold placement is given by $\\norm{\\bm{r}_{xy}(t_{\\mathrm{td}})-\\bm{r}_{xy,\\mathrm{ref}}-\\bm{r}_{xy,\\mathrm{inv}}}^{2}_{\\bm{W}_{\\mathrm{fh}}}$, where $\\bm{W}_{\\mathrm{fh}}\\in\\mathbb{R}^{2\\times2}$ is the weight matrix, and $t_{\\mathrm{td}}=\\bar{t}_i+\\Delta t_i$ is the touchdown time of spline segment $i$ in air, i.e., at the end of the spline in air representing the second half of the swing phase (see Fig.~\\ref{fig:wheel_trajectory}). The subscript $xy$ indicates that only footholds on the terrain plane are considered, i.e., the $z$ component is given by the height of the terrain estimation.\n\nThe position vector $\\bm{r}_{xy,\\mathrm{ref}}$ guides the locomotion depending on the reference velocity, which is composed of the linear velocity vector $\\bm{v}_{\\mathrm{ref}}$ and the angular velocity vector $\\bm{\\omega}_{\\mathrm{ref}}$, as\n\\begin{equation}\n\\begin{bmatrix}\\bm{r}_{xy,\\mathrm{ref}} \\\\ 0 \\end{bmatrix} = \\begin{bmatrix}\\bm{r}_{xy,\\mathrm{def}} \\\\ 0 \\end{bmatrix} + (\\bm{v}_{\\mathrm{ref}} + \\bm{\\omega}_{\\mathrm{ref}} \\times \\bm{r}_{BW_{xy}})\\Delta t_i,\n\\end{equation}\nwhere $\\bm{r}_{xy,\\mathrm{def}} \\in \\mathbb{R}^{2}$ is a specified default wheel position similar to (\\ref{eq:minDefWheelPos}), and $\\bm{r}_{BW_{xy}} \\in \\mathbb{R}^{3}$ is the position vector from the robot's \\ac{COM} to the projection of the measured wheel position $W$ onto the terrain plane.\n\nDecoupling the locomotion problem into wheel and base \\ac{TO}s requires an additional heuristic to maintain balance. 
Balancing is achieved by adding a feedback term to the foothold obtained from reference velocities, through an inverted pendulum model~\\cite{gehring2016practice,raibert1986legged} given by\n\\begin{equation} \\label{eq:inverted_pendulum}\n\\bm{r}_{\\mathrm{inv}} = k_{\\mathrm{inv}}(\\bm{v}_{BH,\\mathrm{ref}} - \\bm{v}_{BH})\\sqrt{\\frac{h}{g}}, \n\\end{equation}\nwhere $\\bm{v}_{BH,\\mathrm{ref}} \\in \\mathbb{R}^3$ and $\\bm{v}_{BH} \\in \\mathbb{R}^3$ are the reference and the measured velocity between the associated hip and base frame, respectively. Here, $h$ is the height of the hip above the ground, $g$ represents the gravitational acceleration, and $k_\\mathrm{inv}$ is the gain for balancing.\n\\subsubsection{Swing height}\nSimilar to the objective in Section~\\ref{sec:trackReferenceVel}, we guide the wheel \\ac{TO} to match a predefined height. The objective $\\norm{r_{z}(t_{\\mathrm{sh}}) - z_{\\mathrm{sh}}}^{2}_{w_{\\mathrm{sh}}}$ given in (\\ref{eq:trajectoryOptimization}) can be expanded, with a weight of $w_{\\mathrm{sh}}$, to\n\\begin{equation}\n\\frac{1}{2}\\bm{\\xi}^T_i \\underbrace{(2w_\\mathrm{sh} \\bm{\\Gamma}^T \\bm{\\Gamma})}_{\\bm{Q}_{i,\\mathrm{sh}}}\\bm{\\xi}_{i} + \\underbrace{(-2w_{\\mathrm{sh}} z_{\\mathrm{sh}}\\bm{\\Gamma})}_{\\bm{c}^T_{i,\\mathrm{sh}}}\\bm{\\xi}_{i}, \n\\end{equation}\nwith $\\bm{\\Gamma} = \\begin{bmatrix} 0 & 0 & 1 \\end{bmatrix} \\bm{T}(t_{\\mathrm{sh}})$, and $t_{\\mathrm{sh}}=\\bar{t}_i+\\Delta t_i$ is the time at maximum swing height of spline segment $i$ in air, i.e., at the end of the spline in air representing the first half of the swing phase (see Fig.~\\ref{fig:wheel_trajectory}).\n\nSimilarly, we set the $x$ and $y$ coordinates of the swing trajectory at maximum swing height to match the midpoint of lift-off and touch-down position. \n\\subsection{Equality Constraints}\n\\subsubsection{Initial states}\nTo achieve a reactive behaviour, every optimization is initialized with the current state of the robot. As discussed in (\\ref{eq:wheelSegmentSpline}), the initial position of the wheel segments in contact are set as equality constraints given by\n\\begin{equation}\n\\bm{T}(0)\\bm{\\xi}_{i} = \\begin{bmatrix}x_{\\mathrm{init}} & y_{\\mathrm{init}} & 0\\end{bmatrix}^T,\n\\end{equation}\nwhere the initial values $x_{\\mathrm{init}}$ and $y_{\\mathrm{init}}$ are the measured positions of the wheel. \n\nIf the optimization problem begins with a wheel trajectory in air, we set the initial position, velocity, and acceleration to the measured state of the wheels, i.e., $\\bm{r}(0) = \\bm{r}_{\\mathrm{init}}$, $\\dot{\\bm{r}}(0) = \\dot{\\bm{r}}_{\\mathrm{init}}$, and $\\ddot{\\bm{r}}(0) = \\ddot{\\bm{r}}_{\\mathrm{init}}$.\n\\subsubsection{Spline continuity}\nWe constrain the position, velocity and acceleration at the junction of two consecutive wheel trajectory segments $i$ and $i+1$ in air as\n\\begin{equation} \\label{eq:splineContinuity}\n\\begin{bmatrix}-\\bm{T}_{i}(\\bar{t}_i+\\Delta t_i) & \\bm{T}_{i+1}(\\bar{t}_{i+1})\\\\ -\\dot{\\bm{T}}_{i}(\\bar{t}_i+\\Delta t_i) & \\dot{\\bm{T}}_{i+1}(\\bar{t}_{i+1})\\\\\n-\\ddot{\\bm{T}}_{i}(\\bar{t}_i+\\Delta t_i) & \\ddot{\\bm{T}}_{i+1}(\\bar{t}_{i+1})\n\\end{bmatrix}\n\\begin{bmatrix} \\bm{\\xi}_{i} \\\\ \\bm{\\xi}_{i+1} \\end{bmatrix} = \n\\begin{bmatrix} \\bm{0}_{3\\times1} \\\\\\bm{0}_{3\\times1} \\\\\\bm{0}_{3\\times1} \\end{bmatrix}.\n\\end{equation}\n\nJunction constraints between air and contact phases are only formulated on position and velocity level. 
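\nTo make the assembly of these junction conditions concrete, the following minimal Python sketch builds the continuity rows of the linear equality constraint $\\bm{A}\\bm{\\xi}=\\bm{b}$ for two consecutive in-air segments. It is a sketch only: the helper names are ours, each segment uses its own local time (whereas the formulation above uses a common clock), and a full implementation would assemble the analogous rows for every junction of the four splines.\n\\begin{verbatim}\nimport numpy as np\n\ndef quintic_row(t, order):\n    # quintic basis [t^5, t^4, t^3, t^2, t, 1] and its derivatives\n    rows = [np.array([t**5, t**4, t**3, t**2, t, 1.0]),\n            np.array([5*t**4, 4*t**3, 3*t**2, 2*t, 1.0, 0.0]),\n            np.array([20*t**3, 12*t**2, 6*t, 2.0, 0.0, 0.0])]\n    return rows[order]\n\ndef time_matrix(t, order):\n    # block-diagonal T(t) stacking the x, y, z coordinates\n    return np.kron(np.eye(3), quintic_row(t, order))\n\ndef continuity_rows(dt_i, with_acceleration=True):\n    # junction between segment i (at its end time) and segment i+1\n    # (at its start); at air-contact junctions the acceleration\n    # rows are omitted, as discussed in the text that follows\n    orders = (0, 1, 2) if with_acceleration else (0, 1)\n    A = np.zeros((3*len(orders), 36))\n    for k, order in enumerate(orders):\n        A[3*k:3*k+3, :18] = -time_matrix(dt_i, order)\n        A[3*k:3*k+3, 18:] =  time_matrix(0.0, order)\n    return A, np.zeros(A.shape[0])\n\nA_eq, b_eq = continuity_rows(dt_i=0.25)   # air-air junction\n\\end{verbatim}\n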
Here, the acceleration is not constrained so that the optimizer accepts abrupt changes in accelerations, allowing lift-off and touch-down events.\n\\subsection{Inequality Constraints}\n\\subsubsection{Avoid kinematic limits}\nTo avoid over-extensions of the legs, we keep the wheel trajectories in a kinematic feasible space which is approximated by a rectangular cuboid centered around the default positions defined in (\\ref{eq:minDefWheelPos}). As introduced in (\\ref{eq:trajectoryOptimization}), the kinematic limits $x_{\\mathrm{kin}}$, $y_{\\mathrm{kin}}$, and $z_{\\mathrm{kin}}$ are enforced over the full time horizon $t_{\\mathrm{f}}$ as $ \\abs{r_{x}(t_k)-r_{x,\\mathrm{def}}} < x_{\\mathrm{kin}}$, $\\abs{r_{y}(t_k)-r_{y,\\mathrm{def}}}< y_{\\mathrm{kin}}$, $\\abs{r_{z}(t_k)-r_{z,\\mathrm{def}}} < z_{\\mathrm{kin}}$ , $\\forall k \\in [1,..,t_{\\mathrm{f}}\/\\Delta t],$ with a fixed sampling time $\\Delta t = t_{k} - t_{k-1}$ similar to (\\ref{eq:minPrevSolution}).\n\\section{BASE TRAJECTORY OPTIMIZATION}\n\\label{sec:baseTrajectoryOptimization}\nThe online \\ac{TO} of the base motion relies on a \\ac{ZMP}~\\cite{vukobratovic2004zero}-based optimization, which continuously updates reference trajectories for the free-floating base. Here, we extend the approach shown in our previous work~\\cite{bjelonic2019keep}, which originates from the motion planning problem of traditional legged robots~\\cite{bellicoso2017dynamic} and does not provide any optimized trajectories for the wheels\/feet over a receding horizon. Moreover, the work in~\\cite{bellicoso2017dynamic} only considers the optimization of the footholds. Given the wheel \\ac{TO} in (\\ref{eq:trajectoryOptimization}), we can generalize the idea of the \\ac{ZMP} to wheeled-legged systems taking into account the trajectories of the wheels over the time horizon $t_f$.\n\nAs shown in Figure~\\ref{fig:motion_planner_overview}, the motion planner of the free-floating base is described by a nonlinear optimization problem, which minimizes a nonlinear cost function $\\bm{f}(\\bm{\\xi})$ subjected to nonlinear equality $\\bm{c}(\\bm{\\xi})=\\bm{0}$ and inequality constraints $\\bm{h}(\\bm{\\xi})>\\bm{0}$. Here, the vector of optimization variables is composed of the position of the \\ac{COM} $\\bm{r}_{\\mathrm{COM}} \\in \\mathbb{R}^3$ and the yaw-pitch-roll Euler angles of the base $\\bm{\\theta} \\in \\mathbb{R}^3$.\n\\subsection{Parameterization of Optimization Variables and Formulation of Trajectory Optimization}\nThe trajectories for each \\ac{DOF} of the free-floating base is represented as a sequence of quintic splines, which allows setting position, velocity and acceleration constraints. Thus, the parameterization is formulated similarly to the definition of the wheel trajectories in air given in Section~\\ref{sec:wheel_segments_in_air}.\n\nThe online \\ac{TO} of the base has a similar structure as the \\ac{TO} described in (\\ref{eq:trajectoryOptimization}). Cost terms are added to maintain smooth motions and to track the reference velocity. The equality constraints initialize the variables with the current measured state of the base and add junction constraints between consecutive splines. For balancing, we add a \\ac{ZMP} inequality constraint, which is described in more detail in the next section, since this is the only part of the base optimization problem which is affected by the computed wheel trajectories in Section~\\ref{sec:wheelTrajectoryOptimization}. 
A complete list of all objectives and constraints can be found in~\\cite{bjelonic2019keep}.\n\\subsection{Generalization of ZMP Inequality Constraint}\nTo ensure dynamic stability of the robot, the acceleration of the \\ac{COM} must be chosen so that the \\ac{ZMP} position $\\bm{r}_{\\mathrm{ZMP}} \\in \\mathbb{R}^3$ lies inside the support polygon\\footnote{A support polygon is defined by the convex hull of the expected wheels' contact trajectories.}. This nonlinear inequality constraint is given by\n\\begin{equation} \\label{eq:zmp_constraint}\n\\begin{bmatrix}\np(t_k) & q(t_k) & 0\n\\end{bmatrix}\n \\bm{r}_{\\mathrm{ZMP}}(t_k)+r(t_k) \\geq 0, \\hspace*{2mm}\\forall t_k\\in[0,t_{\\mathrm{f}}]\n\\end{equation}\nwhere $\\bm{r}_{\\mathrm{ZMP}} = \\bm{n} \\times \\bm{m}_{\\mathrm{gi}}\/ (\\bm{n}^T \\bm{f}_{\\mathrm{gi}})$~\\cite{sardain2004forces} and $\\bm{n} \\in \\mathbb{R}^3$ is the terrain normal. The \\emph{gravito-inertial} wrench~\\cite{caron2017zmp} is given by $\\bm{f}_{\\mathrm{gi}} = m \\cdot (\\bm{g} - \\ddot{\\bm{r}}_{\\mathrm{COM}}) \\in \\mathbb{R}^3$ and $\\bm{m}_{\\mathrm{gi}} = m \\cdot \\bm{r}_{\\mathrm{COM}} \\times (\\bm{g}-\\ddot{\\bm{r}}_{\\mathrm{COM}}) - \\dot{\\bm{l}}_{\\mathrm{COM}} \\in \\mathbb{R}^3$, where $m$ is the mass of the robot, $\\bm{l}_{\\mathrm{COM}} \\in \\mathbb{R}^3$ is the angular momentum of the \\ac{COM}, and $\\bm{g} \\in \\mathbb{R}^3$ is the gravity vector. In contrast to \\cite{bellicoso2017dynamic,bjelonic2019keep}, the line coefficients $\\bm{d}(t)=[p(t) \\ q(t) \\ r(t)]^T$ that describe an edge of a support polygon depend on the time $t$, since the contact points of wheeled-legged robots continue to move even when a leg is in contact, unlike conventional legged robots. The \\ac{ZMP} inequality constraint is sampled over the time horizon $t_{\\mathrm{f}}$ with a fixed sampling time $\\Delta t = t_k - t_{k-1}$.\n\\section{EXPERIMENTAL RESULTS AND DISCUSSION}\n\\label{sec:experiments}\nTo validate the performance of our hybrid locomotion framework, this section reports on experiments and real-world applications conducted with ANYmal equipped with non-steerable, torque-controlled wheels (see Fig.~\\ref{fig:anymal_on_wheels}). A video\\footnote{\\hbox{Available at \\href{https:\/\/youtu.be\/ukY0vyM-yfY}{https:\/\/youtu.be\/ukY0vyM-yfY}}} showing the results accompanies this paper.\n\n\\subsection{Implementation}\nThe wheel \\ac{TO}, base \\ac{TO}, tracking controller, and state estimator run on a single PC (Intel i7-7500U, 2.7 GHz, dual-core 64-bit). All computation regarding the autonomy, i.e., perception, mapping, localization, path planning, path following, and object detection, is carried out by three different PCs. The robot is entirely self-contained in terms of computation and perception. As shown in Fig.~\\ref{fig:motion_planner_overview}, we run each wheel \\ac{TO}, the base \\ac{TO}, and the \\ac{WBC} in concurrent threads where each optimization reads the last available solutions from its predecessor. Moreover, all optimization problems are run online due to fast solver times.\n\nA hierarchical \\ac{WBC} tracks the computed trajectories in Section~\\ref{sec:wheelTrajectoryOptimization} and Section~\\ref{sec:baseTrajectoryOptimization} by generating torque commands for each actuator and accounting for the full rigid body dynamics including its physical constraints, e.g., the non-holonomic rolling constraint, friction cone, and torque limits~\\cite{bjelonic2019keep}. 
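\nAs a rough illustration of the prioritization idea behind such a hierarchical controller, the following Python sketch solves a stack of equality tasks in strict priority order via null-space projections. This is only a simplified stand-in with toy task matrices: the actual \\ac{WBC} of~\\cite{bjelonic2019keep} additionally handles inequality constraints such as friction cones and torque limits, which are omitted here.\n\\begin{verbatim}\nimport numpy as np\n\ndef prioritized_least_squares(tasks, n):\n    # tasks: list of (A, b) pairs, highest priority first; each task\n    # is solved in the null space of all higher-priority tasks\n    x = np.zeros(n)\n    N = np.eye(n)                # null-space projector so far\n    for A, b in tasks:\n        AN = A @ N\n        AN_pinv = np.linalg.pinv(AN)\n        x = x + N @ (AN_pinv @ (b - A @ x))\n        N = N @ (np.eye(n) - AN_pinv @ AN)\n    return x\n\nrng = np.random.default_rng(0)   # toy example, 6 decision variables\nA1, b1 = rng.standard_normal((2, 6)), rng.standard_normal(2)\nA2, b2 = rng.standard_normal((3, 6)), rng.standard_normal(3)\nx_opt = prioritized_least_squares([(A1, b1), (A2, b2)], n=6)\n\\end{verbatim}\n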
The \\ac{WBC} runs together with state estimation~\\cite{bloesch2018two} in a \\unit[400]{Hz} loop. Similar to~\\cite{bloesch2013stateconsistent}, we fuse the \\ac{IMU} reading and the kinematic measurements from each actuator to acquire the robot's state. Moreover, the frame $W$ in Fig.~\\ref{fig:wheel_trajectory} requires an estimate of the terrain normal. In this work, the robot is locally modeling the terrain as a three-dimensional plane, which is estimated by fitting a plane through the most recent contact locations~\\cite{bjelonic2019keep}. The contact state of each leg is determined through an estimation of the contact force, which takes into account the measurements of the motor drives and the full-rigid body dynamics.\n\nWe model and compute the kinematics and dynamics of the robot based on the open-source \\ac{RBDL}~\\cite{rbdl}, which uses the algorithms described in~\\cite{featherstone2014rigid}. The nonlinear optimization problem in Section~\\ref{sec:baseTrajectoryOptimization} is solved with a custom \\ac{SQP} algorithm, which solves the problem by iterating through a sequence of \\ac{QP} problems. Each \\ac{QP} problem including the optimization problem in Section~\\ref{sec:wheelTrajectoryOptimization} is solved using QuadProg++~\\cite{quadProg}, which internally implements the Goldfarb-Idnani active-set method~\\cite{goldfarb1983numerically}. To maintain a positive definite Hessian $\\bm{Q}$ in (\\ref{eq:quadraticProgram}) and to ensure the convexity of the resulting \\ac{QP} problem, a regularizer $\\rho$ is added to its diagonal elements, e.g., $\\rho = 10^{-8}$ as in~\\cite{bellicoso2017dynamic}. The tuning of the cost function in (\\ref{eq:trajectoryOptimization}) remains a manual task where a single value describes the diagonal elements of the weighting matrices, and one parameter set is provided for all motions shown next.\n\\subsection{Solver Time of Different Contact Scheduler and Gait Switching}\nAs shown in Table~\\ref{table:solver_time}, the wheel and base optimizations are solved in the order of milliseconds, and a great variety of gaits from driving, i.e., all legs in contact, up to gaits with full-flight phases are possible. Besides, the accompanying video shows manual gait switches between driving and hybrid walking-driving gaits, which can be useful for future works regarding automatic gait switches to reduce the \\ac{COT} further.\n\\begin{table}[b]\n \\caption{Time horizon $t_{\\mathrm{f}}$ and optimization times including model setup for different gaits. The reported solver times for wheel TO are for one wheel, and the hybrid running trot is a gait with full-flight phases.}\n \\label{table:solver_time}\n \\begin{center}\n \\begin{tabular}{cccc}\n \\toprule\n Gait &$t_{\\mathrm{f}}$ \/ (s) &Wheel \\ac{TO} \/ (ms) &Base \\ac{TO} \/ (ms) \\\\\\midrule\n Driving &1.7 &0.14 & 6.93 \\\\\n Hybrid walk &2.0 &0.81 & 14.83 \\\\\n Hybrid pace &0.95 &0.42 & 1.88 \\\\\n Hybrid trot &0.85 &0.47 & 2.4 \\\\ \n Hybrid running trot &0.64 &0.58 & 5.77\n \\\\\\bottomrule\n \\end{tabular}\n \\end{center}\n\\end{table}\n\\subsection{Rough Terrain Negotiation}\nThe robot is capable of blind locomotion in a great variety of unstructured terrains, e.g., inclines, steps, gravel, mud, and puddles. Fig.~\\ref{fig:anymal_on_wheels} and the accompanying video shows the performance of the robot in these kinds of environments. As depicted in Fig.~\\ref{fig:step_3d_plot}, the robot can overcome blindly steps up to \\unit[20]{\\%} of its leg length. 
This obstacle demonstrates the advantage of our hybrid locomotion framework. In contrast to the related work and our previous work~\\cite{bjelonic2019keep}, the robot traverses obstacles without stopping and switching to a pure walking motion. To the best of our knowledge, this is the first time a robot has demonstrated this level of obstacle negotiation at high speeds and with multiple gaits. Moreover, the locomotion becomes more robust since the framework accounts for possible motions on the ground. The accompanying video shows an instance where the wheel collides with the edge of a step. Our framework is capable of adapting to these scenarios by merely driving over the obstacle.\n\\subsection{High Speed and Cost of Transport}\nOn flat terrain, the robot achieves a mechanical \\ac{COT}~\\cite{bjelonic2018skating} of 0.2 while hybrid trotting at a speed of \\unit[2]{m\/s}, and the mechanical power consumption is \\unit[156]{W}. The \\ac{COT} is higher by a factor of two than that of a pure driving gait at the same speed. A comparison to traditional walking and skating with passive wheels~\\cite{bjelonic2018skating} shows that the \\ac{COT} is lower by \\unit[42]{\\%} \\ac{w.r.t.} the traditional trotting gait and by \\unit[9]{\\%} \\ac{w.r.t.} skating motions.\n\\subsection{DARPA Subterranean Challenge: Tunnel Circuit}\nThe first \\ac{DARPA} Subterranean Challenge, the Tunnel Circuit, was held close to Pittsburgh in the NIOSH mine. The main objective was to autonomously search for, detect, and provide spatially referenced locations of artifacts inside the underground mine. The wheeled version of ANYmal participated in two runs as part of the CERBERUS team~\\cite{cerberus} alongside flying and other mobile platforms. Moreover, the wheeled quadrupedal robot was deployed next to the traditional version of ANYmal without wheels.\n\nAs depicted in the lower images of Fig.~\\ref{fig:anymal_on_wheels}, the environment consisted of hilly, bumpy, and muddy terrain, and in some parts of the mine, the robot needed to cross puddles. Throughout both runs, the robot traversed the terrain with a hybrid trot. In the first run, the wheeled version of ANYmal traversed \\unit[70]{m} without significant issues, and the robot successfully reported the correct location of one artifact. In the end, however, one of the wheels started slipping on the muddy terrain before the robot fell. As can be seen in the accompanying video, the robot managed to balance after the first slip because of the foothold adaptation of the inverted pendulum model in (\\ref{eq:inverted_pendulum}). The mechanical design was improved after the first run by adding a chain around the wheels to increase the friction coefficient while traversing the mud (see the right middle image of Fig.~\\ref{fig:anymal_on_wheels}). Fig.~\\ref{fig:darpa_2d_plot} shows the desired trajectories of the \\ac{COM} and wheels for a few meters of the second run. Here, it can be seen that the robot executes a hybrid trotting gait since, during ground contact, the wheel moves along its rolling direction. Despite the challenging environment, the hybrid locomotion framework enabled the robot to travel for more than \\unit[100]{m}.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\columnwidth]{step_3d_plot.pdf}\n \\caption{Measured \\ac{COM} and wheel trajectories of ANYmal over a step while hybrid trotting, as depicted in the upper images of Fig.~\\ref{fig:anymal_on_wheels}. 
The three-dimensional plot shows the wheel trajectories of the front legs (red line), the wheel trajectories of the hind legs (blue line), and the \\ac{COM} trajectory (green line) \\ac{w.r.t.} the inertial frame, which is initialized at the beginning of the run.}\n \\label{fig:step_3d_plot}\n\\end{figure}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\columnwidth]{darpa_2d_plot.pdf}\n \\caption{Desired \\ac{COM} and wheel trajectories of ANYmal at the DARPA Subterranean Challenge. The robot, ANYmal, is autonomously locomoting with a hybrid driving-trotting gait during the second scoring run. The environment is a wet, inclined, muddy, and rough underground mine, as depicted in the lower images of Fig.~\\ref{fig:anymal_on_wheels}. Despite the challenging terrain, the robot manages to explore the mine fully autonomously for more than \\unit[100]{m}. The plots show the desired motions for approximately two stride durations. Due to the fast update rates of the \\ac{TO} problems and reinitialization of the optimization problem with the measured state, the executed trajectories are almost identical to the desired motion shown here.}\n \\label{fig:darpa_2d_plot}\n\\end{figure}\n\nDue to the time limitation of the challenge, the speed of mobile platforms becomes an essential factor. Most of the wheeled platforms fielded by the other competing teams were faster than our traditional legged robot by a factor of two or more. The upcoming Urban Circuit of the Subterranean Challenge includes stairs and other challenging obstacles. Therefore, we believe that only a wheeled-legged robot is capable of combining speed and versatility. At the Tunnel Circuit, the wheeled version of ANYmal traversed with an average speed of \\unit[0.5]{m\/s}, which was more than double the average speed of the traditional legged system. Our chosen speed was limited by the update frequency of our mapping approach; otherwise, the robot could have traversed the entire terrain at much higher speeds without any loss in agility. On the whole, the performance validation for real-world applications is satisfactory, and a direct comparison with the traditional ANYmal reveals the advantages of wheeled-legged robots.\n\\section{CONCLUSIONS}\n\\label{sec:conclusions}\nThis work presents an online \\ac{TO} generating hybrid walking-driving motions on a wheeled quadrupedal robot. The optimization problem is broken down into wheel and base trajectory generation. The two independent \\ac{TO}s are synchronized to generate feasible motions by time sampling the previously generated wheel trajectories, which form the support polygons of the \\ac{ZMP} inequality constraint of the base \\ac{TO}. The presented algorithm makes the locomotion planning for high-dimensional wheeled-legged robots more tractable, enables us to solve the problem in real time on board in an \\ac{MPC} fashion, and increases the robustness of the robot's locomotion against unforeseen disturbances.\n\nTo the best of our knowledge, this is the first time that a hybrid walking-driving robot has been deployed for real-world missions at one of the biggest robotics competitions. In future work, we plan to incorporate the optimization of the gait timings to enable automatic switching between pure driving and hybrid walking-driving. 
As shown in our work, an automated way of choosing when to lift a leg can increase the speed and robustness of the locomotion.\n\n\n \n \n \n \n \n\n\n\n\n\n\n\n\n\n\n\\balance\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{1}\nLet $B(n)$ denote the $n$-th Bell number and $S(n,k)$ denote the Stirling number of the second kind. Spivey \\cite{Spi} established combinatorially the following identity for Bell numbers\n\\begin{equation}\\label{spv}\nB(n+m) = \\sum_{k=0}^n \\sum_{j=0}^m j^{n-k} \\binom{n}{k} S(m,j) B(k)\\,.\n\\end{equation}\nVarious alternative proofs and extensions of this identity have appeared in the literature. For instance, Gould and Quaintance \\cite{GouQua} provided a generating function proof, which, in turn, was extended by Xu \\cite{Xu} in the case of Hsu and Shuie's \\cite{HsuShu} generalized Stirling numbers. Katriel \\cite{Kat} proved a $q$-analogue using certain $q$-differential operators. Still, Belbachir and Mihoubi \\cite{BelMih} proved \\ref{spv} using a decomposition of the Bell polynomial using a certain polynomial basis. See also \\cite[Theorem 10]{MaaMih}, \\cite[Theorem 5.3]{ManSchSha} and \\cite{Mez} for other generalizations and methods. The present authors also derived a generalization \\cite[Theorem 4.5]{CorCelGon} of Spivey's identity in the case of the generalized $q$-Stirling numbers that arise from normal ordering.\n\nIn this short paper, we derive variations of \\ref{spv} and other related identities using the aforementioned rook model which we describe in Section \\ref{2}. The results are presented in Section \\ref{3}. In Section \\ref{4}, we introduce a new rook model for the Type II generalized $q$-Stirling numbers of Remmel and Wachs \\cite{RemWac} which is a variation of Goldman and Haglund's. We show how the method used in deriving the earlier identities can be easily adapted under this model.\n\n\\section{The rook model}\\label{2}\nIn this section, we introduce a rook model based on Goldman and Haglund's \\cite{GolHag}. We will later show that these two models are essentially identical and give rise to the same rook numbers.\n\nLet $w$ be a word consisting of the letters $U$ and $V$. If we let $U$ correspond to a unit horizontal step and $V$ a unit vertical step, then $w$ outlines a Ferrers boards (or simply, board), which we denote by $B(w)$. For example, Figure \\ref{board} shows the board outlined by $UVUUUVVU$. Note that we allow the rightmost columns and the bottommost rows to be empty, i.e., not containing any cell. The initial $U$'s and final $V$'s, if any, are extraneous when we consider rook placements, but they are natural in the context of normal ordering (see \\cite{Var}). Alternatively, we can also describe a board by the length of its columns.\n\nGiven a board $B$, we associate to each cell of $B$ an integer called its \\emph{pre-weight}. Fix $s\\in\\mathbb R$. A placement of $k$ rooks on a board $B$ is a marking of $k$ cells of $B$ with ``$\\bullet$'' such that at most one rook is placed in each column. We assume that the rooks are placed from right to left among $k$ chosen column. Once a rook is placed on a cell, $s-1$ is added to the pre-weight of every cell to its left in the same row. However, if a cell lies above a rook (in which case it said to be \\emph{cancelled} by the rook), then the cell is assigned the pre-weight 0. 
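\n\nTo fix ideas, and anticipating the weight convention made precise below (a cell of pre-weight $t$ contributes $q^t$ when it carries no rook and $[t]_q=\\frac{q^t-1}{q-1}$ when it does), the resulting weight sums can also be computed without drawing any boards, directly from the recurrence they satisfy (Proposition \\ref{trr} below). The following sketch is ours and purely illustrative; it evaluates the recurrence symbolically and checks two classical specializations mentioned later in this section.\n\\begin{verbatim}\nfrom sympy import symbols, limit, simplify\n\nq = symbols('q')\n\ndef qint(x):\n    # q-integer [x]_q = (q**x - 1)\/(q - 1)\n    return (q**x - 1) \/ (q - 1)\n\ndef S(n, k, s):\n    # S_{s,q}[n,k] via the recurrence of Proposition trr (s numeric, q symbolic)\n    if n == 0:\n        return 1 if k == 0 else 0\n    if k == 0 or k > n:\n        return 0\n    return (q**(s*(n-1) - (s-1)*(k-1)) * S(n-1, k-1, s)\n            + qint(s*(n-1) - (s-1)*k) * S(n-1, k, s))\n\n# s = 0, q -> 1 recovers the Stirling numbers of the second kind: S(4,2) = 7\nprint(limit(simplify(S(4, 2, 0)), q, 1))\n# s = 1, q -> 1 recovers the unsigned Stirling numbers of the first kind: c(4,2) = 11\nprint(limit(simplify(S(4, 2, 1)), q, 1))\n\\end{verbatim}\n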
We denote by $C_k(B;s)$ the set of all placement of $k$ rooks on $B$ under the rules we just described.\n\nFor the purpose of distinguishing the pre-weights of a board before and after rooks are placed, we will call the former as \\emph{default pre-weights}. We number the rows and columns of a board $B$ from top to bottom and from right to left, respectively. The term pre-weight is chosen because it determines the weight of a board. Specifically, a cell with pre-weight $t$ is assigned the weight $q^t$ if it does not contain a rook, and $[t]_q=\\frac{q^t-1}{q-1}$ if it contains a rook. The weight of a rook placement $\\phi$, denoted by $\\wt(\\phi)$, is defined as the product of the weight of the cells.\n\nDenote by $J_n$ the board outlined by $(VU)^n$ whose cells have default pre-weight 1. Note that columns of $J_n$ have lengths $0, 1, \\ldots n-1$. Figure \\ref{rookplc} shows a rook placement on $J_5$ where the values in the cells indicate the pre-weights. This particular rook placement has weight $q^{2s+2}[s]_q$.\n\nWe denote by $S_{s,q}[n,k]$ the sum of the weights of all placements of $n-k$ rooks on $J_n$, i.e.,\n\\[\nS_{s,q}[n,k] = \\sum_{\\phi\\in C_{n-k}(J_n;s)} \\wt(\\phi)\\,.\n\\]\n\n\\begin{remark} If, for instance, a column has two cells where the bottom cell has pre-weight $c_1$ and the top cell has pre-weight $c_2$, then the placement of a rook on the bottom cell has weight $[c_1]_q$ while the placement of a rook on the top cell has weight $q^{c_1}[c_2]_q$. Hence, the total weight of all rook placements on this column is $[c_1]_q+q^{c_1}[c_2]_q=[c_1+c_2]_q$. In general, if $c$ is the sum of the pre-weights of the cells in a column, then the total weight of all rook placements on the column is $[c]_q$. \\label{ddd}\n\\end{remark}\n\n\\begin{proposition}\\label{trr} The number $S_{s,q}[n,k]$ satisfies the recurrence\n\\begin{equation*}\nS_{s,q}[n,k] = q^{s(n-1)-(s-1)(k-1)} S_{s,q}[n-1,k-1] + [s(n-1)-(s-1)k]_q S_{s,q}[n-1,k]\\,,\n\\end{equation*}\nwith initial conditions $S_{s,q}[n,0]=S_{s,q}[0,n]=\\delta_{0,n}$, where $\\delta_{0,n}=1$ if $n=0$ and 0 if $n\\neq 0$.\n\\begin{proof}If there is a rook on the $n$-th column, then the other columns form a rook placement from $C_{n-1-k}(J_{n-1};s)$. Due to the placement of $n-1-k$ rooks on the first $n-1$ columns, the $n$-th column has pre-weight $(n-1)+(s-1)(n-1-k)=s(n-1)-(s-1)k$. Hence, the total weight contributed by all rook placements on the $n$-th column is $[s(n-1)-(s-1)k]_q$.\n\nIf there is no rook on the $n$-th column, then the remaining columns form a rook placement from $C_{n-k}(J_{n-1};s)$. Because of the placement of $n-k$ rooks on the first $n-1$ columns, the $n$-th column has pre-weight $(n-1)+(s-1)(n-k)=s(n-1)-(s-1)(k-1)$, which implies that the $n$-th column contributes a weight of $q^{s(n-1)-(s-1)(k-1)}$.\n\\end{proof}\n\\end{proposition}\n\nBy comparing recurrences, we see that $h^{n-k}S_{s,q}[n,k]$ equals the number $\\mathfrak S_{s;h}(n,k|q)$ in \\cite{ManSchSha2}. These numbers are coefficients of the string $(VU)^n$ in its normally ordered form, i.e., an equivalent expression where the $V$'s are to the right of $U$'s obtained using the relation $UV=qVU+hV^s$. When $h=1,q=1$, the cases $s=1$ and $s=0$ produce the usual Stirling number of the first kind and Stirling number of the second kind, respectively. (See \\cite{ManSchSha2} for other special cases and their relationship with normal ordering.)\n\n\n\\begin{remark} The original Goldman-Haglund rook model is as follows (see \\cite[Section 7]{GolHag}). 
Let $B$ be a board and consider a rook placement $\\phi$ on $B$ such that no two rooks occupy the same column. Given a cell $\\gamma$ in $B$, let $v(\\gamma)$ be the number of rooks strictly to the right of, and in the same row as, $\\gamma$. Define the weight of $\\gamma$ to be\n\n\\begin{align*}\n\\wt(\\gamma) = \\begin{cases}\n 1 & \\mbox{if there is a rook below} \\\\ & \\hspace{0.3in}\\mbox{and in the same column as $\\gamma$,} \\\\\n [(s-1)v(\\gamma)+1]_q & \\mbox{if $\\gamma$ contains a rook,}\\\\\nq^{(s-1)v(\\gamma)+1}& \\mbox{else}\\,.\n\\end{cases}\n\\end{align*}\n\nThe weight of $\\phi$, denoted by $\\wt(\\phi)$, is then defined as the product of the weights of the cells. (Note that in \\cite{GolHag}, the boards are bottom and right justified, the rooks are placed from left to right and a rook cancels cells that lie below it. In addition, $\\alpha$ is used instead of $s$. The definition of $\\wt(\\gamma)$ above has been modified to reflect the definitions in the current paper.)\n\nIt can easily be seen that the weight of a rook placement is the same under the Goldman-Haglund model and under the current model. First, note that the quantity $(s-1)v(\\gamma)+1$ appearing in $\\wt(\\gamma)$ is precisely the pre-weight of $\\gamma$ under the current model. Under the latter, a cell lying above a rook has pre-weight 0 and hence, has weight $q^0=1$. A cell that contains a rook and has $t$ rooks lying to its right and in the same row will have pre-weight $t(s-1)+1$ and hence, weight $[t(s-1)+1]_q$. Finally, a cell that does not contain a rook and has $t$ rooks lying to its right and in the same row will have pre-weight $t(s-1)+1$ and thus, weight $q^{t(s-1)+1}$. Hence, given a rook placement $\\phi$, $\\wt(\\phi)$ is identical under both models.\n\\end{remark}\n\nThe concept of pre-weights provides us with some degree of flexibility in further generalizing the earlier model. For instance, instead of adding $s-1$ to the pre-weight of each cell to the left of a rook, we can fix a \\emph{rule} $R$ which specifies, given a placement of a rook in a cell, the cell (in each column to the left of the rook) that receives the pre-weight increment. Denote by $C^R_k(B;s)$ the set of placements of $k$ rooks on the board $B$ under this rule. In particular, we denote by $\\mathcal R$ the rule which specifies that the $i$-th rook (from the right) adds a pre-weight of $s-1$ to the $i$-th cell above the bottom cell in every column to its left. Of course, when the default rule is used, we simply use the notation $C_k(B;s)$.\n\nIn addition to specifying a rule $R$, we can also assign other values for the default pre-weights. This will be useful when we consider Type II $q$-analogues of Stirling numbers in Section \\ref{4}. For $\\alpha\\in\\mathbb R$, denote by $J'_{n,\\alpha}$ the board outlined by $(VU)^nV$ where the bottom cell in each column has default pre-weight $\\alpha$. Figure \\ref{rookplc2} shows one rook placement in $C^{\\mathcal R}_{3}(J'_{4,\\alpha};s)$, which has weight $q^{2s+2\\alpha}[\\alpha]_q^2[s]_q$.\n\n\\begin{proposition}\\label{prew} Given any rule $R$ and two boards $B_1$ and $B_2$ with the same number of non-empty columns such that the corresponding columns have identical total column pre-weights, we have $\\sum_{\\phi\\in C^{R}_k(B_1;s)} \\wt(\\phi) = \\sum_{\\phi\\in C^{R}_k(B_2;s)} \\wt(\\phi)$. 
Furthermore, given a board $B$ and two rules $R_1$ and $R_2$, $\\sum_{\\phi\\in C^{R_1}_k(B;s)} \\wt(\\phi) = \\sum_{\\phi\\in C^{R_2}_k(B;s)} \\wt(\\phi)$.\n\\begin{proof} We can establish, using the same reasoning as Remark \\ref{ddd}, that the sum of the weights of all possible rook placements on a particular column is completely determined by the column's total pre-weight, regardless of the distribution of the pre-weights of the individual cells. This proves the first statement. The second statement follows from the first statement and the fact that different rules applied on the same board result to boards with identical total column pre-weights.\n\\end{proof}\n\\end{proposition}\n\n\\section{Results}\\label{3}\nDefine the $n$-th generalized Bell polynomial by \\[B_{s,q}[n;x]=\\sum_{k=0}^n S_{s,q}[n,k] x^k\\,.\\] The $n$-th generalized Bell number, denoted by $B_{s,q}[n]$, is defined as $B_{s,q}[n;1]$. The $q$-binomial coefficients are given by $\\pq{n}{k}_q=\\frac{[n]_q!}{[k]_q![n-k]_q!}$, where $[n]_q!=[n]_q[n-1]_q\\cdots[2]_q[1]_q$. Clearly, $\\pq{n}{k}_q = \\pq{n}{n-k}_q$. They also satisfy the relations (see \\cite[Table 1 and Identity (2.2)]{MedLer})\n\\begin{align*}\n\\pq{n}{k}_q &= \\sum_{0\\leq t_1 \\leq t_2 \\leq \\ldots \\leq t_{n-k}\\leq k} q^{t_1+t_2+\\cdots+t_{n-k}}\\\\\n\\pq{n}{k}_q &= \\sum_{t_0+t_1+t_2+\\cdots+t_{k}=n-k} q^{0t_0+1t_1+2t_2+\\cdots+kt_{k}}\\,.\n\\end{align*}\n\nIn this section, we show how Spivey's identity, specifically, its generalization involving $S_{s,q}[n,k]$, is a consequence of a convolution identity. We then proceed by deriving other convolution identities and the identities for Bell numbers that arise from them.\n\nA version of Lemma \\ref{lemma1} and Theorem \\ref{or} already appeared in \\cite{CorCelGon}, but only the case where $s\\in\\mathbb N$ and with pre-weights interpreted as subdivisions in cells was proved in detail.\n\n\\begin{lemma}\\label{lemma1} Let $\\phi$ be a placement of $k$ rooks on the board $J_n$. Then, there exists a unique (possibly empty) collection $\\mathcal C$ of columns in $\\phi$, such that if $|\\mathcal C|=\\mu+1$, then (a) these columns have a rook in the bottom $1,2,\\ldots,\\mu+1$ cells and (b) every column not in $\\mathcal C$ contains at least $1+t$ uncanceled cells not containing a rook, where $t$ is the number of columns in $\\mathcal C$ to the right of that column.\n\\begin{proof}\nLet $\\phi\\in C^{R}_k(J'_{n,\\alpha};s)$. The elements of $\\mathcal C$ can be obtained iteratively as follows. Let $c_1$ be the first column from the right containing a rook on the bottom cell. If there exists no such $c_1$, then $\\mathcal C=\\varnothing$. Let $c_2$ be the next column containing a rook in one of the bottom two cells, i.e., either on the bottom cell or on the cell above the bottom cell. If there exists no such $c_2$, then $\\mathcal C=\\{c_1\\}$. We can continue this process as long as needed until the elements of $\\mathcal C$ are all determined. Note that this process also shows the uniqueness of $\\mathcal C$.\n\\end{proof}\n\\end{lemma}\n\n\\begin{lemma}\\label{alpha} Let $\\alpha\\in\\mathbb R,n,k\\in\\mathbb N$. 
Then, for any rule $R$,\n\\begin{align}\n\\sum_{\\phi\\in C^R_{n-k}(J'_{n;\\alpha};s)} \\wt(\\phi) &= \\sum_{r=0}^n q^{\\alpha r}\\pq{n}{r}_{q^s} S_{s,q}[r,k] \\prod_{i=0}^{n-r-1}[\\alpha+si]_q \\label{oh}\\\\\n\\sum_{\\phi\\in C^R_{n-k}(J'_{n;\\alpha};s)} \\wt(\\phi) &= \\sum_{r=0}^n q^{\\alpha k} S_{s,q}[n,r] \\pq{r}{k}_{q^{s-1}} \\prod_{i=0}^{r-k-1} [\\alpha+i(s-1)]_q\\,.\\label{oh1}\n\\end{align}\n\\begin{proof}\nWe prove \\ref{oh} first. Let $\\phi\\in C^{\\mathcal R}_{n-k}(J'_{n;\\alpha};s)$. By Lemma \\ref{lemma1}, we can write $\\phi$ uniquely as a pair $(\\mathcal C_{\\phi},\\phi-\\mathcal C_{\\phi})$, where $\\mathcal C_{\\phi}$ are the cells of $\\phi$ that either lie in the columns that satisfy the properties in the lemma with $\\mu=n-r-1$ or on the bottom $1+t$ cells of the other columns, and $\\phi-\\mathcal C_{\\phi}$ are the cells in $\\phi$ not in $\\mathcal C_{\\phi}$. One sees that $\\phi-\\mathcal C_{\\phi}$ forms some rook placement in $C_{r-k}(J_r;s)$. This accounts for the factor $S_{s,q}[r,k]$.\n\nLet $L_{\\mu}$ be the set of all such possible $\\mathcal C_{\\phi}$, with $\\phi\\in C^{\\mathcal R}_{n-k}(J'_{n;\\alpha};s)$, where $\\mu=n-r-1$ is as in Lemma \\ref{lemma1}. It suffices to compute the sum of the weights of the elements of $L_{\\mu}$. Clearly, the columns containing rooks contribute a weight of $\\prod_{i=0}^{n-r-1} [\\alpha+si]_q$ and the bottom $r$ cells of the columns not containing rooks contribute a weight of $q^{\\alpha r}$ regardless of where the rooks are. On the other hand, the weight contributed by the remaining cells depends on the placement of the rooks, and varies as $q^{st_1} q^{st_2} \\cdots q^{st_{n-r}}$ for some $0\\leq t_1\\leq t_2 \\leq \\ldots \\leq t_{n-r} \\leq r$. Hence, these cells contribute\n\\[\n\\sum_{0\\leq t_1\\leq t_2 \\leq \\ldots \\leq t_{n-r} \\leq r} q^{st_1} q^{st_2} \\cdots q^{st_{n-r}} = \\pq{n}{r}_{q^s}\\,.\n\\]\n\nFor \\ref{oh1}, we consider rook placements using the default rule on the board outlined by $(VU)^nV$ where the cells in the first row each have pre-weight $\\alpha$ while all the other cells have pre-weight 1. By Proposition \\ref{prew}, the sum of the weights of all placements of $n-k$ rooks on this board equals that over $C^R_{n-k}(J'_{n;\\alpha};s)$ for any rule $R$.\n\nA placement of $n-k$ rooks on this board can then be formed as follows. First, we place $n-r$ rooks in cells lying on rows $2,\\ldots,n$ and columns $2,\\ldots,n$; these cells form a copy of $J_n$, so the sum of the weights of all such placements is $S_{s,q}[n,r]$. Then, place the remaining $r-k$ rooks on the first row, in columns that do not already contain a rook. The first-row cells lying above one of the $n-r$ rooks are cancelled and contribute a factor of $1$. The weight contributed by the cells in the first row containing rooks is $\\prod_{i=0}^{r-k-1} [\\alpha+i(s-1)]_q$, and this is independent of the choice of columns. On the other hand, the weight contributed by the $k$ first-row cells that neither contain a rook nor lie above one is $q^{\\alpha k}\\,q^{(s-1)(0t_0+1t_1+\\cdots+(r-k)t_{r-k})}$, where $t_i$ denotes the number of such cells having exactly $i$ first-row rooks to their right, so that $t_0+t_1+\\ldots+t_{r-k}=k$. Finally,\n\\begin{align*}\n\\sum_{t_0+t_1+\\ldots+t_{r-k}=k} &q^{(s-1)(0t_0+1t_1+\\cdots+(r-k)t_{r-k})} = \\pq{r}{r-k}_{q^{s-1}} = \\pq{r}{k}_{q^{s-1}}\\,.\n\\end{align*}\n\\end{proof}\n\\end{lem}\n\n\\begin{remark}\nIt is interesting to note that the individual terms on the right-hand sides of \\ref{oh} and \\ref{oh1} are not equal for fixed $r$.\n\\end{remark}\n\n\\begin{theorem}\\label{or} Let $n,m,k\\in\\mathbb N$. 
Then,\n\\begin{align}\\label{iden1}\nS_{s,q}[n+m,k] = \\sum_{r=0}^n \\sum_{j=0}^m S_{s,q}[m,j]\\,q^{r(j(1-s)+sm)} \\pq{n}{r}_{q^s} S_{s,q}[r,k-j] \\prod_{i=0}^{n-r-1} [j(1-s)+sm+si]_q\\,.\n\\end{align}\nConsequently,\n\\begin{equation}\\label{bbb1}\nB_{s,q}[n+m;x] = \\sum_{r=0}^n \\sum_{j=0}^m S_{s,q}[m,j]\\,q^{r(j(1-s)+sm)} \\pq{n}{r}_{q^s} B_{s,q}[r;x] x^j \\prod_{i=0}^{n-r-1} [j(1-s)+sm+si]_q\\,.\n\\end{equation}\nIn particular,\n\\begin{align}\\label{an}\nB_{s,q}[n+m] &= \\sum_{r=0}^n \\sum_{j=0}^m S_{s,q}[m,j]\\,q^{r(j(1-s)+sm)} \\pq{n}{r}_{q^s} B_{s,q}[r] \\prod_{i=0}^{n-r-1} [j(1-s)+sm+si]_q\\,.\n\\end{align}\n\n\\begin{proof} The number $S_{s,q}[n+m,k]$ equals the sum of the weights of all rook placements in the set $C_{n+m-k}(J_{n+m};s)$. The rooks may be placed as follows. First, place $m-j$ rooks in columns $2,\\ldots,m$. The sum of the weights of all such placements is $S_{s,q}[m,j]$. Next, place the remaining $n+j-k$ rooks in columns $m+1,\\ldots,m+n$. Due to the placement of the first $m-j$ rooks, columns $m+1,\\ldots,m+n$ form a board whose column pre-weights are identical to those of $J'_{n;j(1-s)+sm}$. Identity \\ref{iden1} follows from \\ref{oh} of Lemma \\ref{alpha} using the replacement $\\alpha \\rightarrow j(1-s)+sm, k \\rightarrow k-j$.\n\nTo obtain \\ref{bbb1}, multiply both sides of \\ref{iden1} by $x^k$ and take the sum over all $0\\leq k \\leq n+m$.\n\\end{proof}\n\\end{theorem}\n\n\n\n\\begin{theorem}\\label{ne} Let $n,m\\in\\mathbb N$. Then,\n\\begin{align}\nS_{s,q}[n+m,k] = \\sum_{r=0}^n \\sum_{j=0}^m S_{s,q}[m,j] q^{(j(1-s)+sm)(k-j)} \\pq{r}{k-j}_{q^{s-1}} S_{s,q}[n,r] \\prod_{i=0}^{r-k+j-1} [j(1-s)+sm+i(s-1)]_q\\,.\n\\end{align}\nConsequently,\n\\begin{align}\nB_{s,q}[n+m;x] &=\\sum_{r,j,k} S_{s,q}[m,j] q^{(j(1-s)+sm)(k-j)} \\pq{r}{k-j}_{q^{s-1}} S_{s,q}[n,r] x^{k} \\prod_{i=0}^{r-k+j-1} [j(1-s)+sm+i(s-1)]_q\\,.\n\\end{align}\nIn particular,\n\\begin{align}\\label{exd}\nB_{s,q}[n+m] &=\\sum_{r,j,k} S_{s,q}[m,j] q^{(j(1-s)+sm)(k-j)} \\pq{r}{k-j}_{q^{s-1}} S_{s,q}[n,r] \\prod_{i=0}^{r-k+j-1} [j(1-s)+sm+i(s-1)]_q\\,.\n\\end{align}\n\\begin{proof} The proof is the same as that of Theorem \\ref{or}, except that we use Identity \\ref{oh1} of Lemma \\ref{alpha}.\n\\end{proof}\n\\end{theorem}\n\nTaking a cue from the identities in \\cite[Theorem 2.6]{MedLer}, we derive another set of convolution identities.\n\n\\begin{thm}\\label{sec}For $n,m,j\\in\\mathbb N$, we have\n\\begin{align}\nS_{s,q}[n+1,m+j+1] &= \\sum_{k=m}^n \\sum_{r=0}^{n-k} S_{s,q}[k,m]\\, q^{r((k-m)(s-1)+k+1)+k+(k-m)(s-1)} \\nonumber \\\\ &\\hspace{0.5in} \\pq{n-k}{r}_{q^s} S_{s,q}[r,j] \\prod_{i=0}^{n-k-r-1} [(k-m)(s-1)+k+1+si]_q\\,,\\label{hey1} \\\\\nS_{s,q}[n+1,m+j+1] &= \\sum_{k=m}^n \\sum_{r=0}^{n-k} S_{s,q}[k,m]\\, q^{(j + 1)(k + (k - m)(s - 1)) + j} \\nonumber \\\\ &\\hspace{0.5in} \\pq{r}{j}_{q^{s-1}} S_{s,q}[n-k,r] \\prod_{t=0}^{r-j-1} [(k-m)(s-1)+k+1+t(s-1)]_q\\,.\\label{hey2}\n\\end{align}\n\\begin{proof} For a given rook placement in $C_{n-m-j}(J_{n+1};s)$, there exists a unique $k$ such that there are $k-m$ rooks in columns $2,\\ldots,k$, $0$ rooks in column $k+1$ and $n-j-k$ rooks in columns $k+2,k+3,\\ldots,n+1$. Indeed, if there exists $k_1n$. Since $S^{1,p,q}_{n,k}(\\alpha,\\beta,\\rho)=S^{2,p,q}_{n,k}(\\beta,\\alpha,-\\rho)$, we also consider just one of these numbers, which we choose to be $S^{1,p,q}_{n,k}(\\alpha,\\beta,\\rho)$. 
When $(\\alpha,\\beta,\\rho)=(0,1,0)$ and $p=q=1$, $S^{1,p,q}_{n,k}(\\alpha,\\beta,\\rho)=S(n,k)$.\n\nRemmel and Wachs also gave combinatorial interpretations for these numbers for a certain choice of parameters. Our goal in this section is to introduce an alternative rook interpretation for $S^{1,1,q}_{n,k}(\\alpha,\\beta,\\rho)$ which adapts the method used in Section \\ref{3}. To do this, we will derive the analogue of Theorem \\ref{or} and Theorem \\ref{ne}. We leave the corresponding versions of the other identities in Section \\ref{3} to the interested reader.\n\nLet $c,d\\in\\mathbb R$ and denote by $J^{c,d}_n$ the board outlined by $(VU)^n$ such that each bottom cell has default pre-weight $d$ and all other cells have default pre-weight $c+d$. Let $S^{c,d}_{s,q}[n,k]$ denote the sum of the weights of all placements of $n-k$ rooks on $J^{c,d}_n$.\n\n\\begin{remark} Since $J^{1,0}_{n}=J_n$, it follows that $S^{1,0}_{s,q}[n,k]=S_{s,q}[n,k]$.\n\\end{remark}\n\n\\begin{remark} A ``special case'' of $J^{c,d}_{n-1}$ with $c=m,d=0$ are the $m$-jump boards $Jb_{n,m}$ \\cite[Example 3]{GolHag} which have column heights $0,m,2m,\\ldots,(n-1)m$. One sees that these two boards have identical column pre-weights.\n\\end{remark}\n\n\\begin{proposition} The number $S^{c,d}_{s,q}[n,k]$ satisfies the recurrence\n\\[\nS^{c,d}_{s,q}[n,k] = q^{c(n-1)+d+(n-k)(s-1)} S^{c,d}_{s,q}[n-1,k-1] + [c(n-1)+d+(n-k-1)(s-1)]_q S^{c,d}_{s,q}[n-1,k]\n\\]\nHence,\n\\[\nS^{\\beta-\\alpha,-\\rho}_{-\\beta+1,q}[n,k] = S^{1,1,q}_{n,k}(\\alpha,\\beta,\\rho)\\,.\n\\]\n\\begin{proof} The proof is similar to that of Proposition \\ref{trr}.\\end{proof}\n\\end{proposition}\n\n\\begin{theorem}\\label{t1}\nFor $n,m,k\\in\\mathbb N$, we have\n\\begin{multline*}\nS^{c,d}_{s,q}[n+m,k] = \\sum_{r=0}^n \\sum_{j=0}^m S^{c,d}_{s,q}[m,j]\\, q^{r(d+mc+(m-j)(s-1))} \\\\ \\pq{n}{r}_{q^{c+s-1}} S^{c,0}_{s,q}[r,k-j] \\prod_{i=0}^{n-r-1} [d+mc+(m-j)(s-1)+(c+s-1)i]_q\\,.\n\\end{multline*}\nThis implies that\n\\begin{multline*}\nS^{1,1,q}_{n,k}(\\alpha,\\beta,\\rho) = \\sum_{r=0}^n \\sum_{j=0}^m S^{1,1,q}_{m,j}(\\alpha,\\beta,\\rho)\\, q^{r(\\beta j-\\alpha m-\\rho)} \\pq{n}{r}_{q^{-\\alpha}} S^{1,1,q}_{r,k-j}(\\alpha,\\beta,0) \\prod_{i=0}^{n-r-1} [\\beta j-\\rho-\\alpha(m+i)]_q\\,.\n\\end{multline*}\n\\begin{proof}\nWe proceed as in Theorem \\ref{or}. The $m-j$ rooks in columns $2,\\ldots,m$ explains the factor $S^{c,d}_{s,q}[m,j]$. We then determine the sum of the weights of all placements of $n+j-k$ rooks in the remaining columns.\n\nThe placement of $m-j$ rooks adds $(m-j)(s-1)$ to the total pre-weights of columns $m+1,\\ldots,n$. These columns form a board whose column pre-weights are identical to the board outlined by $(VU)^nV$ such that the bottom cells have pre-weight $d+mc+(m-j)(s-1)$ and the other cells have pre-weight $c$. For convenience, let us denote such board by $B$.\n\nNote that Lemma \\ref{lemma1} is a statement regarding only the placement of rooks and thus, it holds true regardless of the default pre-weights of the cells and the rule used. Hence, we can use this lemma on $B$ with rule $\\mathcal R$, reasoning as in Theorem \\ref{or}.\n\nLet $\\phi\\in C^{\\mathcal R}_{n+j-k}(B;s)$. Again, by Lemma \\ref{lemma1}, we can write $\\phi$ uniquely as $(\\mathcal C_{\\phi},\\phi-\\mathcal C_{\\phi})$. Note that $\\phi-\\mathcal C_{\\phi}$ forms a rook placement in $C_{r-k+j}(J^{c,0}_r;s)$ and not in $C_{r-k+j}(J^{c,d}_r;s)$! 
This is because $\\phi-\\mathcal C_{\\phi}$ does not include any bottom cell of $\\phi$.\n\nThe board $B$ is similar to $J'_{n;\\alpha}$ with $\\alpha=d+mc+(m-j)(s-1)$; the only difference is that the non-bottom cells of $B$ have pre-weight $c$. Hence, a cell with pre-weight $c$ which receives an additional pre-weight due to the placement of a rook following rule $\\mathcal R$ will have pre-weight $c+s-1$.\n\\end{proof}\n\\end{theorem}\n\nDefine $B^{1,1,q}_{n;x}(\\alpha,\\beta,\\rho)=\\sum_{k=0}^n S^{1,1,q}_{n,k}(\\alpha,\\beta,\\rho)x^k$.\n\\begin{corollary}\nFor $n,m\\in\\mathbb N$, we have\n\\begin{multline}\\label{mezz}\nB^{1,1,q}_{n+m;x}(\\alpha,\\beta,\\rho) = \\sum_{r=0}^n \\sum_{j=0}^m S^{1,1,q}_{m,j}(\\alpha,\\beta,\\rho)\\, q^{r(\\beta j-\\alpha m-\\rho)} \\pq{n}{r}_{q^{-\\alpha}} B^{1,1,q}_{r;x}(\\alpha,\\beta,0)\\, x^j \\prod_{i=0}^{n-r-1} [\\beta j-\\rho -\\alpha(m+i) ]_q\\,.\n\\end{multline}\n\\end{corollary}\n\nLetting $q=1$ produces a generalization of Spivey's identity, which is a variant of Xu's result \\cite[Corollary 8]{Xu}. Also, Mez\\\"o \\cite[Theorem 2]{Mez} obtained a special case of \\ref{mezz} for $(\\alpha,\\beta,\\rho)=(0,1,\\rho)$.\n\n\\begin{theorem} Let $n,m\\in\\mathbb N$. Then,\n\\begin{multline*}\nS^{c,d}_{s,q}[n+m,k] = \\sum_{r=0}^n \\sum_{j=0}^m S^{c,d}_{s,q}[m,j] q^{(d + mc + (m - j)(s - 1))(k - j)} \\\\ \\pq{r}{k-j}_{q^{s-1}} S^{c,0}_{s,q}[n,r] \\prod_{i=0}^{r-k+j-1} [d + mc + (m - j)(s - 1) + i(s-1)]_q\\,.\n\\end{multline*}\nThis gives\n\\begin{multline}\nS^{1,1,q}_{n+m,k}(\\alpha,\\beta,\\rho) = \\sum_{r=0}^n \\sum_{j=0}^m S^{1,1,q}_{m,j}(\\alpha,\\beta,\\rho) q^{(\\rho+\\alpha m -\\beta j)(j-k)} \\\\ \\pq{r}{k-j}_{q^{-\\beta}} S^{1,1,q}_{n,r}(\\alpha,\\beta,0) \\prod_{i=0}^{r-k+j-1} [\\rho+\\alpha m -\\beta (j+i)]_q\\,.\n\\end{multline}\n\\begin{proof} Instead of using the board $J^{c,d}_{n}$, we use the board outlined by $(VU)^n$ such that the bottom cells of columns $2,\\ldots,m$ have pre-weight $c+d$, the cells in row $m$ have pre-weight $c+d$, and all other cells have pre-weight $c$. We then proceed as in the proof of Theorem \\ref{ne}.\n\\end{proof}\n\\end{theorem}\n\\section{Some Remarks}\nMez\\\"o's dual of Spivey's identity is based on the relation $n!=\\sum_{k=0}^n c(n,k)$. However, it is not true that $[n]_q!=\\sum_{k=0}^n c_q[n,k]$, where $c_q[n,k]=S_{1,q}[n,k]$. It would be interesting to derive an expression for $[n+m]_q!$ similar to Mez\\\"o's for $q$-Stirling numbers of the first kind.\n\nWe also ask if a modification of the rook model for the Type II generalized $q$-Stirling numbers can be made to give another combinatorial interpretation for both the Type I and Type II $p,q$-analogues. It is also possible that the techniques outlined in this paper may be modified to obtain the corresponding identities for the generalized Stirling numbers that arise from other rook models, such as those in \\cite{RemWac} and \\cite{Brig}.\n\nLastly, we note that other forms of convolution identities are given in \\cite[Theorem 1]{AgoDil} and \\cite[page 5]{MedLer}. Can these identities and their $S_{s,q}[n,k]$ versions be proved using partitions on rook boards?\n\n\\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{intro}\n\nQuasi-twisted (QT) codes form an important class of block codes that includes cyclic codes, quasi-cyclic (QC) codes and constacyclic codes as special subclasses. 
In addition to their rich algebraic structure (\\cite{Y}), QT codes are also asymptotically good (\\cite{C,DH}) and they yield good parameters (\\cite{AGOSS1,AGOSS2}).\n\nSeveral bounds on the minimum distance of cyclic codes have been derived. The first and perhaps the most famous one was the BCH bound, given by Bose and Chaudhuri (\\cite{BC}), and by Hocquenghem (\\cite{H}). An extension of the bound was formulated by Hartmann and Tzeng in \\cite{HT}. One can consider the HT bound as a two-directional BCH bound. The Roos bound in \\cite{R2} generalized this idea further by allowing the HT bound to have a certain number of gaps in both directions. The Roos bound was extended to constacyclic codes in \\cite{RZ}. Another remarkable extension of the HT bound, known as the shift bound, was introduced by van Lint and Wilson in \\cite{LW}. This bound is known to be particularly powerful on many non-binary codes (\\cite{EL}). \n\nAlthough QC and QT codes are interesting from both theoretical and practical points of view, studies on minimum distance estimates for these codes are not as rich as those for cyclic and constacyclic codes. Semenov and Trifonov developed a spectral analysis of QC codes (\\cite{ST}), based on the work done by Lally and Fitzpatrick in~\\cite{LF}, and formulated a BCH-like bound, together with a comparison with a few other bounds for QC codes. Their approach was generalized by Zeh and Ling, using the HT bound, in \\cite{ZL}.\n\nThis paper is organized as follows. Section \\ref{basics} recalls necessary background material and adapts the spectral method of Semenov-Trifonov to QT codes. We formulate and prove a generalized spectral bound on the minimum distance in Section \\ref{bounds section}, where the Roos and shift bounds for QT codes are derived as special cases. Section \\ref{exmp section} supplies numerical examples showing how the proposed bound performs in comparison with the Semenov-Trifonov (ST) and Zeh-Ling (ZL) bounds.\n \n\\section{Background}\\label{basics}\n\\subsection{Constacyclic codes and minimum distance bounds from their defining sets}\nLet $\\mathbb{F}_{q}$ denote the finite field with $q$ elements, where $q$ is a prime power. Let $m$ be, throughout, a positive integer with $\\gcd(m,q)=1$. For some nonzero element $\\lambda \\in \\mathbb{F}_{q}$, a linear code $C\\subseteq \\mathbb{F}_{q}^m$ is called a {\\it $\\lambda$-constacyclic code} if it is invariant under the $\\lambda$-constashift of codewords, \\emph{i.e.}, $(c_0,\\ldots,c_{m-1}) \\in C$ implies $(\\lambda c_{m-1},c_0,\\ldots,c_{m-2}) \\in C$. In particular, if $\\lambda = 1$ or $q=2$, then $C$ is a cyclic code.\n\nConsider the principal ideal $I=\\langle x^m-\\lambda \\rangle$ of $\\mathbb{F}_{q}[x]$ and define the residue class ring $R:=\\mathbb{F}_{q}[x]\/I$. To a vector $\\vec{a}\\in \\mathbb{F}_{q}^m$, we associate an element of $R$ via the isomorphism:\n\\begin{eqnarray}\\label{identification-1}\n\\phi: \\ {\\mathbb F}_q^{m} & \\longrightarrow & R \\\\\n\\vec{a}=(a_0,\\ldots,a_{m-1}) & \\longmapsto & a(x)= a_0+\\cdots + a_{m-1}x^{m-1}.\\nonumber\n\\end{eqnarray}\nNote that the $\\lambda$-constashift in ${\\mathbb F}_q^{m}$ amounts to multiplication by $x$ in $R$. Hence, a $\\lambda$-constacyclic code $C\\subseteq \\mathbb{F}_{q}^m$ can be viewed as an ideal of $R$. Since $R$ is a principal ideal ring, there exists a unique monic polynomial $g(x)\\in R$ such that $C=\\langle g(x)\\rangle$, \\emph{i.e.}, each codeword $c(x)\\in C$ is of the form $c(x)=a(x)g(x)$, for some $a(x)\\in R$. 
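For instance (a small illustration of ours, not taken from the cited references): over $\\mathbb{F}_{3}$ with $m=4$ and $\\lambda=2=-1$, we have $x^{4}-\\lambda=x^{4}+1=(x^{2}+x+2)(x^{2}+2x+2)$, a product of two irreducible factors, so choosing $g(x)=x^{2}+x+2$ yields a $2$-constacyclic (negacyclic) code of length $4$ and dimension $2$ over $\\mathbb{F}_{3}$. 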
The polynomial $g(x)$, which is a divisor of $x^m-\\lambda$, is called the {\\it generator polynomial} of $C$. \n\nLet $\\mbox{wt}(c)$ denote the number of nonzero coefficients in $c(x)\\in C$. Recall that the minimum distance of $C$ is defined as $d(C):=\\min\\{\\mbox{wt}(c) : 0\\neq c(x)\\in C\\}$ when $C$ is not the trivial zero code. For any positive integer $p$, let $\\vec{0}_p$ denote throughout the all-zero vector of length $p$. A $\\lambda$-constacyclic code $C=\\{\\vec{0}_m\\}$ if and only if $g(x)=x^m-\\lambda$. \n\nThe roots of $x^m-\\lambda$ are of the form $\\alpha, \\alpha\\xi, \\ldots, \\alpha\\xi^{m-1}$, where $\\alpha$ is a fixed $m^{th}$ root of $\\lambda$ and $\\xi$ is a fixed primitive $m^{th}$ root of unity. Henceforth, let $\\Omega :=\\{\\alpha\\xi^k\\ :\\ 0\\leq k \\leq m-1\\}$ be the set of all $m^{th}$ roots of $\\lambda$ and let ${\\mathbb F}$ be the smallest extension of $\\mathbb{F}_{q}$ that contains $\\Omega$ (equivalently, ${\\mathbb F}$ is the splitting field of $x^m-\\lambda$). Given the $\\lambda$-constacyclic code $C=\\langle g(x)\\rangle$, the set $L:=\\{\\alpha\\xi^k\\ :\\ g(\\alpha\\xi^k)=0\\}\\subseteq \\Omega$ of roots of its generator polynomial is called the {\\it defining set} of $C$. Note that $\\alpha\\xi^k\\in L$ implies $\\alpha\\xi^{qk}\\in L$, for each $k$. A nonempty subset $E\\subseteq\\Omega$ is said to be {\\it consecutive} if there exist integers $e,n$ and $\\delta$ with $e\\geq 0,\\delta \\geq 2, n> 0$ and $\\gcd(m,n)=1$ such that\n\\begin{equation} \\label{cons zero set}\nE:=\\{\\alpha\\xi^{e+zn}\\ :\\ 0\\leq z\\leq \\delta-2\\}\\subseteq\\Omega.\n\\end{equation}\n\nWe now describe the Roos bound for constacyclic codes (see \\cite[Theorem 2]{R2} for the original Roos bound for cyclic codes). For $P\\subseteq\\Omega$, let $C_P$ denote any $\\lambda$-constacyclic code of length $m$ over ${\\mathbb F}_q$, whose defining set contains $P$. Let $d_P$ denote the minimum distance of $C_P$. \n\n\\begin{thm}~\\cite[Theorem 6]{RZ} (Roos bound) \\label{Roos} Let $N$ and $M$ be two nonempty subsets of $\\Omega$. If there exists a consecutive set $M'$ containing $M$ such that $|M'| \\leq |M| + d_N -2$, then we have $d_{MN}\\geq |M| + d_N -1$, where \n$\\displaystyle{\nMN:=\\frac{1}{\\alpha}\\bigcup_{\\varepsilon\\in M} \\varepsilon N}$.\n\\end{thm}\nIf $N$ is consecutive like in (\\ref{cons zero set}), then we get the following.\n\\begin{cor}\\cite[Corollary 1]{RZ}, \\cite[Corollary 1]{R2} \\label{Roos2}\nLet $N, M$ and $M'$ be as in Theorem \\ref{Roos}, with $N$ consecutive. Then $|M'| < |M| + |N|$ implies $d_{MN}\\geq |M| + |N|$.\n\\end{cor}\n\n\\begin{rem}\\label{Roos remark}\nIn particular, the case $M=\\{\\alpha\\}$ yields the BCH bound for the associated constacyclic code (see \\cite[Corollary 2]{RZ} and the original BCH bound for cyclic codes in \\cite{BC} and \\cite{H}). Taking $M'=M$ yields the HT bound (see \\cite[Corollary 3]{RZ} and the HT bound for cyclic codes in \\cite[Theorem 2]{HT}). \n\\end{rem}\n\nAnother improvement to the HT bound for cyclic codes was given by van Lint and Wilson in \\cite{LW}, which is known as the shift bound. We now formulate the shift bound for constacyclic codes. To do this, we need the notion of an {\\it independent set}, which can be constructed over any field in a recursive way.\n\nLet $S$ be a subset of some field $\\mathbb{K}$ of any characteristic. 
One inductively defines a family of finite subsets of $\\mathbb{K}$, called independent with respect to $S$, as follows.\n\\begin{enumerate}\n\\item $\\emptyset$ is independent with respect to $S$.\n\\item If $A \\subseteq S$ is independent with respect to $S$, then $A\\cup\\{b\\}$ is independent with respect to $S$ for all $b\\in \\mathbb{K} \\setminus S$. \\label{cond:2}\n\\item If $A$ is independent with respect to $S$ and $c\\in\\mathbb{K}^{*}$, then $cA$ is independent with respect to $S$. \\label{cond:3}\n\\end{enumerate}\n\nRecall that the {\\it weight} of a polynomial $f(x) \\in \\mathbb{K}[x]$, denoted by $\\mbox{wt}(f)$, is the number of nonzero coefficients in $f(x)$.\n\n\\begin{thm} \\cite[Theorem 11]{LW} (Shift bound)\\label{shift bound}\nLet $0\\neq f(x)\\in \\mathbb{K}[x]$ and let $S:=\\{\\theta\\in \\mathbb{K}\\ | \\ f(\\theta)=0\\}$. Then $\\mbox{wt}(f)\\geq |A|,$ for every subset $A$ of $\\mathbb{K}$ that is independent with respect to $S$.\n\\end{thm}\n\nThe minimum distance bound for a given $\\lambda$-constacyclic code follows by considering the weights of its codewords $c(x)\\in C$ and the independent sets with respect to subsets of its defining set $L$. Observe that, in this case, the universe of the independent sets is $\\Omega$, not ${\\mathbb F}$, because all of the possible roots of the codewords are contained in $\\Omega$. Moreover, we choose $b$ from $\\Omega \\setminus P$ in Condition \\ref{cond:2}) above, where $P\\subseteq L$, and $c$ in Condition \\ref{cond:3}) is of the form $\\xi^k\\in{\\mathbb F}^{*}$, for some $0\\leq k\\leq m-1$. \n\n\\begin{rem}\\label{shift remark}\nIn particular, $A=\\{\\alpha\\xi^{e+zn}\\ :\\ 0\\leq z\\leq \\delta-1\\}$ is independent with respect to the consecutive set $E$ in (\\ref{cons zero set}), which gives the BCH bound for $C_E$. Let $D := \\{\\alpha\\xi^{e + z n_1 + y n_2} : 0\\leq z \\leq \\delta-2, 0\\leq y\\leq s\\}$, for integers $e \\geq 0$, $\\delta \\geq 2$ and positive integers $s, n_1$ and $n_2$ such that $\\gcd(m, n_1) = 1$ and $\\gcd(m, n_2) < \\delta$. Then, for any fixed $z\\in \\{0,\\ldots,\\delta-2\\}$, $A_z:=\\{\\alpha\\xi^{e + z' n_1} : 0\\leq z' \\leq\\delta-2\\}\\cup\\{\\alpha\\xi^{e + z n_1 + y n_2} : 0\\leq y \\leq s+1\\}$ is independent with respect to $D$ and we get the HT bound for $C_D$.\n\\end{rem}\n\n\\subsection{Spectral theory of quasi-twisted codes}\nA linear code $C\\subseteq\\mathbb{F}_{q}^{m\\ell}$ is called {\\it $\\lambda$-quasi-twisted} ($\\lambda$-QT) of index $\\ell$ if it is invariant under the $\\lambda$-constashift of codewords by $\\ell$ positions, with $\\ell$ being the smallest positive integer with this property. In particular, if $\\ell=1$, then $C$ is $\\lambda$-constacyclic. If $\\lambda = 1$ or $q=2$, then $C$ is QC of index $\\ell$. For a codeword $\\vec{c}\\in C$, seen as an $m \\times \\ell$ array\n\\begin{equation}\\label{array}\n\\vec{c}=\\left(\n \\begin{array}{ccc}\n c_{00} & \\ldots & c_{0,\\ell-1} \\\\\n \\vdots & \\vdots & \\vdots \\\\\n c_{m-1,0} & \\ldots & c_{m-1,\\ell-1} \\\\\n \\end{array}\n\\right),\n\\end{equation} \ninvariance under the $\\lambda$-constashift by $\\ell$ units in $\\mathbb{F}_{q}^{m\\ell}$ corresponds to closure under the row $\\lambda$-constashift in $\\mathbb{F}_{q}^{m\\times\\ell}$.\n\nTo an element $\\vec{c}\\in \\mathbb{F}_{q}^{m\\times \\ell} \\simeq \\mathbb{F}_{q}^{m\\ell}$ in (\\ref{array}), we associate an element of $R^\\ell$ (cf. 
(\\ref{identification-1}))\n\\begin{equation} \\label{associate-1}\n\\vec{c}(x):=(c_0(x),c_1(x),\\ldots ,c_{\\ell-1}(x)) \\in R^\\ell ,\n\\end{equation}\nwhere, for each $0\\leq j \\leq \\ell-1$, \n\\begin{equation*}\nc_j(x):= c_{0,j}+c_{1,j}x+c_{2,j}x^2+\\cdots + c_{m-1,j}x^{m-1} \\in R .\n\\end{equation*} \nThe isomorphism $\\phi$ in (\\ref{identification-1}) extends naturally to\n\\begin{equation}\\begin{array}{rll} \\label{identification-2}\n\\Phi: {\\mathbb F}_q^{m\\ell} & \\longrightarrow & R^\\ell \\\\\n\\vec{c} \\hspace{5pt}& \\longmapsto & \\vec{c}(x) .\n\\end{array}\\end{equation}\nThe row $\\lambda$-constashift in $\\mathbb{F}_{q}^{m\\times\\ell}$ corresponds to componentwise multiplication by $x$ in $R^\\ell$. The map $\\Phi$ above is, therefore, an $R$-module isomorphism and a $\\lambda$-QT code $C\\subseteq {\\mathbb F}_q^{m\\ell}$ of index $\\ell$ can be viewed as an $R$-submodule of $R^\\ell$.\n\nLally and Fitzpatrick proved in~\\cite{LF} that every QC code has a polynomial generator in the form of a reduced matrix. We provide an easy adaptation of their findings for QT codes. \n\nConsider the ring homomorphism\n\\begin{eqnarray}\\label{eq:Psi}\n\\Psi \\ :\\ \\mathbb{F}_{q}[x]^{\\ell} &\\longrightarrow& R^{\\ell} \\\\\\nonumber\n(f_0(x),\\ldots, f_{\\ell-1}(x)) &\\longmapsto& (f_0(x)+I,\\ldots ,f_{\\ell-1}(x)+I).\n\\end{eqnarray}\nLet each $\\vec{e}_j$ denote the standard basis vector of length $\\ell$ with $1$ at the $j^{th}$ coordinate and $0$ elsewhere. Given a $\\lambda$-QT code $C\\subseteq R^{\\ell}$, the preimage $\\widetilde{C}$ of $C$ in $\\mathbb{F}_{q}[x]^{\\ell}$ is an $\\mathbb{F}_{q}[x]$-submodule containing $\\widetilde{K} =\\{(x^m-\\lambda)\\vec{e}_j : 0\\leq j \\leq \\ell-1\\}$. From here on, the tilde indicates structures over $\\mathbb{F}_{q}[x]$.\n\nSince $\\widetilde{C}$ is a submodule of the finitely generated free module $F_q[x]^{\\ell}$ over the principal ideal domain $\\mathbb{F}_{q}[x]$ and contains $\\widetilde{K}$, it has a generating set of the form $$\\{\\vec{u}_1,\\ldots,\\vec{u}_p, (x^m-\\lambda)\\vec{e}_0,\\ldots,(x^m-\\lambda)\\vec{e}_{\\ell-1}\\},$$ where $p\\geq 1$ is an integer and $\\vec{u}_b = (u_{b,0}(x), \\ldots, u_{b,\\ell-1}(x)) \\in \\mathbb{F}_{q}[x]^{\\ell}$, for each $b \\in \\{1,\\ldots,p\\}$. Hence, the rows of \n\n$$\\mathcal{G}=\\left(\\begin{array}{ccc}\n u_{1,0}(x) & \\ldots & u_{1,\\ell-1}(x) \\\\\n \\vdots & \\vdots & \\vdots \\\\\n u_{p,0}(x) & \\ldots & u_{p,\\ell-1}(x) \\\\\n x^m-\\lambda & \\ldots & 0 \\\\\n \\vdots & \\ddots & \\vdots \\\\\n 0 & \\ldots & x^m-\\lambda \\\\\n \\end{array}\n\\right)$$\ngenerate $\\widetilde{C}$. We triangularise $\\mathcal{G}$ by elementary row operations to obtain another equivalent generating set from the rows of an upper-triangular $\\ell \\times \\ell$ matrix with entries in $\\mathbb{F}_{q}[x]$ \n\\begin{equation}\\label{PGM}\n\\widetilde{G}(x)=\\left(\\begin{array}{cccc}\n g_{0,0}(x) & g_{0,1}(x) & \\ldots & g_{0,\\ell-1}(x) \\\\\n 0 & g_{1,1}(x) & \\ldots & g_{1,\\ell-1}(x) \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n 0 & 0 &\\ldots & g_{\\ell-1,\\ell-1}(x)\\\\\n \\end{array}\n\\right),\n\\end{equation}\nwhere $\\widetilde{G}(x)$ satisfies (see \\cite[Theorem 2.1]{LF}):\n\\begin{enumerate}\n \\item $g_{i,j}(x)=0$ for all $0\\leq j < i \\leq \\ell-1$.\n \\item $\\deg(g_{i,j}(x)) < $ $\\deg(g_{j,j}(x))$ for all $i 0$. 
Note that $\\overline{\\Omega}=\\emptyset$ if and only if the diagonal elements $g_{j,j}(x)$ in $\\widetilde{G}(x)$ are constant and $C$ is the trivial full space code. Choose an arbitrary eigenvalue $\\beta_i\\in\\overline{\\Omega}$ with multiplicity $n_i$ for some $i \\in\\{1,\\ldots,t\\}$. Let $\\{\\vec{v}_{i,0},\\ldots,\\vec{v}_{i,n_i-1}\\}$ be a basis for the corresponding eigenspace $\\mathcal{V}_i$. Consider the matrix \n\\begin{equation}\\label{Eigenspace} \n V_i:=\\begin{pmatrix}\n\\vec{v}_{i,0} \\\\\n\\vdots\\\\\n\\vec{v}_{i,n_i-1} \n\\end{pmatrix}\n=\n\\begin{pmatrix}\nv_{i,0,0}&\\ldots&v_{i,0,\\ell-1} \\\\\n\\vdots & \\vdots & \\vdots\\\\\nv_{i,n_i-1,0}&\\ldots&v_{i,n_i-1,\\ell-1}\n \\end{pmatrix},\n\\end{equation} \nhaving the basis elements as its rows. We let\n\\[\nH_i:=(1, \\beta_i,\\ldots,\\beta_i^{m-1})\\otimes V_i \\mbox{ and }\n\\]\n\\begin{equation}\\label{parity check matrix} \nH:=\\begin{pmatrix}\nH_1 \\\\\n\\vdots\\\\\nH_t \n\\end{pmatrix}=\\begin{pmatrix}\nV_1&\\beta_1 V_1 & \\ldots &(\\beta_1)^{m-1} V_1 \\\\\n\\vdots &\\vdots & \\vdots & \\vdots\\\\\nV_t & \\beta_t V_t & \\ldots & (\\beta_t)^{m-1} V_t\n\\end{pmatrix}.\n\\end{equation} \nObserve that $H$ has $n:=\\sum_{i=1}^t n_i$ rows. By Lemma \\ref{multiplicity lemma}, we have $n=\\sum_{j=0}^{\\ell-1}\\mbox{deg}(g_{j,j}(x))$. To prove Lemma~\\ref{rank lemma} below, it remains to show the linear independence of these $n$ rows, which was already shown in~\\cite[Lemma 2]{ST}.\n\\begin{lem}\\label{rank lemma}\nThe matrix $H$ in (\\ref{parity check matrix}) has rank $m\\ell -\\dim_{\\mathbb{F}_{q}}(C)$.\n\\end{lem}\n\nIt is immediate to confirm that $H \\vec{c}^{\\top}=\\vec{0}_n$ for any codeword $\\vec{c}\\in C$. Together with Lemma \\ref{rank lemma}, we obtain the following easily.\n\n\\begin{prop}\\cite[Theorem 1]{ST}\nThe $n\\times m\\ell$ matrix $H$ in (\\ref{parity check matrix}) is a parity-check matrix for $C$.\n\\end{prop}\n\n\\begin{rem}\nNote that if $\\overline{\\Omega}=\\emptyset$, then the construction of $H$ in (\\ref{parity check matrix}) is impossible. Hence, we have assumed $\\overline{\\Omega}\\neq\\emptyset$ and we can always say $H=\\vec{0}_{m\\ell}$ if $C=\\mathbb{F}_{q}^{m\\ell}$. The other extreme case is when $\\overline{\\Omega}=\\Omega$. By using Lemma \\ref{rank lemma} above, one can easily deduce that a given QT code $C=\\{\\vec{0}_{m\\ell}\\}$ if and only if $\\overline{\\Omega}=\\Omega$, each $\\mathcal{V}_i={\\mathbb F}^{\\ell}$ (equivalently, each $V_i=I_{\\ell}$, where $I_{\\ell}$ denotes the $\\ell\\times\\ell$ identity matrix) and $n=m\\ell$ so that we obtain $H=I_{m\\ell}$. On the other hand, $\\overline{\\Omega}=\\Omega$ whenever $x^m-\\lambda \\mid \\mbox{det}(\\widetilde{G}(x))$, but $C$ is nontrivial unless each eigenvalue in $\\Omega$ has multiplicity $\\ell$.\n\\end{rem}\n\n\\begin{defn}\\label{eigencode}\nLet $\\mathcal{V}\\subseteq {\\mathbb F}^\\ell$ be an eigenspace. 
We define the {\\it eigencode} corresponding to $\\mathcal{V}$ by\n$$\\mathbb{C}(\\mathcal{V})=\\mathbb{C}:=\\left\\{\\vec{u}\\in \\mathbb{F}_{q}^\\ell\\ : \\ \\sum_{j=0}^{\\ell-1}{v_ju_j}=0, \\forall \\vec{v} \\in \\mathcal{V}\\right\\}.$$\nIn case we have $\\mathbb{C}=\\{\\vec{0}_{\\ell}\\}$, then it is assumed that $d(\\mathbb{C})=\\infty$.\n\\end{defn}\n\nThe BCH-like minimum distance bound of Semenov and Trifonov for a given QC code in ~\\cite[Theorem 2]{ST} is expressed in terms of the size of a consecutive subset of eigenvalues in $\\overline{\\Omega}$ and the minimum distance of the common eigencode related to this consecutive subset. Zeh and Ling generalized their approach and derived an HT-like bound in~\\cite[Theorem 1]{ZL} without using the parity-check matrix in their proof. The eigencode, however, is still needed. In the next section we will prove the analogues of these bounds for QT codes in terms of the Roos and shift bounds. \n\n\\section{Spectral Bounds for QT Codes} \\label{bounds section}\nFirst, we establish a general spectral bound on the minimum distance of a given QT code. Let $C\\subseteq\\mathbb{F}_{q}^{m\\ell}$ be a $\\lambda$-QT code of index $\\ell$ with nonempty eigenvalue set $\\overline{\\Omega}\\varsubsetneqq\\Omega$. Let $P\\subseteq\\overline{\\Omega}$ be a nonempty subset of eigenvalues such that $P=\\{\\alpha\\xi^{u_1}, \\alpha\\xi^{u_2},\\ldots,\\alpha\\xi^{u_r}\\}$, where $0 \\omega$, we have $\\vec{c}_k \\notin \\mathbb{C}_P$ and therefore $ \\vec{s}_k=V_P\\vec{c}_k^{\\top}\\neq \\vec{0}_t$, for all $\\vec{c}_k \\neq \\vec{0}_{\\ell}$, $k\\in\\{0,\\ldots,m-1\\}$. Hence, $0 < \\lvert\\{\\vec{s}_k : \\vec{s}_k \\neq \\vec{0}_t \\} \\rvert \\leq \\omega < \\min(d_P, d(\\mathbb{C}_P))$. Let $S := [\\vec{s}_0\\ \\vec{s}_1 \\cdots \\vec{s}_{m-1}]$. Then $\\widetilde{H}_PS^{\\top} = 0$, which implies that the rows of the matrix $S$ lies in the right kernel of $\\widetilde{H}_P$. But this is a contradiction since any row of $S$ has weight at most $\\omega