\section{Introduction}\n\nIn this paper, we consider the array low-density parity-check (LDPC) codes, originally introduced by Fan in \cite{fan00}, and their minimum\/stopping distance. Array LDPC codes are specified by two integers $q$ and $m$, where $q$ is an odd prime and $m \leq q$. Furthermore, in this work, $\mathcal{C}(q,m)$ will denote the array LDPC code with parameters $q$ and $m$, and $d(q,m)$ (resp.\ $h(q,m)$) its minimum (resp.\ stopping) distance. \n\nSince the original work by Fan, several authors have considered the \emph{structural} properties of these codes (see, e.g., \cite{mit02,yan03,sug08,esm09,esm11,liu10,dol10}). For high rate and\nmoderate length, these codes perform well under iterative decoding, and they are also well-suited for practical implementation due to their regular structure \cite{olc03,bha05}.\n\n\n\n\nThe minimum distance of these codes was first analyzed by Mittelholzer in \cite{mit02}, where general (i.e., independent of $q$) minimum distance upper bounds for $m \leq 6$ were provided. Subsequently, Yang and Helleseth \cite{yan03} investigated the minimum\ndistance of these codes in an algebraic way by first proving that the codes are invariant under a doubly transitive group of ``affine'' permutations. Then, they proved the general lower bound $d(q,4) \geq 10$, for $q > 7$, on the minimum distance. In \cite{sug08}, the general upper bounds $d(q,4) \leq 10$ and $d(q,5) \leq 12$\non the minimum distance were proved. Furthermore, by combining these bounds with the results in \cite{yan03}, it follows that $d(q,4)=10$ and that $d(q,5)$ is either $10$ or $12$, for $q > 7$. 
In summary,\n\begin{displaymath}\nd(q,m) \leq \begin{cases}\n6, & \text{if $m=3$, with equality for $q \geq 5$ \cite{yan03}}\\\n10, & \text{if $m=4$, with equality for $q > 7$ \cite{yan03,sug08}}\\\n12, & \text{if $m=5$, with exact value either} \\\n& \text{$10$ or $12$ for $q > 7$ \cite{yan03,sug08}}\\\n32, & \text{if $m=6$ \cite{mit02}.} \end{cases}\n\end{displaymath}\n\nThe case $m=6$ has not been treated in the literature before, except in the initial work of Mittelholzer \cite{mit02}. In this work, we will consider this case in more detail as well as the case $m=7$, both from an experimental point of view and by deriving an improved upper bound on $d(q,6)$ and a new upper bound on $d(q,7)$.\n\nThis paper is organized as follows. \nIn Section~\ref{sec:prelim}, some of the basic notation is introduced and the definition of array LDPC codes is given.\nThe concept of a \emph{template support matrix} is also introduced.\nIn Section~\ref{sec:algo}, a heuristic is presented that will be used to infer a \emph{candidate} template support matrix.\nThe heuristic analyzes the \emph{graphical cycle structure} of support matrices of codewords\/stopping sets for different values of $q$, with $m$ fixed.\nIn Section~\ref{sec:bound}, we use the template support matrix found in Section~\ref{sec:algo} to formally prove the improved upper bound $d(q,6) \leq 20$.\nFurthermore, in Section~\ref{sec:bound_m7}, we present a template support matrix for $m=7$, found by using the heuristic of Section~\ref{sec:algo}, which is used to formally prove the new upper bound $d(q,7)\leq24$.\nIn Section~\ref{sec:results}, new minimum\/stopping distance results are presented for fixed values of $m \leq 7$ and $q \leq 79$. 
\nFinally, in Section~\ref{sec:conclu}, we conclude the paper.\n\n\n\begin{comment}\nFurthermore, a submatrix of the codeword \emph{support matrix} is\n\n\begin{displaymath}\n\begin{pmatrix}\n0 & 0 & -1 & -2 & 5x & 5x & -2 & 3^{-1}y+y & -1 \\\n0 & z & 0 & -1 & 4x & z & y & y & 3^{-1}-1 \\\n0 & 2z & 1 & 0 & 3x & 2z-5x & 2y+2 & -3^{-1}y+y & 2 \cdot 3^{-1}-1 \\\n0 & 3z & 2 & 1 & 2x & 3z-10x & 3y+4 & -2 \cdot 3^{-1}y+y & 0 \\\n0 & 4z & 3 & 2 & x & 4z-15x & 4y+6 & 0 & 4 \cdot 3^{-1}-1 \\\n0 & 5z & 4 & 3 & 0 & 5z-20x & 5y+8 & -4 \cdot 3^{-1}y+y & 5 \cdot 3^{-1}-1 \\\n\end{pmatrix}\n\end{displaymath}\nfor a codeword of weight $20$.\n\n\end{comment}\n\n\n\section{Preliminaries} \label{sec:prelim}\n\nThe array LDPC code $\mathcal{C}(q,m)$, with parameters $q$ and $m$, has length $q^2$ and can be defined by the parity-check matrix\n\begin{equation} \label{eq:pcmatrix}\n\mathbf{H}(q,m) = \begin{pmatrix}\n\mathbf{I} & \mathbf{I} & \mathbf{I} & \cdots & \mathbf{I} \\\n\mathbf{I} & \mathbf{P} & \mathbf{P}^2 & \cdots & \mathbf{P}^{q-1} \\\n\mathbf{I} & \mathbf{P}^2 & \mathbf{P}^4 & \cdots & \mathbf{P}^{2(q-1)} \\\n& & \vdots & & \vdots \\\n\mathbf{I} & \mathbf{P}^{m-1} & \mathbf{P}^{2(m-1)} & \cdots & \mathbf{P}^{(m-1)(q-1)} \end{pmatrix}\n\end{equation}\nwhere $\mathbf{I}$ is the $q \times q$ identity matrix and $\mathbf{P}$ is a $q \times q$ permutation matrix defined by\n\begin{displaymath}\n\mathbf{P} = \begin{pmatrix}\n0 & 0 & \cdots & 0 & 1 \\\n1 & 0 & \cdots & 0 & 0 \\\n0 & 1 & \cdots & 0 & 0 \\\n& \vdots & & \vdots & \\\n0 & 0 & \cdots & 1 & 0 \end{pmatrix}.\n\end{displaymath}\nSince the number of ones in each row of the matrix in (\ref{eq:pcmatrix}) is $q$ and the number of ones in each column is $m$, the array LDPC codes are $(m,q)$-regular codes. 
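The construction in (\ref{eq:pcmatrix}) is easy to check numerically. The following is a minimal Python sketch (ours, not from the paper; all names are our own) that builds the binary matrix $\mathbf{H}(q,m)$, using the fact that $\mathbf{P}$ maps the unit column $\mathbf{e}_x$ to $\mathbf{e}_{x+1 \bmod q}$, and verifies the $(m,q)$-regularity as well as the rank stated below:

```python
# Minimal sketch (not from the paper): build H(q, m) of (eq:pcmatrix) and
# check (m, q)-regularity and the GF(2) rank q*m - m + 1.

def array_ldpc_pcm(q, m):
    """Binary H(q, m): block (i, j) is P^{i*j}, and P maps e_x to e_{(x+1) mod q},
    so the column indexed by (j, x) has its block-i one in position (x + i*j) mod q."""
    H = [[0] * (q * q) for _ in range(m * q)]
    for j in range(q):              # block column (the power of P grows as i*j)
        for x in range(q):          # column inside the block
            for i in range(m):      # block row
                H[i * q + (x + i * j) % q][j * q + x] = 1
    return H

def gf2_rank(H):
    """Row rank over GF(2); rows are packed into Python integers."""
    rows = [int("".join(map(str, r)), 2) for r in H]
    rank = 0
    for bit in reversed(range(len(H[0]))):
        pivot = next((r for r in rows if (r >> bit) & 1), None)
        if pivot is not None:
            # eliminate the pivot bit from all remaining rows
            rows = [r ^ pivot if (r >> bit) & 1 else r for r in rows if r != pivot]
            rank += 1
    return rank

H = array_ldpc_pcm(5, 3)
assert all(sum(row) == 5 for row in H)                        # row weight q
assert all(sum(row[c] for row in H) == 3 for c in range(25))  # column weight m
assert gf2_rank(H) == 5 * 3 - 3 + 1                           # rank qm - m + 1
```

The block structure makes the check cheap even for moderately large $q$, since each column is determined by its block index and inner index alone.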
Furthermore, it is not hard to see that the parity-check matrix in (\\ref{eq:pcmatrix}) has rank $qm-m+1$, from which it follows that the dimension of $\\mathcal{C}(q,m)$ is $q^2-qm+m-1$. \n\nIn \\cite{yan03}, a new representation for $\\mathbf{H}(q,m)$ was introduced. In particular, since each column of the parity-check matrix $\\mathbf{H}(q,m)$ has $m$ blocks and each block is a permutation of $(1,0,0,\\dots,0,0)^T$, we can represent each column as a vector of integers between $0$ and $q-1$, where\n\\begin{equation} \\label{eq:rep}\ni \\triangleq \\left(\\overbrace{0,\\dots,0}^{i},1,\\overbrace{0,\\dots,0}^{q-i-1} \\right)^T\n\\end{equation}\nand $(\\cdot)^T$ denotes the transpose of its argument, i.e., the $1$-positions are associated with the integers modulo $q$.\n Furthermore, it follows from (\\ref{eq:pcmatrix}) and the integer representation in (\\ref{eq:rep}) that any column in an array LDPC code parity-check matrix is of the form \n\\begin{equation} \\label{eq:form}\n(x,x+y,x+2y,\\dots,x+(m-1)y)^T \\pmod q\n\\end{equation}\nwhere $x$ and $y$ are integers between $0$ and $q-1$. Thus, a column can be specified by two integers $x$ and $y$. Also, note that since there are $q^2$ distinct columns in an array LDPC code parity-check matrix, any pair $(x,y) \\in \\{0,\\dots,q-1\\}^2$ specifies a valid column.\n\nIn the following, the \\emph{support matrix} of a codeword\/stopping set will be the submatrix of $\\mathbf{H}(q,m)$ corresponding to the \\emph{support set} of the codeword\/stopping set, i.e., we keep the columns of $\\mathbf{H}(q,m)$ whose column indices coincide with the support set of the codeword\/stopping set. 
Also, we will use the integer representation in (\\ref{eq:rep}) for the columns of the submatrix.\n\nFurthermore, a \\emph{template} support matrix with parameters $m$, $q$, $w$, and $q_0$ is formally defined as an $m \\times w$ matrix with entries that are functions of $q$ and such that it is the support matrix (possibly column-permuted) of a codeword\/stopping set of weight\/size $w$ of $\\mathcal{C}(q,m)$ for all $q \\geq q_0$.\nThe specific matrix which results when a template support matrix is evaluated for a specific value of $q$ is called an \\emph{instance} of the template support matrix.\n\n\n\\begin{comment}\n\n\\begin{figure*}[!t]\n\\normalsize\n\\begin{displaymath}\n{\\tiny {\n\\left[ \\begin{smallmatrix}\n0 & 0 & 1 & 1 & -11 \\cdot 3^{-1} & -11 \\cdot 3^{-1} & -8 \\cdot 3^{-1} & -8 \\cdot 3^{-1} & -2 \\cdot 3^{-1} & -2 \\cdot 3^{-1} & 2 \\cdot 3^{-1} & 2 \\cdot 3^{-1} & -4 \\cdot 3^{-1} & -4 \\cdot 3^{-1} & 5 \\cdot 3^{-1} & 5 \\cdot 3^{-1} & -3 & -3 & -2 & -2 & -1 & -1 \\\\\n0 & 6^{-1} & 3 \\cdot 2^{-1} & 5 \\cdot 6^{-1} & -11 \\cdot 6^{-1} & -2 & -11 \\cdot 6^{-1} & -4 \\cdot 3^{-1} & - 2^{-1} & 6^{-1} & 5 \\cdot 6^{-1} & 4 \\cdot 3^{-1} & 0 & -2 \\cdot 3^{-1} & 4 \\cdot 3^{-1} & 3 \\cdot 2^{-1} & -2 & -4 \\cdot 3^{-1} & -2 \\cdot 3^{-1} & -2^{-1} & 0 & -2^{-1} \\\\\n0 & 3^{-1} & 2 & 2 \\cdot 3^{-1} & 0 & -3^{-1} & -1 & 0 & -3^{-1} & 1 & 1 & 2 & 4 \\cdot 3^{-1} & 0 & 1 & 4 \\cdot 3^{-1} & -1 & 3^{-1} & 2 \\cdot 3^{-1} & 1 & 1 & 0 \\\\\n0 & 2^{-1} & 5 \\cdot 2^{-1} & 2^{-1} & 11 \\cdot 6^{-1} & 4 \\cdot 3^{-1} & -6^{-1} & 4 \\cdot 3^{-1} & -6^{-1} & 11 \\cdot 6^{-1} & 7 \\cdot 6^{-1} & 8 \\cdot 3^{-1} & 8 \\cdot 3^{-1} & 2 \\cdot 3^{-1} & 2 \\cdot 3^{-1} & 7 \\cdot 6^{-1} & 0 & 2 & 2 & 5 \\cdot 2^{-1} & 2 & 2^{-1}\\\\\n0 & 2 \\cdot 3^{-1} & 3 & 3^{-1} & 11 \\cdot 3^{-1} & 3 & 2 \\cdot 3^{-1} & 8 \\cdot 3^{-1} & 0 & 8 \\cdot 3^{-1} & 4 \\cdot 3^{-1} & 10 \\cdot 3^{-1} & 4 & 4 \\cdot 3^{-1} & 3^{-1} & 1 & 1 & 11 \\cdot 3^{-1} & 10 \\cdot 3^{-1} & 4 & 3 
& 1 \\\n0 & 5 \cdot 6^{-1} & 7 \cdot 2^{-1} & 6^{-1} & 11 \cdot 2^{-1} & 14 \cdot 3^{-1} & 3 \cdot 2^{-1} & 4 & 6^{-1} & 7 \cdot 2^{-1} & 3 \cdot 2^{-1} & 4 & 16 \cdot 3^{-1} & 2 & 0 & 5 \cdot 6^{-1} & 2 & 16 \cdot 3^{-1} & 14 \cdot 3^{-1} & 11 \cdot 2^{-1} &4 & 3 \cdot 2^{-1} \n\end{smallmatrix} \right]}}\n\end{displaymath}\n \hrulefill\n\vspace*{-2mm}\n\end{figure*}\n\n\end{comment}\n\n\n\n\n\n\n\section{Deriving Upper Bounds on $d(q,m)$} \label{sec:algo}\n\nIn this section, we describe a heuristic that can be used to derive upper bounds on the minimum\/stopping distance of array LDPC codes. For simplicity, we will only consider the codeword case (the stopping set case is similar). \nThe heuristic is a three-step procedure:\n\begin{enumerate}\n\item In the first step, pairs of codewords $\mathbf{c}_1 \in \mathcal{C}(q_1,m)$ and $\mathbf{c}_2 \in \mathcal{C}(q_2,m)$, $q_1 < q_2$ and $m$ fixed, where $\mathbf{c}_1$ and $\mathbf{c}_2$ have the same \emph{graphical cycle structure} (a concept to be defined later), are identified.\n\item The second step is to infer a \emph{candidate} template support matrix (which may or may not exist) such that the instances for $q=q_1$ and $q=q_2$ are the support matrices (possibly column-permuted) of the two codewords $\mathbf{c}_1$ and $\mathbf{c}_2$, respectively. \nWe emphasize here that the inferred matrix is only a \emph{candidate} template support matrix, since a formal proof is needed to show that all instances for $q \geq q_0$, for some $q_0$, are in fact valid (possibly column-permuted) support matrices. \n\item The third step is a formal proof that the instances of the candidate template support matrix are indeed valid \n(possibly column-permuted) \nsupport matrices of codewords for all possible values of $q$ larger than or equal to $q_0$. 
\n\end{enumerate}\nNote that all instances of a template support matrix need not have their columns in the order implied by the parity-check matrix in (\ref{eq:pcmatrix}). \nThis is obviously not important, since the order of the columns in a support matrix is not relevant (independent of the order, it will represent the same codeword\/stopping set).\n\n\subsection{First Step: Graphical Cycle Structure} \label{sec:structure}\n\nNote that for the array LDPC codes there exists a subgroup of the automorphism group which is doubly transitive \cite{yan03}. This means that (by definition), for each codeword $\mathbf{c} \in \mathcal{C}(q,m)$, there exists a codeword $\rho(\mathbf{c})$ (obtained by permuting the coordinates of $\mathbf{c}$ according to the permutation $\rho$ from this subgroup) having the coordinates $(p_1,p_2)$, for any $0 \leq p_1 < p_2 \leq q^2-1$, in its support set. Thus, it is always possible to permute any codeword (using permutations from this subgroup) such that the corresponding support matrix contains the columns $(0,0,0,\dots,0)^T$ and $(q-1,0,1,\dots,q-2)^T$. This is the case since these columns will always be in the parity-check matrix $\mathbf{H}(q,m)$ for all valid values of $q$ and $m$. In particular, the column $(0,0,0,\dots,0)^T$ (resp.\ $(q-1,0,1,\dots,q-2)^T$) is generated by $x=0$ and $y=0$ (resp.\ $x=q-1$ and $y=1$) using the representation in (\ref{eq:form}).\n\nAs argued above, the support matrix can be regarded as an $m \times w$ matrix of integers modulo $q$, where $w$ is the weight of the underlying codeword. From this matrix we can make a bipartite graph, denoted by $G^{(i,j)}=G(V^{(i,j)},E^{(i,j)})$, for each pair of rows $(i,j)$, $0 \leq i < j \leq m-1$, with vertex set $V^{(i,j)} = \{v^{(i)}_0,\dots,v^{(i)}_{q-1}\} \cup \{v^{(j)}_0,\dots,v^{(j)}_{q-1}\}$ and with an edge $(v^{(i)}_{\alpha},v^{(j)}_{\beta}) \in E^{(i,j)}$ for each column of the support matrix having $\alpha$ as its $i$th entry and $\beta$ as its $j$th entry. Now, let $\mathbf{c}_1 \in \mathcal{C}(q_1,m)$ and $\mathbf{c}_2 \in \mathcal{C}(q_2,m)$, $q_2 > q_1$, be two distinct \emph{minimal} codewords of the same Hamming weight, where a \emph{minimal} codeword is a codeword that does not have the support set of a nonzero codeword as a proper subset of its own support set. 
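The graph construction just described can be sketched in a few lines of Python (ours; the function and variable names are not from the paper): each column of the support matrix contributes one edge between the value-vertices of rows $i$ and $j$, and the simple cycles through a given edge can be enumerated by a depth-first search over the remaining columns.

```python
from collections import defaultdict

def cycle_lengths_through_edge(row_i, row_j, col0):
    """Lengths of the simple cycles of G^{(i,j)} containing the edge given by
    column col0. Column c is an edge between vertices ('i', row_i[c]) and
    ('j', row_j[c]); a cycle through col0 is a simple path between its two
    endpoints that avoids col0, closed by the edge col0 itself."""
    adj = defaultdict(list)
    for c in range(len(row_i)):
        u, v = ('i', row_i[c]), ('j', row_j[c])
        adj[u].append((c, v))
        adj[v].append((c, u))
    start, goal = ('i', row_i[col0]), ('j', row_j[col0])
    lengths = []

    def dfs(v, used, visited):
        for c, w in adj[v]:
            if c == col0 or c in used:
                continue
            if w == goal:
                lengths.append(len(used) + 2)  # path edges plus the closing edge
            elif w not in visited:
                dfs(w, used | {c}, visited | {w})

    dfs(start, frozenset(), frozenset({start}))
    return sorted(lengths)

# Toy (hypothetical) support-matrix rows: the four columns form a single 4-cycle.
assert cycle_lengths_through_edge([0, 0, 1, 1], [0, 1, 0, 1], 0) == [4]
```

Since the graph is bipartite, all reported cycle lengths are even, and a repeated column (a multi-edge) shows up as a cycle of length $2$.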
From each of the corresponding support matrices we build the support matrix graphs $G^{(i,j)}$ for each pair of rows $(i,j)$, $0 \\leq i < j \\leq m-1$, as outlined above. The graphs corresponding to $\\mathbf{c}_1$ and $\\mathbf{c}_2$ are denoted by $G_{\\mathbf{c}_1}^{(i,j)}$ and $G_{\\mathbf{c}_2}^{(i,j)}$, respectively. Now, $\\mathbf{c}_1$ and $\\mathbf{c}_2$ are said to have the same \\emph{graphical cycle structure} (by definition) if and only if the graphs $G_{\\mathbf{c}_1}^{(i,j)}$ and $G_{\\mathbf{c}_2}^{(i,j)}$, for each pair $(i,j)$,\nhave the same number of cycles of a given length containing the edge $(v^{(i)}_{q-1+i \\pmod q}, v^{(j)}_{q-1+j \\pmod q})$ and also the same number of cycles of a given length containing the edge $(v^{(i)}_0,v^{(j)}_0)$.\n\n\n\n\n\n\nThe basic idea is to identify pairs of (minimal) codewords $\\mathbf{c}_1 \\in \\mathcal{C}(q_1,m)$ and $\\mathbf{c}_2 \\in \\mathcal{C}(q_2,m)$, $q_2 > q_1$ and $m$ fixed, with the same graphical cycle structure, since if they do not have the same graphical cycle structure, then it is likely (although not impossible when $q_1$ or $q_2$ is small)\nthat their support matrices \ncan not be instances of the same template support matrix. Then, for a pair of (minimal) codewords with the same graphical cycle structure, we would like to infer a template support matrix\nsuch that the instances for $q=q_1$ and $q=q_2$ are the support matrices (possibly column-permuted) of the codewords $\\mathbf{c}_1$ and $\\mathbf{c}_2$, respectively. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{comment}\n\n\\begin{example} \\label{ex:1}\nConsider the case $q=29$ and $m=6$. Using a computer search, we have found a stopping set of size $22$. 
The corresponding support matrix is\n\\begin{equation} \\label{eq:supportmatrix1}\n{\\tiny {\n\\left[ \\begin{smallmatrix} \n0 & 0 & 1 & 1 & 6 & 6 & 7 & 7 & 9 & 9 & 20 & 20 & 18 & 18 & 21 & 21 & 26 & 26 & 27 & 27 & 28 & 28 \\\\\n0 & 5 & 16 & 25 & 3 & 27 & 3 & 18 & 14 & 5 & 25 & 11 & 0 & 9 & 11 & 16 & 27 & 18 & 9 & 14 & 0 & 14 \\\\\n0 & 10 & 2 & 20 & 0 & 19 & 28 & 0 & 19 & 1 & 1 & 2 & 11 & 0 & 1 & 11 & 28 & 10 & 20 & 1 & 1 & 0 \\\\\n0 & 15 & 17 & 15 & 26 & 11 & 24 & 11 & 24 & 26 & 6 & 22 & 22 & 20 & 20 & 6 & 0 & 2 & 2 & 17 & 2 & 15 \\\\\n0 & 20 & 3 & 10 & 23 & 3 & 20 & 22 & 0 & 22 & 11 & 13 & 4 & 11 & 10 & 1 & 1 & 23 & 13 & 4 & 3 & 1 \\\\\n0 & 25 & 18 & 5 & 20 & 24 & 16 & 4 & 5 & 18 & 16 & 4 & 15 & 2 & 0 & 25 & 2 & 15 & 24 & 20 & 4 & 16 \n\\end{smallmatrix} \\right]}}\n\\end{equation}\nand the support matrix graph $G^{(0,1)}$ (corresponding to the first two rows) is shown in Fig.~\\ref{fig:graph1}. \nThere are two distinct cycles in the graph containing the edge $(v^{(0)}_{28},v^{(1)}_0)$, namely the cycle \n\\begin{displaymath}\n\\left(v^{(0)}_{28},v^{(1)}_0,v^{(0)}_{18},v^{(1)}_9,v^{(0)}_{27},v^{(1)}_{14},v^{(0)}_{28} \\right)\n\\end{displaymath}\n (indicated with dashed edges in Fig.~\\ref{fig:graph1}) and the cycle \n\\begin{displaymath}\n\\left( v^{(0)}_{28},v^{(1)}_0,v^{(0)}_0,v^{(1)}_5,v^{(0)}_9,v^{(1)}_{14},v^{(0)}_{28} \\right)\n\\end{displaymath}\n both of length $6$. 
Furthermore, for the support matrix\n\\begin{equation} \\label{eq:supportmatrix2}\n{\\tiny {\n\\left[ \\begin{smallmatrix}\n0 & 0 & 1 & 1 & 21 & 21 & 22 & 22 & 24 & 24 & 13 & 13 & 11 & 11 & 14 & 14 & 34 & 34 & 35 & 35 & 36 & 36 \\\\\n0 & 31 & 20 & 7 & 29 & 35 & 29 & 11 & 18 & 31 & 7 & 26 & 0 & 24 & 26 & 20 & 35 & 11 & 24 & 18 & 0 & 18 \\\\\n0 & 25 & 2 & 13 & 0 & 12 & 36 & 0 & 12 & 1 & 1 & 2 & 26 & 0 & 1 & 26 & 36 & 25 & 13 & 1 & 1 & 0 \\\\\n0 & 19 & 21 & 19 & 8 & 26 & 6 & 26 & 6 & 8 & 32 & 15 & 15 & 13 & 13 & 32 & 0 & 2 & 2 & 21 & 2 & 19 \\\\\n0 & 13 & 3 & 25 & 16 & 3 & 13 & 15 & 0 & 15 & 26 & 28 & 4 & 26 & 25 & 1 & 1 & 16 & 28 & 4 & 3 & 1 \\\\\n0 & 7 & 22 & 31 & 24 & 17 & 20 & 4 & 31 & 22 & 20 & 4 & 30 & 2 & 0 & 7 & 2 & 30 & 17 & 24 & 4 & 20 \n\\end{smallmatrix} \\right]}}\n\\end{equation}\ncorresponding to a stopping set of size $22$ for $q=37$, the corresponding cycles are \n\\begin{displaymath}\n\\left(v^{(0)}_{36},v^{(1)}_0,v^{(0)}_{11},v^{(1)}_{24},v^{(0)}_{35},v^{(1)}_{18},v^{(0)}_{36} \\right)\n\\end{displaymath}\nand \n\\begin{displaymath}\n\\left(v^{(0)}_{36},v^{(1)}_0,v^{(0)}_0,v^{(1)}_{31},v^{(0)}_{24},v^{(1)}_{18},v^{(0)}_{36} \\right)\n\\end{displaymath}\n both of length $6$. Thus, we get the same cycle lengths. Continuing with the remaining pairs of rows, $(i,j)=(0,2),(0,3),\\dots,(4,5)$, we get\nthe same cycle lengths for cycles containing the edge $(v^{(i)}_{q-1+i \\pmod{q}},v^{(j)}_{q-1+j \\pmod{q}})$ or the edge $(v^{(i)}_{0},v^{(j)}_{0})$ for both support matrices. 
Thus, we would expect that there exists a template support matrix which is consistent with both support matrices for $q=29$ and $37$, respectively.\nIn fact, by carefully looking at the support matrices in (\\ref{eq:supportmatrix1}) and (\\ref{eq:supportmatrix2}), we can observe that they are consistent with the template matrix shown in (\\ref{eq:supportm}) at the top of the page where all entries should be reduced modulo $q$.\n\\end{example}\n\n\\end{comment}\n\n\\begin{figure}[tbp]\n \\centerline{\\includegraphics[height=0.5\\columnwidth]{Graph3}}\n \\caption{Support matrix graph $G^{(0,1)}$ for the support matrix in (\\ref{eq:supportmatrix1}). The cycle with dashed edges is the cycle $(v^{(0)}_{46},v^{(1)}_0,v^{(0)}_{0},v^{(1)}_{28},v^{(0)}_{5},v^{(1)}_{8},v^{(0)}_{11},v^{(1)}_{32},v^{(0)}_{6},v^{(1)}_{4},v^{(0)}_{46})$ of length $10$.}\n \\label{fig:graph1}\n\\end{figure}\n\n\n\\begin{example} \\label{ex:1}\nConsider the case $q=47$ and $m=6$. Using a computer search, we have found a (minimal) codeword of weight $20$. The corresponding support matrix is\n\\begin{equation} \\label{eq:supportmatrix1}\n{\\tiny {\n\\left[ \\begin{smallmatrix} \n0&\t42&\t46&\t5&\t36&\t46&\t37&\t31&\t11&\t5&\t43&\t6&\t37&\t0&\t43&\t42&\t36&\t31&\t11&\t6\\\\\t\n0&\t43&\t0&\t8&\t39&\t4&\t43&\t39&\t32&\t28&\t20&\t32&\t16&\t28&\t24&\t24&\t20&\t16&\t8&\t4\\\\\t\n0&\t44&\t1&\t11&\t42&\t9&\t2&\t0&\t6&\t4&\t44&\t11&\t42&\t9&\t5&\t6&\t4&\t1&\t5&\t2\\\\\t\n0&\t45&\t2&\t14&\t45&\t14&\t8&\t8&\t27&\t27&\t21&\t37&\t21&\t37&\t33&\t35&\t35&\t33&\t2&\t0\\\\\t\n0&\t46&\t3&\t17&\t1&\t19&\t14&\t16&\t1&\t3&\t45&\t16&\t0&\t18&\t14&\t17&\t19&\t18&\t46&\t45\\\\\t\n0&\t0&\t4&\t20&\t4&\t24&\t20&\t24&\t22&\t26&\t22&\t42&\t26&\t46&\t42&\t46&\t3&\t3&\t43&\t43\n\\end{smallmatrix} \\right]}}\n\\end{equation}\nand the support matrix graph $G^{(0,1)}$ (corresponding to the first two rows) is shown in Fig.~\\ref{fig:graph1}. 
\nThere is one distinct cycle in the graph containing the edge $(v^{(0)}_{46},v^{(1)}_0)$, namely the cycle \n\\begin{equation} \\label{eq:cycle1}\n\\left(v^{(0)}_{46},v^{(1)}_0,v^{(0)}_{0},v^{(1)}_{28},v^{(0)}_{5},v^{(1)}_{8},v^{(0)}_{11},v^{(1)}_{32},v^{(0)}_{6},v^{(1)}_{4},v^{(0)}_{46} \\right)\n\\end{equation}\n (indicated with dashed edges in Fig.~\\ref{fig:graph1}) of length $10$.\nFurthermore, for the support matrix\n\\begin{equation} \\label{eq:supportmatrix2}\n{\\tiny {\n\\left[ \\begin{smallmatrix}\n0&\t54&\t58&\t5&\t48&\t58&\t49&\t43&\t11&\t5&\t55&\t6&\t49&\t0&\t55&\t54&\t48&\t43&\t11&\t6\\\\\t\n0&\t55&\t0&\t8&\t51&\t4&\t55&\t51&\t38&\t34&\t26&\t38&\t22&\t34&\t30&\t30&\t26&\t22&\t8&\t4\\\\\t\n0&\t56&\t1&\t11&\t54&\t9&\t2&\t0&\t6&\t4&\t56&\t11&\t54&\t9&\t5&\t6&\t4&\t1&\t5&\t2\\\\\t\n0&\t57&\t2&\t14&\t57&\t14&\t8&\t8&\t33&\t33&\t27&\t43&\t27&\t43&\t39&\t41&\t41&\t39&\t2&\t0\\\\\t\n0&\t58&\t3&\t17&\t1&\t19&\t14&\t16&\t1&\t3&\t57&\t16&\t0&\t18&\t14&\t17&\t19&\t18&\t58&\t57\\\\\t\n0&\t0&\t4&\t20&\t4&\t24&\t20&\t24&\t28&\t32&\t28&\t48&\t32&\t52&\t48&\t52&\t56&\t56&\t55&\t55\t\n\\end{smallmatrix} \\right]}}\n\\end{equation}\ncorresponding to a (minimal) codeword of weight $20$ for $q=59$ (and $m=6$), the corresponding cycle (also of length $10$) is \n\\begin{equation} \\label{eq:cycle2}\n\\left(v^{(0)}_{58},v^{(1)}_0,v^{(0)}_{0},v^{(1)}_{34},v^{(0)}_{5},v^{(1)}_{8},v^{(0)}_{11},v^{(1)}_{38},v^{(0)}_{6},v^{(1)}_{4},v^{(0)}_{58} \\right).\n\\end{equation}\nThus, we get the same cycle lengths. Continuing with the remaining pairs of rows, $(i,j)=(0,2),(0,3),\\dots,(4,5)$, we get\nthe same cycle lengths for cycles containing the edge $(v^{(i)}_{q-1+i \\pmod{q}},v^{(j)}_{q-1+j \\pmod{q}})$ or the edge $(v^{(i)}_{0},v^{(j)}_{0})$ for both support matrices. 
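As a further sanity check, both matrices can be verified mechanically to be support matrices of codewords: by (\ref{eq:form}), every column must be an arithmetic progression modulo $q$, and since every check node of a codeword must be satisfied, every value must occur an even number of times in each row. A Python sketch of this check (ours, not from the paper), applied to the matrix in (\ref{eq:supportmatrix1}):

```python
def is_codeword_support(S, q):
    """S is an m x w support matrix in the integer representation. Valid iff
    each column has the form (x, x+y, ..., x+(m-1)y) mod q and each value
    occurs an even number of times in every row."""
    m = len(S)
    for c in range(len(S[0])):
        y = (S[1][c] - S[0][c]) % q
        if any(S[r][c] != (S[0][c] + r * y) % q for r in range(m)):
            return False  # not a column of H(q, m)
    return all(row.count(v) % 2 == 0 for row in S for v in row)

# The weight-20 support matrix for q = 47, m = 6 from the example above.
S47 = [
    [0, 42, 46, 5, 36, 46, 37, 31, 11, 5, 43, 6, 37, 0, 43, 42, 36, 31, 11, 6],
    [0, 43, 0, 8, 39, 4, 43, 39, 32, 28, 20, 32, 16, 28, 24, 24, 20, 16, 8, 4],
    [0, 44, 1, 11, 42, 9, 2, 0, 6, 4, 44, 11, 42, 9, 5, 6, 4, 1, 5, 2],
    [0, 45, 2, 14, 45, 14, 8, 8, 27, 27, 21, 37, 21, 37, 33, 35, 35, 33, 2, 0],
    [0, 46, 3, 17, 1, 19, 14, 16, 1, 3, 45, 16, 0, 18, 14, 17, 19, 18, 46, 45],
    [0, 0, 4, 20, 4, 24, 20, 24, 22, 26, 22, 42, 26, 46, 42, 46, 3, 3, 43, 43],
]
assert is_codeword_support(S47, 47)
```

The same check, with "even number of times" relaxed to "at least two times", covers the stopping set case.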
Thus, we would expect that there might exist a template support matrix whose instances (possibly column-permuted) for $q=47$ and $q=59$ are the support matrices in (\\ref{eq:supportmatrix1}) and (\\ref{eq:supportmatrix2}), respectively.\n\\end{example}\n\n\n\n\\begin{algorithm}[tbp] \n\\caption{Template Support Matrix Inference}\n\\label{alg:tsmi}\n\\begin{algorithmic}[1]\n\\small\n\\STATE $\/*$ Fill in entries in the candidate template support matrix in (\\ref{eq:support_matrix_gen}) based on the support matrices of two (minimal) codewords $\\mathbf{c}_1 \\in \\mathcal{C}(q_1,m)$ and $\\mathbf{c}_2 \\in \\mathcal{C}(q_2,m)$, $q_2 > q_1$, of the same Hamming weight and with the same graphical cycle structure.\\\\\n{\\bf Input:} Row indices $i$ and $j$, and a pair of cycles $(\\mathbf{v}_1,\\mathbf{v}_2)$ (of the same length $2l$) as defined in (\\ref{eq:cycle1gen}) and (\\ref{eq:cycle2gen}).\\\\\n{\\bf Output:} A (partial) candidate template support matrix as defined in (\\ref{eq:support_matrix_gen}), and a (partial) permutation $\\pi(\\cdot)$. 
$*\/$\n\STATE Assign to $\mathcal{I}$ all integers in $\{1,\dots,I\}$ for some integer $I \geq 1$.\n\FOR{$r \leftarrow 0$ to $2l-1$}\n\STATE Find an index pair $(a,b)$ (which is also unique) such that $a \in \psi_1(v^{(\gamma)}_{\alpha_{1,r}}) \cap \psi_1(v^{(\delta)}_{\alpha_{1,r+1}})$ and $b \in \psi_2(v^{(\gamma)}_{\alpha_{2,r}}) \cap \psi_2(v^{(\delta)}_{\alpha_{2,r+1}})$, where $\gamma = i$ and $\delta=j$ if $r$ is even, and $\gamma = j$ and $\delta=i$ if $r$ is odd.\n\STATE Solve the two systems of equations\n\begin{align}\nx^{(1)}_a+\gamma y^{(1)}_a \pmod {q_1} &= \alpha_{1,r} \notag \\\nx^{(1)}_a+\delta y^{(1)}_a \pmod {q_1} &= \alpha_{1,r+1} \notag\n\end{align}\nand\n\begin{align}\nx^{(2)}_b+\gamma y^{(2)}_b \pmod {q_2} &= \alpha_{2,r} \notag \\\nx^{(2)}_b+\delta y^{(2)}_b \pmod {q_2} &= \alpha_{2,r+1} \notag\n\end{align}\n\STATE Find the integers $k_x$ and $k_y$ in $\mathcal{I}$ that give the \emph{simplest} solutions (for $x$ and $y$, modulo $q_1 q_2$) to the two systems of congruences\n\begin{align}\nx &\equiv k_x \cdot x^{(1)}_a \pmod {q_1} \notag \\\nx &\equiv k_x \cdot x^{(2)}_b \pmod {q_2} \notag \n\end{align}\nand\n\begin{align}\ny &\equiv k_y \cdot y^{(1)}_a \pmod {q_1} \notag \\\ny &\equiv k_y \cdot y^{(2)}_b \pmod {q_2} \notag \n\end{align}\n\IF{$|x| \leq |x-q_1 q_2|$}\n\STATE $\tilde{x} \leftarrow x \cdot k_x^{-1}$\\ \n\ELSE \n\STATE $\tilde{x} \leftarrow (x-q_1 q_2) \cdot k_x^{-1}$\\ \n\ENDIF\n\IF{$|y| \leq |y-q_1 q_2|$}\n\STATE $\tilde{y} \leftarrow y \cdot k_y^{-1}$ \\\n\ELSE\n\STATE $\tilde{y} \leftarrow (y-q_1 q_2) \cdot k_y^{-1}$ \\\n\ENDIF\n\IF{$x_a = \ast$ (and $y_a = \ast$)}\n\STATE $x_a \leftarrow \tilde{x}$, $y_a \leftarrow \tilde{y}$, \n$\pi(b) \leftarrow a$, and go to Step 3.\n\ELSIF{$x_a \neq \tilde{x}$ or $y_a \neq \tilde{y}$}\n\STATE An inconsistency has occurred. Exit.\n\ENDIF\n\ENDFOR\n\end{algorithmic}\n\end{algorithm} \n\n\n\subsection{Second Step: Inferring a Candidate Template Support Matrix} \label{sec:infer}\n\nIn this subsection, we consider the second step of the procedure, i.e., to infer a candidate template support matrix from two minimal codewords with the same graphical cycle structure. This is done by solving simple $2$-by-$2$ equation systems and congruences.\nWe remark that this procedure will give a \emph{candidate} template support matrix, since we formally need to prove that the resulting matrix is a template support matrix.\n\nNow, let \n\begin{equation} \label{eq:cycle1gen}\n\mathbf{v}_1 = \n(v_{\alpha_{1,0}}^{(i)}, v_{\alpha_{1,1}}^{(j)}, \dots, v_{\alpha_{1,2l-2}}^{(i)}, v_{\alpha_{1,2l-1}}^{(j)}, v_{\alpha_{1,2l}}^{(i)})\n\end{equation}\n denote a cycle of length $2l$, where $\alpha_{1,0} = \alpha_{1,2l}$, in the support matrix graph $G_{\mathbf{c}_1}^{(i,j)}$\ncomputed from a given minimal codeword $\mathbf{c}_1 \in \mathcal{C}(q_1,m)$. In a similar manner, we denote by \n\begin{equation} \label{eq:cycle2gen}\n\mathbf{v}_2 = \n(v_{\alpha_{2,0}}^{(i)}, v_{\alpha_{2,1}}^{(j)}, \dots, v_{\alpha_{2,2l-2}}^{(i)}, v_{\alpha_{2,2l-1}}^{(j)}, v_{\alpha_{2,2l}}^{(i)})\n\end{equation}\nwhere $\alpha_{2,0} = \alpha_{2,2l}$, a cycle of length $2l$ in the support matrix graph $G_{\mathbf{c}_2}^{(i,j)}$\ncomputed from a given minimal codeword $\mathbf{c}_2 \in \mathcal{C}(q_2,m)$, where $q_2 > q_1$. We assume here that $\mathbf{c}_1$ and $\mathbf{c}_2$ have the same Hamming weight and also the same graphical cycle structure. 
Now, the purpose is to infer the entries in a matrix\n\begin{equation} \label{eq:support_matrix_gen}\n\left[ \begin{smallmatrix}\nx_0 & x_1 & \cdots & x_{w-1} \\\nx_0+y_0 & x_1+y_1 & \cdots & x_{w-1}+y_{w-1} \\\n\vdots & \vdots & & \vdots \\\nx_0+(m-1)y_0 & x_1+(m-1)y_1 & \cdots & x_{w-1}+(m-1)y_{w-1}\n\end{smallmatrix}\n\right]\n\end{equation}\nwhere $w$ is the Hamming weight of $\mathbf{c}_1$ and $\mathbf{c}_2$, and \nsuch that the instances of the matrix for $q=q_1$ and $q=q_2$ are the support matrices (possibly column-permuted) of $\mathbf{c}_1$ and $\mathbf{c}_2$, respectively. \n\n\n\n\n\n\n\nAlgorithm~\ref{alg:tsmi} presents such an algorithm, where $\psi_1(v^{(i)}_{\alpha_1})$ (resp.\ $\psi_2(v^{(i)}_{\alpha_2})$) denotes the set of column indices of the support matrix of $\mathbf{c}_1$ (resp.\ $\mathbf{c}_2$) containing the entry ${\alpha_1}$ (resp.\ ${\alpha_2}$) in the $i$th row. \nAll entries in the resulting matrix (after applying Algorithm~\ref{alg:tsmi}) should be reduced modulo $q$ to get an instance for a specific value of $q$. The algorithm works on two cycles of the same length, one from the support matrix graph of a minimal codeword $\mathbf{c}_1 \in \mathcal{C}(q_1,m)$ and the other from the support matrix graph of a minimal codeword $\mathbf{c}_2 \in \mathcal{C}(q_2,m)$, where $q_2 > q_1$. The purpose is to fill in the entries in a candidate template support matrix, which initially is filled with erasures denoted by $\ast$. Furthermore, the algorithm also updates a permutation $\pi(\cdot)$ which gives the index mapping that should be applied to the columns of the support matrix of $\mathbf{c}_2$ to get the instance of the candidate template support matrix for $q=q_2$. 
The algorithm is run on pairs of cycles (both containing either the edge $(v^{(i)}_{q-1+i \\pmod q},v^{(j)}_{q-1+j \\pmod q})$ or the edge $(v^{(i)}_0,v^{(j)}_0)$ as the left-most or first edge in the cycle) until all entries are filled in. In Step 4 of the algorithm, an index pair $(a,b)$ is identified, where $a$ (resp.\\ $b$) is the index of the column in the support matrix of $\\mathbf{c}_1$ (resp.\\ $\\mathbf{c}_2$) containing $\\alpha_{1,r}^{(\\gamma)}$ (resp.\\ $\\alpha_{2,r}^{(\\gamma)}$) as the $\\gamma$th entry and $\\alpha_{1,r+1}^{(\\delta)}$ (resp.\\ $\\alpha_{2,r+1}^{(\\delta)}$) as the $\\delta$th entry. Later in Step 18 of the algorithm, these two indices are used to fill the permutation $\\pi$ ($\\pi(b) \\leftarrow a$). Actually, the index pairs $(a,b)$ can be computed in a preprocessing stage before the algorithm is even run, since they are available by simple cycle analysis.\n\nIn Steps 5 and 6 of the algorithm we determine the entries in column $a$ (of the candidate template support matrix) based on the two cycles. In Step 5 we first determine the actual values for $x$ and $y$ modulo $q_1$ (denoted by $x_a^{(1)}$ and $y_a^{(1)}$, respectively) for column $a$ of the support matrix of $\\mathbf{c}_1$, and then the corresponding values modulo $q_2$, now in column $b$ (denoted by $x_b^{(2)}$ and $y_b^{(2)}$, respectively), of the support matrix of $\\mathbf{c}_2$. Then, in Step 6, we find the \\emph{simplest} template solutions, i.e., the \\emph{simplest} expressions for both $x$ and $y$ such that they both evaluate modulo $q$ (for $q=q_1$ and $q=q_2$) to the correct values as given by the support matrices of the codewords $\\mathbf{c}_1$ and $\\mathbf{c}_2$, respectively, and then the entries for $x_a$ and $y_a$ are filled in the candidate template support matrix as defined in (\\ref{eq:support_matrix_gen})\nand \nas indicated in Steps 7 to 18. 
Note that in Steps 8 and 10 neither the inverse nor the product operation is actually performed; instead, the formal string consisting of the three tokens $x$ (or $x-q_1 q_2$ in Step 10), $\cdot$, and $k_x^{-1}$, with the specific values of $x$ and $k_x$ inserted, is assigned to $\tilde{x}$. A similar comment applies to the assignments in Steps 13 and 15. \nBy the \emph{simplest} solutions we mean the values for $k_x \in \mathcal{I}$ and $k_y \in \mathcal{I}$ such that, respectively, $\max(|k_x|, \min(|x|, |x-q_1 q_2|))$ and $\max(|k_y|, \min(|y|, |y-q_1 q_2|))$ are as small as possible. We remark here that this is only to find a candidate template support matrix with a nice\/compact representation, which also makes it easier to prove analytically that all instances (possibly column-permuted) are indeed valid support matrices of codewords for all values of $q$ larger than or equal to some $\tilde{q}$, when $\tilde{q}$ is small. In any case, for any practical value of $\tilde{q}$, regardless of the representation, a simple and fast computer search can be used to prove that the candidate template support matrix indeed gives a valid support matrix for all values of $q_0 \leq q < \tilde{q}$, for some $q_0$. For details, see the proofs of Theorems~\ref{th:1} and \ref{th:2} in Sections~\ref{sec:bound} and \ref{sec:bound_m7}, respectively. Note that if a specific pair of values $(k_x, k_y) \in \mathcal{I}^2$ gives a valid template support matrix (which of course requires a formal proof), then all pairs $(k_x,k_y) \in \mathcal{I}^2$ will give a valid template support matrix, possibly with different values for $q_0$. 
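The computer search mentioned above is cheap: a template entry of the form $a \cdot b^{-1}$ evaluates to $a b^{-1} \bmod q$, and the even-multiplicity condition on each row of the instance can then be tested directly for every prime $q$ in the range of interest. A Python sketch (ours; the toy row below is hypothetical and not one of the templates of this paper; `pow(b, -1, q)` requires Python 3.8+):

```python
def eval_entry(a, b, q):
    """Value of the template entry a * b^{-1} modulo the prime q
    (requires gcd(b, q) = 1)."""
    return a * pow(b, -1, q) % q

def instance_row_ok(template_row, q):
    """Even-multiplicity check for one row of an instance; entries of the
    template row are given as (a, b) pairs meaning a * b^{-1}."""
    row = [eval_entry(a, b, q) for (a, b) in template_row]
    return all(row.count(v) % 2 == 0 for v in row)

# 3^{-1} mod 29 is 10, so -11 * 3^{-1} evaluates to (-110) mod 29 = 6.
assert eval_entry(-11, 3, 29) == 6
# A (hypothetical) row in which every entry is repeated passes for any prime q
# not dividing the denominators.
assert all(instance_row_ok([(0, 1), (0, 1), (-11, 3), (-11, 3)], q)
           for q in (7, 11, 13, 29, 47, 59))
```

For a full check one would run this over all rows of the instance (and also verify that each instance column has the form (\ref{eq:form})), for every prime $q_0 \leq q < \tilde{q}$.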
Interestingly, however, the template support matrices will not necessarily be equivalent, in the sense that the instances of the different templates, for a given value of $q$ different from $q_1$ and $q_2$, will not necessarily give the same codeword from $\mathcal{C}(q,m)$. In conclusion, a template support matrix is \emph{not} unique, even if it is based on the same pair of codewords.\n\nNote that Algorithm~\ref{alg:tsmi} does in fact identify a one-to-one mapping (through the permutation $\pi(\cdot)$) between the columns of the support matrices of the codewords $\mathbf{c}_1 \in \mathcal{C}(q_1,m)$ and $\mathbf{c}_2 \in \mathcal{C}(q_2,m)$ by \emph{matching} cycles in the corresponding support matrix graphs. Then, the template values for $x$ and $y$ are established by \emph{matching} columns (and solving equations and congruences independently for each column) through this one-to-one mapping. It is in fact this particular one-to-one mapping (as opposed to an arbitrary mapping) that ensures that the resulting candidate template support matrix has entries that appear an even number of times (the codeword case) or at least two times (the stopping set case) in each row.\n\nIn principle, one type of error condition can occur, i.e., we can exit in Step 20. \nThis happens when a previous pair of cycles has determined the entries in column $a$ and then the current pair of cycles gives different values.\nIf the algorithm exits in Step 20, we need to start from scratch by considering a different pair of minimal codewords $\mathbf{c}_1 \in \mathcal{C}(q_1,m)$ and $\mathbf{c}_2 \in \mathcal{C}(q_2,m)$ of the same Hamming weight and with the same graphical cycle structure, and revert (back to erasures) all the entries filled in so far in the candidate template support matrix.\n\nIn Step 5 of Algorithm~\ref{alg:tsmi} two systems of equations need to be solved. 
They have the following solutions:\n\\begin{align}\nx^{(1)}_a &= \\alpha_{1,r}-\\gamma (\\delta-\\gamma)^{-1} (\\alpha_{1,r+1}-\\alpha_{1,r}) \\pmod {q_1} \\notag \\\\\ny^{(1)}_a &= (\\delta-\\gamma)^{-1}(\\alpha_{1,r+1}-\\alpha_{1,r}) \\pmod {q_1} \\notag \\\\\nx^{(2)}_b &= \\alpha_{2,r}-\\gamma (\\delta-\\gamma)^{-1} (\\alpha_{2,r+1}-\\alpha_{2,r}) \\pmod {q_2} \\notag \\\\\ny^{(2)}_b &= (\\delta-\\gamma)^{-1}(\\alpha_{2,r+1}-\\alpha_{2,r}) \\pmod {q_2} \\notag\n\\end{align}\nwhich also gives the rationale behind the assignment to the set of integers $\\mathcal{I}$ in Step 2 of the algorithm,\n since the solutions involve a multiplication by $(\\delta-\\gamma)^{-1}$.\n\nIn Step 6 of Algorithm~\\ref{alg:tsmi} two systems of congruences need to be solved. They have the following solutions:\n\\begin{align}\nx &= k_x (x_a^{(1)}+q_1 \\cdot \\kappa (x_b^{(2)}-x_a^{(1)})) \\pmod {q_1 q_2} \\label{eq:congu1} \\\\\ny &= k_y (y_a^{(1)}+q_1 \\cdot \\kappa (y_b^{(2)}-y_a^{(1)})) \\pmod {q_1 q_2} \\label{eq:congu2} \n\\end{align}\nwhere $\\kappa$ can be found using the extended Euclidean algorithm, which yields integers $\\kappa$ and $\\eta$ such that $\\kappa \\cdot q_1 + \\eta \\cdot q_2 = \\gcd(q_1,q_2)=1$.\n\nWe will illustrate the procedure in Example~\\ref{ex:2} below.\n\n\n\n\n\n\n\\begin{example} \\label{ex:2}\nConsider the two cycles in (\\ref{eq:cycle1}) and (\\ref{eq:cycle2}) for $q=47$ and $q=59$, respectively. Here, $i=0$ and $j=1$, and $\\kappa = -5$ and $\\eta=4$ (since $-5 \\cdot 47 + 4 \\cdot 59 = 1$). For $r=0$ (see Step 3 in Algorithm~\\ref{alg:tsmi}), $\\alpha_{1,r} = \\alpha_{1,0} = 46$, $\\alpha_{1,r+1} = \\alpha_{1,1} = 0$, $\\alpha_{2,r} = \\alpha_{2,0} = 58$, $\\alpha_{2,r+1} = \\alpha_{2,1} = 0$, $\\gamma=i=0$, and $\\delta=j=1$. Since $46$ appears in the first row and $0$ in the second row of the third column (column index $2$) of the support matrix in (\\ref{eq:supportmatrix1}), $a=2$. 
Similarly, $b=2$, since $58$ appears in the first row and $0$ in the second row of the third column of the support matrix in (\\ref{eq:supportmatrix2}). This completes Step 4 of the algorithm, and we get the solutions\n\\begin{align}\nx_{2}^{(1)} &= 46 - 0 \\cdot (1-0)^{-1} (0-46) \\pmod {47} = 46 \\notag \\\\\ny_{2}^{(1)} &= (1-0)^{-1} (0-46) \\pmod {47} = 1 \\notag \\\\\nx_{2}^{(2)} &= 58 - 0 \\cdot (1-0)^{-1} (0-58) \\pmod {59} = 58 \\notag \\\\\ny_{2}^{(2)} &= (1-0)^{-1} (0-58) \\pmod {59} = 1 \\notag\n\\end{align}\nin Step 5, \nfrom which we can calculate the following solutions for $x$ and $y$ in Step 6 (with $I=m-1=5$), using (\\ref{eq:congu1}) and (\\ref{eq:congu2}), respectively:\n\\begin{displaymath}\n\\begin{tabular}{r|rrrrrr}\n$k_x$\/$k_y$ & $1$ & $2$ & $3$ & $4$ & $5$ \\\\\n\\hline\n$x$ & $2772$ & $2771$ & $2770$ & $2769$ & $2768$ \\\\\n$x-q_1 q_2$ & $-1$ & $-2$&$-3$ &$-4$ & $-5$ \\\\\n$y$ & $1$ & $2$&$3$ &$4$ & $5$ \\\\\n$y-q_1 q_2$ & $-2772$ & $-2771$ & $-2770$ & $-2769$ & $-2768$\n\\end{tabular}\n\\end{displaymath}\nThus, we can fill in $\\pi(2)=2$, $x_{2}=-1$, and $y_{2}=1$ (corresponding to the values $k_x=1$ and $k_y=1$, which give the simplest solutions).\n\nIn a similar manner, for instance for $r=3$, we get $\\alpha_{1,r} = \\alpha_{1,3} = 28$, $\\alpha_{1,r+1} = \\alpha_{1,4} = 5$, $\\alpha_{2,r} = \\alpha_{2,3} = 34$, $\\alpha_{2,r+1} = \\alpha_{2,4} = 5$, $\\gamma=j=1$, and $\\delta=i=0$. 
For this case we have $a=9$ and $b=9$ (from Step 4), and the solutions\n\\begin{align}\nx_{9}^{(1)} &= 28 - 1 \\cdot (0-1)^{-1} (5-28) \\pmod {47} = 5 \\notag \\\\\ny_{9}^{(1)} &= (0-1)^{-1} (5-28) \\pmod {47} = 23 \\notag \\\\\nx_{9}^{(2)} &= 34 - 1 \\cdot (0-1)^{-1} (5-34) \\pmod {59} = 5 \\notag \\\\\ny_{9}^{(2)} &= (0-1)^{-1} (5-34) \\pmod {59} = 29 \\notag\n\\end{align}\n in Step 5, \nfrom which we can calculate the following solutions for $x$ and $y$ in Step 6 (with $I=m-1=5$), using (\\ref{eq:congu1}) and (\\ref{eq:congu2}), respectively:\n\\begin{displaymath}\n\\begin{tabular}{r|rrrrrr}\n$k_x$\/$k_y$ & $1$ & $2$ & $3$ & $4$ & $5$ \\\\\n\\hline\n$x$ & $5$ & $10$ & $15$ & $20$ & $25$ \\\\\n$x-q_1 q_2$ & $-2768$ & $-2763$ & $-2758$ & $-2753$ & $-2748$ \\\\\n$y$ & $1386$ & $2772$ & $1385$ & $2771$ & $1384$ \\\\\n$y-q_1 q_2$ & $-1387$ & $-1$ & $-1388$ & $-2$ & $-1389$ \n\\end{tabular}\n\\end{displaymath}\nThus, we can fill in $\\pi(9)=9$, $x_{9}=5$, and $y_{9}=-2^{-1}$ (corresponding to the values $k_x=1$ and $k_y=2$, which give the simplest solutions).\n\nContinuing with the rest of the values for $r$ (see Step 3 in Algorithm~\\ref{alg:tsmi}) a total of $10$ (the cycle length) columns of the candidate template support matrix can be determined. To determine the rest of the entries in the matrix, other cycle pairs must be considered. \nFor instance, by looking at the graphs $G^{(0,2)}$, we find the cycles\n\\begin{equation} \\notag\n\\left(v^{(0)}_{46},v^{(2)}_1,v^{(0)}_{31},v^{(2)}_{0},v^{(0)}_{0},v^{(2)}_{9},v^{(0)}_{46} \\right)\n\\end{equation}\nand\n\\begin{equation} \\notag\n\\left(v^{(0)}_{58},v^{(2)}_1,v^{(0)}_{43},v^{(2)}_{0},v^{(0)}_{0},v^{(2)}_{9},v^{(0)}_{58} \\right)\n\\end{equation}\nfor $q=47$ and $q=59$, respectively. Choose $r=1$, from which we get $\\alpha_{1,r} = \\alpha_{1,1} = 1$, $\\alpha_{1,r+1} = \\alpha_{1,2} = 31$, $\\alpha_{2,r} = \\alpha_{2,1} = 1$, $\\alpha_{2,r+1} = \\alpha_{2,2} = 43$, $\\gamma=j=2$, and $\\delta=i=0$. 
For this case we have $a=17$ and $b=17$ (from Step 4), and the solutions\n\\begin{align}\nx_{17}^{(1)} &= 1 - 2 \\cdot (0-2)^{-1} (31-1) \\pmod {47} = 31 \\notag \\\\\ny_{17}^{(1)} &= (0-2)^{-1} (31-1) \\pmod {47} = 32 \\notag \\\\\nx_{17}^{(2)} &= 1 - 2 \\cdot (0-2)^{-1} (43-1) \\pmod {59} = 43 \\notag \\\\\ny_{17}^{(2)} &= (0-2)^{-1} (43-1) \\pmod {59} = 38 \\notag\n\\end{align}\nin Step 5, \nfrom which we can calculate the following solutions for $x$ and $y$ in Step 6 (with $I=m-1=5$), using (\\ref{eq:congu1}) and (\\ref{eq:congu2}), respectively:\n\\begin{displaymath}\n\\begin{tabular}{r|rrrrrr}\n$k_x$\/$k_y$ & $1$ & $2$ & $3$ & $4$ & $5$ \\\\\n\\hline\n$x$ & $2757$ & $2741$ & $2725$ & $2709$ & $2693$ \\\\\n$x-q_1 q_2$ & $-16$ & $-32$ & $-48$ & $-64$ & $-80$ \\\\\n$y$ & $1395$ & $17$ & $1412$ & $34$ & $1429$ \\\\\n$y-q_1 q_2$ & $-1378$ & $-2756$ & $-1361$ & $-2739$ & $-1344$\n\\end{tabular}\n\\end{displaymath}\nThus, we can fill in $\\pi(17)=17$, $x_{17}=-16$, and $y_{17}=17 \\cdot 2^{-1}$ (corresponding to the values $k_x=1$ and $k_y=2$, which give the simplest solutions).\n Continuing (by considering more cycle pairs) we can determine the rest of the columns, and we end up with the candidate template support matrix shown in (\\ref{eq:supportm}) at the top of the page, where all entries should be reduced modulo $q$ to get an instance for a specific value of $q$. 
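As an independent check of such columns, the reduction modulo $q$ can be scripted; in the following Python sketch (our own helper notation, not part of Algorithm~\\ref{alg:tsmi}) a template entry $a \\cdot 2^{-e}$ is encoded as the pair $(a, e)$:

```python
# Hypothetical encoding (ours): a template entry a * 2^{-e} is the pair (a, e).
def instantiate(column, q):
    """Reduce one template column modulo the odd prime q."""
    inv2 = pow(2, -1, q)                 # inverse of 2 modulo q (Python >= 3.8)
    return [a * pow(inv2, e, q) % q for a, e in column]

# Third column of the template: (-1, 0, 1, 2, 3, 4), no 2^{-1} factors.
col2 = [(-1, 0), (0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
# Ninth column: (11, 17*2^{-1}, 6, 7*2^{-1}, 1, -3*2^{-1}).
col8 = [(11, 0), (17, 1), (6, 0), (7, 1), (1, 0), (-3, 1)]
```

For $q=47$, the first column above reduces to $(46, 0, 1, 2, 3, 4)$, matching the third column of the support matrix in (\\ref{eq:supportmatrix1}) used in this example.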
The remaining detailed calculations are omitted for brevity.\n\n\n\n\\end{example}\n\n\n\\begin{figure*}[!t]\n\\normalsize \\setcounter{mytempeqncnt}{\\value{equation}}\n\\setcounter{equation}{12}\n\\begin{equation} \\label{eq:supportm}\n{\\tiny {\n\\left[ \\begin{smallmatrix}\n0 & -5 & -1 & 5 & -11 & -1 & -10 & -16 & 11 & 5 & -4 & 6 & -10 & 0 & -4 & -5 & -11 & -16 & 11 & 6 \\\\\n0 & -4 & 0 & 8 & -8 & 4 & -4 & -8 & 17 \\cdot 2^{-1} & 9 \\cdot 2^{-1} & -7 \\cdot 2^{-1} & 17 \\cdot 2^{-1} & -15 \\cdot 2^{-1} & 9 \\cdot 2^{-1} & 2^{-1} & 2^{-1} & -7 \\cdot 2^{-1} & -15 \\cdot 2^{-1} & 8 & 4\\\\\n0 & -3 & 1 & 11 & -5 & 9 & 2 & 0 & 6 & 4 & -3 & 11 & -5 & 9 & 5 & 6 & 4 & 1 & 5 & 2 \\\\\n0 & -2 & 2 & 14 & -2 & 14 & 8 & 8 & 7 \\cdot 2^{-1} & 7 \\cdot 2^{-1} & -5 \\cdot 2^{-1} & 27 \\cdot 2^{-1} & -5 \\cdot 2^{-1} & 27 \\cdot 2^{-1} & 19 \\cdot 2^{-1} & 23 \\cdot 2^{-1} & 23 \\cdot 2^{-1} & 19 \\cdot 2^{-1} & 2& 0\\\\\n0 & -1 & 3 & 17 & 1 & 19 & 14 & 16 & 1 & 3 & -2 & 16 & 0 & 18 & 14 & 17 & 19 & 18 & -1 & -2 \\\\\n0 & 0 & 4 & 20 & 4 & 24 & 20 & 24 & -3 \\cdot 2^{-1} & 5 \\cdot 2^{-1} & -3 \\cdot 2^{-1} & 37 \\cdot 2^{-1} & 5 \\cdot 2^{-1} & 45 \\cdot 2^{-1} & 37 \\cdot 2^{-1} & 45 \\cdot 2^{-1} & 53 \\cdot 2^{-1} & 53 \\cdot 2^{-1} & -4 & -4\n\\end{smallmatrix} \\right]}}\n\\end{equation}\n\\setcounter{equation}{\\value{mytempeqncnt}}\n \\hrulefill\n\\vspace*{-2mm}\n\\end{figure*}\n\n\n\n\\subsection{Third Step: A Formal Proof} \\label{sec:formal}\n\nThe third step is showing that the candidate template support matrix is indeed a valid template support matrix for some parameter $q_0$, i.e., the instances for $q \\geq q_0$ (possibly column-permuted) are all valid support matrices of codewords from $\\mathcal{C}(q,m)$. In fact it is sufficient (to prove an upper bound on the minimum distance) to show that the instances for $q \\geq q_0$ (possibly column-permuted) all contain as submatrices valid support matrices of codewords from $\\mathcal{C}(q,m)$. 
In the case where an instance (possibly column-permuted) contains as a proper submatrix a valid support matrix of a codeword, the established upper bound is obviously not tight. In particular, we need to show, for any value of $q \\geq q_0$, for some $q_0$, that\n\\begin{enumerate}\n\\item all entries in a row occur an even number of times, \n\\item all columns in the matrix are in fact valid columns in an array LDPC code parity-check matrix, and \n\\item at least two columns are distinct modulo $q$ and appear an odd number of times.\n\\end{enumerate}\nNote that the first item above will always be satisfied if Algorithm~\\ref{alg:tsmi} indeed produces a complete candidate template support matrix, since the entries in the matrix are filled in by solving systems of equations and congruences that are based on cycles (see the discussion in Section~\\ref{sec:infer}). Furthermore, the second item is also always satisfied, since by construction all columns are of the form in (\\ref{eq:form}), for some $x$ and $y$, and all possible values for $x$ and $y$ will give a valid column\n(see the discussion following (\\ref{eq:form})). Thus, only the third item above needs to be explicitly checked if in fact the candidate template support matrix was produced by Algorithm~\\ref{alg:tsmi}.\n\n\n\n\\subsection{Applicability}\n\nThe heuristic outlined above in Sections~\\ref{sec:structure} through \\ref{sec:formal} is very general and can be applied for any pair of values $(q,m)$. However, the difficult part is finding low-weight\/small-size candidate codewords\/stopping sets for different values of $q$, a task that becomes harder as $m$ grows, since the minimum\/stopping distance increases with $m$. 
For this we have used the algorithm in \\cite{ros09,ros12}, and the minimum distance probabilistic algorithm in \\cite{tom07}.\n\nIn this work, we have applied the heuristic for $m=6$ and $m=7$, but remark that it will easily provide the upper bounds $d(q,4) \\leq 10$ and $d(q,5) \\leq 12$ which can be found in the literature \\cite{sug08}.\n\n\n\\subsection{Improved Algorithm} \\label{sec:improved}\n\nThe basic algorithm from Sections~\\ref{sec:structure} and \\ref{sec:infer} can be improved in the sense of increasing its probability of success, i.e., finding a valid template support matrix. The key observation in this respect is that even though the two codewords $\\mathbf{c}_1 \\in \\mathcal{C}(q_1,m)$ and $\\mathbf{c}_2 \\in \\mathcal{C}(q_2,m)$ do not have the same graphical cycle structure, their support matrices (possibly column-permuted) may still be instances of the same template matrix. The reason is that different entries in the template matrix may reduce to the same value modulo $q$ for different values of $q$. This typically happens when either $q_1$ or $q_2$ is small. A simple way to deal with such scenarios is by relaxing the condition that $\\mathbf{c}_1$ and $\\mathbf{c}_2$ should have \\emph{exactly} the same graphical cycle structure. 
In particular, it may be sufficient to require that the \\emph{minimum} cycle length of all cycles containing the edge $(v^{(i)}_{q-1+i \\pmod q},v^{(j)}_{q-1+j \\pmod q})$ and the \\emph{minimum} cycle length of all cycles containing the edge $(v^{(i)}_0,v^{(j)}_0)$ are the same for both support matrix graphs $G_{\\mathbf{c}_1}^{(i,j)}$ and $G_{\\mathbf{c}_2}^{(i,j)}$, $0 \\leq i < j \\leq m-1$, and then run Algorithm~\\ref{alg:tsmi} on such pairs of cycles (which have the same length).\n\n\n\\section{Upper Bound on $d(q,6)$} \\label{sec:bound}\n\nBy using the heuristic from Section~\\ref{sec:algo}\nwe have found the \\emph{candidate} template support matrix in (\\ref{eq:supportm}) shown at the top of the previous page. All entries in the matrix should be reduced modulo $q$. At this stage we emphasize that this is a \\emph{candidate} template support matrix, since we need to formally prove that the matrix is a template support matrix. In particular, we have used the procedure from Section~\\ref{sec:infer} to infer the matrix in (\\ref{eq:supportm}) from the codewords of Example~\\ref{ex:1}, which have the same graphical cycle structure. Also, in Example~\\ref{ex:2}, some of the columns in the matrix in (\\ref{eq:supportm}) were explicitly determined. The rest of the columns can be determined in a similar manner. Details are omitted for brevity.\nWe can now prove the following theorem.\n\n\\begin{theorem} \\label{th:1}\nThe minimum distance $d(q,6)$ is upper-bounded by $20$ for $q > 7$.\n\\end{theorem}\n\n\\begin{IEEEproof}\nThe proof is based on the candidate template support matrix in (\\ref{eq:supportm}). As explained in Section~\\ref{sec:formal}, there are three conditions that need to be checked. Also, if the candidate template support matrix was indeed produced by Algorithm~\\ref{alg:tsmi}, only the third condition needs to be explicitly checked. For completeness, and to provide a formal proof, we will nevertheless check all three conditions. 
Obviously, computing an upper bound on the minimum distance from a template support matrix based on a codeword is easy; the upper bound is just the number of columns in the matrix. Thus, establishing that the matrix in (\\ref{eq:supportm}) is a valid template support matrix, in the sense that all instances (possibly column-permuted) for $q > 7$ contain the support matrix of a codeword as a submatrix, establishes the upper bound of $20$, since there are $20$ columns in the matrix.\n\nIt is easy to check that each entry in each row of the matrix appears exactly twice, which means that the result is true if, for any value of $q > 7$, \n\\begin{enumerate}\n\\item[2)] all columns in the matrix are in fact valid columns in an array LDPC code parity-check matrix, and\n\\item[3)] at least two columns are distinct modulo $q$ and appear an odd number of times.\n\\end{enumerate}\n\nSince all columns in the matrix in (\\ref{eq:supportm}) are of the form in (\\ref{eq:form}), it follows that they are all valid columns in an array LDPC code parity-check matrix (see the discussion following (\\ref{eq:form})). \nIn particular, the values for $x,y$ for the first $6$ columns are\n\n\\begin{displaymath}\n\\begin{tabular}{c|cccccc}\n$x$&$0$&$-5$&$-1$&$5$&$-11$&$-1$\\\\\n\\hline\n$y$&$0$&$1$&$1$&$3$&$3$&$5$\n\\end{tabular}.\n\\end{displaymath}\n\nFor the third part of the proof, we need to show, for any value of $q > 7$, that there exist (at least two) columns in the parity-check matrix which are not identical modulo $q$ and appear an odd number of times. This is simple (and very fast) to check by a computer search for any finite value of $q$ that would be of any practical value. It is only for large values of $q$ that the theoretical proof below is needed.\n\nNote that the maximum absolute value of the entries in the first row of the matrix in (\\ref{eq:supportm}) is $16$. 
\nThus, the only possibility for repeated columns, when $q > 2 \\cdot 16 = 32$, is for two \\emph{neighboring} columns (with identical entries in the first row) to be the same. \nHowever, by looking at the third row in the matrix, this possibility can be ruled out by requiring that $q$ is larger than twice the maximum absolute value of the entries in the third row, i.e., by requiring $q > 2 \\cdot 11 = 22$. \nIn summary, it follows that there are no identical columns in the matrix in (\\ref{eq:supportm}) if $q > \\max(32,22)=32$. \nFurthermore, for values of $7 < q < 32$, it can be checked numerically that there are no repeated columns in (\\ref{eq:supportm}), and the result follows. \n\\end{IEEEproof}\n\n\n\n\\setcounter{equation}{13}\nWe remark that for $q=7$, the matrix in (\\ref{eq:supportm}) reduces to\n\\begin{equation} \\label{eq:supportmq7}\n\\left[ \\begin{smallmatrix}\n 0& 2& 6& 5& 3& 6& 4& 5& 4& 5& 3& 6& 4& 0& 3& 2& 3& 5& 4& 6\\\\ \n 0& 3& 0& 1& 6& 4& 3& 6& 5& 1& 0& 5& 3& 1& 4& 4& 0& 3& 1& 4 \\\\\n 0& 4& 1& 4& 2& 2& 2& 0& 6& 4& 4& 4& 2& 2& 5& 6& 4& 1& 5& 2 \\\\\n 0& 5& 2& 0& 5& 0& 1& 1& 0& 0& 1& 3& 1& 3& 6& 1& 1& 6& 2& 0 \\\\\n 0& 6& 3& 3& 1& 5& 0& 2& 1& 3& 5& 2& 0& 4& 0& 3& 5& 4& 6& 5 \\\\\n 0& 0& 4& 6& 4& 3& 6& 3& 2& 6& 2& 1& 6& 5& 1& 5& 2& 2& 3& 3 \n\\end{smallmatrix} \\right].\n\\end{equation}\nWe observe that there are indeed some identical columns when $q=7$. 
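These identical columns can be located mechanically; a minimal Python sketch (rows copied from (\\ref{eq:supportmq7})):

```python
from collections import Counter

# Rows of the q = 7 instance of the template, copied from (eq:supportmq7).
M = [
    [0, 2, 6, 5, 3, 6, 4, 5, 4, 5, 3, 6, 4, 0, 3, 2, 3, 5, 4, 6],
    [0, 3, 0, 1, 6, 4, 3, 6, 5, 1, 0, 5, 3, 1, 4, 4, 0, 3, 1, 4],
    [0, 4, 1, 4, 2, 2, 2, 0, 6, 4, 4, 4, 2, 2, 5, 6, 4, 1, 5, 2],
    [0, 5, 2, 0, 5, 0, 1, 1, 0, 0, 1, 3, 1, 3, 6, 1, 1, 6, 2, 0],
    [0, 6, 3, 3, 1, 5, 0, 2, 1, 3, 5, 2, 0, 4, 0, 3, 5, 4, 6, 5],
    [0, 0, 4, 6, 4, 3, 6, 3, 2, 6, 2, 1, 6, 5, 1, 5, 2, 2, 3, 3],
]
cols = list(zip(*M))                       # the 20 columns as tuples
counts = Counter(cols)
duplicated = [c for c, n in counts.items() if n > 1]
remaining = [c for c in cols if counts[c] == 1]
# Four column values occur twice each; deleting both copies of each
# leaves 12 distinct columns.
```

Removing both copies of each duplicated column leaves exactly the $12$ columns of (\\ref{eq:supportmq7red}).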
\nHowever, the bound in Theorem~\\ref{th:1} is still valid, since these columns can just be removed from (\\ref{eq:supportmq7}) and we will end up with the valid (but column-permuted) support matrix\n\\setcounter{equation}{15}\n\\begin{equation} \\label{eq:supportmq7red}\n\\left[ \\begin{smallmatrix}\n 0& 2& 6& 3& 5& 4& 6& 0& 3& 2& 5& 4 \\\\ \n 0& 3& 0& 6& 6& 5& 5& 1& 4& 4& 3& 1 \\\\\n 0& 4& 1& 2& 0& 6& 4& 2& 5& 6& 1& 5 \\\\\n 0& 5& 2& 5& 1& 0& 3& 3& 6& 1& 6& 2 \\\\\n 0& 6& 3& 1& 2& 1& 2& 4& 0& 3& 4& 6 \\\\\n 0& 0& 4& 4& 3& 2& 1& 5& 1& 5& 2& 3 \n\\end{smallmatrix} \\right]\n\\end{equation}\nwhich corresponds to a codeword of weight $12$, but the bound $d(7,6) \\leq 20$ is of course not tight in this case. In fact, we found by exhaustive search that the codeword corresponding to the matrix in (\\ref{eq:supportmq7red}) is indeed a minimum-weight codeword.\n\nFinally, we remark that the template support matrix in (\\ref{eq:supportm}) for $q=7$, $11$, $13$, $17$, and $19$ does not give instances with columns in the order implied by the parity-check matrix in (\\ref{eq:pcmatrix}). This can easily be seen from the sequence of $y$-values for the matrix in (\\ref{eq:supportm}), which should be nondecreasing.\nFurthermore, if two $y$-values are the same, then the corresponding sequence of $x$-values should be nondecreasing. For $q > 19$, it can easily be proved that the order is always according to (\\ref{eq:pcmatrix}). 
However, as argued previously, this is not important (independent of the order, a support matrix will represent the same codeword\/stopping set).\n\n\n\\begin{figure*}[!t]\n\\normalsize \\setcounter{mytempeqncnt}{\\value{equation}}\n\\setcounter{equation}{14}\n\\begin{equation} \\label{eq:supportm7}\n{\\tiny {\n\\begin{split}\n&\\left[ \\begin{smallmatrix}\n0 & 3 \\cdot 2^{-1} & 0 & -9 \\cdot 2^{-1} & -7 \\cdot 2^{-1} & -1 & -11 \\cdot 2^{-1} & -5 & 2 & -2 & -5 & -1 & 2 & 5 \\cdot 2^{-1} & 2^{-1} & 3 \\cdot 2^{-1} & -3 & -2 & -9 \\cdot 2^{-1} & \\\\\n0 & 3 \\cdot 2^{-1} & 1 & -7 \\cdot 2^{-1} & -5 \\cdot 2^{-1} & 0 & -7 \\cdot 2^{-1} & -3 & 9\\cdot 2^{-2} & -7 \\cdot 2^{-2} & -15\\cdot 2^{-2} & 2^{-2} & 3 \\cdot 2^{-1} & 2 & 1 & 2 & -5 \\cdot 2^{-1} & -3 \\cdot 2^{-1} & -3 & \\\\\n0 & 3 \\cdot 2^{-1} & 2 & -5 \\cdot 2^{-1} & -3 \\cdot 2^{-1} & 1 & -3 \\cdot 2^{-1} & -1 & 5 \\cdot 2^{-1} & -3 \\cdot 2^{-1} & -5 \\cdot 2^{-1} & 3 \\cdot 2^{-1} & 1 & 3 \\cdot 2^{-1} & 3 \\cdot 2^{-1} & 5 \\cdot 2^{-1} & -2 & -1 & -3 \\cdot 2^{-1} & \\\\\n0 & 3 \\cdot 2^{-1} & 3 & -3 \\cdot 2^{-1} & -2^{-1} & 2 & 2^{-1} & 1 & 11\\cdot 2^{-2} & -5\\cdot 2^{-2} & -5\\cdot 2^{-2} & 11\\cdot 2^{-2} & 2^{-1} & 1 & 2 & 3 & -3 \\cdot 2^{-1} & -2^{-1} & 0 & \\\\\n0 & 3 \\cdot 2^{-1} & 4 & -2^{-1} & 2^{-1} & 3 & 5 \\cdot 2^{-1} & 3 & 3 & -1 & 0 & 4 & 0 & 2^{-1} & 5 \\cdot 2^{-1} & 7 \\cdot 2^{-1} & -1 & 0 & 3 \\cdot 2^{-1} & \\\\\n0 & 3 \\cdot 2^{-1} & 5 & 2^{-1} & 3 \\cdot 2^{-1} & 4 & 9 \\cdot 2^{-1} & 5 & 13\\cdot 2^{-2} & -3\\cdot 2^{-2} & 5\\cdot 2^{-2} & 21 \\cdot 2^{-2} & -2^{-1} & 0 & 3 & 4 & -2^{-1} & 2^{-1} & 3 & \\\\\n0 & 3 \\cdot 2^{-1} & 6 & 3 \\cdot 2^{-1} & 5 \\cdot 2^{-1} & 5 & 13 \\cdot 2^{-1} & 7 & 7 \\cdot 2^{-1} & -2^{-1} & 5 \\cdot 2^{-1} & 13 \\cdot 2^{-1} & -1 & -2^{-1} & 7 \\cdot 2^{-1} & 9 \\cdot 2^{-1} & 0 & 1 & 9 \\cdot 2^{-1} &\n\\end{smallmatrix} \\right. \\\\\n& \\left. 
\\begin{smallmatrix}\n& -3 & 2^{-1} & 5 \\cdot 2^{-1} & -11 \\cdot 2^{-1} & -7 \\cdot 2^{-1} \\\\\n& -3 \\cdot 2^{-1} & 2^{-2} & 9\\cdot 2^{-2} & -15\\cdot 2^{-2} & -7\\cdot 2^{-2} \\\\\n& 0 & 0 & 2 & -2 & 0 \\\\\n\\hspace{10.0cm} & 3 \\cdot 2^{-1} & -2^{-2} & 7\\cdot 2^{-2} & -2^{-2} & 7\\cdot 2^{-2} \\\\\n& 3 & -2^{-1} & 3 \\cdot 2^{-1} & 3 \\cdot 2^{-1} & 7 \\cdot 2^{-1} \\\\ \n& 9 \\cdot 2^{-1} & -3\\cdot 2^{-2} & 5 \\cdot 2^{-2} & 13 \\cdot 2^{-2} & 21 \\cdot 2^{-2} \\\\\n& 6 & -1 & 1 & 5 & 7 \\end{smallmatrix} \\right] \n\\end{split} }}\n\\end{equation}\n\\setcounter{equation}{\\value{mytempeqncnt}}\n \\hrulefill\n\\vspace*{-2mm}\n\\end{figure*}\n\n\n\\section{Upper Bound on $d(q,7)$} \\label{sec:bound_m7}\n\nFor the case $m=7$ we have found, using the algorithm from \\cite{tom07}, the support matrices\n\\begin{equation} \\notag\n\\left[ \\begin{smallmatrix}\n0 & 13 & 0 & 7 & 8 & 22 & 6 & 18 & 2 & 21 & 18 & 22 & 2 & 14 & 12 & 13 & 20 & 21 & 7 & 20 & 12 & 14 & 6 & 8 \\\\\n0 & 13 & 1 & 8 & 9 & 0 & 8 & 20 & 8 & 4 & 2 & 6 & 13 & 2 & 1 & 2 & 9 & 10 & 20 & 10 & 6 & 8 & 2 & 4 \\\\\n0 & 13 & 2 & 9 & 10 & 1 & 10 & 22 & 14 & 10 & 9 & 13 & 1 & 13 & 13 & 14 & 21 & 22 & 10 & 0 & 0 & 2 & 21 & 0 \\\\\n0 & 13 & 3 & 10 & 11 & 2 & 12 & 1 & 20 & 16 & 16 & 20 & 12 & 1 & 2 & 3 & 10 & 11 & 0 & 13 & 17 & 19 & 17 & 19 \\\\\n0 & 13 & 4 & 11 & 12 & 3 & 14 & 3 & 3 & 22 & 0 & 4 & 0 & 12 & 14 & 15 & 22 & 0 & 13 & 3 & 11 & 13 & 13 & 15 \\\\\n0 & 13 & 5 & 12 & 13 & 4 & 16 & 5 & 9 & 5 & 7 & 11 & 11 & 0 & 3 & 4 & 11 & 12 & 3 & 16 & 5 & 7 & 9 & 11 \\\\\n0 & 13 & 6 & 13 & 14 & 5 & 18 & 7 & 15 & 11 & 14 & 18 & 22 & 11 & 15 & 16 & 0 & 1 & 16 & 6 & 22 & 1 & 5 & 7\n\\end{smallmatrix} \\right]\n\\end{equation}\nand\n\\begin{equation} \\notag\n\\left[ \\begin{smallmatrix}\n0 & 16 & 0 & 10 & 11 & 28 & 9 & 24 & 15 & 17 & 9 & 11 & 2 & 17 & 15 & 16 & 26 & 27 & 10 & 26 & 2 & 27 & 24 & 28 \\\\\n0 & 16 & 1 & 11 & 12 & 0 & 11 & 26 & 22 & 24 & 18 & 20 & 16 & 2 & 1 & 2 & 12 & 13 & 26 & 13 & 24 & 20 & 18 & 22 
\\\\\n0 & 16 & 2 & 12 & 13 & 1 & 13 & 28 & 0 & 2 & 27 & 0 & 1 & 16 & 16 & 17 & 27 & 28 & 13 & 0 & 17 & 13 & 12 & 16 \\\\\n0 & 16 & 3 & 13 & 14 & 2 & 15 & 1 & 7 & 9 & 7 & 9 & 15 & 1 & 2 & 3 & 13 & 14 & 0 & 16 & 10 & 6 & 6 & 10 \\\\\n0 & 16 & 4 & 14 & 15 & 3 & 17 & 3 & 14 & 16 & 16 & 18 & 0 & 15 & 17 & 18 & 28 & 0 & 16 & 3 & 3 & 28 & 0 & 4 \\\\\n0 & 16 & 5 & 15 & 16 & 4 & 19 & 5 & 21 & 23 & 25 & 27 & 14 & 0 & 3 & 4 & 14 & 15 & 3 & 19 & 25 & 21 & 23 & 27 \\\\\n0 & 16 & 6 & 16 & 17 & 5 & 21 & 7 & 28 & 1 & 5 & 7 & 28 & 14 & 18 & 19 & 0 & 1 & 19 & 6 & 18 & 14 & 17 & 21 \n\\end{smallmatrix} \\right]\n\\end{equation}\n of (minimal) codewords of weight $24$ for $q=23$ and $q=29$, respectively. \nNote that in the matrix for $q=23$ (the first matrix) the entries $2$ and $8$ appear four times in the second row, while in the matrix for $q=29$ (the second matrix) all entries appear twice in the second row. In the first row, however, all entries appear twice for both matrices. Thus, there are several cycles containing either the edge $(v^{(0)}_{q-1},v^{(1)}_{0})$ or the edge $(v^{(0)}_0,v^{(1)}_0)$ in the support matrix graph $G_{\\mathbf{c}_1}^{(0,1)}$ (corresponding to the first matrix), while there is only a single such cycle in the support matrix graph $G_{\\mathbf{c}_2}^{(0,1)}$ (corresponding to the second matrix), and as a consequence the codewords $\\mathbf{c}_1$ and $\\mathbf{c}_2$ do \\emph{not} have the same graphical cycle structure. Note, however, that the \\emph{minimum} cycle lengths are the same, and this is also the case for all the other pairs of graphs $G_{\\mathbf{c}_1}^{(i,j)}$ and $G_{\\mathbf{c}_2}^{(i,j)}$, $0 \\leq i < j \\leq m-1$. Following the discussion in Section~\\ref{sec:improved}, we may apply Algorithm~\\ref{alg:tsmi}, which infers the candidate template support matrix shown in (\\ref{eq:supportm7}) at the top of the previous page. 
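This difference in multiplicities is straightforward to verify programmatically; a small Python sketch (the row is copied from the first support matrix above):

```python
from collections import Counter

# Second row of the weight-24 codeword support matrix for q = 23 (copied
# from the first matrix above).
row2_q23 = [0, 13, 1, 8, 9, 0, 8, 20, 8, 4, 2, 6,
            13, 2, 1, 2, 9, 10, 20, 10, 6, 8, 2, 4]
mult = Counter(row2_q23)
# The entries 2 and 8 occur four times each; every other entry occurs twice.
```

The same count applied to the second row of the $q=29$ matrix gives multiplicity two for every entry, which is exactly the asymmetry that prevents the two codewords from having the same graphical cycle structure.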
Details are omitted for brevity.\n\nWe can now prove the following theorem.\n\n\\begin{theorem} \\label{th:2}\nThe minimum distance $d(q,7)$ is upper-bounded by $24$ for $q > 7$.\n\\end{theorem}\n\n\\begin{IEEEproof}\n The proof is based on the candidate template support matrix shown in (\\ref{eq:supportm7}) at the top of the previous page and is almost identical to the proof of Theorem~\\ref{th:1}. In particular, it is easy to check that each entry in each row of the matrix appears an even number of times and that all columns in the matrix are in fact valid columns in an array LDPC code parity-check matrix (all columns are of the form in (\\ref{eq:form})).\n\nFor the third part of the proof, we need to show, for any value of $q > 7$, that there exist columns in the parity-check matrix which are not identical modulo $q$ and appear an odd number of times. Again, this is simple (and very fast) to check by a computer search for any finite value of $q$ that would be of any practical value. It is only for large values of $q$ that the theoretical proof below is needed. \n\nNow, let the largest absolute value of the entries in the $i$th row of the matrix in (\\ref{eq:supportm7}) which do not involve a multiplication by $2^{-1}$ be denoted $\\lambda_i$, and let the largest absolute value of the factor in front of $2^{-1}$ of the remaining entries in the $i$th row be denoted by $\\mu_i$. Since $a \\cdot 2^{-1} \\pmod{q}$, when $a$ is odd (which is always the case in (\\ref{eq:supportm7})), can be written as $(q+a)\/2$, it follows easily that for a row $i$ where all entries are of the form $a$ or $a \\cdot 2^{-1}$, different template entries can never be the same modulo $q$ when $q > 2\\lambda_i+\\mu_i$. For the first row this bound is $2 \\cdot 5 + 11 = 21$, and for the third row, this bound is $2 \\cdot 2 + 5 = 9$. 
Thus, looking at the first row, the only possibility for repeated columns, when $q > 21$ (the bound for the first row), is for two \\emph{neighboring} columns (with identical entries in the first row) to be the same. \nHowever, by looking at the third row in the matrix, this possibility can be ruled out by requiring that $q > 9$ (the bound for the third row). \nIn summary, it follows that there are no identical columns in the matrix in (\\ref{eq:supportm7}) if $q > \\max(21,9)=21$. \nFurthermore, for values of $7 < q < 21$, it can be checked numerically that there are no repeated columns in (\\ref{eq:supportm7}), and the result follows. \n\\end{IEEEproof}\n\n\n\n\n\n\\section{Numerical Results} \\label{sec:results}\n\nIn addition to the analytic results of Theorems~\\ref{th:1} and \\ref{th:2}, we have performed a computer search to compute the exact values for $d(q,m)$ and $h(q,m)$ for small values of $q$ and $m$. The results are summarized in Table~\\ref{table:arrayLDPC}, where the entries that appear in bold are new results. Results from the literature have also been included with an explicit reference. \n\nFor $m=6$, we have computed the exact value of $d(q,6)$ for $q \\leq 19$. For larger values of $q$, we have run the exhaustive algorithm from \\cite{ros09,ros12} with an upper weight threshold of $16$ without finding any codewords. From the upper bound of Theorem~\\ref{th:1} and the fact that these codes are even-weight codes, we can conclude that the minimum distance, for $23 \\leq q \\leq 79$, is either $18$ or $20$. Furthermore, extensive minimum distance calculations using the probabilistic algorithm from \\cite{tom07} for several values of $q \\geq 23$ indicate that the minimum distance is indeed $20$ for $q \\geq 23$, which suggests that the upper bound of Theorem~\\ref{th:1} is tight.\n\nFor $m=7$, we have only been able to compute the exact value for $q=7$ and $11$ using the exhaustive algorithm from \\cite{ros09,ros12}. 
For $q=13$, we were able to run the exhaustive algorithm from \\cite{ros09,ros12} with an upper weight threshold of $16$. In addition, we found a codeword of weight $20$ using the probabilistic algorithm from \\cite{tom07}, from which (and the fact that the array LDPC codes are even-weight codes) we can conclude that the minimum distance is either $18$ or $20$. \nFor larger values of $q$, $17 \\leq q \\leq 29$, the probabilistic algorithm from \\cite{tom07} has provided the estimates (in the form of upper bounds) in Table~\\ref{table:arrayLDPC}. Note that even if the results are formally stated as upper bounds, the algorithm from \\cite{tom07} indicates that the estimates are indeed likely to give the exact values, which suggests that the bound from Theorem~\\ref{th:2} is in fact tight (for instance, $q=17$ gives a minimum distance of $24$ with very high probability).\nFor the remaining values of $q$ ($31 \\leq q \\leq 79$), Theorem~\\ref{th:2} has provided the estimates (in the form of upper bounds).\n\n\n\n\n\\begin{table}[t]\n\\scriptsize \\centering \\caption{Minimum\/Stopping Distance Results for Array LDPC Codes for Different Values of $q$ and $m$}\\label{table:arrayLDPC}\n\\def\\Hline{\\noalign{\\hrule height 2\\arrayrulewidth}}\n\\vskip -3.0ex\n\\begin{tabular}{cllllll}\n\\Hline \\\\ [-2.0ex]\n $q$ & $d(q,7)$ & $d(q,6)$ & $h(q,5)$ & $d(q,5)$ & $h(q,4)$ & $d(q,4)$\\\\\n\\hline\n\\\\ [-2.0ex] \\hline \\\\ [-2.0ex]\n7 & $\\mathbf{14}$ & $12$ \\cite{sug08} & $\\mathbf{9}$ & $12$ \\cite{sug08} & 8 \\cite{esm11} & 8 \\cite{sug08}\\\\\n11 & $\\mathbf{20}$ & $16$ \\cite{sug08} & $10$ \\cite{esm11} & $10$ \\cite{sug08} & 10 \\cite{esm11} & 10 \\cite{sug08}\\\\\n13 & $\\mathbf{18}$ or $\\mathbf{20}$ & $14$ \\cite{sug08} & $\\mathbf{12}$ & $12$ \\cite{sug08} & $\\mathbf{10}$ & 10 \\cite{sug08}\\\\\n17 & {\\bf $\\leq \\mathbf{24}$} & $\\mathbf{16}$ & $\\mathbf{12}$ & $12$ \\cite{sug08} & $\\mathbf{10}$ & 10 \\cite{sug08}\\\\\n19 & {\\bf $\\leq \\mathbf{20}$} & {\\bf 
$\\mathbf{18}$} & $\\mathbf{12}$ & $12$ \\cite{sug08} & $\\mathbf{10}$ & 10 \\cite{sug08}\\\\\n23 & {\\bf $\\leq \\mathbf{22}$} & $\\mathbf{18}$ or $\\mathbf{20}$& $\\mathbf{12}$ & $\\mathbf{12}$ & $\\mathbf{10}$ & 10 \\cite{sug08}\\\\\n29 & {\\bf $\\leq \\mathbf{24}$} & $\\mathbf{18}$ or $\\mathbf{20}$& $\\mathbf{12}$ & $\\mathbf{12}$ & $\\mathbf{10}$ & 10 \\cite{sug08}\\\\\n31 & {\\bf $\\leq \\mathbf{24}$} & $\\mathbf{18}$ or $\\mathbf{20}$& $\\mathbf{12}$ & $\\mathbf{12}$ & $\\mathbf{10}$ & 10 \\cite{sug08}\\\\\n37 & {\\bf $\\leq \\mathbf{24}$} & $\\mathbf{18}$ or $\\mathbf{20}$& $\\mathbf{12}$ & $\\mathbf{12}$ & $\\mathbf{10}$ & 10 \\cite{sug08}\\\\\n41 & {\\bf $\\leq \\mathbf{24}$} & $\\mathbf{18}$ or $\\mathbf{20}$& $\\mathbf{12}$ & $\\mathbf{12}$ & $\\mathbf{10}$ & 10 \\cite{sug08}\\\\\n43 & {\\bf $\\leq \\mathbf{24}$} & $\\mathbf{18}$ or $\\mathbf{20}$& $\\mathbf{12}$ & $\\mathbf{12}$ & $\\mathbf{10}$ & 10 \\cite{sug08}\\\\\n47 & {\\bf $\\leq \\mathbf{24}$} & $\\mathbf{18}$ or $\\mathbf{20}$& $\\mathbf{12}$ & $\\mathbf{12}$ & $\\mathbf{10}$ & 10 \\cite{sug08}\\\\\n53 & {\\bf $\\leq \\mathbf{24}$} & $\\mathbf{18}$ or $\\mathbf{20}$& $\\mathbf{12}$ & $\\mathbf{12}$ & $\\mathbf{10}$ & 10 \\cite{sug08}\\\\\n59 & {\\bf $\\leq \\mathbf{24}$} & $\\mathbf{18}$ or $\\mathbf{20}$& $\\mathbf{12}$ & $\\mathbf{12}$ & $\\mathbf{10}$ & 10 \\cite{sug08}\\\\\n61 & {\\bf $\\leq \\mathbf{24}$} & $\\mathbf{18}$ or $\\mathbf{20}$& $\\mathbf{12}$ & $\\mathbf{12}$ & $\\mathbf{10}$ & 10 \\cite{sug08}\\\\\n67 & {\\bf $\\leq \\mathbf{24}$} & $\\mathbf{18}$ or $\\mathbf{20}$& $\\mathbf{12}$ & $\\mathbf{12}$ & $\\mathbf{10}$ & 10 \\cite{sug08}\\\\\n71 & {\\bf $\\leq \\mathbf{24}$} & $\\mathbf{18}$ or $\\mathbf{20}$& $\\mathbf{12}$ & $\\mathbf{12}$ & $\\mathbf{10}$ & 10 \\cite{sug08}\\\\\n73 & {\\bf $\\leq \\mathbf{24}$} & $\\mathbf{18}$ or $\\mathbf{20}$& $\\mathbf{12}$ & $\\mathbf{12}$ & $\\mathbf{10}$ & 10 \\cite{sug08}\\\\\n79 & {\\bf $\\leq \\mathbf{24}$} & $\\mathbf{18}$ or 
$\\mathbf{20}$& $\\mathbf{12}$ & $\\mathbf{12}$ & $\\mathbf{10}$ & 10 \\cite{sug08}\n\\end{tabular}\n\\end{table}\n\n\n\\section{Conclusion} \\label{sec:conclu}\nIn this paper, the minimum\/stopping distance of array LDPC codes has been studied. We have presented an improved general (i.e., independent of $q$) upper bound on the minimum distance for the case $m=6$, using the concept of a template support matrix of a codeword\/stopping set, which significantly improves the currently best known bound. The bound appears to be tight with high probability in the sense that we have not found codewords of strictly lower weight for several values of $q$ using a probabilistic minimum distance algorithm. In addition, we have provided the new upper bound $d(q,7) \\leq 24$, which also (from extensive numerical computations) appears to be tight. Finally, we have provided several new specific minimum\/stopping distance results for $m \\leq 7$ and low-to-moderate values of $q \\leq 79$.\n\n\\balance\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe expression ``$\\omega$-automata'' generally refers to automata that accept or reject $\\omega$-words---infinite sequences of letters from some alphabet. They define $\\omega$-languages---sets of $\\omega$-words---just as ordinary automata define languages of finite words; they are means for working with $\\omega$-languages, just as ordinary automata are means for working with languages of finite words.\n\nThe fundamental questions about $\\omega$-automata are similar to the fundamental questions about automata on finite words. Can operations on languages, such as intersection, complementation, and projection, be performed on automata? What is the difference between nondeterminism and determinism? What is the descriptional complexity of various types of automata? Which types of automata can be transformed into other types and at which cost? Can automata be minimized efficiently? 
How can automata be compared? \\dots\\ \n\nSome of the techniques developed for finite words can be adopted in the infinite-word setting; some of the theory of $\\omega$-automata is very similar to ordinary automata theory. There are, however, many interesting new aspects, which, most often, have something to do with what happens ``in the infinite''. As this interesting behavior ``in the infinite'' is already present in $\\omega$-automata with a finite state space, this paper is limited to the theory of finite-state $\\omega$-automata.\n\nTo convey the core ideas as crisply and clearly as possible within the given space limits, some of the material is presented in an uncommon way, at the expense of continuity with prior work.\n\nExcellent surveys that cover $\\omega$-languages and $\\omega$-automata in their\nentire breadth, especially their relationship with mathematical logic, have been\nwritten by Wolfgang Thomas, one in the late eighties \\cite{Thomas1990}, and one\nin the nineties \\cite{thomas-languages-automata-logic-handbook-1997}. There is\nalso a comprehensive monograph by Dominique Perrin and Jean-\\'{E}ric Pin\n\\cite{perrin-pin-infinite-words-2004}. This paper tries to be a concise introduction to the \\emph{theory} of $\\omega$-automata.\n\n\\subsection{\\texorpdfstring{$\\omega$}{\u03c9}-Words}\n\\label{sec:omega-words}\n\n$\\omega$-Automata are devices that work on $\\omega$-words rather than finite ones. Technically, an $\\omega$-word \\index{$\\omega$-word} over an alphabet $A$ is a function $\\omega \\to A$, where $\\omega$ stands for the set of natural numbers. In contrast, a finite word over $A$ is a function $[n] \\to A$, where $n$ is a natural number and $[n]$ denotes the set $\\{0, \\dots, n-1\\}$. 
When $u$ is a word, finite or infinite, then $\\Occ u$ denotes the set of letters occurring in $u$; when $u$ is an $\\omega$-word, then $\\inf(u)$ denotes the set of letters occurring infinitely often in $u$.\n\nThere are essentially three concatenation operations involving $\\omega$-words. First, given a finite word $u$ and an $\\omega$-word $v$, the $\\omega$-word obtained by appending $v$ to~$u$ is denoted $u \\cdot v$---we speak of $\\omega$-concatenation\\index{$\\omega$-concatenation}. Second, when $\\langle u_0, u_1, \\dots\\rangle$ is an infinite sequence of finite nonempty words, then $u_0 \\cdot u_1 \\cdot \\dots$ denotes the $\\omega$-word obtained by concatenating all the $u_i$'s in the given order---we speak of $\\omega$-product\\index{$\\omega$-product}. Finally, when $u$ is a finite nonempty word, then $u^\\omega$ denotes $u \\cdot u \\cdot \\dots$---we speak of $\\omega$-power\\index{$\\omega$-power}. Note that it is legitimate to use the same symbol $\\cdot$ for ordinary concatenation, $\\omega$-concatenation, and $\\omega$-product, because there are various laws of associativity that hold. As usual, the symbol ``$\\cdot$'', representing the different forms of concatenation, is omitted in many contexts, and these operations are extended to sets of words in a straightforward fashion. An $\\omega$-word $u$ is periodic if $u = v^\\omega$ for some finite nonempty word $v$; it is ultimately periodic\\index{ultimately periodic}\\index{word!ultimately periodic} if $u = v w^\\omega$ for finite words $v$ and $w$ with $w$ being nonempty. The convolution of $\\omega$-words $u$ and $v$ over alphabets $A$ and $B$, respectively, is an $\\omega$-word over the alphabet $A \\times B$, denoted $u * v$ and defined by $(u * v)(i) = \\langle u(i), v(i)\\rangle$ for every $i$. 
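The operations above are effective on ultimately periodic $\omega$-words, which admit finite presentations. The following is a minimal sketch (an illustration added for this text, not part of the formal development), encoding $v \cdot w^\omega$ as a pair of Python strings:

```python
# Sketch: ultimately periodic omega-words v . w^omega encoded as pairs (v, w).

def omega_power(w):
    # w^omega; w must be a nonempty finite word
    assert w
    return ("", w)

def omega_concat(u, word):
    # u . (v . w^omega) = (u v) . w^omega  (omega-concatenation)
    v, w = word
    return (u + v, w)

def letter(word, i):
    # the letter at position i of v . w^omega
    v, w = word
    return v[i] if i < len(v) else w[(i - len(v)) % len(w)]

def occ(word):
    # Occ(u): the set of letters occurring in u
    return set(word[0]) | set(word[1])

def inf_letters(word):
    # inf(u): letters occurring infinitely often; for v . w^omega these
    # are exactly the letters of the period w
    return set(word[1])

def convolution(w1, w2, n):
    # the first n letters of the convolution w1 * w2
    return [(letter(w1, i), letter(w2, i)) for i in range(n)]

u = omega_concat("01", omega_power("10"))   # 01 . (10)^omega = 011010...
assert [letter(u, i) for i in range(6)] == list("011010")
```

The pair representation is closed under all three concatenation operations, which is one way to see that the ultimately periodic words form a natural finitely presentable subclass.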
\n\nOne of the reasons why $\\omega$-automata are applicable in various situations is that $\\omega$-words can represent infinite objects and therefore $\\omega$-automata can represent sets of infinite objects or even transform infinite objects into other infinite objects. \n\nWhen the alphabet $A$ is the binary alphabet, $[2]$, then an $\\omega$-word can be identified with a subset of $\\omega$, that is, with a set of natural numbers, more precisely, a word $u$ can be identified with $\\{i \\in \\omega \\mid u(i) = 1\\}$. When the alphabet is $\\bigtimes_{i] (q) -- (q0);\n\t \\draw[->] (q0) to node[above] {$0$} (q1);\n\n\t \\draw[my loop] (q0) to node[above] {$0,1$} (q0);\n\t \\draw[my loop] (q1) to node[above] {$0$} (q1);\n\n\n\t \\node[] (p) [right of=q,xshift = 25mm] {};\n\t \\node (p0) [right of=p] {$q_0$};\n\t \\node (p1) [right of=p0] {$q_1$}; \n\n\t \\draw[->] (p) -- (p0);\n\t \\draw[->,bend left] (p0) to node[above] {$1$} (p1);\n\t \\draw[->,bend left] (p1) to node[below] {$0$} (p0);\n\n\t \\draw[my loop] (p0) to node[above] {$0$} (p0);\n\t \\draw[my loop] (p1) to node[above] {$1$} (p1);\n\n\n\t \\node[] (r) [right of=p,xshift = 25mm,yshift=-8mm] {};\n\t \\node (r0) [right of=r] {$q_0$};\n\t \\node (r1) [right of=r0] {$q_1$}; \n\t \\node (r2) [right of=r1] {}; \n\n\t \\draw[->] (r) -- (r0);\n\t \\draw[->] (r0) to node[above] {$1$} (r1);\n\t \\draw[->] (r2) -- (r1);\n\n\t \\draw[my loop] (r0) to node[above, left=1mm] {$0,1$} (r0);\n\t \\draw[my loop] (r1) to node[above, right=1mm] {$0$} (r1);\n\n\t \\node[] (s) [above of=r,yshift = 4mm] {};\n\t \\node (s0) [right of=s] {$q_2$};\n\t \\node (s1) [right of=s0] {$q_3$}; \n\n\t \\draw[->,bend left] (s0) to node[above] {$0$} (s1);\n\t \\draw[->,bend left] (s1) to node[below] {$1$} (s0);\n\n\t \\draw[my loop] (s0) to node[above, left=1mm] {$0$} (s0);\n\t \\draw[my loop] (s1) to node[above, right=1mm] {$1$} (s1);\n\n\n\t \\node[] (t) [right of=r,xshift = 25mm,yshift=8mm] {};\n\t \\node (t0) [right of=t] 
{$q_0$};\n\n\t \\draw[->] (t) -- (t0);\n\n\t \\draw[my loop] (t0) to node[above] {$0,1$} (t0);\n \\end{tikzpicture}\n \\caption{Different types of $\\omega$-automata for the set of all $\\omega$-words over $[2]$ with a finite number of occurrences of $1$, denoted $L_\\text{fin1}$: the first automaton is forward nondeterministic and, if augmented with the B\\\"{u}chi condition $\\{q_1\\}$, recognizes $L_\\text{fin1}$; the second one is forward deterministic and, if augmented with the co-B\\\"{u}chi condition $\\{q_0\\}$, the parity condition $\\pi \\colon q_0 \\mapsto 2, q_1 \\mapsto 1$, the Muller condition $\\{\\{q_0\\}\\}$, or a suitable Streett or Rabin condition, recognizes $L_\\text{fin1}$; the third one is backward deterministic if used with the B\\\"{u}chi condition $\\{q_1, q_3\\}$ and then recognizes $L_\\text{fin1}$; the fourth one is forward deterministic and recognizes $L_\\text{fin1}$ if augmented with the transition-Muller condition $\\{\\{(q_0, 0, q_0)\\}\\}$.}\n \\label{fig:simple-buechi}\n\\end{figure}\n\n\\newcommand{w.\\,r.\\,t.\\xspace}{w.\\,r.\\,t.\\xspace}\n\n\\index{B\u00fcchi recurrence condition}\n\\index{recurrence condition!B\u00fcchi}\n\\index{parity recurrence condition}\n\\index{recurrence condition!parity}\n\\index{Rabin recurrence condition}\n\\index{recurrence condition!Rabin}\n\\index{Streett recurrence condition}\n\\index{recurrence condition!Streett}\n\\index{Muller recurrence condition}\n\\index{recurrence condition!Muller}\n\\index{weak recurrence condition}\n\\index{recurrence condition!weak}\n\\index{co-B\u00fcchi recurrence condition}\n\\index{recurrence condition!co-B\u00fcchi}\n\\index{generalized B\u00fcchi recurrence condition}\n\\index{recurrence condition!generalized B\u00fcchi}\n\\index{transition recurrence condition}\n\\index{recurrence condition!transition}\nThere are essentially five different types of recurrence conditions that have been investigated traditionally, all explained in the upper part of 
Table~\\ref{tab:acceptance-conditions} and named after their originators \\cite{buechi-decision-method-restricted-second-order-arithmetic-1962,rabin-decidability-of-second-order-theories-and-automata-on-infinite-trees-1969,Streett82,muller-infinite-sequences-and-finite-machines-1963} except for the parity condition \\cite{mostowski-regular-expressions-for-infinite-trees-and-a-standard-form-of-automata-1984}. In the lower part of the table, there are three types of conditions derived from the B\\\"{u}chi condition \\cite{muller-saoudi-schupp-alternating-automata-weak-monadic-theory-tree-its-complexity-1992,klarlund-progress-measures-complementation-omega-automata-applications-temporal-logic-1991,courcoubetis-vardi-wolper-yannakakis-memory-efficient-algorithms-verification-temporal-properties-1992}. \n\nThe trivial recurrence condition, which is not mentioned in the table, considers every run recurrent. For instance, all representations of binary trees of a fixed width (over the alphabet indicated in Section~\\ref{sec:omega-words}) are recognized by an automaton with trivial recurrence condition.\n\n\\begin{table}[!t]\n \\renewcommand{\\arraystretch}{1.2}\n \\begin{tabularx}{\\textwidth}{lll>{\\raggedright\\arraybackslash}X}\n\t \\hline\n\t Name & & Format & Semantics\\\\ \\hline \n\t B\\\"{u}chi & $B$ & $B \\subseteq Q$ & $\\inf(r) \\cap B \\neq \\emptyset$\\\\\n\t parity & $\\mathscr P$ & $\\pi \\colon Q \\to [2n]$, $n \\in \\omega$ & $\\min(\\inf(\\pi \\circ r)) \\text{ mod } 2 = 0$\\\\\n\t Rabin & $\\mathscr R$ & $\\mathscr R \\subseteq \\powerset(Q) \\times \\powerset(Q)$ & there exists $\\langle L, U\\rangle \\in \\mathscr R$ such that $\\inf(r) \\cap L = \\emptyset$ and $\\inf(r) \\cap U \\neq \\emptyset$\\\\\n\t Streett & $\\mathscr S$ & $\\mathscr S \\subseteq \\powerset(Q) \\times \\powerset(Q)$ & for every $\\langle R, G\\rangle \\in \\mathscr S$, if $\\inf(r) \\cap R \\neq \\emptyset$, then $\\inf(r) \\cap G \\neq \\emptyset$\\\\\n\t Muller & $\\mathscr 
M$ & $\\mathscr M \\subseteq \\powerset(Q)$ & $\\inf(r) \\in \\mathscr M$\\\\\n\t \\hline\n\t weak & $W$ & $W \\subseteq Q$, union of SCC's & $\\inf(r) \\subseteq W$\\\\\n\t co-B\\\"{u}chi & $C$ & $C \\subseteq Q$ & $\\inf(r) \\subseteq C$\\\\\n\t gen.\\ B\\\"{u}chi & $\\mathscr G$ & $\\mathscr G \\subseteq \\powerset(Q)$ & $B \\cap \\inf(r) \\neq \\emptyset$ for every $B \\in \\mathscr G$\\\\\n\t \\hline\n \\end{tabularx}\n \\caption{Recurrence conditions, ordered according to their expressive power; ``gen.\\ B\\\"{u}chi'' and ``SCC'' are abbreviations of ``generalized B\\\"{u}chi'' and ``strongly connected component'', respectively.}\n \\label{tab:acceptance-conditions}\n\\end{table}\n\nIt is convenient to give names to the elements of the recurrence conditions: a state $q \\in B$ is called a B\\\"{u}chi state, a pair $\\langle L, U\\rangle \\in \\mathscr R$ is called a Rabin pair, a pair $\\langle R, G\\rangle \\in \\mathscr S$ is called a Streett pair, a set $M \\in \\mathscr M$ is called a Muller set, a state $q \\in W$ is called a weak state, a state $q \\in C$ is called a co-B\\\"{u}chi state, and a set $B \\in \\mathscr G$ is called a B\\\"{u}chi set. The function $\\pi$ is called the priority function. Only weak, co-B\\\"{u}chi, and B\\\"{u}chi conditions have straightforward representations of size polynomial in the number of states of a given automaton.\n\nEvery type of recurrence condition is also considered in a transition variant, where states from~$Q$ are replaced by transitions from $\\Delta$ and $\\inf(r)$ is replaced by the set of triples $\\langle q, a, q'\\rangle$ for which there exist infinitely many $i$ such that $\\langle r(i), u(i), r(i+1)\\rangle = \\langle q, a, q'\\rangle$. 
Transition variants are handier in certain situations; for instance, $L_\\text{fin1}$ is recognized by a single-state transition-Muller automaton, see Figure~\\ref{fig:simple-buechi}.\\footnotemark[1]\n\nTable~\\ref{tab:conversions} shows how conditions of various types can be expressed in terms of conditions of other types: every B\\\"{u}chi condition may be viewed as a parity condition, which, in turn, can be viewed as a Rabin or Streett condition, and these can be viewed as Muller conditions. \n\nUnlike finite words, $\\omega$-words are not symmetric (in the sense that there is no order isomorphism from the order of the natural numbers to its inverse). So when talking about determinism the direction makes a difference.%\n\\footnotetext[1]{In hindsight this paper should have been written using transition conditions throughout, in particular, Section~\\ref{run trees} would profit much from this.}\n\\refstepcounter{footnote}\nA forward deterministic automaton is one where $Q_I$ consists of exactly one state and $|\\Delta(q, a)| = 1$ for all $q \\in Q, a \\in A$. (As usual, $\\Delta(q,a)$ is used as an abbreviation of $\\{q' \\in Q \\mid \\langle q, a, q'\\rangle \\in \\Delta\\}$.) A backward deterministic automaton%\n\\footnote{In \\cite{carton-michel-unambiguous-buechi-automata-2003}, where these automata were introduced, they are called ``complete unambiguous''; in \\cite{perrin-pin-infinite-words-2004}, the attribute ``prophetic'' is used; in \\cite{preugschat-wilke-effective-characterizations-of-simple-fragments-of-temporal-logic-using-carton-michel0automata-2013}, they are referred to as ``Carton-Michel automata''. The terminology used in this paper tries to be systematic.} \n\\index{Carton-Michel automaton}\n\\index{prophetic automaton}\n\\index{automaton!Carton-Michel}\n\\index{automaton!prophetic}\nis one where for every $\\omega$-word over $A$, there is exactly one recurring run and $|\\Delta(a, q')| = 1$ for all $a \\in A, q' \\in Q$. 
(Here, $\\Delta(a, q')$ stands for $\\{q \\in Q \\mid \\langle q, a, q'\\rangle \\in \\Delta\\}$.) At times, when deterministic automata are used, the transition relation $\\Delta$ is replaced by a transition function $\\delta \\colon Q \\times A \\to Q$ (forward automata) or $\\delta \\colon A \\times Q \\to Q$ (backward automata).\n\nIn general, a type of an $\\omega$-automaton is given by a type of recurrence condition \\emph{and} a type of mode, with the following modes being considered: nondeterministic (default), forward deterministic, backward deterministic, and alternating, defined in Section~\\ref{sec:alternation}.\n\n\\begin{table}[b]\n \\centering\n \\renewcommand{\\arraystretch}{1.2}\n \\begin{tabularx}{\\textwidth}{llX}\n\t \\hline\n\t From & To & Conversion \\\\ \\hline\n\t B\\\"{u}chi $B$ & parity $\\pi$ & $\\pi(q) = 0$ for $q \\in B$ and $\\pi(q) = 1$ for $q \\notin B$\\\\\n\t parity $\\pi$ & Rabin $\\mathscr R$ & $\\mathscr R = \\{(\\pi^{-1}(\\{0, \\dots, 2i-1\\}), \\pi^{-1}(\\{0, \\dots, 2i\\})) \\mid i < n\\}$ \\\\\n\t parity $\\pi$ & Streett $\\mathscr S$ & $\\mathscr S = \\{(\\pi^{-1}(\\{0, \\dots, 2i+1\\}), \\pi^{-1}(\\{0, \\dots, 2i\\})) \\mid i < n\\}$ \\\\\n\t Rabin $\\mathscr R$ & Muller $\\mathscr M$ & $\\mathscr M = $ set of all $Q' \\subseteq Q$ such that there exists $\\langle L, U\\rangle \\in \\mathscr R$ satisfying $Q' \\cap L = \\emptyset$ and $Q' \\cap U \\neq \\emptyset$ \\\\\n\t Streett $\\mathscr S$ & Muller $\\mathscr M$ & $\\mathscr M = $ set of all $Q' \\subseteq Q$ such that, for every $\\langle R, G\\rangle \\in \\mathscr S$, if $Q' \\cap R \\neq \\emptyset$, then $Q' \\cap G \\neq \\emptyset$ \\\\ \\hline\n \\end{tabularx}\n \\caption{Conversions between recurrence conditions}\n \\label{tab:conversions}\n\\end{table}\n\nThe most fundamental result about $\\omega$-automata compares the different types of $\\omega$-automata with respect to their expressive power. 
As a yardstick, nondeterministic B\\\"{u}chi automata are used; the $\\omega$-languages recognized by them are called regular $\\omega$-languages, see Theorem~\\ref{thm:regexps} for the origin of this terminology.\n\n\\begin{theorem}[equivalence of types of $\\omega$-automata]\n \\label{thm:fundamental}\n For every type of $\\omega$-automaton, consider the class of $\\omega$-languages recognized by automata of this type. Then all these classes coincide with the class of regular $\\omega$-languages except for the classes corresponding to the following types: \n \\begin{inparaitem}[]\n \\item forward deterministic generalized B\\\"{u}chi;\n \\item forward deterministic B\\\"{u}chi; \n \\item forward deterministic, backward deterministic, and nondeterministic co-B\\\"{u}chi; \n \\item forward deterministic, backward deterministic, and nondeterministic weak.\n \\end{inparaitem}\n\\end{theorem}\n\nMuch of $\\omega$-automata theory revolves around Theorem~\\ref{thm:fundamental}. The quest for good proofs of this theorem---efficient language-preserving transformations between automata of different types---has led to many interesting results. All types of $\\omega$-automata are interesting in their own right; each one has its advantages and applications in specific contexts. 
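For a fixed run $r$, acceptance under each of the conditions in Table~\ref{tab:acceptance-conditions} depends only on $\inf(r)$, so the conditions, and the conversions of Table~\ref{tab:conversions}, can be checked mechanically. The following sketch (an illustration added for this text; states are encoded as Python strings, Rabin and Streett conditions as lists of pairs of sets) mirrors the table rows:

```python
# Sketch: evaluating the recurrence conditions of Table 1 on inf(r).
# inf_r is a frozenset of states; pi maps states to priorities in [2n].

def buechi(inf_r, B):
    return bool(inf_r & B)

def parity(inf_r, pi):
    return min(pi[q] for q in inf_r) % 2 == 0

def rabin(inf_r, pairs):
    return any(not (inf_r & L) and (inf_r & U) for L, U in pairs)

def streett(inf_r, pairs):
    return all(not (inf_r & R) or (inf_r & G) for R, G in pairs)

def muller(inf_r, M):
    return inf_r in M          # M is a set of frozensets

# Conversions in the spirit of Table 2: Buechi -> parity -> Rabin.
def buechi_to_parity(Q, B):
    return {q: 0 if q in B else 1 for q in Q}

def parity_to_rabin(Q, pi, n):
    return [({q for q in Q if pi[q] < 2 * i},
             {q for q in Q if pi[q] <= 2 * i}) for i in range(n)]

# The forward deterministic automaton of the running example accepts
# exactly the runs with inf(r) = {q0} under the parity condition below.
Q, B = {"q0", "q1"}, {"q1"}
pi = {"q0": 2, "q1": 1}
assert parity(frozenset({"q0"}), pi) and not parity(frozenset({"q0", "q1"}), pi)
```

A small sanity check of the conversions: for every nonempty $\inf(r) \subseteq Q$, the Büchi condition, its parity translation, and the parity condition's Rabin translation all agree.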
\n\n\n\\section{Basic properties of B\\\"{u}chi automata}\n\nSome basic insights into $\\omega$-automata can be derived from analyzing runs in a straightforward fashion, for instance, that regular $\\omega$-languages can be defined by $\\omega$-regular expressions of a certain type, that deterministic B\\\"{u}chi automata are less expressive than nondeterministic ones, and that complementation is problematic for B\\\"{u}chi automata.\n\n\\subsection{\\texorpdfstring{$\\omega$}{\u03c9}-Regular expressions}\n\nAn $\\omega$-regular expression \\index{$\\omega$-regular expression}\\index{expression!$\\omega$-regular} \\cite{muller-infinite-sequences-and-finite-machines-1963} is of the form \n\\begin{align}\n r_0 \\cdot s_0^\\omega + \\dots + r_{n-1} \\cdot s_{n-1}^\\omega \\enspace,\n\\end{align}\nwith $n$ being a natural number and the $r_i$'s and the $s_i$'s being ordinary regular expressions. The semantics is the obvious one.\n\nSince the empty set can be denoted by an empty expression ($n = 0$) and since $\\omega$-power is only defined for (sets of) nonempty finite words (see Section~\\ref{sec:omega-words}), it is reasonable to require that the $s_i$'s be built from the letters of the alphabet, ``$+$'' (for union), ``$\\cdot$'' (for concatenation), and ``${}^+$'' (for finite positive iteration). It is also reasonable to allow individual $r_i$'s to be omitted.\n\n\\begin{theorem}[$\\omega$-regular expression \\cite{buechi-decision-method-restricted-second-order-arithmetic-1962}]\n \\label{thm:regexps}\n Every $\\omega$-language recognized by a B\\\"{u}chi automaton is denoted by an $\\omega$-regular expression and vice versa.\n\\end{theorem}\n\nFor the proof, assume a B\\\"{u}chi automaton is given. The insight needed is that for every accepting run $r$ there are some state $q \\in B$ and an infinite sequence $\\langle i_0, i_1, \\dots\\rangle$ of positions such that $r(0)$ is initial, $i_0 < i_1 < \\dots$, and $r(i_j) = q$ for every $j$. 
This motivates the following definition. For states $q, q'$, let $L_{q,q'}$ be the language recognized by the ordinary automaton on finite words with $q$ as initial and $q'$ as final state. Then the language recognized by the B\\\"{u}chi automaton is \n\\begin{align} \n \\bigcup_{q \\in Q_I, q' \\in B} L_{q,q'} (L_{q',q'} \\setminus \\{\\epsilon\\})^\\omega\\enspace.\n\\end{align}\nThis representation can be turned into an $\\omega$-regular expression using techniques known from finite-state automata on finite words.\n\n\\newcommand{\\New}[1]{q_\\text{new}}\n\nFor the proof of the converse, first observe that it is enough to show that an expression of the form $r \\cdot s^\\omega$ (meaning $n = 1$) denotes a language recognized by a B\\\"{u}chi automaton, because the class of languages recognized by B\\\"{u}chi automata is closed under union: the disjoint union of two given B\\\"{u}chi automata is a B\\\"{u}chi automaton recognizing the union of the languages recognized by the given automata. So assume $\\mathscr A$ and $\\mathscr B$ are finite-state automata on finite words recognizing the languages denoted by $r$ and $s$, respectively. A B\\\"{u}chi automaton for $r \\cdot s^\\omega$ is obtained by modifying the disjoint union of $\\mathscr A$ and $\\mathscr B$ as follows. First, an additional state $\\New q$ is added. Second, for every transition $\\langle q, a, q'\\rangle$ in~$\\mathscr A$ where $q'$ is final, the transition $\\langle q, a, \\New q\\rangle$ is added. Third, for every transition $\\langle q, a, q'\\rangle$ in~$\\mathscr B$ where $q$ is initial, the transition $\\langle \\New q, a, q'\\rangle$ is introduced. Fourth, for every transition $\\langle q, a, q'\\rangle$ in $\\mathscr B$ where $q'$ is final, the transition $\\langle q, a, \\New q\\rangle$ is added. 
Finally, every final state loses its status as final state; every initial state of $\\mathscr B$ loses its status as initial state; the state $\\New q$ becomes the only B\\\"{u}chi state; if one of the initial states of $\\mathscr A$ was final (the empty word was accepted), then $\\New q$ becomes initial, too.\\qed\n\nFrom the above proof, it immediately follows:\n\n\\begin{remark}\n \\label{thm:basic}\n \\begin{inparaenum}\n \\item Every nonempty regular $\\omega$-language contains an ultimately\n\t periodic word.\n \\item The emptiness problem for B\\\"{u}chi automata is decidable nondeterministically in logarithmic space, by a simple graph search.\n \\end{inparaenum}\n\\end{remark}\n\n\\subsection{Co-B\\\"{u}chi and deterministic B\\\"{u}chi automata}\n\\label{sec:deterministic-buchi}\n\nDis- and reassembling runs is a simple but powerful technique in the context of $\\omega$-automata, which can, for instance, be used to show that co-B\\\"{u}chi automata and deterministic B\\\"{u}chi automata are weaker than nondeterministic ones:\n\n\\begin{proposition}\n \\label{thm:co-buchi}\n \\begin{inparaenum}\n \\item The language denoted by $((0+1)^* 0)^\\omega$ cannot be recognized by a co-B\\\"{u}chi automaton.\n \\item The language denoted by $(0+1)^* 1^\\omega$, which is the complement of the language denoted by $((0+1)^* 0)^\\omega$, cannot be recognized by a deterministic B\\\"{u}chi automaton. \n \\end{inparaenum}\n\\end{proposition}\n\nFor the proof of the first part, assume a co-B\\\"{u}chi automaton with $n$ states recognizes the language. Then it accepts the word $(1^n0)^\\omega$, say $r$ is an accepting run. There is some~$k$ such that all letters in the segment $r[(n+1)k,(n+1)(k+1))$ are co-B\\\"{u}chi states. (As usual, if $u$ denotes an $\\omega$-word, then $u[i,j)$ denotes $u(i)u(i+1) \\dots u(j-1)$.) Because this segment has $n+1$ positions, there are $i$ and $j$ such that $i l$ without a descendant of $v$. 
We call this level the extinction level of $v$, denote it by $\\text{el}(v)$, and use it to rank~$v$. \n\nTo this end, assume a vertex $v$ is on some level $l$. The rank of $v$ measures how difficult it is to get from $v$ to its extinction level, more concretely, how ``wide'' the part of the run DAG is which one needs to pass by while moving from $v$ to its extinction level. For every level $i$ between $l$ and the extinction level of $v$, that is, for every $i$ with $l \\leq i < \\text{el}(v)$, let $W_i$ contain all extinction levels of vertices on level~$i$, but only the ones which are before the extinction level of~$v$, that is, $W_i = \\{\\text{el}(w) \\mid w \\in V^{(i)} \\text{ and } \\text{el}(w) < \\text{el}(v)\\}$. The maximum of the cardinalities of the $W_i$'s is the extinction rank of~$v$ and denoted $\\text{er}(v)$. Formally, $\\text{er}(v) = \\max\\{|W_i| \\mid l \\leq i < el(v)\\}$. For an illustration, see Figure~\\ref{fig:ext-ranks}.\n\n\\begin{figure}[t]\n \\small\n \\hfill\n \\protect\\begin{minipage}[t]{0.5\\linewidth}\n \\begin{tikzpicture}[baseline=(current bounding box.north),\n\t node distance=5mm, every node\/.style={circle,inner sep=1pt}]\n\t \\node (a0) {0};\n\t \\node[xshift=5mm] (b0) [right of=a0] {0};\n\t \\node[xshift=5mm] (c0) [right of=b0] {\\bf 1};\n\t \\node[xshift=5mm] (d0) [right of=c0] {0};\n\t \\node[xshift=5mm] (e0) [right of=d0] {0};\n\t \\node[xshift=5mm] (f0) [right of=e0] {};\n\n\t \\node[xshift=-2mm,yshift=3mm] (aa) [below of=a0] {};\n\t \\node[xshift=2mm,yshift=3mm] (fa) [below of=f0] {};\n\t \\draw (aa) -- (fa);\n\n\t\n\t\n\t\n\t\n\t\n\t\n\n\t \\node (ad) [below of=a0] {};\n\t \\node (bd) [below of=b0] {};\n\t \\node (cd) [below of=c0] {2};\n\t \\node (dd) [below of=d0] {1};\n\t \\node (ed) [below of=e0] {0};\n\t \\node (fd) [below of=f0] {};\n\n\t \\node (a1) [above of=a0] {2};\n\t \\node (a2) [above of=a1] {0};\n\n\t \\node (b1) [above of=b0] {0};\n\t \\node (b2) [above of=b1] {0};\n\t \\node (b3) [above of=b2] 
{2};\n\n\t \\node (c1) [above of=c0] {0};\n\t \\node (c2) [above of=c1] {\\bf 1};\n\t \\node (c3) [above of=c2] {\\bf 2};\n\n\t \\node (d1) [above of=d0] {\\bf 1};\n\t \\node (d2) [above of=d1] {0};\n\n\t \\node (e1) [above of=e0] {?};\n\t \\node (e2) [above of=e1] {?};\n\t \\node (e3) [above of=e2] {0};\n\n\t \\node (f1) [above of=f0] {};\n\t \\node (f2) [above of=f1] {};\n\t \\node (f3) [above of=f2] {};\n\n\t \\node (g1) [right of=f1] {\\dots};\n\n\n\t \\draw[->] (a0) -- (b1);\n\t \\draw[->] (a1) -- (b3);\n\t \\draw[->] (a2) -- (b2);\n\n\t \\draw[->] (b0) -- (c1);\n\t \\draw[->] (b1) -- (c1);\n\t \\draw[->] (b2) -- (c1);\n\t \\draw[->] (b3) -- (c3);\n\n\t \\draw[->] (c0) -- (d2);\n\t \\draw[->] (c2) -- (d2);\n\t \\draw[->] (c2) -- (d0);\n\t \\draw[->] (c3) -- (d1);\n\t \\draw[->] (c3) -- (d2);\n\n\t \\draw[->] (d1) -- (e0);\n\n\t \\draw[->] (e1) -- (f1);\n\t \\draw[->] (e2) -- (f2);\n\t \\draw[->] (e1) -- (f3);\n \\end{tikzpicture} \n \\end{minipage}\n \\hfill\n \\protect\\begin{minipage}[t]{0.4\\linewidth}\n\t \\protect\\caption{Beginning of a leveled DAG with vertices labeled by their extinction ranks and critical values at the bottom. Lifts are in bold; question marks indicate vertices with ranks which cannot be determined from the visible part of the DAG.}\n\t \\label{fig:ext-ranks}\n\t \\vspace*{\\fill}\n \\end{minipage}\n \\hspace*{\\fill}\n\\end{figure}\n\n\\begin{remark}\n \\label{thm:ext-ranks}\n \\begin{inparaenum}\n \\item The extinction rank of a vertex in a leveled graph of width at most $k$ is at most $k-1$.\n \\item On every maximum path starting in a finitary vertex the extinction rank is monotone descending and eventually reaches~$0$.\n \\end{inparaenum}\n\\end{remark}\n\nIn the lift construction, the backward deterministic automaton determines, for each vertex, its extinction rank, that is, a state of the automaton maps each vertex on the current level to a value in $[k] \\cup \\{\\infty\\}$, where $\\infty$ is used for infinitary vertices. 
It is possible to define a backward deterministic transition function accordingly, because the extinction ranks of the vertices on one level can be determined from the structure of the respective slice and the extinction ranks of the vertices on the next level, as described in what follows. \n\nLet $U$ be the set of vertices on the upper level of a slice~$a$. For every $v \\in U$, let $m_v$ be the maximum of all values $\\text{er}(v')$ where $v'$ is a successor of $v$ in the slice $a$; by convention, $m_v = -1$ if $v$ has no successor. If there is no $v \\in U$ with $m_v = -1$, then $\\text{er}(v) = m_v$ for all $v \\in U$. If there is such a $v$, then the largest number $c$ such that $[c] \\subseteq \\{m_v \\mid v \\in U\\}$ is called the critical value of the upper level. For every $v \\in U$ with $m_v \\geq c$, the equation $\\text{er}(v) = m_v$ still holds. For every other $v \\in U$, the values are ``lifted'': $\\text{er}(v) = m_v + 1$.\n\nIf extinction ranks are used, then an overapproximation as described above can be avoided by adding an appropriate recurrence condition, more precisely, a generalized transition-B\\\"{u}chi condition. For every rank $i$, there is a transition-B\\\"{u}chi set $B_i$ which includes all transitions in which $i$ does not occur as a value on the upper level or $i$ is less than the critical value (and thus lifted), see Remark~\\ref{thm:ext-ranks}(2) and Figure~\\ref{fig:ext-ranks}. \n\n\\begin{theorem}[lift construction \n \\cite{carton-michel-unambiguous-buechi-automata-2003}]\n \\label{thm:shift}\n The lift construction yields, for every $k$, a backward deterministic generalized transition-B\\\"{u}chi automaton with at most $(k+1)^n$ states outputting, for every leveled DAG of width at most $k$, the subgraph of its finitary [infinitary] vertices. \n\\end{theorem}\n\nNote that the above approach is very versatile. 
If, for instance, one wants to determine the vertices which have at least one descendant with no successors, which one could call weakly finitary vertices, then one can take the same approach, replacing maximization by minimization. \n\nThe above description of the lift construction is somewhat technical because of the measure introduced; a more ``automatic'' description follows. A state is a sequence $P_0 \\dots P_{m-1}$ of nonempty pairwise disjoint sets of vertices. Assume a letter $a$ (a slice) is read backwards. Then the new state is determined in two steps. First, the sequence $P'_{-1} P'_0 \\dots P'_{m-1}$ is determined where \n\\begin{inparaenum}[(i)]\n\\item $P_{-1}'$ consists of all vertices $v$ on the upper level of $a$ without\n successors and \n\\item $P_i'$, for $i \\geq 0$, consists of all such vertices with\n some successor in $P_i$, but no successor in $P_{i'}$ for any $i' < i$.\n\\end{inparaenum}\nSecond, the new state is obtained from $P'_{-1} P'_0 \\dots P'_{m-1}$ by removing all empty entries. The recurrence condition is, again, a generalized transition-B\\\"{u}chi condition: for every~$i$ there are infinitely many transitions with $i>m$ or $P_{i'}' \\neq \\emptyset$ for $i' < i$. The vertices that occur in the states are exactly the finitary ones.\n\n\\enlargethispage{\\baselineskip}\n\nThe lift construction is used for different purposes in Section~\\ref{sec:backward-determinization}.\n\n\\subsection{Latest appearance records}\n\\label{sec:lar}\n\\index{latest appearance record}\\index{LAR!latest appearance record}\\index{latest appearance automaton}\\index{LAA!latest appearance automaton}\n\nGiven an alphabet $A$ and a special symbol $\\$$ not in $A$, the latest appearance automaton (LAA) is a forward deterministic automaton with states being words over $A \\cup \\{\\$\\}$ where every letter from $A$ occurs at most once and $\\$$ occurs exactly once. 
One such word is called a latest appearance record (LAR) and the part to the right of ``\\$'' is its frame. \n\nThe initial state of the LAA is the one-letter word $\\$$; the recurrence condition is trivial; the transition function $\\delta$ is defined as follows. When $u$ is a state of the form $v\\$v'$ and $a$ is a letter of the alphabet occurring in $vv'$, say $vv' = waw'$, then $\\delta(u,a) = w\\$w'a$. When~$a$ does not occur in $vv'$, then $\\delta(u,a) = vv'\\$a$. So the order in which the letters occur in the current state of the automaton is the order of their latest appearances in the prefix of the given word read so far, with all letters in the frame of the current state being the ones that have occurred since the previous occurrence of the letter just read. From this, the following can be derived.\n\n\\begin{remark}\n \\cite{buchi-landweber-1969-a,gurevich-harrington-1982}\n Consider the frames of maximal length among all frames occurring infinitely often in the run of the LAA on a given word. Then all these frames contain the same letters and these are exactly the ones occurring infinitely often in the given word.\n\\end{remark}\n\nAn interesting application of the latest appearance record is the transformation of a given Muller automaton into an equivalent parity automaton. First, the Muller condition is removed (and replaced by the trivial recurrence condition). Second, the automaton is augmented by the trivial output function, which simply outputs the current state. Third, the generated automaton is cascaded with the LAA over the state set of the Muller automaton. 
Finally, assuming the automaton has $n$ states, a priority function is added that assigns each state $\\langle q, v\\$v'\\rangle$ the priority $2n - 2|v'|$ if $\\Occ{v'}$ is a Muller set and else $2n - 2|v'| + 1$.\n\n\\begin{theorem}[Muller to parity]\n \\label{thm:muller-to-parity}\n \\cite{mostowski-regular-expressions-for-infinite-trees-and-a-standard-form-of-automata-1984}\n For every forward deterministic [non-deterministic] Muller automaton with $n$ states there is an equivalent forward deterministic [non-deter\\-min\\-is\\-tic] parity automaton with $(n+1)!$ states and at most $2n$ priorities.\n\\end{theorem}\n\nA refined construction, saving priorities if possible, is presented in Section~\\ref{sec:parity-index}.\n\n\\section{Run DAG's of B\\\"{u}chi automata}\n\\label{sec:dags}\n\nB\\\"{u}chi automata, in general, are nondeterministic automata, in other words, there may be several runs of a given B\\\"{u}chi automaton on a given word. These runs have to be considered at the same time if, for instance, one wants to turn a B\\\"{u}chi automaton into a B\\\"{u}chi automaton for the complement of the language recognized, because not to accept means \\emph{all} initial runs are not recurrent. \n\nThere are essentially two global structures that have been investigated for arranging all runs of a B\\\"{u}chi automaton in a concise way: DAG's and trees. The former are treated in this section, the latter in the next one. Applications are complementation, determinization, and disambiguation (defined in Section~\\ref{sec:disambiguation}). \n\nAssume a B\\\"{u}chi automaton is given. The run DAG \\index{run DAG}\\index{DAG!run} of a given $\\omega$-word $u$ is the leveled graph with levels $\\{Q \\times \\{i\\}\\}_{i \\in \\omega}$ and edges $\\langle \\langle q, i \\rangle, \\langle q', i+1\\rangle \\rangle$ for $\\langle q, u(i), q' \\rangle \\in \\Delta$. Its width is the number of states of the given automaton. 
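For a finite prefix of an $\omega$-word, the edges of the run DAG can be read off directly from this definition: an edge $\langle \langle q, i \rangle, \langle q', i+1\rangle \rangle$ exists exactly when $\langle q, u(i), q' \rangle \in \Delta$. A minimal sketch in Python; the automaton below and all names are hypothetical illustrations, with $\Delta$ assumed to be given as a set of triples:

```python
def run_dag_edges(delta, prefix):
    """Per position i, the set of edges between levels i and i+1 of the
    run DAG: one edge for every transition on the letter u(i).  The
    levels themselves are simply Q x {i} and need not be stored."""
    return [
        {((q, i), (p, i + 1)) for (q, b, p) in delta if b == a}
        for i, a in enumerate(prefix)
    ]

# Hypothetical two-state automaton over the alphabet {0, 1}.
delta = {("q", "0", "q"), ("q", "1", "r"), ("r", "0", "q")}
edges = run_dag_edges(delta, "010")
```

As in the text, the width of this DAG is the number of states of the automaton, independently of which states are actually reachable on a given level.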
\n\nOften, it is useful to think of a run DAG as a graph labeled with elements from $Q$; in this section, it is sufficient to think of it as providing only information about whether the state component of a vertex is an initial or a B\\\"{u}chi state. Technically, the DAG is labeled with elements from $\\powerset(\\{I, B\\})$ and we say it is $\\{I,B\\}$-tagged; if a vertex is labeled with a letter $a$ and $I \\in a$, we say it is $I$-tagged, and, analogously, if $B \\in a$, we say it is $B$-tagged.\n\nA vertex of an $\\{I,B\\}$-tagged DAG is called $B$-recurring if a path with an infinite number of $B$-tagged vertices starts in it; it is called $B$-free if none of its descendants (including itself) is $B$-tagged. The ultimate width of such a DAG is the limes inferior of the number of non-$B$-recurring infinitary vertices on a given level.\n\n\\begin{remark}\n\tAn $\\omega$-word is accepted by a B\\\"{u}chi automaton if, and only if, there is an $I$-tagged $B$-recurring vertex on level~$0$ of the run DAG of the word.\n\\end{remark}\n\nThe main insight needed about $\\{I,B\\}$-tagged DAG's (or simply $\\{B\\}$-tagged DAG's) of finite width is that they can be decomposed in a simple manner. Consider the following operation, here called peeling. First, remove all finitary vertices; second, remove all $B$-free vertices. Peeling does not remove any $B$-recurring vertex, and if it does not change the DAG at all, then all vertices are $B$-recurring, because every vertex has a strict $B$-tagged descendant. \nMoreover, if there are non-$B$-recurring infinitary vertices, then peeling decreases the ultimate width by at least one, as explained in what follows.\n\nConsider a non-$B$-recurring infinitary vertex. By K\\\"{o}nig's lemma \\cite{koening-theorie-der-endlichen-und-unendlichen-graphen-1936}, there is an infinite path starting in it. Assume that every $B$-tagged strict descendant of the vertex is finitary. 
Then, after removing the finitary vertices, each successor of the vertex is $B$-free, but the infinite path is still there and all of its vertices (except, maybe, the first one) are removed in the second step, decreasing the ultimate width by one. If there is a strict $B$-tagged infinitary descendant of the vertex, apply the same argument to it. This cannot go ad infinitum, because a path with an infinite number of $B$-tagged vertices would be constructed. \n\nThis all implies:\n\n\\begin{lemma}[peeling \\cite{kupferman-vardi-weak-alternating-automata-are-not-that-weak-2001}]\n For every B\\\"{u}chi automaton with $n$ states, peeling the run DAG of any $\\omega$-word $n$ times yields the subgraph induced by the $B$-recurring vertices.\\qed\n\\end{lemma}\n\n\\index{peeling!of a leveled DAG}\n\nThis can be used in various ways, in particular, it can be used for complementing B\\\"{u}chi automata, see Section~\\ref{sec:complementation}, determinizing them backward, see Section~\\ref{sec:backward-determinization}, and showing that alternating B\\\"{u}chi automata can easily be converted into weak alternating automata, see Section~\\ref{sec:weak-alternation}.\n\nTo describe these applications, it is useful to have some notation and terminology at hand. By the above, each vertex $v$ in a $\\{B\\}$-tagged DAG of finite width can be assigned a value in $\\omega \\cup \\{\\infty\\}$ according to when the vertex is removed by peeling the DAG successively. More precisely, when $i$ is a natural number and all vertices with value $< 2i$ are removed from the given DAG, the finitary vertices in the remaining DAG get assigned~$2i$; when all vertices with value $< 2i+1$ are removed, the $B$-free vertices in the remaining DAG get assigned~$2i+1$. The $B$-recurring vertices get assigned $\\infty$. 
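For eventually periodic words the run DAG folds into a finite graph, on which ``finitary'' becomes ``no cycle is reachable'' and ``$B$-free'' becomes ``no $B$-tagged vertex is reachable (including the vertex itself)''. Under this finite encoding, which is an illustrative assumption and not the construction used in the proofs above, the successive peeling and the resulting assignment of values can be sketched as follows (all names hypothetical):

```python
import math

def canonical_ranks(succ, b_tagged):
    """Peel a finite graph: alternately remove finitary vertices (even
    values 2i) and B-free vertices (odd values 2i+1); the surviving
    vertices are B-recurring and get the value infinity."""
    remaining = set(succ)
    rank = {}
    i = 0
    while True:
        # Finitary in the remaining graph: repeatedly deleting vertices
        # without remaining successors removes exactly the vertices from
        # which every path is finite.
        live = set(remaining)
        changed = True
        while changed:
            changed = False
            for v in list(live):
                if not (succ[v] & live):
                    live.discard(v)
                    changed = True
        finitary = remaining - live
        for v in finitary:
            rank[v] = 2 * i
        remaining -= finitary
        # B-free: no descendant (including the vertex itself) is B-tagged.
        bfree = {v for v in remaining
                 if not _reaches(v, succ, remaining, b_tagged)}
        for v in bfree:
            rank[v] = 2 * i + 1
        remaining -= bfree
        if not finitary and not bfree:
            break
        i += 1
    for v in remaining:          # B-recurring vertices
        rank[v] = math.inf
    return rank

def _reaches(v, succ, remaining, targets):
    """Depth-first search within the remaining graph, starting at v."""
    seen, stack = set(), [v]
    while stack:
        w = stack.pop()
        if w in targets:
            return True
        if w not in seen:
            seen.add(w)
            stack.extend(succ[w] & remaining)
    return False

# Hypothetical example: a <-> b is a cycle through the B-tagged vertex b,
# c -> d is finitary, and e -> e is an infinitary but B-free loop.
succ = {"a": {"b"}, "b": {"a"}, "c": {"d"}, "d": set(), "e": {"e"}}
ranks = canonical_ranks(succ, b_tagged={"b"})
```

On this example the first peeling round removes $c$ and $d$ (value $0$) and then $e$ (value $1$), after which nothing changes, so $a$ and $b$ are $B$-recurring.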
The number assigned to a vertex $v$ is called its canonical rank, it is denoted $c(v)$, and, according to the above, it is $\\infty$ or $< 2n$, when $n$ is the width of the DAG.\\index{canonical rank function}\\index{rank function!canonical}\n\n\\begin{corollary} \n \\label{three}\n For a B\\\"{u}chi automaton with $n$ states, let $c$ be the canonical rank function of the run DAG of some $\\omega$-word. \n \\begin{inparaenum}\n \\item The word is accepted if, and only if, $c(v) = \\infty$ for some $I$-tagged vertex $v$ on level~$0$. \n \\item Equivalently, the word is not accepted if, and only if, $c(v) < 2n$ for every $I$-tagged vertex $v$ on level~$0$.\n \\end{inparaenum}\n\\end{corollary}\n\n\\subsection{Complementation via canonical ranks}\n\\label{sec:complementation}\n\nThe idea of using ranks or ``progress measures'' for complementing $\\omega$-automata goes back to \\cite{klarlund-progress-measures-complementation-omega-automata-applications-temporal-logic-1991} and has been improved and refined over the years, especially in \\cite{kupferman-vardi-weak-alternating-automata-are-not-that-weak-2001}. The basic idea is to implement Corollary~\\ref{three}(2). The starting point is a compilation of properties of the canonical rank function of a given $\\{I, B\\}$-tagged leveled DAG.\n\n\\begin{ennote}\n \\label{one} \n Let $v$ be any vertex. If $v$ does not have any successor, let $M = 0$, else let~$M$ be the maximum of all values $c(v')$ for successors~$v'$ of~$v$. 
If $v$ is not $B$-tagged or if $M$ is even, then $c(v) = M$; if $v$ is $B$-tagged and $M$ is odd, then $c(v) = M + 1$.\n\\end{ennote}\n\n\\begin{ennote}\n \\label{two} \n For any vertex with an even rank, the number of its descendants with the same rank is finite.\n\\end{ennote}\n\nIn general, a rank function of a leveled DAG of width~$n$ with vertex set~$V$ is a function $f \\colon V \\to [2n]$ satisfying Properties~\\ref{one} and~\\ref{two} with $f$ instead of $c$.\\index{rank function!of a leveled DAG}\\index{leveled DAG!rank function}\n\n\\begin{remark}\n Any rank function is pointwise greater or equal to the canonical rank function.\n\\end{remark}\n\nA complementation construction for B\\\"{u}chi automata can now be based on Corollary~\\ref{three}(2) and the following observations. First, there is a forward deterministic automaton with trivial recurrence condition that outputs the part of the $\\{I, B\\}$-tagged run DAG of a given word which is reachable from the $I$-tagged vertices on level~$0$. Second, there exists a nondeterministic B\\\"{u}chi automaton that produces for every $\\{I, B\\}$-tagged leveled graph of width at most~$n$ the same graph, but with any labeling with numbers from $[2n]$ such that Property~\\ref{one} is satisfied. Third, using a variant of the breakpoint construction, see Theorem~\\ref{thm:breakpoint}, a B\\\"{u}chi automaton can be constructed that checks Property~\\ref{two} for a $[2n]$-labeled DAG. 
In other words, a suitable cascade yields a B\\\"{u}chi automaton for the complement of the language recognized by a given B\\\"{u}chi automaton.\n\n\\begin{theorem}[complementation via ranks\n \\cite{kupferman-vardi-weak-alternating-automata-are-not-that-weak-2001}]\n \\label{thm:kupferman-vardi}\n Complementation via canonical ranks yields, for every B\\\"{u}chi automaton with $n$ states, a B\\\"{u}chi automaton with at most $(6n)^n$ states.\n\\end{theorem}\n\nIn \\cite{friedgut-kupferman-vardi-buechi-complementation-made-tighter-2006}, the above approach is improved, resulting in an asymptotic upper bound of $(0.96\\, n)^n$ for the number of states, and in \\cite{schewe-buechi-complementation-made-tight-2009} a further improvement leads to a construction which is optimal within a factor of $O(n^2)$ and has an asymptotic upper bound of $(0.76n)^n$.\n\n\\subsection{Backward determinization via canonical ranks}\n\\label{sec:backward-determinization}\n\nA second application of canonical ranks is the conversion of a given nondeterministic B\\\"{u}chi automaton into an equivalent backward deterministic generalized transition-B\\\"{u}chi automaton. 
The idea, which is due to \\cite{carton-michel-unambiguous-buechi-automata-2003}, is to use Corollary~\\ref{three}(1) and to construct an automaton which labels the run DAG in a backward deterministic fashion with the values of the canonical rank function.\n\nThe key to designing such an automaton is the fact that the canonical rank function is the only function on an $\\{I,B\\}$-tagged DAG satisfying Property~\\ref{one} and the following one, Property~\\ref{twostar}.\n\\begin{ennote}\n \\label{twostar}\n \\begin{inparaenum}\n \\item For every even number $i$, the set $c^{-1}(\\{i\\})$ is exactly the set of finitary vertices in the sub-DAG without the vertices in $c^{-1}([i])$.\n \\item For every vertex $v \\in V$, if $c(v) > 1$ and $c(v)$ is odd, then $v$ has a descendant $v'$ with $c(v') = c(v) - 1$.\n \\item In the sub-DAG consisting of the vertices in $c^{-1}(\\{\\infty\\})$, every vertex has a strict $B$-tagged descendant.\n \\end{inparaenum}\n\\end{ennote}\n\nThis means a backward deterministic generalized transition-B\\\"{u}chi automaton computing the rank function for a run DAG can be constructed as a cascade of two automata: \n\\begin{inparaenum}[(i)]\n\\item an automaton with a backward deterministic transition function and a trivial recurrence condition outputting the run DAG of a given word and an assignment to the vertices satisfying Property~\\ref{one}; \n\\item a backward deterministic automaton checking Property~\\ref{twostar} using adaptations of the lift construction, see Theorem~\\ref{thm:shift} and also the subsequent remark on weakly finitary vertices.\n\\end{inparaenum}\nAn automaton equivalent to the given B\\\"{u}chi automaton is obtained when the states which assign $\\infty$ to an $I$-tagged vertex are chosen to be initial. 
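Property~1 is a purely local condition on the rank of a vertex and the ranks of its successors, which is what makes it checkable by the automata in the cascades above. For a concretely given finite leveled DAG it can be verified directly; a sketch with hypothetical names, the ranks being given as a dictionary:

```python
def satisfies_property_one(succ, b_tagged, rank):
    """Check Property 1 at every vertex: with M the maximum rank of the
    successors (M = 0 for a vertex without successors), the rank must be
    M, except that a B-tagged vertex with odd M must have rank M + 1."""
    for v, successors in succ.items():
        m = max((rank[w] for w in successors), default=0)
        expected = m + 1 if (v in b_tagged and m % 2 == 1) else m
        if rank[v] != expected:
            return False
    return True

# Hypothetical three-level DAG x0 -> x1 -> x2 with x1 B-tagged.
succ = {"x0": {"x1"}, "x1": {"x2"}, "x2": set()}
ok = satisfies_property_one(succ, {"x1"}, {"x0": 0, "x1": 0, "x2": 0})
bad = satisfies_property_one(succ, {"x1"}, {"x0": 0, "x1": 1, "x2": 0})
```

The all-zero assignment passes, as every vertex of this finite DAG is finitary; raising the rank of $x_1$ to $1$ violates the condition.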
\n\n\\begin{theorem}[backward determinization via canonical ranks\n \\cite{carton-michel-unambiguous-buechi-automata-2003}]\n Backward determinization via canonical ranks yields, for every B\\\"{u}chi automaton with $n$~states, a generalized transition-B\\\"{u}chi automaton with at most $(3n)^n$ states.\n\\end{theorem}\n\n\\section{Run trees of B\\\"{u}chi automata}\n\\label{run trees}\n\nRun DAG's are one way to represent the set of all runs of a B\\\"{u}chi automaton on an $\\omega$-word. A different approach, which can serve as a basis for complementation, disambiguation, and forward determinization, is to use compressed run trees.\n\nIn a first step towards the definition of the compressed run tree for a given word $u$ with respect to a given B\\\"{u}chi automaton, a labeled binary tree $t$ is defined, using a refined subset construction. Just as in the subset construction, all the states reachable from the initial states are tracked at the same time. The difference is that in each step the set of states reachable by reading the next letter is split into the B\\\"{u}chi states and the non-B\\\"{u}chi states: a binary tree emerges. To keep this tree compact, only one occurrence of each state---more precisely, its leftmost occurrence---is kept, that is, the tree is pruned in a straightforward fashion.\n\nIn the following, when a vertex $v$ is said to be to the left of another vertex $v'$, then this means that $v$ and $v'$ are on the same level, that is, $|v| = |v'|$, and there exists $i < |v|$ such that $v(j) = v'(j)$ for all $j < i$ and $v(i) < v'(i)$. A vertex of the form $w0$ is called a left successor; an infinite path is left-recurring if infinitely many of its vertices are left successors.\n\n\\begin{lemma}[run trees]\n \\label{thm:run-trees}\n An $\\omega$-word is accepted by a given B\\\"{u}chi automaton if, and only if, its compressed run tree contains a left-recurring rooted path.\n\\end{lemma}\n\nTo prove the lemma, assume first that the given word $u$ is accepted and choose an accepting initial run $r$ on $u$ for which the sequence $\\langle v_0, v_1, \\dots\\rangle$ of vertices with $r(i) \\in t(v_i)$ is leftmost; this sequence is a rooted path in $T$, because a leftward jump of the sequence would contradict the choice of~$r$. Assume, for contradiction, that this path is not left-recurring, say $v_{j+1} = v_j1$ for all $j \\geq i$. Since $r$ is recurrent, there is some $k > i$ such that $r(k)$ is a B\\\"{u}chi state. By adjusting $i$, we can assume $k=i+1$. If, on one hand, $r(i+1) \\notin t(v_{i+1})$, we can obtain a contradiction similar to above. If, on the other hand, $r(i+1) \\in t(v_{i+1})$, then, by definition, $v_{i+1} = v_i0$---a contradiction.\n\nFor the converse, assume $\\langle v_0, v_1, \\dots\\rangle$ is a left-recurring path in $T$.
For every $i < \\omega$ and every $q \\in t(v_i)$, there is an initial run of the automaton on $u[0,i)$ such that $r(i) = q$ and $r(j) \\in t(v_j)$ for every $0 < j < i$. All these runs can be organized in a straightforward fashion in an infinite tree with branching degree at most the number of states of the given automaton. By K\\\"{o}nig's lemma, this tree has an infinite rooted path, and, by construction, the labeling of this path is an initial run of the automaton on~$u$. Further, for every $i>0$, the state in position~$i$ of this run belongs to~$B$ if, and only if, $v_i$ is a left successor.\\qed\n\nFrom an automata-theoretic point of view the important observation is that compressed run trees of a given B\\\"{u}chi automaton have width at most $n$ and can be constructed in a forward deterministic fashion by an $\\omega$-automaton with output and trivial recurrence condition. One way to realize such an automaton is to use states of the form $\\langle Q_0, \\dots, Q_{m-1}\\rangle$ with the $Q_i$'s being pairwise disjoint, nonempty subsets of $Q$, representing the labeling of the current level of the labeled compressed run tree. An upper bound on the number of such states can be obtained using ordered Bell numbers.\n\n\\begin{remark}\n \\label{thm:run-trees-deterministic}\n For every B\\\"{u}chi automaton with $n$ states, there is a forward deterministic automaton that outputs, for every $\\omega$-word, a representation of its compressed run tree, and has a number of states which is asymptotically bounded from above by $(0.54 \\, n)^n$.\n\\end{remark}\n\n\\subsection{Complementation and disambiguation via compressed run trees}\n\\label{sec:disambiguation}\n\nCompressed run trees can be used for complementation. 
To see this, consider the subtree of a compressed run tree which contains only the infinitary vertices and call it the core of the run tree.\\index{core!of a run tree}\\index{run tree!core of} From Lemma~\\ref{thm:run-trees} and the fact that the run tree has finite width it follows that an $\\omega$-word is not accepted if, and only if, in the core of its run tree there are only finitely many slices with a left successor. In other words, a B\\\"{u}chi automaton for the complement is obtained as a cascade of the following automata: \n\\begin{inparaenum}[(i)]\n\\item the automaton from Remark~\\ref{thm:run-trees-deterministic}; \n\\item an automaton based on the breakpoint construction removing the finitary vertices; \n\\item a two-state automaton checking that from some level onward, no slice with a left successor occurs anymore.\n\\end{inparaenum}\nA careful implementation leads to state spaces similar in size to those described in~\\cite{schewe-buechi-complementation-made-tight-2009}.\n\nAn interesting observation is that the cascade of the first and the second automaton from above yields a nondeterministic B\\\"{u}chi automaton which, for every $\\omega$-word over the given alphabet, outputs the core of its compressed run tree and has exactly one accepting run. In general, an automaton which has at most one accepting run for each word is called an unambiguous automaton.\\index{unambiguous!$\\omega$-automaton}\\index{$\\omega$-automaton!unambiguous} In other words, the above automaton is an unambiguous automaton for the set of all $\\omega$-words over the given alphabet. 
Moreover, it can be modified in two ways.\n\\begin{inparaenum}[(i)]\n\\item By cascading it with a two-state deterministic automaton checking\n that there are infinitely many slices with left successors, one\n obtains an unambiguous automaton for the language recognized by the given automaton.\n\\item By cascading it with a two-state unambiguous automaton checking that there are only finitely many slices with left successors, one obtains an unambiguous automaton for the complement.\n\\end{inparaenum}\nIn the terminology of~\\cite{colcombet-forms-determinism-automata-2012}, such an automaton could be called strongly unambiguous.\\index{strongly unambiguous!$\\omega$-automaton}\\index{$\\omega$-automaton!strongly unambiguous}\n\n\n\n\n\n\n\\subsection{Forward determinization via compressed run and history trees}\n\nTheorem~\\ref{thm:fundamental} states in particular that every nondeterministic B\\\"{u}chi automaton is equivalent to a forward deterministic parity, Rabin, Streett, or Muller automaton. The quest for good constructions establishing this---determinization constructions---has resulted in different approaches. The approach followed in this section is motivated by \\cite{muller-schupp-alternating-automata-infinite-trees-1987}, but it is also closely related to Safra-like constructions, as explained towards the end.\n\nIn view of Lemma~\\ref{thm:run-trees} and Remark~\\ref{thm:run-trees-deterministic}, a determinization construction has been established once it has been shown that the set of all binary trees of width at most $k$ which have a left-recurring path is recognized by a forward deterministic automaton. 
Therefore, the objective in what follows is exactly to describe such an automaton.\n\nTo understand the mechanics of infinite trees of finite width better, we associate with every vertex $v$ in a binary tree its origin.\\index{origin!of a vertex in a binary tree} This is the earliest ancestor of~$v$ (shortest prefix of $v$) with the property that no vertex to the right of $v$ has the same ancestor; it is denoted $\\text{or}(v)$. For an illustration, see Figure~\\ref{fig:origin}. \n\nObserve, for instance, that\n\\begin{inparaenum}[(i)]\n\\item the root is the origin of the rightmost vertex on each level; \n\\item vertices on the same level have distinct origins; \n\\item if a vertex has a left and a right successor, then the left successor is its own origin.\n\\end{inparaenum}\n\nThe important definition specifies that an origin moves left in one slice if it is the origin of a vertex $v$ on the upper level of the slice and of a vertex $v'$ on the lower level of the slice and $v'$ is not the right successor of $v$, that is, $v'$ is to the left of $v1$. (Note that, by definition, $v'$ cannot be to the right of $v1$.) For an illustration, see Figure~\\ref{fig:origin}.\n\nThe key for the construction to be presented is:\n\n\\begin{lemma}\n A binary tree of finite width contains a left-recurring path if, and only if, there is some origin which moves left in infinitely many slices.\n\\end{lemma}\n\nTo prove the lemma, assume $\\langle v_0, v_1, \\dots\\rangle$ is a left-recurring rooted path. Let $i$ be minimal such that there is no other infinite path $\\langle v_0, \\dots, v_j, v_j1, \\dots\\rangle$ for any $j \\geq i$. (This number $i$ exists because there are at most $k$ rooted infinite paths in any tree of width $k$.) For every $j \\geq i$, consider the rightmost vertex $w_j$ on level $j$ such that $v_j \\LeftLeq w_j$ and there is no infinitary vertex $v'$ with $v_j <_{\\text{lft}} v' \\LeftLeq w_j$. 
Then $v_i$ is the origin of every $w_j$ and moves left in infinitely many slices. \n\nFor the converse, let $v$ be an origin which moves left in infinitely many slices. Consider, for every level $i \\geq |v|$, the vertex $w_i$ with origin $v$. By K\\\"{o}nig's lemma, there is a rooted path $\\langle v_0, v_1, \\dots\\rangle$ in the tree which consists of all vertices $w_i$ and their ancestors. This path is left-recurring, because otherwise there would be some $j$ with $v_k \\in v_j1^*$ for all $k \\geq j$, which is a contradiction to $v$ moving left in infinitely many slices.\\qed \n\nThere are several ways for an automaton to check whether there is an origin which moves to the left in infinitely many slices. One is explained in what follows, another one is sketched later.\n\nWe use the notion of military ordering, denoted $<_\\text{mil}$, and defined by $v <_\\text{mil} v'$ if, and only if, either $|v| < |v'|$ or $|v| = |v'|$ and $v <_{\\text{lft}} v'$.\\index{military ordering}\\index{ordering!military}\n\nFrom level to level, the automaton determines, for each vertex, its origin number,\\index{origin number!of a vertex in a binary tree} which is defined as follows. For a given level $l$, let $v_0, \\dots, v_{r-1}$ be an enumeration of the vertices on level~$l$, ordered according to the military order of their origins, that is, $\\text{or}(v_0) <_\\text{mil} \\dots <_\\text{mil} \\text{or}(v_{r-1})$. The index $i$ is the origin number of $v_i$ and denoted $\\text{on}(v_i)$. The index $i$ is said to refer to the origin $\\text{or}(v_i)$ on level $l$. For an illustration, see Figure~\\ref{fig:origin}. 
\n\n\\begin{figure}[t]\n \\centering\n \\small\n \\begin{tikzpicture}[%\n\t level distance=5mm,\n\t level 1\/.style={sibling distance=13mm},\n\t level 2\/.style={sibling distance=7mm},\n\t level 3\/.style={sibling distance=5mm},\n\t level 4\/.style={sibling distance=3mm}]\n\n\t \\begin{scope}[every node\/.style={draw,circle,fill,inner sep=1pt}]\n\t \\node {}\n\t\t child {node {} \n\t\t\t child {node {}\n\t\t\t\t child {node {}\n\t\t\t\t\t child {node {}}\n\t\t\t\t\t child[missing]}\n\t\t\t\t child[missing]}\n\t\t\t child {node {}\n\t\t\t }}\n\t\t child {node {} \n\t\t\t child {node {}\n\t\t\t\t child {node {}\n\t\t\t\t\t child {node {}}\n\t\t\t\t\t child {node {}}}\n\t\t\t\t child[missing]\n\t\t\t }\n\t\t\t child {node {}\n\t\t\t\t child {node {}\n\t\t\t\t\t child {node {}}\n\t\t\t\t\t child {node {}}} \n\t\t\t\t child {node {}}}};\n\t \\end{scope}\n\n\t \\begin{scope}[every node\/.style={draw=none,fill=white,inner sep=1pt}]\n\t \\node (o3) at (4,0) {0}\n\t\t child {node (o2) {1} \n\t\t\t child {node {2}\n\t\t\t\t child {node {1}\n\t\t\t\t\t child {node (l0) {1}}\n\t\t\t\t\t child[missing]}\n\t\t\t\t child[missing]}\n\t\t\t child {node {1}\n\t\t\t }}\n\t\t child {node {0} \n\t\t\t child {node (o4) {3}\n\t\t\t\t child {node {2}\n\t\t\t\t\t child {node (l1) {3}}\n\t\t\t\t\t child {node (l2) {2}}}\n\t\t\t\t child[missing]\n\t\t\t }\n\t\t\t child {node {0}\n\t\t\t\t child {node {3}\n\t\t\t\t\t child {node (l3) {4}} \n\t\t\t\t\t child {node (l4) {0}}} \n\t\t\t\t child {node {0}}}};\n\t\t \\draw[densely dashed,->] (l2) .. controls +(45:0.5) and +(300:0.5) .. (o4);\n\t\t \\draw[densely dashed,->] (l4) .. controls +(0:1.5) and +(0:1.5) .. (o3);\n\t\t \\draw[densely dashed,->] (l1) .. controls +(240:0.7) and +(300:0.7) .. (l1);\n\t\t \\draw[densely dashed,->] (l3) .. controls +(240:0.7) and +(300:0.7) .. (l3);\n\t\t \\draw[densely dashed,->] (l0) .. controls +(170:0.9) and +(190:0.9) .. 
(o2);\n\t \\end{scope}\n\n\t \\begin{scope}[every node\/.style={draw,circle,fill,inner sep=1pt}]\n\t \\node at (8,0) {}\n\t\t\tchild {node {}}\n\t\t\tchild {node {}\n\t\t\t\tchild {node {}}}\n\t\t\tchild {node {}};\n\t\t\\end{scope}\n \\end{tikzpicture}\n \\caption{The first five levels of a compressed run tree; the same tree with the origin numbers of each vertex and the origins of the vertices on level $4$; the tree induced by the origins of the vertices on level~$4$. There are two origins that move left in the last slice depicted.}\n \\label{fig:origin}\n\\end{figure}\n\nWhen a forward deterministic automaton computes the origin numbers, it can be augmented by a transition-Rabin condition to check for a left-recurring path. To see this, assume $v$ is an origin on some level $l$ that moves infinitely often to the left and let $v_0, v_1, \\dots$ be the list of vertices such that $v_i$ is on level $l+i$ and $\\text{or}(v_i) = v$, in particular, $v_0 = v$. Then there is some $i$ such that $\\text{on}(v_i) = \\text{on}(v_{i+1}) = \\dots$. So an appropriate transition-Rabin recurrence condition can be chosen to have, for each $i < k$ (maximum width of the trees considered), a Rabin pair $\\langle L_i, U_i\\rangle$ as follows. The set $L_i$ contains all transitions where for every $j \\leq i$ the index $j$ does not refer to the same origin in the upper and the lower level of the current slice. The set $U_i$ contains all transitions where the origin which $i$ refers to on the upper level moves left in the slice.\n\nAs states of a forward deterministic automaton computing the origin numbers one can choose bijections $g \\colon [m] \\to [m]$ where $m \\leq k$. The meaning of such a state $g$ would be that if $v$ is vertex $i$ on the current level in the order from left to right, then $\\text{on}(v) = g(i)$. The actual definition of the transition function is somewhat technical and omitted. 
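Although the transition function is omitted, origins and origin numbers can be computed offline for an explicitly given finite binary tree, directly from the definitions. A sketch under the assumption that vertices are encoded as binary strings with the empty string as the root; the tree below is a hypothetical example, not the one of Figure~\ref{fig:origin}:

```python
def origin(v, vertices):
    """The earliest ancestor (shortest prefix) of v that is an ancestor
    of no vertex to the right of v on the same level.  On equally long
    binary strings, lexicographic order is the left-to-right order."""
    right = [w for w in vertices if len(w) == len(v) and w > v]
    for i in range(len(v) + 1):
        p = v[:i]
        if not any(w.startswith(p) for w in right):
            return p
    return v  # unreachable: v itself always qualifies

def origin_numbers(vertices, level):
    """Enumerate the vertices of a level in the military order of their
    origins; the position in this enumeration is the origin number."""
    mil = lambda p: (len(p), p)          # length first, then left-to-right
    lvl = [w for w in vertices if len(w) == level]
    by_origin = sorted(lvl, key=lambda v: mil(origin(v, vertices)))
    return {v: i for i, v in enumerate(by_origin)}

tree = {"", "0", "1", "00", "01", "10", "11"}  # full binary tree, depth 2
on = origin_numbers(tree, 2)
```

On this tree one can check the observations from above: the root is the origin of the rightmost vertex `"11"`, the origins of the four vertices on level~$2$ are pairwise distinct, and the left successor `"00"` of `"0"` is its own origin.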
\n\nIn the determinization constructions presented in \\cite{safra-complexity-omega-automata-1988, Piterm2007a, schewe-tighter-bounds-for-determinization-buechi-automata-2009}, states are trees. For instance, in \\cite{schewe-tighter-bounds-for-determinization-buechi-automata-2009}, so-called history trees\\index{history tree} are used and defined as trees where\n\\begin{inparaenum}[(i)]\n\\item each node is labeled by a nonempty set of states, \n\\item the label of every node is a strict superset of the union of the labels of its children, and \n\\item the labels of siblings are disjoint.\n\\end{inparaenum}\nObserve that, alternatively, one could require that the vertices of a history tree are labeled with pairwise disjoint, nonempty sets of states. Such a tree is obtained from the labeled compressed run tree of a given word for each level in a straightforward fashion: move every label of a vertex on the respective level to its origin and then remove all vertices except for these origins and contract edges accordingly. 
For an illustration, see Figure~\\ref{fig:origin}.\n\nIf one constructs a deterministic automaton which only keeps track of the history trees just described, one arrives at a fundamental construction, which is also known to be optimal in a certain sense \\cite{colcombet-zdanowski-tight-lower-bound-for-determinization-transition-labeled-buechi-automata-2009}:\n\n\\begin{theorem}[determinization via history trees\n \\cite{schewe-tighter-bounds-for-determinization-buechi-automata-2009}]\n \\label{thm:schewe}\n Determinization via history trees yields, for every B\\\"{u}chi automaton with $n$ states, an equivalent deterministic transition-Rabin automaton with an asymptotic upper bound of $(1.65\\,n)^n$ for the number of states and $2^n-1$ Rabin pairs.\n\\end{theorem}\n\n\\enlargethispage{0\\baselineskip}\n\nThe transformations on different types of $\\omega$-automata discussed in this section and the previous one are fundamental transformations, but not all one can consider. Optimal solutions for most of the basic transformation tasks can be found in \\cite{safra-complexity-omega-automata-1988} and a later paper by the same author \\cite{safra-exponential-determinization-for-omega-automata-with-a-strong-fairness-acceptance-condition-2006}. Here, ``optimal'' means with respect to a rough measure of complexity: polynomial, exponential, doubly exponential. The ``optimal'' results stated subsequent to Theorem~\\ref{thm:kupferman-vardi} and before Theorem~\\ref{thm:schewe} are with respect to much finer measures and based on very good lower bounds. A breakthrough with regard to lower bounds on $\\omega$-automata is \\cite{yan-lower-bounds-complementation-omega-automata-full-automata-technique-2006}.\n\n\\section{Congruence relations}\n\nCongruence relations are a useful tool for working with finite-state automata on finite words. 
For instance, the minimum-state deterministic finite-state automaton for a given regular language of finite words can be derived from the Myhill-Nerode congruence relation for the language---it merely is this congruence relation.\n\nFor $\\omega$-automata, congruences are also useful, but the situation is more complex. \n\n\\subsection{Right (and left) congruence relations}\n\\label{sec:finite-state}\n\nThe straightforward adaptation of the Myhill-Nerode congruence relation (see, for instance, \\cite{MN}) to $\\omega$-lan\\-guages is the initial syntactic congruence relation.\\index{initial syntactic congruence relation!of an $\\omega$-language}\\index{congruence relation!initial syntactic! of an $\\omega$-language} For a given $\\omega$-lan\\-guage~$L$, it considers finite words $u$ and $v$ equivalent if, and only if, for every $\\omega$-word $w$, either $\\{uw, vw\\} \\subseteq L$ or $\\{uw,vw\\} \\cap L = \\emptyset$. \n\nOn the automata-theoretic side, there is a corresponding notion. Given an $\\omega$-au\\-tom\\-a\\-ton of any type, its initial congruence relation\\index{initial congruence relation!of an $\\omega$-automaton}\\index{congruence relation!initial! of an $\\omega$-automaton} considers finite words $u$ and $v$ equivalent if, and only if, for every initial run of the automaton on~$u$ ending in some state~$q$ there is such a run on~$v$, and vice versa.\n\nThe analogy to the Myhill-Nerode congruence relation is as follows.\n\n\\begin{remark}\n \\begin{inparaenum}\n \\item The initial syntactic congruence relation for an $\\omega$-language is a right congruence relation. 
\n \\item For an $\\omega$-automaton of any type, the initial congruence relation is a right congruence relation with a finite number of congruence classes, and it is equal to or finer than the initial syntactic congruence relation for the language recognized.\n \\end{inparaenum}\n\\end{remark}\n\n(Here, as usual, an equivalence relation on words over a given alphabet is a right congruence relation if $uw$ and $vw$ are equivalent whenever $u$ and $v$ are and $w$ is a finite word over the alphabet. A relation is finer than another one if it is a subset of it.)\n\nFor a simple language such as the set of all ultimately periodic words over a given alphabet, which is not regular, the initial syntactic congruence relation has only one equivalence class. Hence, it cannot serve as a vehicle to define regularity. Nor can it be used for classifying regular $\\omega$-languages: the languages denoted by $(0+1)^\\omega$ and by $(0^*1)^\\omega$ have the same initial syntactic congruence relation, but they are completely different in nature.\n\nThe initial syntactic congruence relation can, however, take over the role of the Myhill-Nerode congruence relation for the small class of $\\omega$-languages which are recognized by forward deterministic weak automata (see also Section~\\ref{thm:weak-characterization}):\n\n\\begin{theorem}[minimization of forward deterministic weak automata\n \\cite{staiger-finite-state-omega-languages-1983,loeding-efficient-minimization-deterministic-weak-omega-automata-2001}]\n \\label{thm:initial-congruence} \n Let $L$ be an $\\omega$-language recognized by a forward deterministic weak automaton $\\Aut A$ and let $\\Aut D$ be the DFA (without final state set) corresponding to the initial syntactic congruence relation for~$L$.\n \\begin{enumerate}\n \\item The automaton $\\Aut D$ can be augmented by a weak acceptance condition in such a way that the resulting automaton recognizes $L$.\n\n \\item The automaton from (1) is, up to isomorphism, the
smallest forward\n deterministic automaton recognizing~$L$ and can be computed from $\\Aut A$ by\n DFA minimization; see, for instance, \\cite{HMU}.\n \\end{enumerate}\n\\end{theorem}\n\nThe role that the initial syntactic congruence relation of a language~$L$ plays in the context of forward deterministic automata is taken over by the final syntactic congruence relation in the context of backward deterministic automata. This relation, which is a left congruence relation, considers $\\omega$-words $v$ and $w$ congruent if, and only if, for every finite word $u$, either $\\{uv, uw\\} \\subseteq L$ or $\\{uv,uw\\} \\cap L = \\emptyset$.\n\n\\subsection{Two-sided congruence relations}\n\\label{sec:saturation}\n\nThere are essentially two straightforward adaptations of the two-sided syntactic\ncongruence relation for languages of finite words (see, for instance, \\cite{SC}) to $\\omega$-languages.\n\nLet $L$ be an $\\omega$-language over some alphabet $A$. In the first adaptation, finite words $u$ and $v$ are congruent if, and only if, for all $x \\in A^*$ and $y \\in A^\\omega$, either $\\{xuy, xvy\\} \\subseteq L$ or $\\{xuy, xvy\\} \\cap L = \\emptyset$. In the second adaptation, nonempty finite words $u$ and $v$ are congruent if, and only if, whenever $u_0, u_1, \\dots$ and $v_0, v_1, \\dots$ are sequences of nonempty finite words over the given alphabet such that $u_i = v_i$ or $\\{u_i, v_i\\} \\subseteq \\{u,v\\}$, then either $\\{u_0 u_1 \\dots, v_0 v_1 \\dots\\} \\subseteq L$ or $\\{u_0 u_1 \\dots, v_0 v_1 \\dots\\} \\cap L = \\emptyset$.\n\nThe two adaptations try to capture what it means for two finite words to behave equally in the same context, and they both yield two-sided congruence relations.
The first one is finer than or equal to the initial syntactic congruence relation; the second one is finer than or equal to the first one and is called the syntactic congruence relation of~$L$.\\index{syntactic congruence relation!of an $\\omega$-language}\\index{congruence relation!syntactic! of an $\\omega$-language}\n\nIn the following, the syntactic congruence relation is further discussed, because it provides a finer means of characterization.\n\nCorresponding to the syntactic congruence relation one can define, for every $\\omega$-au\\-tom\\-a\\-ton of any type, a suitable two-sided congruence relation. For a B\\\"{u}chi automaton, this relation considers nonempty finite words $u$ and $v$ congruent if the following two conditions hold for all states $q, q' \\in Q$.\n\\begin{inparaenum}[(i)]\n\\item There is a run of the automaton on $u$ starting with $q$ and ending in $q'$ if, and only if, this is true for $v$.\n\\item There is a run $r$ of the automaton on $u$ starting with $q$, ending in $q'$, and passing through an element of $B$, that is, $r(i) \\in B$ for some $i < |r|$, if, and only if, this is true for~$v$.\n\\end{inparaenum}\n\nJust as before, the syntactic congruence relation of an $\\omega$-language does not characterize regularity. Consider%\n\\footnote{Slides of a presentation given by Miko\\l aj Boja\\'nczyk.} the language $L_\\text{ub0}$ of all $\\omega$-words of the form $0^{i_0}10^{i_1}\\dots$ where $\\limsup_{j \\to \\infty} i_j = \\infty$. The language $L_\\text{ub0}$ and the one denoted by $(0^*1)^\\omega$ have the same syntactic congruence relation, which has just two equivalence classes.\n\nStill, the syntactic congruence relation has interesting properties.
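The two-sided automaton congruence relation just defined lends itself to computation via transition profiles: the profile of a nonempty word is the set of triples $\langle q, q', f\rangle$ such that there is a run on the word from $q$ to $q'$, with $f$ recording whether such a run can meet $B$. The sketch below is an illustration, not part of the text: the small B\"{u}chi automaton is an assumed example, and endpoints are counted as visited, a harmless variant of the definition above. Closing the letter profiles under composition yields the finitely many congruence classes of nonempty words, within the bound $2^{2n^2}$ stated below in Theorem~\ref{thm:syntactic-congruence}(1).

```python
# Transition profiles for a small Büchi automaton (an assumed example).
# The profile of a word u is the set of triples (q, q2, f): there is a run
# on u from q to q2, and f tells whether that run meets the Büchi set B
# (endpoints included -- a harmless variant of the definition in the text).

Q = [0, 1]
B = {1}
A = ['a', 'b']
Delta = {  # nondeterministic transitions: (state, letter) -> successors
    (0, 'a'): [0, 1],
    (1, 'a'): [1],
    (0, 'b'): [0],
    (1, 'b'): [0],
}

def letter_profile(a):
    return frozenset((q, q2, q in B or q2 in B)
                     for q in Q for q2 in Delta.get((q, a), []))

def compose(p1, p2):
    # Relational composition; the flag of the glued run is the 'or' of the flags.
    return frozenset((q, q2, f1 or f2)
                     for (q, r, f1) in p1 for (r2, q2, f2) in p2 if r == r2)

def profile(word):
    p = letter_profile(word[0])
    for a in word[1:]:
        p = compose(p, letter_profile(a))
    return p

# Close the letter profiles under composition: two nonempty words are
# congruent iff they have the same profile, so the size of this monoid
# bounds the number of congruence classes of nonempty words.
profiles = {letter_profile(a) for a in A}
while True:
    new = {compose(p, q) for p in profiles for q in profiles} - profiles
    if not new:
        break
    profiles |= new

assert len(profiles) <= 2 ** (2 * len(Q) ** 2)
```

Since a profile is a subset of $Q \times Q \times \{0,1\}$, there are at most $2^{2n^2}$ of them, which is exactly where the bound in the theorem below comes from.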
One is phrased in terms of saturation, where an equivalence relation on finite words is said to saturate \\index{saturation!of an $\\omega$-language} an $\\omega$-language $L$ if, for all sequences $\\langle V_0, V_1 \\dots\\rangle$ of equivalence classes, either $V_0V_1 \\dots \\subseteq L$ or $V_0 V_1 \\dots \\cap L = \\emptyset$ holds.\n\n\\begin{theorem}[saturation \\cite{buechi-decision-method-restricted-second-order-arithmetic-1962,arnold-syntactic-congruence-omega-languages-1983}]%\n \\label{thm:syntactic-congruence}\n \\begin{enumerate}\n\t \\item The two-sided au\\-tom\\-a\\-ton congruence relation of a B\\\"{u}chi automaton with $n$ states has at most $2^{2n^2}$ congruence classes and saturates the language recognized by the automaton.\n \\item The syntactic congruence relation of a given regular $\\omega$-language is the coarsest congruence relation saturating the language.\n \\item An $\\omega$-language is regular if, and only if, there exists a congruence relation saturating it and having a finite number of congruence classes.\n \\end{enumerate}\n\\end{theorem}\n\nThe proofs of (1) and (2) are straightforward; one direction of~(3) follows from~(1). The other direction of~(3) can be proved on the basis of Ramsey's Theorem~A\\index{Ramsey's Theorem A} \\cite{ramsey-problem-formal-logic-1929}, which says that, for a given equivalence relation on finite words with a finite number of equivalence classes and an infinite sequence of nonempty finite words $u_0,u_1, \\dots$, there is a strictly monotone infinite sequence $\\langle i_0, i_1, \\dots\\rangle$ of natural numbers such that all finite words of the form $u_{i_j} u_{i_j+1} \\dots u_{i_k-1}$ with $j < k$ are equivalent. 
In the context of $\\omega$-languages, this means the following.\n\n\\begin{remark}\n \\label{thm:ramsey}\n\\cite{buechi-decision-method-restricted-second-order-arithmetic-1962}\n Given an alphabet $A$, a congruence relation on $A^+$ having a finite number of congruence classes, and $u \\in A^\\omega$, there are congruence classes $U$ and $V$ satisfying $UV \\subseteq U$, $V^2 \\subseteq V$, and $u \\in UV^\\omega$.\n\\end{remark}\n\nTo prove the other direction of Theorem~\\ref{thm:syntactic-congruence}(3), observe that from the previous remark it follows that if a congruence relation with a finite number of equivalence classes saturates a given $\\omega$-language~$L$, then $ L = \\bigcup UV^\\omega$ with $U$ and $V$ ranging over equivalence classes with \\mbox{$UV^\\omega \\subseteq L$}. Based on this, a construction such as the one described in the proof of Theorem~\\ref{thm:regexps} can be used to arrive at a B\\\"{u}chi automaton recognizing~$L$.\\qed\n\nThe procedure just described is far from being as natural as the procedure that turns the Myhill-Nerode congruence relation for a given regular language of finite words into the minimum-state DFA for the language. In fact, nothing which would come close to this is known for regular $\\omega$-languages. Still, two-sided congruence relations for $\\omega$-languages are useful in several contexts, for instance, when it comes to classifying regular $\\omega$-languages, see~\\cite{perrin-pin-infinite-words-2004}. Another application, described in what follows, is complementation.\n\n\\subsection{Complementation via two-sided congruence relations}\n\nWhen a two-sided congruence relation saturates a given language, then, by definition, it also saturates the complement of the language, which establishes once again that the class of $\\omega$-languages recognized by B\\\"{u}chi automata is closed under complementation. 
In fact, the first proof of this closure property was along these lines \\cite{buechi-decision-method-restricted-second-order-arithmetic-1962}.\n\nIn view of the bound stated in Theorem~\\ref{thm:syntactic-congruence}(1), using congruences for complementation leads to much larger automata than the ones described in Sections~\\ref{sec:complementation} and~\\ref{sec:disambiguation}. To obtain smaller automata with this approach, the following modification suggests itself. \n\nWrite the complement of the language recognized by a given B\\\"{u}chi automaton again in the form $\\bigcup_{i < k} U_i V_i^\\omega$, but choose the $U_i$'s and $V_i$'s to be unions of congruence classes in a way such that the resulting B\\\"{u}chi automaton is small. Observe that if $m$ is the number of states of an automaton recognizing all $U_i$'s (with a different set of final states for each~$i$) and $m'$ is an upper bound on the number of states needed in automata recognizing the $V_i$'s, then $m + k (m' +1)$ is an upper bound for the number of states in the resulting B\\\"{u}chi automaton.\n\nTo see how the indicated approach works, assume a B\\\"{u}chi automaton is given as usual, with $n$ states and recognizing a language $L$. For every set $P \\subseteq Q$, let $U_P$ be the set of words $u$ such that every run of the automaton on $u$ starting in some initial state ends in some state of~$P$.
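Membership in a set $U_P$ can be decided by the subset construction: a word belongs to $U_P$ exactly if the set of states reachable from the initial states on it is contained in~$P$. A minimal sketch of this, with an assumed example automaton (not one from the text):

```python
# Sketch: deciding u in U_P via the subset construction. Every run on u
# from an initial state ends in P iff the set of states reachable on u
# from the initial states is contained in P. Assumed example automaton:

Q_I = {0}
Delta = {  # nondeterministic transitions: (state, letter) -> successors
    (0, 'a'): {0, 1},
    (1, 'a'): {1},
    (0, 'b'): {0},
    (1, 'b'): {0},
}

def subset_reach(S, word):
    for a in word:
        S = {q2 for q in S for q2 in Delta.get((q, a), set())}
    return S

def in_U(P, word):
    # Vacuously true if there is no run on the word at all.
    return subset_reach(Q_I, word) <= P

print(in_U({0, 1}, "ab"))  # True: both runs on "ab" end in state 0
print(in_U({1}, "ab"))     # False: the runs end in 0, which is not in {1}
```

The subset automaton has one state per reachable subset of $Q$, which also explains the deterministic $2^n$-state recognizers mentioned next.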
Then each set $U_P$ is a union of equivalence classes of the automaton congruence relation and can be recognized by a deterministic automaton with $2^n$ states.\n\nFor every nonempty sequence $\\sigma = \\langle P_0, \\dots, P_{k-1}\\rangle$ of nonempty, pairwise disjoint sets of states, let the set $P(\\sigma)$ be defined by $P(\\sigma) = P_0 \\cup \\dots \\cup P_{k-1}$, and let $V_\\sigma$ be the set of all finite words $u$ satisfying the following two conditions for every run $r$ of the automaton on~$u$.\n\\begin{inparaenum}[(i)]\n\\item If $r(0) \\in P_i$ for some $i$, then $r(|u|) \\in P_j$ for some $j$ with $i \\leq j < k$.\n\\item If $r(0) \\in P_i$ and $r$ contains a B\\\"{u}chi state, then $r(|u|) \\in P_j$ for some $j$ with $i < j < k$. \n\\end{inparaenum}\n\n\\begin{theorem}[complementation by saturation \\cite{breuers-loeding-olschewski-improved-ramsey-based-complementation-2012}]\n For every B\\\"{u}chi automaton with $n$ states, the language $\\bigcup_\\sigma U_{P(\\sigma)} V_\\sigma^\\omega $\n is the complement of the language recognized by the B\\\"{u}chi automaton, and a conversion of this expression into a B\\\"{u}chi automaton yields an automaton with $2^{\\Theta(n \\log n)}$ states.\n\\end{theorem}\n\nFor the proof, first observe that each set $V_\\sigma$ is a union of equivalence classes of the two-sided automaton congruence relation: compare (i) and (ii) above with (i) and (ii) in the definition of the two-sided automaton congruence relation. \n\nNext, let $\\sigma$ and $P(\\sigma)$ be as above. To see that $U_{P(\\sigma)} V_\\sigma^\\omega \\cap L = \\emptyset$ holds, let $r$ be an initial run on any word $u \\in U_{P(\\sigma)} V_\\sigma^\\omega$ and $i_0 < i_1 < \\dots$ be such that $u[0,i_0) \\in U_{P(\\sigma)}$ and $u[i_j, i_{j+1}) \\in V_\\sigma$ for every $j$. From the definition of $V_\\sigma$ we can conclude that $r(i_0) \\in P(\\sigma)$ holds and that there are $i$ and $k$ such that $r(i_j) \\in P_i$ holds for every $j \\geq k$.
This implies that, for every $j \\geq k$, there is no B\\\"{u}chi state in $r[i_j, i_{j+1})$, which means $u \\notin L$.\n\nConversely, if an $\\omega$-word $u$ is not accepted by the given B\\\"{u}chi automaton, then, by Remark~\\ref{thm:ramsey}, there are classes $U$ and $V$ of the automaton congruence relation such that $u \\in UV^\\omega$, $UV \\subseteq U$, and $V^2 \\subseteq V$. Let $P$ be the set of states which can be reached by reading some word from $U$ from some initial state. Consider the graph with vertex set~$P$ and an edge between $q$ and $q'$ if, and only if, there is a run of the automaton on some word $v \\in V$ starting in $q$ and ending in $q'$. Let $\\sigma = \\langle P_0, \\dots, P_{k-1} \\rangle$ be a list of the SCC's of this graph in topological order. Then $V \\subseteq V_\\sigma$, which means $u \\in U_{P(\\sigma)} V_\\sigma^\\omega$. \n\nTo prove the claim about the size of the resulting $\\omega$-automaton, we describe how to construct a deterministic automaton of size $(k+1)^n$ for a language $V_\\sigma$ as above. The states are functions $Q \\to \\{-\\infty\\} \\cup [k]$. The transition function is defined in a way such that if by reading a finite word $v$ the automaton reaches state $f$, then the following holds for every $q \\in Q$. 
If in the B\\\"{u}chi automaton there is no run on $v$ starting in $P(\\sigma)$ and ending in $q$, then $f(q) = - \\infty$; if there are such runs, then $f(q)$ is the greatest index $i$ such that a run on $v$ starting in some state from $P_i$ ends in $q$.\\qed\n\nThe construction described above can be generalized so as to improve the construction of a B\\\"{u}chi automaton from a saturating congruence relation.\n\n\\section{Loop structure}\n\nAs the set of states visited infinitely often in a run of an $\\omega$-automaton determines whether the run is recurring, it is only natural to investigate the structure of the strongly connected subsets in a given $\\omega$-automaton.\n\nA loop\\index{loop!in an $\\omega$-automaton} at some state $q$ is a word $q_0 a_0 q_1 a_1 \\dots a_n q_{n+1}$ where $\\langle q_i, a_i, q_{i+1}\\rangle \\in \\Delta$ for every $i \\in [n]$ and $q_0 = q_{n+1} = q$. The word $a_0 \\dots a_n$ is the label of the loop, the set $\\{q_0, \\dots, q_n\\}$ is the loop set. The loop is positive if it satisfies the recurrence condition of the given automaton (for a B\\\"{u}chi condition, this means $\\{q_0, \\dots, q_n\\} \\cap B \\neq \\emptyset$), it is negative if it does not---we speak of the sign of the loop. In a deterministic automaton (forward or backward), $q$ and the label determine the loop.\n\nIn forward deterministic automata, the nesting depth of positive and negative loops sets is an interesting measure for the complexity of the language recognized, explained in Section~\\ref{sec:wagner}, whereas in backward deterministic automata, the distribution of the labels of positive loops is interesting, as explained in Section~\\ref{sec:backward-loops}. 
\n\nIn the following, we assume, without loss of generality, that in forward deterministic automata every state is reachable from the initial state.\n\n\\subsection{Alternating loops in forward deterministic automata}\n\\label{sec:wagner}\n\nA tower\\index{tower!in an $\\omega$-automaton} is a nonempty sequence $\\langle C_0, \\dots, C_{m-1}\\rangle$ of loop sets such that $C_0 \\supseteq \\dots \\supseteq C_{m-1}$ and the signs alternate; the sign of the last loop is the sign of the tower, and the number~$m$ is the height of the tower. A maximal tower is one of maximal height. A wall\\index{wall!in an $\\omega$-automaton} is a sequence of maximal towers where each one is reachable from the previous one and the signs alternate; the sign of the wall is the sign of the first tower, and the number of towers in the sequence is the length of the wall.\n\nThe types of towers and walls in a given forward deterministic $\\omega$-automaton are invariants of the language recognized:\n\n\\begin{theorem}[towers and walls \\cite{wagner-1977}]\n \\label{thm:wagner}\n All forward deterministic $\\omega$-automata recognizing the same language have the same types of towers and walls in the sense that if one of them has a tower of a certain height and sign or a wall of a certain length and sign, then so does the other. \n\\end{theorem}\n\nTo illustrate this theorem, we prove the claim for towers and start with a useful remark.\n\n\\begin{remark}\n \\label{thm:loops}\n Consider a forward deterministic automaton with~$n$ states over an alphabet~$A$. For $u \\in A^*$, $v \\in A^+$ and $k \\geq n$, some power of $v^k$ is the label of a loop at $\\delta^*(q_I, uv^k)$, and this loop is positive if, and only if, $uv^\\omega$ is accepted.
(As usual, $\\delta^*$ stands for the extended transition function, defined by $\\delta^*(q, \\epsilon) = q$ and $\\delta^*(q, ua) = \\delta(\\delta^*(q,u),a)$ for all $q \\in Q$, $u \\in A^*$, and $a \\in A$.)\n\\end{remark}\n\nFor the proof of the claim on towers, assume equivalent forward deterministic $\\omega$-automata $\\Aut A$ and $\\Aut A'$ are given and consider any tower $\\langle C_0, \\dots, C_{m-1}\\rangle$ in $\\Aut A$, say a positive one; the argument is symmetric for a negative one. Let $q$ be a state in $C_{m-1}$ and, for every $i < m$, let the word $v_i$ be a label for a loop at $q$ with loop set $C_i$. Further, let $u$ be a word such that $\\delta^*(q_I, u) = q$. Then $\\delta^*(q_I, uw) = q$ for every word $w \\in (v_0 + \\dots + v_{m-1})^*$. Moreover, whether a word $u v_{i_0} v_{i_1} \\dots$ is accepted is determined by the least index occurring infinitely often among $i_0, i_1, \\dots$. Let $k$ be greater than the number of states of $\\Aut A'$ and consider the words $w_i$ defined inductively by $w_0 = \\epsilon$ and $w_{i+1} = (v_{m-i-1} w_i)^k$, for $i < m$. Then, using Remark~\\ref{thm:loops}, we find that for each $i < m$, some power of $w_{i+1}$ is the label of a loop at $\\delta^*(q_I', u w_m)$ and has the same sign as $C_{m-i-1}$. So the reverse sequence of the loop sets forms a positive tower in~$\\Aut A'$ of height~$m$.\\qed\n\nThere is a strong relationship between towers and walls on one side and topological aspects of $\\omega$-languages on the other side; see, for instance, \\cite{perrin-pin-infinite-words-2004}.\n\n\\subsection{The parity index}\n\\label{sec:parity-index}\n\nFrom Theorem~\\ref{thm:wagner} it follows that, in particular, the greatest height of a tower in a forward deterministic automaton is characteristic for the language recognized. This number is intimately connected with the number of priorities needed by a forward deterministic parity automaton to recognize the same language.
To make this more precise, we say a parity automaton uses $n$ priorities if $n$ is the maximum of the number of priorities occurring in any strongly connected component of the automaton. Given a regular $\\omega$-language, the smallest number of priorities used in any forward deterministic parity automaton recognizing the language is its parity index.\\index{parity index!of an $\\omega$-language} \n\n\\begin{corollary}[parity index]\n The greatest height of a tower in a given forward deterministic $\\omega$-automa\\-ton is exactly the parity index of the language recognized.\n\\end{corollary}\n\nThat the parity index is at least the greatest height of a tower follows from Theorem~\\ref{thm:wagner}. For the converse, reconsider the construction from Section~\\ref{sec:lar} that turns a Muller automaton into an equivalent parity automaton. Essentially, the Muller automaton without recurrence condition is cascaded with the LAA (latest appearance automaton) and augmented by a parity condition. It is enough to adjust the latter as follows. A state $\\langle q, v\\$v'\\rangle$ is assigned the value $n - l + o$ where \n\\begin{inparaenum}[(i)]\n\\item the number $l$ is the greatest height of a tower ending in a loop with loop\n set $\\Occ{v'}$ and \n\\item the number $o < 2$ is chosen in a way such that $n - l + o$ is even if the loop is positive and odd otherwise.\\qed \n\\end{inparaenum}\n\nThe Rabin index of a regular $\\omega$-language \\cite{wagner-1977} is a similar but somewhat coarser measure.\\index{Rabin index!of an $\\omega$-language}\n\n\\subsection{Forward deterministic weak automata}\n\nIn terms of the above complexity measure---parity index---the simplest forward deterministic automata that can be considered are the ones with parity index~$1$; these automata are exactly the forward deterministic weak automata.
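Whether a given parity automaton uses only one priority in the structural sense of Section~\ref{sec:parity-index}---one priority per strongly connected component---can be checked mechanically: compute the SCCs of the transition graph and count the distinct priorities inside each. The following sketch does this; the states, edges, and priority map are an assumed example, not one from the text.

```python
# Sketch: the number of priorities a parity automaton uses is the maximum,
# over the strongly connected components of its transition graph, of the
# number of distinct priorities occurring in the component; the automaton
# uses one priority iff this maximum is 1. Assumed example below.

states = [0, 1, 2, 3]
edges = {0: [1], 1: [0, 2], 2: [3], 3: [2]}  # transition graph, letters ignored
pi = {0: 1, 1: 2, 2: 0, 3: 1}                # priority of each state

def sccs(states, edges):
    # Kosaraju's algorithm: order states by DFS finish time, then collect
    # components by exploring the reversed graph in reverse finish order.
    visited, order = set(), []
    def dfs(v):
        visited.add(v)
        for w in edges.get(v, []):
            if w not in visited:
                dfs(w)
        order.append(v)
    for v in states:
        if v not in visited:
            dfs(v)
    rev = {v: [] for v in states}
    for v, ws in edges.items():
        for w in ws:
            rev[w].append(v)
    comps, assigned = [], set()
    for v in reversed(order):
        if v in assigned:
            continue
        comp, stack = set(), [v]
        while stack:
            x = stack.pop()
            if x not in assigned:
                assigned.add(x)
                comp.add(x)
                stack.extend(rev[x])
        comps.append(comp)
    return comps

def priorities_used(states, edges, pi):
    return max(len({pi[q] for q in comp}) for comp in sccs(states, edges))

print(priorities_used(states, edges, pi))  # 2: the SCCs {0,1} and {2,3} each use two priorities
```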
\n\nOn one hand, weak automata are indeed weak in the sense that the class of languages recognized by them is small, for instance, $(01)^\\omega$ cannot even be recognized by such automata. In fact, there is a simple characterization of languages recognized by forward deterministic weak automata.\n\n\\begin{remark}\\cite{staiger-wagner-automatentheoretische-automatenfreie-charakterisierungen-1974}\n \\label{thm:weak-characterization}\n An $\\omega$-language can be recognized by a forward deterministic weak automaton over some alphabet $A$ if, and only if, it is a boolean combination of languages of the form $UA^\\omega$ where $U$ is a regular language of finite words.\n\\end{remark}\n\nOn the other hand, weak automata have some properties which general $\\omega$-automata are lacking. One interesting property is described in Theorem~\\ref{thm:initial-congruence}. Another property has to do with their determinization:\n\n\\index{conditional determinization!of an $\\omega$-automaton}\n\\begin{theorem}[conditional determinization \\cite{boigelot-jodogne-wolper-effective-decision-procedure-linear-arithmetic-over-integers-reals-2005}]\n \\label{thm:forward-deterministic-weak} \n If an $\\omega$-language is recognized by some forward deterministic weak automaton, then a variant of the breakpoint construction can be used to transform a forward nondeterministic weak automaton recognizing the language into an equivalent forward deterministic weak automaton.\n\\end{theorem}\n\n\\subsection{Loops in backward deterministic automata}\n\\label{sec:backward-loops}\n\nThe requirement that in a backward deterministic automaton there is exactly one recurring run for every $\\omega$-word over the given alphabet is a very strong one, which has interesting implications.\n\n\\begin{proposition}\n \\cite{carton-michel-unambiguous-buechi-automata-2003}\n An $\\omega$-automaton is backward deterministic if, and only if, its transition relation is backward deterministic and every nonempty 
finite word is the label of a positive loop at exactly one state.\n\\end{proposition}\n\nFor the proof, assume a backward deterministic automaton is given and let $u$ be a nonempty finite word. If it is the label of a positive loop at two distinct states $q$ and $q'$, then there are at least two recurring runs of the automaton on $u^\\omega$---a contradiction. So $u$ is the label of a positive loop at at most one state. Since there is a recurring run of the automaton on $u^\\omega$, there is some $k$ such that $u^k$ is the label of a loop at state $q$. If $\\delta(u,q) \\neq q$, then $u^k$ would also be the label of a loop at $\\delta(u,q)$ and there would be two recurring runs for $u^\\omega$---a contradiction. So $u$ is the label of a positive loop at at least one state. \n\nFor the converse, assume every nonempty finite word is the label of a positive loop at exactly one state. Then every periodic word over the given alphabet has a recurring run, and hence so does every ultimately periodic word over the same alphabet. In other words, the set of all words without recurring run is a regular $\\omega$-language without ultimately periodic words. From Remark~\\ref{thm:basic}(1), we can conclude this set is empty. This shows that for every $\\omega$-word there is at least one recurring run. By way of contradiction, assume there are two distinct recurring runs on a given $\\omega$-word~$u$, say $r$ and $r'$. Because of the backward deterministic transition relation there must be some $i$ such that $r(j) \\neq r'(j)$ for all $j \\geq i$.\nAs a consequence, there are positions $i < j$ such that \n\\begin{inparaenum}[(i)]\n\\item $r(i) = r(j)$, $r'(i) = r'(j)$, and $r(i) \\neq r'(i)$, and\n\\item the segments $r[i,j]$ and $r'[i,j]$ are positive loops.\n\\end{inparaenum}\nThen the word $u[i,j)$ is the label of a positive loop at each of the two distinct states $r(i)$ and $r'(i)$---a contradiction.\\qed\n\n\\section{Games}\n\nIn a two-player game of infinite duration, the two players, Zero and One, move a token along the edges of a game graph, each from the vertices assigned to her or him, and the winner of the resulting infinite play is determined by a winning condition. In general, a player needs memory to win: if the winning condition demands that Zero visits the vertices $v_1$ and $v_2$ alternately in a game graph with vertices $v_0$, $v_1$, and $v_2$ and edges back and forth between $v_0$ and $v_1$ as well as between $v_0$ and $v_2$ (figure omitted), where $v_0$ is her vertex, then she cannot base her decision what to do in vertex $v_0$ only on the fact that she is in that vertex.
(Of course, when she remembers where she moved previously, she can alternate and win.) In contrast to this, if the winning condition demands that Zero visits $v_2$ infinitely often, she only needs to follow the rule ``if in vertex $v_0$, go to vertex~$v_2$''---her decision what to do next is only based on the current vertex.\n\nA uniform positional winning strategy for Zero is a function $(W_0 \\cap V_0) \\to W_0$, where $W_0$ is Zero's winning region, such that no matter where in $W_0$ a play starts, if Zero moves as determined by the function, then the resulting play is winning for Zero. For One, the definition is symmetric.\n\n\\index{positional strategy!in a two-player game of infinite duration}\n\\begin{theorem}[positional strategies \\cite{emerson-jutla-tree-automata-mu-calculus-determinacy-1991}]\n \\label{thm:positional}\n In every parity game, both players have a uniform positional winning strategy.\n\\end{theorem}\n\n\\subsection{State- and transition-controlled alternating automata}\n\\label{sec:alternation}\n\nIn general, an alternating automaton is an automaton where acceptance depends on the full computation tree on a given word; more precisely, such an automaton provides means for specifying that a given word is accepted if, and only if, a certain subgraph of the full computation tree exists.
At one extreme, when this subgraph is required to be a rooted path, then the automaton is nothing else than a conventional automaton.\n\n\\index{alternating $\\omega$-automaton}\\index{$\\omega$-automaton!alternating}\nFor $\\omega$-automata, essentially two variants of alternating automata have been studied: in one variant, alternation is specified by partitioning the state space \\cite{miyano-hayashi-alternating-finite-automata-on-omega-words-2-1984}; in the other variant, alternation is specified by complex transition formulas \\cite{muller-schupp-alternating-automata-infinite-trees-1987}.\n\n\\index{state-controlled alternating $\\omega$-automaton}\\index{$\\omega$-automaton!alternating!state-controlled}\nIn the state-controlled variant the state space $Q$ is partitioned into a set $E$ of existential states and a set $U$ of universal states (where either set could be empty) and the set of initial states is either a subset of $E$ or of $U$. A run of the automaton on a word $u$ is a prefix-closed set $T \\subseteq Q^*$, which should be thought of as a tree satisfying the following properties for every vertex $vq \\in T$:\n\\begin{asparaitem}\n\\item if $q \\in E$, then there exists a state $q'$ such that $vqq' \\in T$ and $\\langle q, u(|v|), q'\\rangle \\in \\Delta$; \n\\item if $q \\in U$, then $vqq' \\in T$ for every $q'$ such that $\\langle q, u(|v|), q'\\rangle \\in \\Delta$.\n\\end{asparaitem}\nThe run is initial if either $Q_I \\subseteq E$ and $Q_I \\cap T \\neq \\emptyset$ or $Q_I \\subseteq U$ and $Q_I \\subseteq T$; it is recurring if every word $r \\in Q^\\omega$ whose finite prefixes all belong to $T$ (every infinite rooted path through~$T$) is recurring in the sense of the given transition condition. This means, in particular, if $E = Q$ and the set of initial states is existential, then the automaton can be viewed as an ordinary $\\omega$-automaton. 
It is said to be a universal automaton if $U = Q$.\\index{universal $\\omega$-automaton}\\index{$\\omega$-automaton!universal}\n\nFrom the closure under complementation of the class of $\\omega$-languages recognized by B\\\"{u}chi automata, one obtains immediately:\n\n\\begin{remark}\n \\label{thm:universal-co-buechi}\n Every regular $\\omega$-language is recognized by a universal co-B\\\"{u}chi automaton.\n\\end{remark}\n\n\\index{transition-controlled alternating $\\omega$-automaton}\\index{$\\omega$-automaton!alternating!transition-controlled}\nIn the transition-controlled variant, the transition relation is replaced by a transition function $\\delta \\colon Q \\times A \\to M(Q)$, where $M(Q)$ is the set of all expressions built from states, the connectives $\\vee$ (``or'') and $\\wedge$ (``and''), and the boolean constants $0$ (``false'') and $1$ (``true''). For instance, $q \\wedge (q' \\vee q'')$ could be a value of the transition function. The set of initial states is replaced by an expression from $M(Q)$. Again, a run is a prefix-closed set \\mbox{$T \\subseteq Q^*$}, but this time satisfying the following condition. For each vertex $vq \\in T$, the set $\\{q' \\mid vqq' \\in T\\}$ satisfies the expression $\\delta(q, u(|v|))$, where satisfaction is defined in the obvious way. A run is initial if $\\{q \\in Q \\mid q \\in T\\}$ satisfies the initial condition. 
Being recurring is defined as above.\n\n\\begin{remark}\n A state-controlled alternating $\\omega$-automaton can be viewed as a transition-controlled alternating $\\omega$-automaton.\n\\end{remark}\n\nMore precisely, for every existential state $q$ one sets $\\delta(q, a) = \\bigvee \\{q' \\mid \\langle q, a, q'\\rangle \\in \\Delta\\}$, and for every universal state $q$ one sets $\\delta(q, a) = \\bigwedge \\{q' \\mid \\langle q, a, q' \\rangle \\in \\Delta\\}$; the set of initial states is converted into an initial condition in the same way; the recurrence condition does not need to be changed.\n\n\\subsection{Alternating automata and games}\n\nGiven a state-controlled alternating automaton as above and an $\\omega$-word $u$ over the same alphabet, the question whether $u$ is accepted by the automaton can be viewed as the question whether Zero wins a certain game, the so-called automaton game for $u$. The vertices of this game are pairs of the form $\\langle q, u'\\rangle$, where $u'$ is a suffix of $u$; such a vertex belongs to Zero if, and only if, $q$ is existential; there is an edge from $\\langle q,u'\\rangle$ to $\\langle q', u''\\rangle$ if $\\langle q,u'(0), q'\\rangle \\in \\Delta$ and $u'' = u'(1)u'(2) \\dots$; the recurrence condition is adapted in the straightforward fashion, based on the state in the first component. \n\n\\begin{remark}\n \\label{thm:alternation-game}\n A state-controlled alternating $\\omega$-automaton with existential [universal] initial states accepts a word $u$ if, and only if, Zero wins the automaton game for $u$ in some [every] vertex in $Q_I \\times \\{u\\}$.\n\\end{remark}\n\nFrom Theorem~\\ref{thm:determinacy}, which states that the games that occur in this fashion are determined, one can derive that a word is not accepted if, and only if, One has a winning strategy. 
This is equivalent to saying that the dual automaton accepts the word, where dualizing an automaton has the obvious meaning:\\index{dual!of an alternating $\\omega$-automaton}\\index{$\\omega$-automaton!alternating!dual of} existential and universal states exchange their roles and the recurrence condition is replaced by its negation. In other words, complementation is a trivial problem for alternating automata.\n\n\\begin{proposition}[complementing alternating automata]\n \\label{thm:complementation-alternation}\n The dual of a state-controlled alternating $\\omega$-automaton recognizes the complement of the language recognized by the given automaton.\n\\end{proposition}\n\nRemark~\\ref{thm:alternation-game} and Proposition~\\ref{thm:complementation-alternation} hold true for transition-controlled alternating $\\omega$-au\\-tom\\-a\\-ta as well, but the definition of the automaton game and the dualization process need to be adapted. The vertices of the automaton games are of the form $\\langle \\phi, u'\\rangle$ where $\\phi$ is a subformula of some value of the transition function. In the dualization process, the values of the transition function and the initial condition are dualized.\n\n\\subsection{From alternating automata to nondeterministic ones}\n\nAlternating $\\omega$-automata can be exponentially more concise than ordinary ones, just as in the finite-word setting~\\cite{drusinsky-harel-on-the-power-of-bounded-concurrency-i-finite-automata-1994}, but with regard to expressive power there is no difference. 
This is a major application of complementing $\\omega$-automata.\n\n\\begin{theorem}[from alternating to nondeterministic \\cite{miyano-hayashi-alternating-finite-automata-on-omega-words-2-1984}]\n For every alternating $\\omega$-au\\-tom\\-a\\-ton there exists an equivalent nondeter\\-min\\-istic B\\\"{u}chi automaton.\n\\end{theorem}\n\nTo prove this, first observe that it is enough to consider alternating parity automata, because any Muller condition can be turned into a parity condition as described in the proof of Theorem~\\ref{thm:muller-to-parity}. \n\nBy Theorem~\\ref{thm:positional}, parity games have uniform positional winning strategies. It follows that if there is an accepting run (recall that runs are trees) of an alternating parity automaton on a given word, then there is also an accepting subgraph of the run DAG, where this is defined in the obvious way. Checking that in a subgraph of a run DAG \\emph{all} rooted paths are recurring can be done using an appropriate $\\omega$-automaton, as explained in what follows. \n\nConsider the nondeterministic parity automaton over the alphabet $\\wp(Q \\times Q)$, with state set $Q$, initial set $Q_I$, transition relation $\\{\\langle q, a, q'\\rangle \\mid \\langle q,q'\\rangle \\in a\\}$, and parity condition $\\pi + 1$. This automaton accepts a word $u$ if the DAG which is obtained by collating the letters of $u$ contains \\emph{some} initial rooted path starting in an initial state and not satisfying the parity condition of the original automaton. Any $\\omega$-automaton recognizing the complement of the language recognized by this automaton is one that can check the DAG's. 
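On a finite prefix, the runs of the automaton over the alphabet $\wp(Q \times Q)$ are exactly the paths through the DAG obtained by collating the letters. The following minimal sketch spells out this correspondence; the edge-set letters over $Q = \{0, 1\}$ are an assumed example.

```python
# Sketch: over the alphabet of edge sets (subsets of Q x Q), the automaton
# with transitions {(q, a, q2) | (q, q2) in a} simply follows edges, so its
# runs on a finite word are the paths through the DAG obtained by collating
# the word's letters. Assumed example letters over Q = {0, 1}:

word = [
    {(0, 0), (0, 1)},  # edges from layer 0 to layer 1
    {(1, 1)},          # edges from layer 1 to layer 2
    {(1, 0), (0, 1)},  # edges from layer 2 to layer 3
]

def runs(start, word):
    """All state sequences the edge-set automaton can follow from `start`."""
    paths = [[start]]
    for a in word:
        paths = [p + [q2] for p in paths for (q1, q2) in a if q1 == p[-1]]
    return paths

print(runs(0, word))  # [[0, 1, 1, 0]]: the single path through the DAG from 0
```

Checking that some such path violates the original parity condition is then an ordinary nondeterministic acceptance problem, which is why complementing this automaton yields a checker for all paths of the DAG.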
\n\nTo sum up, cascading \n\\begin{inparaenum}[(i)]\n\\item an automaton producing a subgraph of a run DAG of a given $\\omega$-word satisfying the transition relation and \n\\item the above automaton\n\\end{inparaenum}\nyields the desired automaton.\\qed\n\n\\subsection{Weak alternating automata}\n\\label{sec:weak-alternation}\n\n\\index{weak alternating $\\omega$-automaton}\\index{$\\omega$-automaton!weak alternating}\nRemark~\\ref{thm:weak-characterization} states that weak deterministic $\\omega$-automata only recognize fairly simple $\\omega$-languages. This is different for alternating automata:\n\n\\begin{theorem}[from alternating to weak alternating]\n \\cite{kupferman-vardi-weak-alternating-automata-are-not-that-weak-2001}\n For every alternating B\\\"{u}chi automaton with $n$ states there exists an equivalent weak alternating automaton with $2n^2$ states.\n\\end{theorem}\n\nBy dualization, it is enough to consider alternating co-B\\\"{u}chi automata. Theorem~\\ref{thm:positional} says that runs of alternating co-B\\\"{u}chi (and B\\\"{u}chi) automata can be thought of as run DAG's. The use of rank functions from Section~\\ref{sec:dags} leads to the following characterization of when an $\\omega$-word $u$ is accepted by a transition-controlled co-B\\\"{u}chi alternating automaton with $n$ states. There exists a tree $T \\subseteq (Q \\times [2n])^*$ satisfying the following conditions.\n\\begin{inparaenum}[(i)]\n\\item The set $\\{q \\in Q \\mid \\langle q,c \\rangle \\in T \\text{ for some $c< 2n$}\\}$ satisfies the initial condition.\n\\item Whenever $v \\langle q, c \\rangle \\in T$, then $\\{q' \\in Q \\mid v \\langle q, c \\rangle \\langle q', c'\\rangle \\in T \\text{ for some $c' < 2n$}\\}$ satisfies $\\delta(q,u(|v|))$. \n\\item There is no vertex $v \\langle q, 2j+1 \\rangle \\in T$ with $q \\in B$. 
\n\\item When $v \\langle q, c \\rangle \\langle q', c'\\rangle \\in T$, then $c \\geq c'$.\n\\item For every rooted path $\\langle q_0, c_0 \\rangle \\langle q_1, c_1 \\rangle \\dots$ there exists some $i$ such that $c_i, c_{i+1}, \\dots$ are all odd. \n\\end{inparaenum}\nThis can be used to construct a transition-controlled weak alternating automaton with state set $Q \\times [2n]$; the initial condition and the transition function are adapted from the given automaton in a straightforward fashion; the B\\\"{u}chi set consists of all states with an odd second component.\\qed\n\nIt should be noted that the breakpoint construction can be used to convert a weak alternating B\\\"{u}chi automaton into an equivalent nondeterministic one.\n\n\\subsection{Simulation relations and simulation games}\n\nOne way to compare automata with each other, more precisely, to compare their internal structure, is to use simulation relations, or, more generally, simulation games.\\index{simulation relation!for $\\omega$-automata}\\index{simulation game!for $\\omega$-automata}\n\nA simple approach is to say that a B\\\"{u}chi automaton $\\mathscr A'$ forwardly simulates a B\\\"{u}chi automaton $\\mathscr A$\\index{forward simulation!of $\\omega$-automata} if there is a relation $\\sigma \\subseteq Q \\times Q'$ such that the following three conditions are satisfied.\n\\begin{inparaenum}[(i)]\n\\item For every $q \\in Q_I$ there is some $q' \\in Q_I'$ such that $\\langle q , q' \\rangle \\in \\sigma$.\n\\item For all $\\langle q, q'\\rangle \\in \\sigma$ and $\\langle q, a, r\\rangle \\in \\Delta$ there is some $r' \\in Q'$ such that $\\langle r, r'\\rangle \\in \\sigma$ and $\\langle q', a, r'\\rangle \\in \\Delta'$.\n\\item For all $\\langle q, q'\\rangle \\in \\sigma$, if $q \\in B$, then $q' \\in B'$.\n\\end{inparaenum}\n\nThe important observations concerning this definition are:\\index{direct simulation!of $\\omega$-automata}\n\\begin{theorem}[direct simulation \n 
\\cite{dill-hu-wong-toi-checking-for-language-inclusion-using-simulation-preorder-1991}]\n \\label{thm:simulation}\n \\begin{compactenum}\n \\item If a B\\\"{u}chi automaton $\\mathscr A'$ simulates a B\\\"{u}chi automaton $\\mathscr A$, then the language recognized by $\\mathscr A$ is a subset of the language recognized by $\\mathscr A'$.\n \\item Whether a B\\\"{u}chi automaton $\\mathscr A'$ simulates a B\\\"{u}chi automaton $\\mathscr A$ can be determined in time linear in the product of the sizes of $\\mathscr A$ and $\\mathscr A'$.\n \\end{compactenum}\n\\end{theorem}\n\nAs a consequence, simulation relations can be used for efficient (but incomplete) inclusion tests.\n\nThe requirement that a B\\\"{u}chi state in the simulating automaton match a B\\\"{u}chi state in the simulated automaton right away is very strong. For inclusion to hold, it would be enough if a B\\\"{u}chi state in the simulated automaton is matched by a B\\\"{u}chi state in the simulating automaton at a later position. This is captured by the notion of delayed simulation,\\index{delayed simulation!of $\\omega$-automata} which is best phrased in terms of a certain two-player game, where one of the players is called Duplicator and tries to show that simulation is given, whereas the other is called Spoiler and tries to show that this is not the case. \n\nMore precisely, the game determines whether a state in a B\\\"{u}chi automaton $\\mathscr A'$ delayed simulates a state in a B\\\"{u}chi automaton $\\mathscr A$. When a play of the game starts, there is a pebble on each of the two states in question. In every round of the game, first Spoiler is required to move the pebble on $\\mathscr A$ over some transition and then Duplicator is required to move the other pebble (the pebble in $\\mathscr A'$) over some transition with the same label. If one of the players cannot move anymore, this player loses early. 
If an infinite play emerges, then Duplicator wins if, and only if, the following holds: whenever Spoiler visits a B\\\"{u}chi state in some round, Duplicator visits a B\\\"{u}chi state in the same or in a later round. The state in $\\mathscr A'$ delayed simulates the state in $\\mathscr A$ if Duplicator has a winning strategy in the game just described. The automaton $\\mathscr A'$ delayed simulates the automaton $\\mathscr A$ if every initial state of $\\mathscr A$ is simulated by some initial state of $\\mathscr A'$. Observe that the above game can be viewed as a game of infinite duration with a regular winning condition as described in Section~\\ref{sec:games}.\n\nTheorem~\\ref{thm:simulation} carries over to delayed simulation, only the complexity of computing delayed simulation is higher \\cite{etessami-schuller-wilke-fair-simulation-relations-parity-games-and-state-space-reduction-for-buechi-automata-2005}.\n\nFor purposes of state-space reduction, it is useful to study simulation in both directions: if one state [delayed] simulates another one and vice versa, the states are said to mutually [delayed] simulate each other. These relations are, indeed, equivalence relations and have a useful property:\n\n\\begin{theorem}[quotienting \\cite{etessami-schuller-wilke-fair-simulation-relations-parity-games-and-state-space-reduction-for-buechi-automata-2005}]\nIf, in a quotient of a B\\\"{u}chi automaton with regard to the mutual [delayed] simulation relation, initial and B\\\"{u}chi states are chosen appropriately, then the resulting automaton is equivalent to the given one.\n\\end{theorem}\n\nThis gives, in effect, two polynomial-time algorithms for reducing the state space of B\\\"{u}chi automata, one less efficient than the other, but producing smaller automata. 
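The largest direct simulation relation is a greatest fixed point and can be computed by starting from all pairs respecting condition (iii) and deleting pairs violating condition (ii) until stabilization. The following Python sketch uses a hypothetical dictionary representation of B\"{u}chi automata; this naive iteration is polynomial, not the linear-time procedure referred to in Theorem~\ref{thm:simulation}:

```python
def direct_simulation(A, B):
    """Largest sigma in Q x Q' such that <q, q2> in sigma implies:
    q Buechi => q2 Buechi (condition iii), and every a-transition of q
    is matched by an a-transition of q2 staying in sigma (condition ii)."""
    # Start with all pairs satisfying condition (iii).
    sigma = {(q, q2) for q in A["Q"] for q2 in B["Q"]
             if q not in A["B"] or q2 in B["B"]}
    changed = True
    while changed:          # delete violating pairs until a fixed point
        changed = False
        for q, q2 in list(sigma):
            for p, a, r in A["Delta"]:
                # Condition (ii): q2 must match every move of q.
                if p == q and not any(
                        (q2, a, r2) in B["Delta"] and (r, r2) in sigma
                        for r2 in B["Q"]):
                    sigma.discard((q, q2))
                    changed = True
                    break
    return sigma

# Toy example: B2 (one Buechi state, self-loop on 'a') simulates A2.
A2 = {"Q": {0, 1}, "B": {1}, "Delta": {(0, "a", 1), (1, "a", 1)}}
B2 = {"Q": {0}, "B": {0}, "Delta": {(0, "a", 0)}}
print(direct_simulation(A2, B2))  # contains (0, 0) and (1, 0)
```

Intersecting the result with its converse (computed by swapping the arguments) yields the mutual simulation relation used for quotienting.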
Finding and even approximating minimum-size B\\\"{u}chi automata is PSPACE-hard; in fact, this is independent of the type of the automaton, because results from finite-state automata on finite words~\\cite{gramlich-schnitger-minimizing-nfas-and-regular-expressions-2007} carry over in a straightforward fashion.\n\nIn principle, one could also work with bisimulation rather than mutual simulation, but this gives, in general, worse reductions. \n\nMuch effort has gone into finding coarser relations for state-space reductions, and there are various ways of approaching this: letting Duplicator match with more than just one pebble, relaxing the winning condition for Duplicator further, considering backward simulation, and so on.\n\n\\section{Applications in logic}\n\n$\\omega$-Automata were introduced in the late fifties in the context of mathematical logic, more precisely, B\\\"{u}chi automata first showed up in \\cite{buechi-decision-method-restricted-second-order-arithmetic-1962} (in disguise) and were used there as a tool for proving that theories of specific structures are decidable. From a modern point of view, B\\\"{u}chi showed that the structures are $\\omega$-automatic~\\cite{hodgson-decidabilite-par-automata-fini-1983}.\n\n\\subsection{\\texorpdfstring{$\\omega$}{\u03c9}-Automatic structures}\n\nAssume a first-order structure $\\mathfrak S$ consisting of a universe $U$ and a family $\\{R_i\\}_{i \\in I}$ of relations, say $R_i$ having arity $n_i$, is given; the question is whether the theory of this structure is decidable. A good example is the real numbers with the ternary relation ``addition'', the predicate ``is positive'', and the predicate ``is power of 2''. 
\n\nAn $\\omega$-automatic presentation\\index{$\\omega$-automatic}\\index{structure!$\\omega$-automatic}\\index{presentation!$\\omega$-automatic}\\index{$\\omega$-automatic presentation} of a structure $\\mathfrak S$ as above is given by an alphabet $A$, an $\\omega$-automaton $\\mathscr U$ over $A$, an $\\omega$-automaton $\\mathscr E$ over $A \\times A$, and, for each $i \\in I$, an $\\omega$-automaton $\\mathscr R_i$ over $A^{n_i}$.\n\n\\smallskip\\noindent\n\\textbf{Acknowledgment} \\ I am grateful to Christof, Olivier, Sebastian, and my master students for insightful comments, to Wolfgang for his constant support, and to Jean-\\'{E}ric for making me write this paper.\n\n\\bibliographystyle{abbrv}\n\\addcontentsline{toc}{section}{References}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nDynamics in concentrated protein systems are of fundamental interest in fields such as protein crystallization \\cite{Durbin1996}, phase separation \\cite{Anderson:2002aa}, the glass transition \\cite{PhysRevLett.99.118301} or diffusion in crowded environments \\cite{ELLIS2001597}, to name just a few. These systems display relatively slow and heterogeneous dynamics ranging from micro-seconds to seconds on length scales ranging from micrometers down to the single-particle nanometer scale. X-ray photon correlation spectroscopy (XPCS) is well suited to cover this length scale and time window employing coherent X-ray beams and tracing fluctuations in X-ray speckle patterns \\cite{sutton_coherence,Gruebel2008,Perakis2017,Madsen2018}. 
\nHowever, the highly intense X-ray beams of synchrotron storage rings are also the cause of considerable radiation damage to the samples. Atomic scale XPCS experiments use X-ray doses of MGy and beyond, which can lead to beam induced dynamics, even in hard condensed matter samples \\cite{Ruta2017}. Soft and biological matter samples are much more sensitive to radiation damage, requiring flowing samples \\cite{Fluerasu:ot5581,Vodnala2018} or scanning samples with optimized data taking strategies \\cite{Verwohlt2018}. Radiation damage of bio-molecules in solution is caused mainly by two effects: either via direct damage to the protein structure itself by photo-ionization or by indirect damage via radiolysis of the surrounding water molecules (see, e.g., \\cite{Garman:ba5150,Holton_A_2009}). In both cases, the damage becomes apparent by a characteristic change to the SAXS pattern indicating an increase of the radius of gyration, mostly due to aggregation. Typical critical X-ray doses for protein molecules in solution range from 7--10 kGy (BSA) to 0.3 kGy (RNase), after which a degradation of the SAXS patterns becomes visible \\cite{Jeffries_Limiting_2015}. These doses are easily reached within ms when using focused beams of modern synchrotron sources. While in protein crystallography cryogenic cooling helps to prevent the diffusion of radicals, such an approach is obviously impossible when studying the dynamics of proteins in solution. \n\nXPCS requires a coherent X-ray beam and the signal to noise ratio (SNR) in XPCS experiments ideally scales linearly with the source brilliance $B$ \\cite{Lumma2000}. The fastest accessible time scale then scales with $B^2$, promising four orders of magnitude faster temporal resolution at the upgraded sources of ESRF and PETRA IV \\cite{Einfeld:xe5006,Weckert:it5005,Schroer_2018}, which is one of the key drivers for XPCS at DLSR sources \\cite{Shpyrko_X_2014}. 
These arguments, however, only hold if radiation damage is not an issue. Thus, the question arises of how much XPCS experiments of biological\/radiation-sensitive samples could really benefit from the gain in coherence performance of DLSR rings. Here, we show that the combination of (i) larger coherence lengths, (ii) higher photon energy and (iii) the increased coherent photon flux indeed yields an increase in SNR of up to one order of magnitude when compared to standard XPCS setups at today's storage rings. We calculate explicitly, using the boundary conditions set by the maximum tolerable X-ray doses of a lysozyme solution, the XPCS speckle contrast, speckle intensities and maximum number of images per spot. We come to the conclusion that DLSR rings hold the promise to measure dynamics of biological samples at length scales of a single protein molecule. \\\\\n\n\n\\section{XPCS on protein solutions}\n\nXPCS experiments track fluctuations in X-ray speckle patterns yielding access to the intermediate scattering function $f(q,\\tau ) = S(q,\\tau)\/S(q)$ by correlating intensities per detector pixel \\cite{GRUBEL20043}.\nThe measured signal in such experiments is the normalized intensity autocorrelation function\n\\begin{equation}\ng_2(q,\\tau)=\\frac{ \\langle I_{pix}(q,t')I_{pix}(q,t'+\\tau) \\rangle }{\\langle I_{pix}(q,t') \\rangle ^2}=1+\\beta |f(q,\\tau)|^2,\\label{eq:g2}\n\\end{equation}\nwith $\\beta$ denoting the speckle contrast and $q = 4 \\pi \\sin (\\Theta\/2)\/\\lambda$ being the scattering vector, depending on the wavelength $\\lambda$ and the scattering angle $\\Theta$. 
The time delay between two consecutive time frames is denoted $\\tau$ and $\\left\\langle \\ldots \\right\\rangle$ is the ensemble average over all equivalent delay times $\\tau$ and pixels within a certain range of the absolute value $\\left|\\vec{q}\\right|$.\\\\\n\nThe scattering intensity per pixel from a protein solution is given by\n\\begin{equation}\nI_{pix}(q) = F_c \\cdot t_{fr} \\cdot T_{sample} \\cdot d \\cdot \\frac{\\mathrm{d}\\Sigma}{\\mathrm{d}\\Omega}(q) \\cdot \\Delta \\Omega_{pix},\\label{eq:I}\n\\end{equation}\nwith $F_c$ denoting the incident coherent flux (ph\/second), $t_{fr}$ the exposure time for one frame, $T_{sample}$ the sample's transmission and $\\Delta \\Omega_{pix} = (P \/ L)^2$ the solid angle covered by a single pixel, with $P$ being the pixel size and $L$ the sample-detector distance. In the following, we will set the thickness of the sample $d(E)$ to be equal to the absorption length of water $d(E) = 1\/\\mu(E)$ at each respective photon energy $E$, with the transmission following as $T_{sample} = \\exp(- \\mu d) \\approx 0.368$. \\\\\nThe differential scattering cross section per unit volume or absolute scattering intensity in 1\/m of a protein solution is defined as\n\\begin{equation}\n\\frac{\\mathrm{d}\\Sigma}{\\mathrm{d}\\Omega} (q) = C \\cdot M \\cdot \\bar{v}^2 \\cdot \\Delta \\rho^2 \\cdot P(q) \\cdot S_{eff}(q),\n\\end{equation}\nwith $P(q)$ the form and $S_{eff}(q)$ the effective structure factor and $C$ the protein concentration. We will calculate the SNR for lysozyme as a model protein with a molar mass of $M=14.3$ kDa and specific volume $\\bar{v} = 0.74$ cm${}^3$\/g. The scattering contrast $\\Delta \\rho$ follows from the chemical composition of lysozyme, showing almost no dependence on energy in the energy range of interest here. 
With this, the absolute scattering intensity can be expressed as \n\\begin{equation}\n\\frac{\\mathrm{d}\\Sigma}{\\mathrm{d}\\Omega} (q) = C \\cdot 1.02 \\mathrm{m^2 \\over kg} \\cdot P(q) \\cdot S_{eff}(q),\n\\end{equation}\nin good agreement with measured values of $(1.03 \\pm 0.06) \\mathrm{m^2 \\over kg}$ \\cite{}.\\\\\n\n\\begin{figure}\n\\centering\n\t\t\\includegraphics[width=1\\textwidth]{SAXS.png}\n\t\\caption{Form factor $P(q)$ (black line) and effective structure factor $P(q) \\cdot S_{eff}(q)$ (red dashed line) of a diluted and concentrated lysozyme solution, respectively. The inset shows the relaxation rate $\\Gamma(q)$ as a function of $q$ for both cases.}\n\t\\label{fig:BioSAXS}\n\\end{figure}\n\n The form and effective structure factor $P(q) \\cdot S_{eff}(q)$ are modeled following \\cite{Moller:2012aa} and displayed in Fig. \\ref{fig:BioSAXS} for a diluted ($10$ mg\/ml) and concentrated ($250$ mg\/ml) lysozyme solution. The $q$ values of interest are within $q = 0.5 \\ \\mathrm{nm}^{-1} - 1.5 \\ \\mathrm{nm}^{-1}$, which corresponds to length scales of $4-12$ nm.\\\\\n The dynamics of the low-concentration protein solution can be described as Brownian diffusion with a single exponential autocorrelation function\n \\begin{equation}\n g_2(q,t)-1 = \\beta \\exp(-2\\Gamma(q)t)\n \\end{equation}\nand relaxation rate\n \\begin{equation}\n \\Gamma = D_0 q^2\n \\end{equation}\nwhich is proportional to the Stokes-Einstein diffusion constant\n \\begin{equation}\n D_0 = \\frac{k_B T}{6 \\pi \\eta R_H},\n \\end{equation}\n where $T$, $\\eta$, $R_H$ and $k_B$ are the temperature, the viscosity of the suspending medium, the hydrodynamic radius of the protein and the Boltzmann constant, respectively. The $q$-dependence of the relaxation rate is plotted in the upper right inset of Fig. \\ref{fig:BioSAXS} for diluted and concentrated lysozyme solutions. For the diluted case, we take the viscosity of water and a hydrodynamic radius of $R_H = 1.9$ nm. 
In order to illustrate the expected timescales for XPCS experiments on concentrated protein solutions, we use an effective solution viscosity increased by a factor of 15 \\cite{Godfrin2015}. The time scales of interest here range from 100 $\\mu s$ to seconds.\\\\\nIn practice, XPCS correlation functions are averaged over many pixels in a narrow range of $q$ values. Typical regions of interest are sketched as colored areas in Fig. \\ref{fig:BioSAXS}. The same set of regions is additionally depicted in the lower left inset, showing the location of the corresponding pixels on an EIGER 4M detector for $E = 8$ keV and a sample to detector distance of $L=2$ m. In the following, we will always calculate the SNR at the maximum of the structure factor peak at $q=0.9 \\ \\mathrm{nm}^{-1}$.\\\\\n\n\\section{Signal to noise ratio}\n\nThe signal to noise ratio for the autocorrelation function $g_2(q,\\tau)$ depends on the average intensity per pixel $I_{pix}$, the contrast $\\beta$, the number of pixels $N_{pix}$, the number of frames $N_{fr}$ and the number of repetitions $N_{rep}$ via\n\\begin{equation}\nSNR = \\beta \\cdot I_{pix} \\cdot \\sqrt{N},\n\\end{equation}\nwith $N = N_{pix} \\cdot N_{fr} \\cdot N_{rep}$.\\\\\nConsidering $N_{fr} = T\/t_{fr}$ with $t_{fr}$ being the single frame exposure time and $T$ the total accumulated time for $N_{fr}$ frames yields in combination with Eq. \\ref{eq:I} $SNR \\propto F_c \\sqrt{t_{fr} \\cdot T}$. This scaling implies that an increase in coherent flux by one order of magnitude gives access to two orders of magnitude faster dynamics for the same SNR. However, this argument only holds when the sample is capable of handling the increased photon flux. If a critical dose $D_c$ exists, beyond which radiation induced damage starts to degrade the sample, the longest overall exposure time $T$ depends on $F_c$ and the increase of coherent flux might be less or not beneficial at all for studying radiation sensitive samples. 
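The dilute-limit numbers behind these timescales follow directly from the Stokes--Einstein relation; a minimal sketch (room temperature and the viscosity of water are our assumptions for the dilute case, the factor of 15 is the effective-viscosity increase quoted above):

```python
import math

K_B = 1.380649e-23               # Boltzmann constant, J/K

def stokes_einstein(T, eta, R_H):
    """Free diffusion constant D_0 = k_B T / (6 pi eta R_H), in m^2/s."""
    return K_B * T / (6.0 * math.pi * eta * R_H)

def relaxation_rate(D, q):
    """Gamma = D q^2 for Brownian diffusion, in 1/s."""
    return D * q**2

T = 293.0                        # K, assumed room temperature
eta_water = 1.0e-3               # Pa s, viscosity of water (dilute case)
R_H = 1.9e-9                     # m, hydrodynamic radius of lysozyme
q = 0.9e9                        # 1/m, structure-factor peak

D0 = stokes_einstein(T, eta_water, R_H)      # ~1.1e-10 m^2/s
D_conc = D0 / 15.0               # effective viscosity increased by 15
tau_dilute = 1.0 / relaxation_rate(D0, q)    # decorrelation time 1/Gamma
tau_conc = 1.0 / relaxation_rate(D_conc, q)
print(D0, tau_dilute, tau_conc)
```

The concentrated-solution decorrelation time is simply 15 times the dilute one, since only the effective viscosity changes in this simple model.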
\n\nThe dose per second delivered to the sample depends on the photon flux as well as the photon energy, which both also influence the achievable SNR. Here, we take all those parameters into account and calculate the benefit to the SNR from the increased coherent flux of DLSRs. We identify three parameters, which we will assume to be nearly free of choice over a wide range of values. These are the photon energy $E = h c \/ \\lambda$, the diameter $a$ of the X-ray beam spot size on the sample and the distance $L$ between sample and detector. In the following, we will establish the dependencies of the different contributions on the SNR, and determine the optimal set of $a$, $\\lambda$, and $L$ values for XPCS experiments using radiation sensitive samples.\\\\\n\n\n\n\\begin{figure}\n\\centering\n\t\t\\includegraphics[width=10cm]{Brilliance.png}\n\t\\caption{a) Brilliance, taken from \\cite{Schroer_2018}. b) Coherent flux calculated from a), assuming a bandwidth of $0.01\\%$, corresponding to a Si(111) monochromator with $\\Delta \\lambda \/ \\lambda = 10^{-4}$.}\n\t\\label{fig:bril}\n\\end{figure}\n\nFig. \\ref{fig:bril} a) shows the expected increase of brilliance as a function of photon energy for a U29 undulator at PETRA III and IV. Additionally, the case of a U18 with $5$ and $10$ m length will be investigated. The data shown is taken from \\cite{Schroer_2018}. From this, the coherent flux can be calculated as \n\\begin{equation}\nF_c[ph \/ s \/ 0.1\\%] = 10^{-8} Br[ph \/ s \/ 0.1\\% \/ mm^2 \/ mr^2] \\left(\\lambda[\\AA] \\over 2\\right)^2,\n\\end{equation}\nwhich is also depicted in Fig. \\ref{fig:bril} b). Using the given brilliance we calculate the coherent flux for $8$ keV at PETRA III as $3.8 \\cdot 10^{11}$ ph\/s. This is in good agreement with measured values of $2.3 \\cdot 10^{11}$ ph\/s, taking into account transmission effects of beamline components and optics. 
In the following, the actual coherent flux on the sample will be calculated by taking into account the same beamline transmission factor for all undulators.\n\n\\subsection{Limitations due to radiation damage}\nWe assume a critical dose $D_c$ beyond which radiation-induced damage starts to degrade the sample. This dose can be expressed as (Meisburger \\textit{et al.}, Biophys.\\ J.\\ \\textbf{104}, 227--236, 2013)\n\\begin{equation}\nD_c = \\frac{F_c E (1-T_{sample}) T}{d(E) a^2 \\rho},\n\\end{equation}\nwith $F_c$ the photon flux on the sample, the product of energy dependent sample thickness $d(E)$ and beam area $a^2$, the sample absorption $(1-T_{sample})$, photon energy $E$, exposure time $T$, and sample density $\\rho$. From this we derive the maximum number of frames which can be measured before radiation damage occurs to be\n\\begin{equation}\n N_{fr} = \\frac{d(E) a^2 \\rho D_c}{t_{fr} F_c E (1-T_{sample})}, \\label{N}\n\\end{equation}\nignoring the latency time of the detector and absorption within the sample container walls. The sample thickness $d(E)$ is always adapted to the energy dependent absorption length of water. One important conclusion from equation \\ref{N} is that the SNR scales via SNR $ \\propto F_c \\sqrt{N_{fr}} \\propto \\sqrt{F_c}$ for radiation sensitive samples. Moreover, with the scalings $d(E) \\propto E^3$ and $F_c \\propto Br(E) \/E^{2}$ we also find the peculiar relation of $N_{fr} \\propto E^4$ favoring higher photon energies if a large number of frames is required. \\\\\nWe illustrate this with the example of a typical spot size for XPCS experiments of $a = 4 \\ \\mathrm{\\mu m}$, an exposure time of a single frame of $t_{fr} = 1$ ms and a critical dose limit for a concentrated lysozyme solution of $D_c = 1$ kGy. 
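Eq.~\ref{N} can be evaluated directly for this example; a minimal sketch (the flux value is the PETRA III estimate at 8 keV quoted above, the 1 mm sample thickness is the absorption length of water at this energy; all variable names are ours):

```python
import math

E_PHOTON_J = 8.0e3 * 1.602176634e-19   # 8 keV converted to joules

def max_frames(d, a, rho, D_c, t_fr, F_c, E, T_sample):
    """Maximum number of frames before the critical dose is reached:
    N_fr = d a^2 rho D_c / (t_fr F_c E (1 - T_sample))."""
    return d * a**2 * rho * D_c / (t_fr * F_c * E * (1.0 - T_sample))

N_fr = max_frames(
    d=1.0e-3,                  # m, water absorption length at 8 keV
    a=4.0e-6,                  # m, beam spot size
    rho=1.0e3,                 # kg/m^3, density of the aqueous sample
    D_c=1.0e3,                 # J/kg, critical dose (1 kGy, lysozyme)
    t_fr=1.0e-3,               # s, exposure time per frame
    F_c=3.8e11,                # ph/s, coherent flux (PETRA III, 8 keV)
    E=E_PHOTON_J,
    T_sample=math.exp(-1.0),   # thickness equal to one absorption length
)
print(N_fr)  # below 2: the critical dose is reached before a second frame
```

At 8 keV this comes out well below two frames, consistent with the conclusion drawn from Fig.~\ref{fig:Nfr} that photon energies below about 10 keV cannot be used for correlation spectroscopy under these conditions.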
\n\\begin{figure}\n\\centering\n\t\t\\includegraphics[width=10cm]{Nfr.png}\n\t\\caption{Maximum number of frames which can be measured on one spot before the onset of radiation damage for a lysozyme solution with beamsize $a = 4 \\ \\mathrm{\\mu m}$ and exposure time per frame of 1 ms. The vertical black line depicts the threshold of at least two consecutive frames.}\n\t\\label{fig:Nfr}\n\\end{figure}\nFig. \\ref{fig:Nfr} displays the possible number of consecutive frames as a function of photon energy. A prerequisite for correlation spectroscopy is obviously that the number of consecutive frames is at least two (i.e. $N_{fr} \\geq 2$), indicated via filled symbols. Already with the coherent flux of PETRA III, the critical dose is exceeded after or during the first image, i.e., beam damage occurs between two images, for photon energies below $10$ keV. At these energies, an increase in coherent flux would therefore not be usable for XPCS experiments on protein samples. However, it can also be seen that $N_{fr}$ increases with photon energy due to the increasing absorption length of the X-rays. Effectively, the radiation dose is spread over a larger sample volume with increasing photon energy. However, many properties like the speckle size, the coherent flux as well as the longitudinal and transversal coherence lengths decrease with increasing photon energy. Therefore, the disadvantageous influence of these properties on the speckle contrast $\\beta$ and consequently on the SNR of XPCS experiments needs to be taken into account as well.\n\n\\subsection{Speckle contrast $\\beta$}\nThe speckle contrast depends on nearly all experimental parameters such as pixel size $P$, speckle size $S\\approx \\lambda L\/a$, beam size $a$, sample thickness $d$, wavevector transfer $q$, and the transverse and longitudinal coherence lengths. 
It can be written as a product, \n\\begin{equation}\n\\beta(a, d, q, \\lambda, L) = \\beta_{cl}(a,d,q,\\lambda) \\beta_{res}(a,L,\\lambda)\n\\end{equation}\nin which the first factor $\\beta_{cl}$ corresponds to the reduction of the contrast from unity due to the finite coherence lengths in transverse and longitudinal direction. The second factor $\\beta_{res}$ corresponds to a finite angular resolution of the experimental setup. This results in a reduction of contrast if the pixel size of the detector $P$ exceeds the size of the speckle $S$:\n\\begin{equation}\n\\beta_{res}(a,L,\\lambda) = \\left( {2 \\over w^2} \\int_0^w (w-v) \\left( \\frac{\\sin(v\/2)}{v\/2}\\right)^2 \\mathrm{d}v\\right)^2 \\label{eq:betares} \n\\end{equation}\nwith $w= 2 \\pi P a\/L\\lambda = 2 \\pi P\/S$. Fig. \\ref{fig:betares} displays the speckle contrast $\\beta_{res}$ as a function of beamsize $a$ for sample-detector distances of $L=5$ m and $L=100$ m, respectively, pixel size $P=75 \\ \\mu$m and photon energies of 8, 15 and 25 keV. The maximum $\\beta_{res}$ is obtained in a high resolution configuration with $S \\geq P$ and scales as $\\beta \\approx \\lambda^2 L^2 \/a^2P^2$ in the low resolution configuration, when $S \\ll P$. Therefore, XPCS experiments with large beamsizes require long sample-detector distances in order to resolve the smaller speckles. \n\\begin{figure}\n\\centering\n\t\t\\includegraphics[width=10cm]{beta_ps.png}\n\t\\caption{Speckle contrast $\\beta_{res}$ as a function of beam size $a$ on the sample according to eq. \\ref{eq:betares}. 
Calculated for photon energies of 8 keV (red), 15 keV (green) and 25 keV (blue) and for sample-detector distances L=100 m (top lines) and L=5 m (bottom lines), respectively.}\n\t\\label{fig:betares}\n\\end{figure}\n\nThe dependence of $\\beta_{cl}$ on beamsize $a$, sample thickness $d$, transverse coherence length $\\xi_h$, bandwidth ${\\Delta \\lambda \\over \\lambda}$ and $q$-value is taken into account via \\cite{sutton_coherence}\n\\begin{equation} \\label{eq:betacl}\n\\begin{split}\n\\beta_{cl}(a,d,q,\\lambda) & = \\frac{2}{(a \\cdot d)^2} \\int_0^{a} \\mathrm {d}x \\int_0^d \\mathrm{d}z (a-x)(d-z) \\exp(-x^2\/\\xi_h^2) \\\\\n & \\times \\left( \\exp(-2\\left|Ax+Bz\\right|)+\\exp(-2\\left|Ax-Bz\\right|) \\right), \n\\end{split}\n\\end{equation}\nwith $A = { \\Delta \\lambda \\over \\lambda} q \\sqrt{1- {1 \\over 4} q^2 \/ k^2}$ and $B = - {\\Delta \\lambda \\over 2 \\lambda}{q^2 \\over k}$. In the vertical direction we assume a completely coherent beam and in the horizontal direction, the coherence length is estimated as \n\\begin{equation}\n\\xi_h = {R \\cdot \\lambda \\over 2\\pi \\sigma} , \\label{eq:xih}\n\\end{equation}\nwith $R$ being the distance between source and beam defining aperture and $\\sigma$ the RMS source size. With $\\sigma_h = 36 \\ \\mu$m (P10, low-$\\beta$ source, 10 keV, $R = 90$ m), this results in a horizontal coherence length at $E=10$ keV of $\\xi_h = 49 \\ \\mu$m. A reduced horizontal source size at PETRA IV of $\\sigma_h = 12 \\ \\mu$m would result in an increased horizontal coherence length of $\\xi_h = 147 \\ \\mu$m at the same energy. These values reduce to $20 \\ \\mu$m and $59 \\ \\mu$m at an energy of $E=25$ keV, respectively. The full energy dependence of $\\xi_h$ is shown in Fig. \\ref{fig:betacl} a).\\\\\nUsing a partially coherent source like an undulator for coherent scattering experiments, cutting of the incident X-ray beam is required in order to obtain a nearly fully transversely coherent beam. 
Therefore, a beam defining aperture is set to an opening size equal to the transversal coherence length. Smaller beam sizes can be achieved with additional focussing elements. For our calculations, we will consider the resulting focussed beam as fully coherent with $\\xi_h$ equal to the beam size. For larger beamsizes, $\\xi_h$ is calculated following Eq. \\ref{eq:xih}.\\\\\nThe temporal or longitudinal coherence length can be calculated as\n\\begin{equation}\n\\xi_l = {\\lambda \\over 2} {\\lambda \\over \\Delta \\lambda},\n\\end{equation}\ndepending on the bandwidth of the used monochromator ($\\Delta \\lambda \/ \\lambda \\approx 1.4 \\cdot 10^{-4}$ for a Si(111) monochromator and $\\Delta \\lambda \/ \\lambda \\approx 3 \\cdot 10^{-5}$ for Si(311)). \\\\\nThe results for $\\beta_{cl}$ as a function of beam size and X-ray energy are shown in Fig. \\ref{fig:betacl} b) for a $q$-value of $q=0.9 $ nm$^{-1}$, corresponding to the peak of the structure factor shown in Fig. \\ref{fig:BioSAXS}.\n\\begin{figure}\n\\centering\n\t\t\\includegraphics[width=10cm]{beta_q.png}\n\t\\caption{a) Horizontal coherence length calculated from the source properties of PETRA III and PETRA IV as a function of photon energy. b) Speckle contrast $\\beta_{cl}$ as a function of beam size calculated according to eq. \\ref{eq:betacl} (photon energies 8 keV (red), 15 keV (green) and 25 keV (blue)). The dashed line corresponds to the horizontal coherence length at P10 PETRA III, the solid line represents the horizontal coherence length expected with PETRA IV. The $q$-value is $q=0.9$ nm$^{-1}$ and the sample thicknesses are $d$=1.0, 6.5 and 23 mm corresponding to the absorption length of water at the respective photon energies.}\n\t\\label{fig:betacl}\n\\end{figure}\nWe observe a reduction of speckle contrast with increasing beamsize and a reduced contrast for smaller beamsizes as a function of photon energy. 
Both reductions can be explained by the scattering volume, defined by spot size $a$ and sample thickness $d$, exceeding the coherence volume defined by the longitudinal and transversal coherence lengths. \n\n\\subsection{Number of pixels}\nChanging the photon energy and sample detector distance has direct implications for the number of pixels which can be covered within an area of a certain $q$-range. The scattering signal may be in a circular region of interest on the detector of width $\\Delta q$ and radius $q$. In the SAXS regime, $q = (4 \\pi\/ \\lambda) \\theta $ and $\\Delta q = (4 \\pi\/ \\lambda) \\Delta \\theta$, and the diffraction ring has a width on the detector of $\\Delta \\theta \\cdot L$ and a circumference of $2\\pi (2\\theta) L$. The number of illuminated pixels is thus\n\\begin{equation}\nN_{pix} = \\frac{q \\Delta q \\lambda^2 L^2}{4 \\pi P^2}.\n\\end{equation}\n\n\\section{XPCS of protein solutions}\n\nHaving established the dependence of the SNR on the experimental parameters we can use the expression\n\\begin{equation}\nSNR = \\beta(a, \\lambda, L) \\cdot I_{pix}(\\lambda, L) \\cdot \\sqrt{N_{fr}(a, \\lambda)\\cdot N_{pix}(\\lambda, L)}\n\\end{equation}\nto characterize the influence of the improved brilliance of the new generation of X-ray sources on XPCS experiments with radiation sensitive samples.\\\\\nIn Fig. \\ref{fig:StandXPCS}, we display the SNR for a standard XPCS setup. It was assumed that an EIGER 4M detector \\cite{Johnson2014} is used, with a sample detector distance of $L=5$ m, which corresponds at a photon energy of $E=8$ keV to the inset of Fig. \\ref{fig:BioSAXS}. In order to match the speckle size to the pixel size, an X-ray spot size of $a=4$ $\\mu$m is required, corresponding to the calculations shown in Fig. 
\\ref{fig:Nfr}.\\\\\nFurther parameters are:\n\\begin{table}\n\\caption{Parameters fixed for the calculations of the SNR}\n\\begin{tabular}{ll} \n$q $&$ 0.9\\ \\mathrm{nm}^{-1}$\\\\\n$\\Delta q $&$ 0.1\\ \\mathrm{nm}^{-1}$\\\\\n$C $&$ 250\\ \\mathrm{mg\/ml}$\\\\\n$P(q) \\cdot S(q) $&$ \\approx 0.3$\\\\\n$D_c $&$ 1,000\\ \\mathrm{J\/kg} = 1\\ \\mathrm{kGy}$\\\\\n$P $&$ 75\\ \\mathrm{\\mu m}$\\\\\n$t $&$ 1\\ \\mathrm{ms}$\\\\\n\\end{tabular}\n\\end{table}\n\n\n\\begin{figure}\n\\centering\n\t\t\\includegraphics[width=10cm]{SNR_standardXPCS.png}\n\t\\caption{a) Signal-to-noise ratio (SNR) calculated as a function of photon energy for a setup with ($a= 4 \\mu$m, $L=2$ m) and for different undulators. Open symbols correspond to experimental conditions which are not accessible due to beam damage effects. b) Speckle contrast of this setup as a function of photon energy.}\n\t\\label{fig:StandXPCS}\n\\end{figure}\n\nThe red data points correspond to the photon beam properties of PETRA III, the green, blue and cyan points to the improved coherent flux $F_c$ offered by PETRA IV with different undulators. As can be seen, the increased coherent flux theoretically offers SNR values improved by more than one order of magnitude. However, as marked with open symbols, the highest theoretically possible SNR of each configuration corresponds to experimental conditions where the critical dose limit of the sample is reached within two sequential acquisitions (i.e. $N_{fr} \\leq 2$). Therefore, the maximum increase in SNR cannot be reached in practice and the upgrade to PETRA IV would not lead to such a significant increase in SNR for this setup.\\\\\nData points which correspond to beam conditions where at least two sequential acquisitions are possible are displayed as filled symbols. It is evident that higher beam energies, together with thicker samples, would ease the effect of a higher flux and make XPCS experiments possible even with a standard configuration ($L=5$ m, $a=4 \\ \\mu$m). 
However, as displayed in Fig. \\ref{fig:StandXPCS} b), this also results in much reduced speckle contrasts and therefore the beneficial effect of an increased coherent flux on the SNR is largely lost due to the strongly reduced speckle contrast $\\beta$. \n\n\\subsection{Optimizing the experimental setup}\nIn order to use the increased coherent flux for XPCS experiments, one has to adapt the experimental setup in terms of focussing, photon energy and sample detector distance.\\\\\nTherefore, we repeat the previously presented calculations for a set of different beamsizes $a$ and sample-detector distances $L$. At each point in the $a-L$ plane, the SNR is calculated as a function of photon energy and the maximum is calculated. However, only values are considered which correspond to $N_{fr} \\geq 2$ at 1 ms exposure. The maximum SNR for each pair of $a$ and $L$ values is displayed in Fig. \\ref{fig:contour}.\\\\\nIt can be seen that the previously discussed setup with a small beam and large speckle (marked by a red dot) does not give the best SNR already for the case of PETRA III. With a sample-detector distance of $L=5.5$ m and an X-ray spot size of $a=9 \\ \\mu$m, the expected SNR increases by $25 \\%$.\\\\\n\\begin{figure}\n\\centering\n\t\t\\includegraphics[width=10cm]{contour.png}\n\t\\caption{The maximum value of SNR as a function of sample-detector distance $L$ and beamsize $a$ for DLSR. The highest SNRs are $SNR_{P3}=0.46$; $SNR_{P4}=1.7$; $SNR_{P4 U18}=2.7$; $SNR_{P4 U18-10m}=3.9$ }\n\t\\label{fig:contour}\n\\end{figure}\nHowever, in the case of PETRA IV (U18-10m), an overall increase in SNR by about one order of magnitude can be achieved, without exceeding the critical radiation dose of the sample. 
This setup would feature a sample detector distance of $L=26 \\ \\mathrm{m}$ and a spot size of $a=24 \\ \\mathrm{\\mu m}$ at $E=14.7$ keV.\\\\\n\n\\begin{table}\n\\caption{Parameters of the optimized setup for $N_{fr}=2$ using a Si(111) monochromator \\label{tab2}}\n\\begin{tabular}{lllll} \nparameters &U29 (PIII) & U29 & U18 5m & U18 10m \\\\\n\\hline\nsignal to noise ratio SNR & $0.4$ &$1.7$& $2.6$ &$3.7$ \\\\\n\\hline \nbeam size $a$ \/ $\\mu$m & $7.5$ &$13.3$ & $17.7$ &$23.7$ \\\\\n\\hline\nsample detector distance $L$ \/ m & $3.8$ &$10.0$ & $14.7$ &$21.5$ \\\\\n\\hline \nbeam energy $E$ \/ keV & $8.1$ &$12.2$& $13.6$ &$14.7$ \\\\\n\\hline\ncoherent flux $F_c$ \/ ph\/s & $2.1 \\cdot 10^{11}$ & $1.5 \\cdot 10^{12}$ &$3.4 \\cdot 10^{12}$ &$7.2 \\cdot 10^{12}$ \\\\\n\\hline\ncontrast $\\beta$ & $0.20$ & $0.12$ & $0.10$ & $0.08$ \\\\\n\\hline\nspeckle size $S$ \/ $\\mu$m & $78$& $76$ & $75$ &$76$ \\\\\n\\hline\nintensity per pixel $I_{pix}$ \/ ph\/ms& $2.3 \\cdot 10^{-3}$& $8.8 \\cdot 10^{-3}$ & $1.2 \\cdot 10^{-2}$ & $1.5 \\cdot 10^{-2}$ \\\\\n\\hline\nnumber of pixels in q-range $N_{pix}$ & 0.4 M& 1.3 M& 2.2 M& 4.2 M \\\\\n\\hline\nnumber of frames $N_{fr}$ &2 & 2& 2 & 2 \\\\\n\\hline\nsample thickness $d$ \/ mm &$1.0$& $3.6$ & $4.9$ & $6.3$\\\\\n\\hline\nexposed sample volume \/ nL & $0.05$& $0.6$& $1.6$ & $3.5$ \\\\\n\\end{tabular}\n\\end{table}\n\nThe resulting parameters for the optimized experimental setups are summarized in Tab. \\ref{tab2} for each of the considered undulators. We note that for higher coherent flux, the optimized setups feature an increase of beam size $a$, sample detector distance $L$ and photon energy $E$. \\\\\n\nAs a general trend, it is evident that the sample volume, spanned by the sample thickness $d$ and spot size $a$, needs to be increased when the coherent flux increases. 
In order to compensate for the consequently decreasing angular speckle size, the sample detector distance needs to increase so that the speckle size can maintain its value of $S \\approx 75$ $\\mu$m. However, it can be seen that one still observes a decrease in speckle contrast $\\beta$, even though the speckles have the same size on the detector for all four presented setups. This effect is due to the second contribution to the speckle contrast $\\beta_{cl}$, see Eq. \\ref{eq:betacl}, originating from the limited longitudinal coherence length of the X-ray beam.\n\n\\subsection{Si(311) monochromator}\nHere, we investigate how an additional increase of the longitudinal coherence length by using a Si(311) monochromator benefits the achievable SNR. We repeat the calculations with a reduced bandwidth of $3 \\cdot 10^{-5}$ and a flux reduced by $74 \\ \\%$ compared to the Si(111) calculations. The resulting SNRs are displayed in Fig. \\ref{fig:contour_311} and Tab. \\ref{tab3}. We find that the use of a Si(311) monochromator improves the SNR by an additional $30 \\%$ compared to Si(111), thus leading to an overall SNR gain of a factor of 13 when comparing PETRA III with PETRA IV.\n\n\n\\begin{figure}\n\\centering\n\t\t\\includegraphics[width=10cm]{contour_311.png}\n\t\\caption{Best combination of sample-detector distance $L$ and beamsize $a$ for DLSR using a Si(311) monochromator.}\n\t\\label{fig:contour_311}\n\\end{figure}\n\n\n\\begin{table}\n\\caption{Optimized setup using a Si(311) monochromator \\label{tab3}}\n\\begin{tabular}{lllll} \nparameters &U29 (PIII) & U29 & U18 5m & U18 10m \\\\\n\\hline\nsignal to noise ratio SNR & $0.37$ &$1.8$& $3.0$ &$4.9$ \\\\\n\\hline \nbeam size $a$ \/ $\\mu$m & $7.5$ &$10.0$ & $13.3$ &$17.8$ \\\\\n\\hline\nsample detector distance $L$ \/ m & $5.6$ &$8.3$ & $10.0$ &$14.7$ \\\\\n\\hline \nbeam energy $E$ \/ keV & $12.5$ &$14.1$& $12.5$ &$13.8$ \\\\\n\\hline\ncoherent flux $F_c$ \/ ph\/s & $1.5 \\cdot 10^{10}$ & $2.9 \\cdot 10^{11}$ &$1.1 \\cdot 
10^{12}$ &$2.3 \\cdot 10^{12}$ \\\\\n\\hline\ncontrast $\\beta$ & $0.23$ & $0.20$ & $0.23$ & $0.20$ \\\\\n\\hline\nspeckle size $S$ \/ $\\mu$m & $74$& $73$ & $74$ &$74$ \\\\\n\\hline\nintensity per pixel $I_{pix}$ \/ ph\/ms& $3.0 \\cdot 10^{-3}$& $3.9 \\cdot 10^{-3}$ & $6.9 \\cdot 10^{-3}$ & $9.1 \\cdot 10^{-3}$ \\\\\n\\hline\nnumber of pixels in q-range $N_{pix}$ & 0.06 M& 0.7 M& 1.3 M& 2.2 M \\\\\n\\hline\nnumber of frames $N_{fr}$ &69 & 8& 3 & 3 \\\\\n\\hline\nsample thickness $d$ \/ mm &$3.8$& $5.5$ & $3.8$ & $5.2$\\\\\n\\hline\nexposed sample volume \/ nL & $0.21$& $0.55$& $0.7$ & $1.6$ \\\\\n\\end{tabular}\n\\end{table}\n\n\n\n\n\\subsection{Multiple-frame XPCS and two-time correlation functions}\nIt becomes evident that with SNR values of $3-5$, XPCS from protein solutions is indeed possible at DLSRs with adapted experimental setups. As a direct consequence of the presented results, the optimized data acquisition scheme differs from conventional XPCS measurements. Instead of taking many hundreds to thousands of images at one spot, the scheme with maximum SNR for protein XPCS rather consists of ``double-shot'' exposures. This would not give a full correlation function from one spot on the sample, but rather one data point of $g_2$ for each illuminated sample spot. In consequence, the correlation function would be constructed from many such double-shot exposures, each of which can be taken on a new sample spot and with a different delay time $\\tau$ between the two frames (see e.g. \\cite{Verwohlt2018}). The required sample volume therefore scales with the desired number of data points of $g_2$.\\\\\nHowever, this acquisition scheme is not suitable for samples displaying heterogeneous dynamics or aging effects. In such cases, a movie-mode acquisition scheme with more than two frames per spot is needed. Fig. 
\\ref{fig:contour_tt} displays the resulting SNR values in the $a-L$ plane for $N_{fr}=2, 5, 25, 100$ for the case of PETRA IV U18 10m.\nWe find that with an increasing number of images $N_{fr}$, the maximum SNR decreases and its position in the $a-L$ plane shifts towards larger beamsizes $a$ and larger sample-detector distances $L$. For realizing the higher number of frames, an increase of the photon energy and of the beamsize is required (from Eq. 11 we find the scaling $a \\propto \\sqrt{N_{fr}}$). The resulting degradation of speckle contrast is partially counterbalanced by improving the angular resolution via a larger sample-detector distance. For example, for $N_{fr}=100$ frames the optimum SNR is 3.2 at $a=75\\ \\mu$m, $L=82$ m and $E=15.4$ keV. Generally speaking, we find at the maximum of the SNR a scaling of $L\\propto a \\propto \\sqrt{N_{fr}B(E)}$.\n\n\\begin{figure}\n\\centering\n\t\t\\includegraphics[width=10cm]{contour_tt.png}\n\t\\caption{The maximum value of the SNR as a function of sample-detector distance $L$ and beamsize $a$ for PETRA IV (U18 10m), for $N_{fr}=2, 5, 25, 100$ frames taken on the same sample spot (concentrated lysozyme solution, $q=1$ nm$^{-1}$ and critical dose of 1 kGy). The white color indicates $a-L$ combinations in which the required number of frames cannot be realized with the photon flux.}\n\t\\label{fig:contour_tt}\n\\end{figure}\n\nIn reality, it might become difficult to realize a beamline with up to $100$ m sample-detector distance, which consequently would also require a detector with a very large number of pixels. However, it can be seen from Fig. \\ref{fig:contour_tt} that also at shorter sample-detector distances $L$, the SNR is still significantly larger than 1 for up to $N_{fr}=100$. 
Therefore, we investigate how the SNR can be optimized if the length of the beamline is limited to a fixed value of $L$ while at the same time a certain number of frames $N_{fr}$ is required to track the physics of the protein solution.\n\nWe demonstrate this by fixing the sample-detector distance at $L=30$ m and using a Si(311) monochromator. We plot both the SNR and the maximum number of frames possible as a function of beamsize $a$ (Fig. \\ref{fig:multiframe}, left) for photon energies of $E=13.1$ keV (solid), $E=14.9$ keV (dashed) and $E=17$ keV (dash-dotted), respectively. Fig. \\ref{fig:multiframe} (right) displays the SNR as a function of $N_{fr}$ for the different photon energies. The benefit of using photon energies slightly higher than 13 keV is obvious, as it allows one either to increase the SNR at fixed $N_{fr}$ or to record more images at a fixed value of the SNR.\nWe find that with the source parameters of PETRA IV (U18 10m) the resulting values of the SNR are on the order of single digits. Specifically, we may take the example of $N_{fr}=100$ and find an SNR value of 2.5. Thus, with 100 repeats (i.e. $N_{rep}=100$) we could obtain an SNR of 25 for the averaged correlation function. \n\n\n\n\\begin{figure}\n\\centering\n\t\t\\includegraphics[width=10cm]{multiframe.png}\n\t\\caption{Left: SNR (green lines) as a function of beamsize $a$ for photon energies of 13.1, 14.9 and 17.0 keV (solid, dashed, dash-dotted lines). Red lines indicate the maximum number of possible frames. Right: SNR as a function of the maximum number of frames, displayed for the same photon energies.}\n\t\\label{fig:multiframe}\n\\end{figure}\n\n\\section{Conclusion}\nWe determined the signal-to-noise ratio (SNR) for XPCS experiments of a concentrated lysozyme solution at length scales of the hydrodynamic radius of a single protein molecule. The results show that the SNR values can increase by up to one order of magnitude at future upgraded storage rings when compared to existing facilities. 
With this, the required measuring time would reduce by two orders of magnitude making dynamic studies of protein solutions at nanometer lengthscales feasible. However, in order to take full advantage of the properties of the future sources, XPCS experiments require adapted experimental setups with larger beamsizes and longer sample-detector distances than usually available at standard XPCS beamlines. \n\n\\section{Acknowledgements}\nThe authors would like to thank the PETRA IV project team, and especially C.\nSchroer, M. Tischer, and S. Klumpp for useful discussions and support. C.G. acknowledges funding by BMBF via project 05K19PS1.\n\n \n \n \n\n\n \n \n \n\n \n\n\n \n \n\n\n\n\n\\printbibliography\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe recent success in deep learning is attributed to the availability of large labeled data~\\cite{krizhevsky2012imagenet,zhou2018brief} and the assumption of i.i.d. between training and test datasets. Such assumptions could be violated when test data features a drastic difference from the training data, e.g. training on synthetic images and test on real ones, and this is often referred to as domain shift~\\cite{quinonero2008dataset,ben2010theory}. To tackle this issue, domain adaptation~(DA)~\\cite{wang2018deep} emerges and the labeled training data and unlabeled testing data are often referred to as source and target data\/domains respectively.\n\nThe existing DA works either require the access to both source and target domain data during training~\\cite{ganin2015unsupervised} or training on multiple domains simultaneously~\\cite{zhou2021domain}. The former approach renders the methods restrictive to limited scenarios where source domain data is always available during adaptation while the latter ones are computationally more expensive. 
\nTo alleviate the reliance on source domain data, which may be inaccessible due to privacy issues or storage overhead, source-free domain adaptation (SFDA) has emerged, which handles DA on target data without access to source data~\\cite{pmlr-v119-liang20a,kundu2020universal,yang2021generalized,xia2021adaptive,liu2021ttt++}. SFDA is often achieved through self-training~\\cite{pmlr-v119-liang20a}, self-supervised learning~\\cite{liu2021ttt++} or introducing prior knowledge~\\cite{pmlr-v119-liang20a}, and it requires multiple training epochs on the full target data to allow convergence. Despite easing the dependence on source data, SFDA has major drawbacks in a more realistic domain adaptation scenario where test data arrives in a stream and inference or prediction must be made instantly; this setting is often referred to as test-time training (TTT) or adaptation (TTA)~\\cite{sun2020test,wang2020tent,iwasawa2021test,liu2021ttt++}. Despite the attractive feature of adaptation at test time, we notice confusion about what defines test-time training and, as a result, apples-to-oranges comparisons happen frequently in the community. In this work, we first categorize TTT by two key factors after summarizing various definitions made in existing works. First, under a realistic TTT setting, test samples are sequentially streamed and the prediction must be made instantly upon the arrival of a new test sample. More specifically, the prediction of test sample $X_T$, arriving at time stamp $T$, should not be affected by any subsequent samples, $\\{X_t\\}_{t=T+1\\cdots\\infty}$. Throughout this work, we refer to the sequential streaming as the \\textbf{one-pass adaptation} protocol, and any other protocols violating this assumption are called \\textbf{multi-pass adaptation} (the model may be updated on all test data for multiple epochs before inference). Second, we notice that some recent works must \\textbf{modify the source domain training loss}, e.g. 
by introducing an additional self-supervised branch, to allow more effective TTT~\\cite{sun2020test,liu2021ttt++}. This introduces additional overhead in the deployment of TTT because re-training on some source dataset, e.g. ImageNet, is computationally expensive. \nIn this work, we aim to tackle the most realistic and challenging TTT protocol, i.e. one-pass test-time training with no modifications to the training objective. This setting is similar to the TTA proposed in~\\cite{wang2020tent}, except for not restricting access to light-weight information from the source domain. Given that the objective of TTT is efficient adaptation at test time, this assumption is computationally efficient and improves TTT performance substantially. We name this new TTT protocol \\textbf{sequential test-time training (sTTT)}. \n\nWe propose two techniques to enable efficient and accurate sTTT. i) We are inspired by the recent progress in unsupervised domain adaptation~\\cite{tang2020unsupervised} that encourages test samples to form clusters in the feature space. However, separately learning to cluster in the target domain without regularization from the source domain does not guarantee improved adaptation~\\cite{tang2020unsupervised}. \nTo overcome this challenge, we identify clusters in both the source and target domains through a mixture of Gaussians, with each component Gaussian corresponding to one category. Provided with the category-wise statistics from the source domain as anchors, we match the target domain clusters to the anchors by minimizing the KL-Divergence as the training objective for sTTT. Therefore, we name the proposed method \\textit{test-time anchored clustering (TTAC)}. Since test samples are sequentially streamed, we develop an exponential moving averaging strategy to update the target domain cluster statistics to allow gradient-based optimization. 
ii) Each component Gaussian in the target domain is updated by the test sample features that are assigned to the corresponding category. Thus, incorrect assignments (pseudo labels) will harm the estimation of component Gaussian. To tackle this issue, we are inspired by the correlation between network's stability and confidence and pseudo label accuracy~\\cite{lee2013pseudo,sohn2020fixmatch}, and propose to filter out potential incorrect pseudo labels. Component Gaussians are then updated by the samples that have passed the filtering. To exploit the filtered out samples, we incorporate a global feature alignment~\\cite{liu2021ttt++} objective. \nWe also demonstrate TTAC is compatible with existing TTT techniques, e.g. contrastive learning branch~\\cite{liu2021ttt++}, if source training loss is allowed to be modified. The contributions of this work are summarized as below.\n\n\n\n\n\\begin{itemize}\n\\item In light of the confusions within TTT works, we provide a categorization of TTT protocols by two key factors. Comparison of TTT methods is now fair within each category.\n\\item We adopt a realistic TTT setting, namely sTTT. To improve test-time feature learning, we propose TTAC by matching the statistics of the target clusters to the source ones. The target statistics are updated through moving averaging with filtered pseudo labels. \n\\item The proposed method is complementary to existing TTT method and is demonstrated on five TTT datasets, achieving the state-of-the-art performance under all categories of TTT protocols.\n\\end{itemize}\n\n\n\\section{Related Work}\n\n\\noindent\\textbf{Unsupervised Domain Adaptation}. \nDomain adaptation aims to improve model generalization when source and target data are not drawn i.i.d. When target data are unlabeled, UDA~\\cite{ganin2015unsupervised,tzeng2014deep} learns domain invariant feature representations on both source and target domains to improve generalization. 
Follow-up works improve DA by minimizing a divergence~\\cite{gretton2012kernel,sun2016deep,zellinger2017central}, adversarial training~\\cite{hoffman2018cycada} or discovering cluster structures in the target data~\\cite{tang2020unsupervised}. Apart from formulating DA within a task-specific model, re-weighting has been adopted for domain adaptation by selectively up-weighting conducive samples in the source domain~\\cite{jiang2007instance, yan2017mind}. Despite the efforts in UDA, it is inevitable to access the source domain data which may be not accessible due to privacy issues, storage overhead, etc. Deploying DA in more realistic scenarios has inspired research into source-free domain adaptation and test-time training\/adaptation.\n\n\\noindent\\textbf{Source-Free Domain Adaptation}. \nWithout the access to source data, source-free domain adaptation (SFDA) develops domain adaptation through self-training~\\cite{pmlr-v119-liang20a, kundu2020universal, iwasawa2021test}, self-supervised training~\\cite{liu2021ttt++} or clustering in the target domain~\\cite{yang2021generalized}. It has been demonstrated that SFDA performs well on seminal domain adaptation datasets even compared against UDA methods~\\cite{tang2020unsupervised}. Nevertheless, SFDA still requires access to all testing data and training must be carried out iteratively on the testing data. In a more realistic DA scenario where inference and adaptation must be implemented simultaneously, SFDA will no longer be effective. Moreover, some statistical information on the source domain does not pose privacy issues and can be exploited to further improve adaptation on target data.\n\n\\noindent\\textbf{Test-Time Training}. \nCollecting enough samples from target domain and adapt models in an offline manner restricts the application to adapting to static target domain. 
To allow fast and online adaptation, test-time training (TTT)~\\cite{sun2020test,wang2022continual} or adaptation (TTA)~\\cite{wang2020tent} has emerged. Despite many recent works claiming to be test-time training, we notice a severe confusion over the definition of TTT, in particular over whether the training objective must be modified~\\cite{sun2020test,liu2021ttt++} and whether sequential inference on target domain data is possible~\\cite{wang2020tent,iwasawa2021test}. Therefore, to reflect the key challenges in TTT, we define a setting called sequential test-time training (sTTT) which neither modifies the training objective nor violates sequential inference. Under this clearer definition, some existing works, e.g. TTT~\\cite{sun2020test} and TTT++~\\cite{liu2021ttt++}, are more appropriately categorized as SFDA. Several existing works~\\cite{wang2020tent,iwasawa2021test} can be adapted to the sTTT protocol. Tent~\\cite{wang2020tent} proposed to adjust the affine parameters in the batchnorm layers to adapt to target domain data. The high TTA efficiency inevitably leads to limited performance gain on the target domain. T3A~\\cite{iwasawa2021test} further proposed to update the classifier prototypes through pseudo labeling. Despite being efficient, updating the classifier prototypes alone does not affect the feature representation for the target domain. Target features may not form clusters at all when the distribution mismatch between source and target is large enough. In this work, we propose to simultaneously cluster on the target domain and match target clusters to source domain classes, namely anchored clustering. To further constrain the feature update, we introduce an additional global feature alignment and pseudo label filtering. 
Through the introduced anchored clustering, we achieve test-time training of more network parameters and achieve state-of-the-art performance.\n\n\n\\section{Methodology}\n\nIn this section, we first introduce the anchored clustering objective for test-time training through pseudo labeling and then describe an efficient iterative updating strategy. An overview of the proposed pipeline is illustrated in Fig.~\\ref{fig:overview}.\n\n\\begin{figure}[!htb]\n \\centering\n \\includegraphics[width=0.99\\linewidth]{.\/Figure\/Overview_v1.pdf}\n \\caption{Overview of the TTAC pipeline. i) In the source domain, we calculate category-wise and global statistics as anchors. ii) In the testing stage, samples are sequentially streamed and pushed into a fixed-length queue. Clusters in the target domain are identified through anchored clustering with pseudo label filtering. Target clusters are then matched to the anchors in the source domain to achieve test-time training.}\n \\label{fig:overview}\n\\end{figure}\n\n\\subsection{Anchored Clustering for Test-Time Training}\n\n\nDiscovering cluster structures in the target domain has been demonstrated to be effective for unsupervised domain adaptation~\\cite{tang2020unsupervised}, and we develop an anchored clustering on the test data alone.\nWe first use a mixture of Gaussians to model the clusters in the target domain, where each component Gaussian represents one discovered cluster. \nWe further use the distributions of each category in the source domain as anchors for the target distribution to match against. In this way, test data features can simultaneously form clusters and the clusters are associated with source domain categories, resulting in improved generalization to the target domain. 
Formally, we first write the mixture of Gaussians in the source and target domains as $ p_s(x)=\\sum_k \\alpha_k \\mathcal{N}(\\mu_{sk},\\Sigma_{sk}),\\quad p_t(x)=\\sum_k \\beta_k \\mathcal{N}(\\mu_{tk},\\Sigma_{tk})$, where $\\{\\mu_k\\in\\mathbb{R}^d,\\Sigma_k\\in\\mathbb{R}^{d\\times d}\\}$ represent one cluster in the source\/target domain and $d$ is the dimension of the feature embedding.\n\nAnchored clustering can be achieved by matching the above two distributions, and one may directly minimize the KL-Divergence between the two distributions.\nNevertheless, this is non-trivial because the KL-Divergence between two mixtures of Gaussians has no closed-form solution, which prohibits efficient gradient-based optimization. Although some approximations exist~\\cite{hershey2007approximating}, without knowing the semantic labels for each Gaussian component, even a good match between two mixtures of Gaussians does not guarantee that target clusters are aligned to the correct source ones, and this would severely harm the performance of test-time training.\nIn light of these challenges, we propose a category-wise alignment. Specifically, we allocate the same number of clusters in both the source and target domains, and each target cluster is assigned to one source cluster. We can then minimize the KL-Divergence between each pair of clusters as in Eq.~\\ref{eq:KLD}. \n\\vspace{-0.5cm}\n\n\\begin{equation}\\label{eq:KLD}\n\\begin{split}\n \\mathcal{L}_{ac}&=\\sum_k D_{KL}(\\mathcal{N}(\\mu_{sk},\\Sigma_{sk})||\\mathcal{N}(\\mu_{tk},\\Sigma_{tk}))\\\\\n &=\\sum_k -H(\\mathcal{N}(\\mu_{sk},\\Sigma_{sk})) + H(\\mathcal{N}(\\mu_{sk},\\Sigma_{sk}),\\mathcal{N}(\\mu_{tk},\\Sigma_{tk}))\n\\end{split}\n\\end{equation}\n\nThe KL-Divergence can thus be decomposed into the negative entropy $-H(\\mathcal{N}(\\mu_{sk},\\Sigma_{sk}))$ and the cross-entropy $H(\\mathcal{N}(\\mu_{sk},\\Sigma_{sk}),\\mathcal{N}(\\mu_{tk},\\Sigma_{tk}))$. 
Since the source reference distribution $p_s(x)$ is fixed, the entropy term is a constant $C$ and only the cross-entropy term needs to be optimized.\nGiven the closed-form solution to the KL-Divergence between two Gaussian distributions, we now write the anchored clustering objective as,\n\n\\begin{equation}\\label{eq:anchored_clustering_loss}\n \\mathcal{L}_{ac}= \\sum_k \\{\\log \\sqrt{(2\\pi)^d|\\Sigma_{tk}|} + \\frac{1}{2}(\\mu_{tk}-\\mu_{sk})^\\top\\Sigma_{tk}^{-1}(\\mu_{tk}-\\mu_{sk}) + \\frac{1}{2} tr(\\Sigma_{tk}^{-1}\\Sigma_{sk})\\} + C\n\\end{equation}\n\nThe source cluster parameters can be estimated in an offline manner. This information does not cause any privacy leakage and only introduces small computation and storage overheads. In the next section, we elaborate on clustering in the target domain.\n\n\\subsection{Clustering through Pseudo Labeling}\n\nIn order to test-time train the network with the anchored clustering loss, one must obtain the target cluster parameters $\\{\\mu_{tk},\\Sigma_{tk}\\}$. \nFor a minibatch of target test samples $\\set{B}^t=\\{x_i\\}_{i=1\\dots N_B}$ at timestamp $t$, we first denote the predicted posterior as $P^t=softmax(h(f(x_i)))\\in[0,1]^{N_B\\times K}$, where $softmax(\\cdot)$, $h(\\cdot)$ and $f(\\cdot)$ respectively denote a standard softmax function, the classifier head and the backbone network. The pseudo labels are obtained via $\\hat{y}_i=\\arg\\max_k P_{ik}^t$. Given the predicted pseudo labels, we could estimate the mean and covariance for each component Gaussian with the pseudo labeled testing samples.\nHowever, pseudo labels are always subject to the model's discrimination ability. The error rate of pseudo labels is often high when the domain shift between source and target is large, so directly updating the component Gaussians is subject to erroneous pseudo labels, a.k.a. confirmation bias~\\cite{arazo2020pseudo}. 
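As a minimal illustration of the closed-form objective above, the sketch below evaluates the per-class Gaussian KL-Divergence with NumPy, under the assumption that matched per-class means and covariances are already available (an illustration, not the authors' implementation):

```python
import numpy as np

def gaussian_kl(mu_s, cov_s, mu_t, cov_t):
    """Closed-form KL( N(mu_s, cov_s) || N(mu_t, cov_t) ) for full covariances."""
    d = mu_s.shape[0]
    cov_t_inv = np.linalg.inv(cov_t)
    diff = mu_t - mu_s
    return 0.5 * (np.log(np.linalg.det(cov_t) / np.linalg.det(cov_s))
                  - d
                  + np.trace(cov_t_inv @ cov_s)
                  + diff @ cov_t_inv @ diff)

def anchored_clustering_loss(src_stats, tgt_stats):
    """Sum of per-class KLs between matched source/target clusters,
    i.e. the quantity minimized in L_ac (up to additive constants)."""
    return sum(gaussian_kl(mu_s, cov_s, mu_t, cov_t)
               for (mu_s, cov_s), (mu_t, cov_t) in zip(src_stats, tgt_stats))
```

For identical clusters the loss is zero; shifting one target mean by a unit vector under identity covariance contributes exactly $\frac{1}{2}\|\mu_t-\mu_s\|^2 = 0.5$ to the sum, matching the quadratic term of the closed form.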
To reduce the impact of incorrect pseudo labels, we first adopt a light-weight temporal consistency (TC) pseudo label filtering approach. Compared to co-teaching~\\cite{han2018co} or meta-learning~\\cite{li2019learning} based methods, this light-weight method does not introduce additional computation overhead and is therefore more suitable for test-time training.\nSpecifically, to alleviate the impact of noisy predictions, we calculate the temporal exponential moving average of the posteriors, $\\widetilde{P}^t \\in [0,1]^{N_B \\times K}$, as below,\n\n\\vspace{-0.5cm}\n\n\\begin{equation}\n \\widetilde{P}^t_i = \n (1 - \\xi) \\widetilde{P}^{t-1}_i + \\xi P^t_i\\quad,\\quad s.t.\\quad\\widetilde{P}^{0}_i=P^0_i\n\\end{equation}\n\nThe temporal consistency filtering is realized as in Eq.~\\ref{eq:tc_filter}, where $\\tau_{TC}$ is a threshold determining the maximally allowed difference in the most probable prediction over time. If the posterior deviates too much from its historical value, the sample is excluded from target domain clustering.\n\\vspace{-0.2cm}\n\n\\begin{equation}\\label{eq:tc_filter}\n F_i^{TC} = \\mathbbm{1}(|P_{i\\hat{k}}^t - \\widetilde{P}^{t-1}_{i\\hat{k}}| < \\tau_{TC}),\\quad s.t. \\quad \\hat{k} = \\arg\\max_k(P_{ik}^t)\n\\end{equation}\n\n\nDue to the sequential inference, test samples without enough historical predictions may still pass the TC filtering. 
So, we further introduce an additional pseudo label filter directly based on the posterior probability as,\n\\vspace{-0.5cm}\n\n\\begin{equation}\n F_i^{PP}=\\mathbbm{1}(\\widetilde{P}^{t}_{i\\hat{k}}>\\tau_{PP})\n\\end{equation}\n\nBy filtering out potential incorrect pseudo labels, we update the component Gaussian only with the leftover target samples as below.\n\\vspace{-0.5cm}\n\n\\begin{equation}\n \\mu_{tk} = \\frac{\\sum\\limits_{i} F^{TC}_iF^{PP}_i\\mathbbm{1}(\\hat{y}_i=k)f(x_i)}{\\sum\\limits_i F^{TC}_iF^{PP}_i\\mathbbm{1}(\\hat{y}_i=k)},\\quad\n \\Sigma_{tk} = \\frac{\\sum\\limits_i F^{TC}_iF^{PP}_i\\mathbbm{1}(\\hat{y}_i=k) (f(x_i)-\\mu_{tk})^\\top(f(x_i)-\\mu_{tk})}{\\sum\\limits_i F^{TC}_iF^{PP}_i\\mathbbm{1}(\\hat{y}_i=k)}\n\\end{equation}\n\\vspace{-0.5cm}\n\n\\subsection{Global Feature Alignment}\n\nAs discussed above, test samples that do not pass the filtering will not contribute to the estimation of target clusters. Hence, anchored clustering may not reach its full potential without the filtered test samples. To exploit all available test samples, we propose to align global target data distribution to the source one. We define the global feature distribution of the source data as $\\hat{p}_s(x)=\\mathcal{N}(\\mu_s,\\Sigma_s)$ and the target data as $\\hat{p}_t(x)=\\mathcal{N}(\\mu_t,\\Sigma_t)$. To align two distributions, we again minimize the KL-Divergence as,\n\\vspace{-0.5cm}\n\n\\begin{equation}\\label{eq:global_loss}\n \\mathcal{L}_{ga}=D_{KL}(\\hat{p}_s(x)||\\hat{p}_t(x))\n\\end{equation}\n\nSimilar idea has appeared in~\\cite{liu2021ttt++} which directly matches the moments between source and target domains~\\cite{zellinger2017central} by minimizing the F-norm for the mean and covariance, i.e. $||\\mu_t-\\mu_s||^2_2+||\\Sigma_t-\\Sigma_s||^2_F$. 
However, central moment discrepancy~\\cite{zellinger2017central}, designed for matching complex distributions represented by drawn samples, requires summing an infinite series of central moment discrepancies, and the relative weights between moments of different orders are hard to estimate. For matching two parameterized Gaussian distributions, the KL-Divergence is more convenient and has a clear probabilistic interpretation. Finally, we add a small constant to the diagonal of $\\Sigma$ for both the source and target domains to improve the condition number for better numerical stability.\n\n\n\n\\subsection{Efficient Iterative Updating}\n\n\nAlthough the distribution of the source data can be trivially estimated from all available training data in a fully offline manner, estimating the distribution of the target domain data is not equally trivial, in particular under the sTTT protocol.\nIn related work~\\cite{liu2021ttt++}, a dynamic queue of test data features is preserved to dynamically estimate the statistics, which introduces an additional memory footprint.\nTo alleviate the memory cost, we propose to iteratively update the running statistics of the Gaussian distribution.\nFormally, we define the $t$-th test minibatch as $\\set{B}^t=\\{x_i\\}_{i=1\\cdots N_{B}}$. Denoting the running mean and covariance at step $t$ as $\\mu^t$ and $\\Sigma^t$, we present the rules to update the mean and covariance in Eq.~\\ref{eq:runningstatistics}. More detailed derivations and the update rules for per-cluster statistics are deferred to the Appendix.
\n\n\\vspace{-0.5cm}\n\\begin{equation}\\label{eq:runningstatistics}\n\\begin{split}\n & \\mu^t = \\mu^{t-1} + \\delta^t, \\quad\n \\Sigma^t=\\Sigma^{t-1}+a^t{\\sum_{x_i\\in\\set{B}^t}\\{(f(x_i)-\\mu^{t-1})^\\top(f(x_i)-\\mu^{t-1})-\\Sigma^{t-1}\\}} - {\\delta^t}^\\top\\delta^t \\\\\n & \\delta^t=a^t{\\sum\\limits_{x_i\\in\\set{B}^t}(f(x_{i}) - \\mu^{t-1})},\\quad \n N^t = N^{t-1} + |\\set{B}^t|, \\quad \n a^t = \\frac{1}{N^t}\n\\end{split}\n\\end{equation}\n\n\n\nAdditionally, $N^t$ grows larger over time. New test samples have a smaller contribution to the update of the target domain statistics when $N^t$ is large enough. As a result, the gradient calculated from the current minibatch will vanish. To alleviate this issue, we impose a clip on the value of $a^t$ as below. As such, the gradient maintains a minimal scale even if $N^t$ is very large.\n\n\n\\vspace{-0.3cm}\n\n\\begin{equation}\n a^t = \\left \\{\n \\begin{array}{lcl}\n \\frac{1}{N^t} & & N^t < N_{clip} \\\\\n \\frac{1}{N_{clip}} & & \\text{otherwise}\n \\end{array}\n \\right.\n\\end{equation}\n\\vspace{-0.4cm}\n\n\n\\subsection{TTAC Training Algorithm}\nWe summarize the training algorithm for TTAC in Algo.~\\ref{alg:main}. For effective clustering in the target domain, we allocate a fixed-length memory space, denoted as $\\set{C} \\in \\set{R}^{N_{C} \\times H \\times W \\times 3}$, to store the recent testing samples. In the sTTT protocol, we first make an instant prediction on each testing sample, and only update the model when $N_B$ testing samples are accumulated. TTAC can be efficiently implemented, e.g.
with two devices, one for continuous inference and the other for model updating.\n\n\\vspace{-0.4cm}\n\\begin{algorithm}\n\\caption{Test-Time Anchored Clustering Training Algorithm}\\label{alg:main}\n\\SetKwInOut{Input}{input}\n\\SetKwInOut{Return}{return}\n\\Input{A new testing sample batch $\\set{B}^t=\\{x_i\\}_{i=1\\dots N_B}$.}\n\n\\textcolor{gray}{\\# Update the testing sample queue $\\set{C}$.}\n\n$\\set{C}^t=\\set{C}^t\\setminus \\set{B}^{t-N_C\/N_B}$,\\quad\n$\\set{C}^t=\\set{C}^t\\bigcup \\set{B}^{t}$\n\n\\For{$ 1 $ \\KwTo $N_{itr}$}\n{\n \\For{minibatch $\\{x^t_i\\}^{N}_{i=1}$ in $\\set{C}^t$}\n {\n \\textcolor{gray}{\\# Obtain the predicted posteriors and pseudo labels}\n \n $P^t_i=\\mathrm{softmax}(h(f(x^t_i)))$,\\quad\n $\\hat{y}^t_i = \\arg\\max_k(P^t_{ik})$\n \n \\textcolor{gray}{\\# Calculate the global and per-cluster running mean and covariance by Eq.~\\ref{eq:runningstatistics}}\n \n $\\mu^t$,\\quad $\\Sigma^t$,\\quad $\\{\\mu_k^t\\}$,\\quad $\\{\\Sigma_k^t\\}$\n \n \\textcolor{gray}{\\# Optimize the combined loss by Eq.~\\ref{eq:anchored_clustering_loss} and Eq.~\\ref{eq:global_loss}}\n \n $\\mathcal{L}=\\mathcal{L}_{ac}+\\lambda\\mathcal{L}_{ga}$\n \n update network $f$ to minimize $\\mathcal{L}$\n }\n}\n\n\n\\end{algorithm}\n\\vspace{-0.5cm}\n\n\n\n\n\\section{Experiment}\n\nIn this section, we first compare various existing methods based on the two key factors. Evaluation is then carried out on five test-time training datasets. We then ablate the components of TTAC. Further analyses on the cumulative performance, qualitative insights, etc. are provided at the end.\n\n\\vspace{-0.2cm}\n\\subsection{Datasets}\nWe evaluate on five test-time training datasets and report the classification error rate~($\\%$) throughout the experiment section.
To evaluate the test-time training efficacy on corrupted target images, we use \\textbf{CIFAR10-C\/CIFAR100-C}~\\cite{hendrycks2018benchmarking}, each consisting of 10\/100 classes with 50,000 training samples of clean data and 10,000 corrupted test samples. \nWe further evaluate test-time training on hard target domain samples with \\textbf{CIFAR10.1}~\\cite{pmlr-v97-recht19a}, which contains around 2,000 difficult testing images sampled over years of research on the original CIFAR-10 dataset.\nTo demonstrate the ability to do test-time training on synthetic-to-real data transfer, we further use \\textbf{VisDA-C}~\\cite{VisDA}, which is a challenging large-scale synthetic-to-real object classification dataset, consisting of 12 classes, 152,397 synthetic training images and 55,388 real testing images. Finally, to evaluate test-time training on 3D point cloud data, we choose \\textbf{ModelNet40-C}~\\cite{ModelNet40-C}, which consists of 15 common and realistic corruptions of point cloud data, with 9,843 training samples and 2,468 test samples.\n\n\n\n\n\\vspace{-0.2cm}\n\\subsection{Experiment Settings}\n\n\n\n\\noindent\\textbf{Hyperparameters}. We use ResNet-50~\\cite{he2016deep} for the image datasets and DGCNN~\\cite{wang2019dynamic} on ModelNet40-C. We optimize the backbone network $f(\\cdot)$ by SGD with momentum on all datasets. On CIFAR10-C\/CIFAR100-C and CIFAR10.1, we set BS = 256 and LR = 0.01, 0.0001, 0.01 respectively. On VisDA-C we set BS = 128 and LR = 0.0001, and on ModelNet40-C we set BS = 64 and LR = 0.001. More details of the hyperparameters can be found in the Appendix. \n\n\\noindent\\textbf{Test-Time Training Protocols}.\nWe categorize test-time training based on two key factors. First, we consider whether the training objective must be changed during training on the source domain; we use Y and N to indicate whether the training objective is modified or not, respectively.
Second, we consider whether the testing data is sequentially streamed and predicted; we use O to indicate sequential \\textbf{O}ne-pass inference and M to indicate non-sequential inference, a.k.a. \\textbf{M}ulti-pass inference. With the above criteria, we summarize four test-time training protocols, namely N-O, Y-O, N-M and Y-M, where the strength of the assumptions increases from the first to the last protocol.\nOur sTTT setting makes the weakest assumptions, i.e. N-O. Existing methods are categorized by the four TTT protocols; we note that some methods can operate under multiple protocols.\n\n\\noindent\\textbf{Competing Methods}.\nWe compare the following test-time training methods. Direct testing (\\textbf{TEST}) without adaptation simply performs inference on the target domain with the source domain model.\nTest-time training (\\textbf{TTT-R})~\\cite{sun2020test} jointly trains the rotation-based self-supervised task and the classification task in the source domain, and then trains only the rotation-based self-supervised task on the streaming test samples, making predictions instantly. The default method is classified into the Y-M protocol.\nTest-time normalization (\\textbf{BN})~\\cite{ioffe2015batch} updates the batch normalization statistics by a moving average over the streamed data. The default method follows the N-M protocol and can be adapted to the N-O protocol.\nTest-time entropy minimization (\\textbf{TENT})~\\cite{wang2020tent} updates the parameters of all batch normalization layers by minimizing the entropy of the model predictions on the streaming data. By default, TENT follows the N-O protocol and can be adapted to the N-M protocol.\nTest-time classifier adjustment (\\textbf{T3A})~\\cite{iwasawa2021test} computes a target prototype representation for each category using streamed data and makes predictions with the updated prototypes.
T3A follows the N-O protocol by default.\nSource Hypothesis Transfer (\\textbf{SHOT})~\\cite{pmlr-v119-liang20a} freezes the linear classification head and trains the target-specific feature extraction module by exploiting a balanced category assumption and self-supervised pseudo-labeling in the target domain. SHOT follows the N-M protocol by default and we adapt it to the N-O protocol.\n\\textbf{TTT++}~\\cite{liu2021ttt++} aligns the source domain feature distribution, whose statistics are calculated offline, and the target domain feature distribution by minimizing the F-norm between the means and covariances. TTT++ follows the Y-M protocol and we adapt it to the N-O (removing the contrastive learning branch) and Y-O protocols. Finally, we present our own approach, \\textbf{TTAC}, which requires only a single pass over the target domain and does not have to modify the source training objective. We further modify TTAC for the Y-O, N-M and Y-M protocols; for Y-O and Y-M we incorporate an additional contrastive learning branch~\\cite{liu2021ttt++}. We can further combine TTAC with the additional diversity loss and entropy minimization loss introduced in SHOT~\\cite{pmlr-v119-liang20a}, denoted as TTAC+SHOT.\n\n\\vspace{-0.4cm}\n\n\\subsection{Test-Time Training on Corrupted Target Domain}\n\nWe present the test-time training results on the CIFAR10-C\/100-C and ModelNet40-C datasets in Tab.~\\ref{tab:categorization_table}. We make the following observations from the results.\n\n\\noindent\\textbf{sTTT (N-O) Protocol}. \nWe first analyze the results under the proposed sTTT (N-O) protocol. Our method outperforms all competing ones by a large margin, e.g. a $3\\%$ improvement is observed on both CIFAR10-C and CIFAR100-C over the previous best (TTT++). We further combine TTAC with the class balance assumption made in SHOT (TTAC+SHOT). With the stronger assumptions, our method can further improve upon TTAC alone, in particular on the ModelNet40-C dataset.
This result demonstrates TTAC's compatibility with existing methods.\n\n\\noindent\\textbf{Alternative Protocols}.\nWe further compare different methods under the N-M, Y-O and Y-M protocols. Under the Y-O protocol, TTT++~\\cite{liu2021ttt++} modifies the source domain training objective by incorporating a contrastive learning branch~\\cite{chen2020simple}. To compare with TTT++, we also include the contrastive branch and observe a clear improvement on both the CIFAR10-C and CIFAR100-C datasets. More TTT methods can be adapted to the N-M protocol, which allows training on the whole target domain data for multiple epochs. Specifically, we compare with BN, TENT and SHOT. With TTAC alone we observe substantial improvements on all three datasets, and TTAC can be further combined with SHOT, demonstrating additional improvement. Finally, under the Y-M protocol, we demonstrate very strong performance compared to TTT-R and TTT++. It is also worth noting that TTAC under the N-O protocol can already yield results close to TTT++ under the Y-M protocol, suggesting the strong test-time training ability of TTAC even under the most challenging TTT protocol.\n\n\n\n\n\\begin{table*}[htbp]\n \\centering\n \n \\caption{Comparison under different TTT protocols. Y\/N indicates modifying the source domain training objective or not. O\/M indicates one-pass or multi-pass test-time training. C10-C, C100-C and MN40-C refer to the CIFAR10-C, CIFAR100-C and ModelNet40-C datasets respectively. All numbers indicate error rate in percentage.}\n \\resizebox{0.75\\linewidth}{!}{\n \\begin{tabular}{l|cc|ccc}\n \\toprule\n Method & TTT Protocol & Assum.
Strength & C10-C & C100-C & MN40-C \\\\\n \\midrule\n TEST & - & - & 29.15 & 60.34 & 34.62 \\\\\n \\midrule\n BN~\\cite{ioffe2015batch} & N-O & Weak & 15.49 & 43.38 & 26.53 \\\\\n TENT~\\cite{wang2020tent} & N-O & Weak & 14.27 & 40.72 & 26.38 \\\\\n T3A~\\cite{iwasawa2021test} & N-O & Weak & 15.44 & 42.72 & 24.57 \\\\\n SHOT~\\cite{pmlr-v119-liang20a} & N-O & Weak & 13.95 & 39.10 & 19.71 \\\\\n TTT++~\\cite{liu2021ttt++} & N-O & Weak & 13.69 & 40.32 & - \\\\\n TTAC~(Ours) & N-O & Weak & \\textbf{10.94} & 36.64 & 22.30 \\\\\n TTAC+SHOT~(Ours) & N-O & Weak & 10.99 & \\textbf{36.39} & \\textbf{19.21} \\\\\n \\midrule\n TTT++~\\cite{liu2021ttt++} & Y-O & Medium & 13.00 & 35.23 & - \\\\\n TTAC~(Ours) & Y-O & Medium & \\textbf{10.69} & \\textbf{34.82} & - \\\\\n \\midrule\n BN~\\cite{ioffe2015batch} & N-M & Medium & 15.70 & 43.30 & 26.49 \\\\\n TENT~\\cite{wang2020tent} & N-M & Medium & 12.60 & 36.30 & 21.23 \\\\\n SHOT~\\cite{pmlr-v119-liang20a} & N-M & Medium & 14.70 & 38.10 & 15.99 \\\\\n TTAC~(Ours) & N-M & Medium & \\textbf{9.42} & 33.55 & 16.77 \\\\\n TTAC+SHOT~(Ours) & N-M & Medium & 9.54 & \\textbf{32.89} & \\textbf{15.04} \\\\\n \\midrule\n TTT-R~\\cite{sun2020test} & Y-M & Strong & 14.30 & 40.40 & - \\\\ \n TTT++~\\cite{liu2021ttt++} & Y-M & Strong & 9.80 & 34.10 & - \\\\\n TTAC~(Ours) & Y-M & Strong & \\textbf{8.52} & \\textbf{30.57} & - \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\label{tab:categorization_table}\n\\end{table*}\n\\vspace{-0.35cm}\n\n\\subsection{Additional Datasets}\n\n\\noindent\\textbf{TTT on Hard Samples}. CIFAR10.1 contains roughly 2,000 new test images that were re-sampled after years of research on the original CIFAR-10 dataset; it contains hard samples and reflects the natural domain shift encountered in practice. The results in Tab.~\\ref{tab:cifar101} demonstrate that our method is better able to adapt to this natural domain shift. \n\n\n\\noindent\\textbf{TTT on Synthetic to Real Adaptation}.
VisDA-C is a large-scale synthetic-to-real object classification benchmark. Training on a synthetic dataset and testing on real data fits well with realistic application scenarios. On this dataset, we conduct experiments with our method under the N-O, Y-O and Y-M protocols and with other methods under their respective protocols; the results are presented in Tab.~\\ref{tab:visda}. We make the following observations. First, our method (TTAC Y-O)\noutperforms all methods except TTT++ under the Y-M protocol. This suggests TTAC can be deployed under the realistic test-time training protocol. Moreover, if training on the whole target data is allowed, TTAC (Y-M) further beats TTT++ by a large margin, suggesting the effectiveness of TTAC under a wide range of TTT protocols.\n\n\n\n\\begin{table}[htbp]\n \\begin{minipage}{0.47\\textwidth}\n \\centering\n \\caption{Test-time training on CIFAR10.1.}\n \\resizebox{1\\linewidth}{!}{\n \\begin{tabular}{ccccccc}\n \\toprule\n TEST & BN & TTT-R & TENT & SHOT & TTT++ & TTAC \\\\\n \\midrule\n 12.1 & 14.1 & 11.0 & 13.4 & 11.1 & 9.5 & \\textbf{9.2} \\\\ \n \\bottomrule\n \\end{tabular}\n }\n \\label{tab:cifar101}\n \\end{minipage}\n \\hfill\n \\begin{minipage}{0.53\\textwidth}\n \\centering\n \\caption{Source-free sTTT on CIFAR10-C.}\n \\resizebox{1\\linewidth}{!}{\n \\begin{tabular}{ccccccc}\n \\toprule\n TEST & BN & TENT & T3A & SHOT & TTAC & TTAC+SHOT \\\\\n \\midrule\n 29.15 & 15.49 & 14.27 & 15.44 & 13.95 & 13.74 & \\textbf{13.35} \\\\\n \\bottomrule\n \\end{tabular}%\n }\n \\label{tab:source_blind}%\n \\end{minipage}\n\\end{table}\n\n\n\n\n\\vspace{-0.5cm}\n\n\\begin{table}[htbp]\n \\centering\n \\caption{Test-time training on VisDA.
The numbers for competing methods are inherited from \\cite{liu2021ttt++}.}\n \\resizebox{\\linewidth}{!}{\n \\begin{tabular}{l|cccccccccccc|cc}\n \\toprule\n Method & Plane & Bcycl & Bus & Car & Horse & Knife & Mcycl & Person & Plant & Sktbrd & Train & Truck & Per-class\\\\\n \\midrule\n TEST & 56.52 & 88.71 & 62.77 & 30.56 & 81.88 & 99.03 & 17.53 & 95.85 & 51.66 & 77.86 & 20.44 & 99.51 & 65.19\\\\\n BN (N-M)~\\cite{ioffe2015batch} & 44.38 & 56.98 & 33.24 & 55.28 & 37.45 & 66.60 & 16.55 & 59.02 & 43.55 & 60.72 & 31.07 & 82.98 & 48.99\\\\\n TENT (N-M)~\\cite{wang2020tent} & 13.43 & 77.98 & 20.17 & 48.15 & 21.72 & 82.45 & 12.37 & 35.78 & 21.06 & 76.41 & 34.11 & 98.93 & 45.21\\\\\n SHOT (N-M)~\\cite{pmlr-v119-liang20a} & 5.73 & \\textbf{13.64} & 23.33 & 42.69 & 7.93 & 86.99 & 19.17 & 19.97 & 11.63 & 11.09 & 15.06 & \\textbf{43.26} & 25.04 \\\\\n TFA (N-M)~\\cite{liu2021ttt++} & 28.25 & 32.03 & 33.67 & 64.77 & 20.49 & 56.63 & 22.52 & 36.30 & 24.84 & 35.20 & 25.31 & 64.24 & 37.02 \\\\\n TTT++ (Y-M)~\\cite{liu2021ttt++} & 4.13 & 26.20 & 21.60 & \\textbf{31.70} & 7.43 & 83.30 & 7.83 & 21.10 & 7.03 & \\textbf{7.73} & \\textbf{6.91} & 51.40 & 23.03\\\\\n \\midrule\n \n TTAC~(N-O) & 18.54 & 40.20 & 35.84 & 63.11 & 23.83 & 39.61 & 15.51 & 41.35 & 22.97 & 46.56 & 25.24 & 67.81 & 36.71 \\\\\n TTAC~(Y-O) & 7.19 & 29.99 & 22.52 & 56.58 & 8.14 & 18.41 & 8.25 & 22.28 & 10.18 & 23.98 & 13.55 & 67.02 & 24.01 \\\\\n TTAC~(Y-M) & \\textbf{2.74} & 17.73 & \\textbf{18.91} & 43.12 & \\textbf{5.54} & \\textbf{12.24} & \\textbf{4.66} & \\textbf{15.90} & \\textbf{4.77} & 10.78 & 9.75 & 62.45 & \\textbf{17.38} \\\\\n \\toprule\n \\end{tabular}\n }\n \\label{tab:visda}\n\\end{table}\n\n\n\n\\vspace{-0.4cm}\n\\subsection{Ablation Study}\n\\vspace{-0.1cm}\n\nWe conduct ablation study on CIFAR10-C dataset for individual components, including anchored clustering, pseudo label filtering, global feature alignment and finally the compatibility with contrastive branch~\\cite{liu2021ttt++}. 
For anchored clustering alone, we use all testing samples to update the cluster statistics. For pseudo label filtering alone, we implement it as predicting pseudo labels followed by filtering; the surviving pseudo labels are then used for self-training. We make the following observations from Tab.~\\ref{tab:ablation}. Under both the N-O and N-M protocols, introducing anchored clustering or pseudo label filtering alone improves over the baseline, e.g. under N-O $29.15\\%\\rightarrow 14.32\\%$ for anchored clustering and $29.15\\%\\rightarrow15.00\\%$ for pseudo label filtering. When anchored clustering is combined with pseudo label filtering, we observe a significant boost in performance. This is due to a more accurate estimation of the category-wise clusters in the target domain, and it suggests that matching directly in the feature space may be better than minimizing cross-entropy with pseudo labels. We further evaluate aligning global features alone with the KL-Divergence. This achieves relatively good performance and clearly outperforms the L2 distance alignment adopted in \\cite{liu2021ttt++}. Finally, we combine all three components and the full model yields the best performance. When the contrastive learning branch is included, TTAC achieves even better results.\n\n\n\n\n\\vspace{-0.5cm}\n\n\\begin{table}[htbp]\n \\centering\n \\caption{Ablation study for individual components on CIFAR10-C dataset.}\n \\resizebox{1\\linewidth}{!}{\n \\begin{tabular}{lcccccrrrrrrrr}\n \\toprule\n TTT Protocol & - & \\multicolumn{5}{c}{N-O} & \\multicolumn{1}{c}{Y-O} & \\multicolumn{5}{c}{N-M} & \\multicolumn{1}{c}{Y-M}\\\\\n \\cmidrule(lr){1-1} \\cmidrule(lr){2-2} \\cmidrule(lr){3-7} \\cmidrule(lr){8-8} \\cmidrule(lr){9-13} \\cmidrule(lr){14-14} \n Anchored Cluster.
& - & \\checkmark & - & \\checkmark & - & \\multicolumn{1}{c}{\\checkmark} & \\multicolumn{1}{c}{\\checkmark} & \\multicolumn{1}{c}{\\checkmark} & \\multicolumn{1}{c}{\\checkmark} & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{\\checkmark} & \\multicolumn{1}{c}{\\checkmark} \\\\\n Pseudo Label Filter. & - & - & \\checkmark & \\checkmark & - & \\multicolumn{1}{c}{\\checkmark} & \\multicolumn{1}{c}{\\checkmark} & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{\\checkmark} & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{\\checkmark} & \\multicolumn{1}{c}{\\checkmark} \\\\\n Global Feat. Align. & - & - & - & - & KLD & \\multicolumn{1}{c}{KLD} & \\multicolumn{1}{c}{KLD} & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{L2 Dist.\\cite{liu2021ttt++}} & \\multicolumn{1}{c}{KLD} & \\multicolumn{1}{c}{KLD} & \\multicolumn{1}{c}{KLD} \\\\\n Contrast. Branch~\\cite{liu2021ttt++} & - & - & - & - & - & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{\\checkmark} & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{-} & \\multicolumn{1}{c}{\\checkmark} \\\\\n Avg Acc & 29.15 & 14.32 & 15.00 & 11.33 & 11.72 & 10.94 & 10.69 & 11.11 & 10.01 & \\multicolumn{1}{c}{11.87} & 10.8 & 9.42 & 8.52 \\\\\n \\bottomrule\n \\end{tabular}%\n }\n \\label{tab:ablation}%\n\\end{table}%\n\\vspace{-0.5cm}\n\n\n\n\n\n\n\n\n\n\n\\subsection{Additional Analysis}\n\n\\noindent\\textbf{Cumulative performance under sTTT}. We illustrate the cumulative error under the sTTT protocol in Fig.~\\ref{fig:add_study}~(a). For both datasets TTAC outperforms competing methods from the early stage of test-time training. 
The advantage is consistent throughout the TTT procedure.\n\n\n\\begin{figure}\n\\subfloat[Test-time cumulative error]{\\includegraphics[width=0.45\\linewidth]{.\/Figure\/OnePass4.pdf}}\n\\subfloat[TTT++ Feature]{\\includegraphics[width=0.25\\linewidth]{.\/Figure\/GlobalAlignmentTSNE.png}}\n\\subfloat[TTAC Feature]{\\includegraphics[width=0.25\\linewidth]{.\/Figure\/OursTSNE.png}}\n\n\\caption{(a) Comparison of test-time cumulative error under one-pass protocol. (b) T-SNE visualization of TTT++ feature embedding. (c) T-SNE visualization of TTAC feature embedding.}\\label{fig:add_study}\n\\vspace{-0.5cm}\n\n\\end{figure}\n\n\n\n\n\n\\noindent\\textbf{T-SNE Visualization of TTAC Features}.\nWe provide qualitative results for test-time training by visualizing the adapted features through T-SNE~\\cite{van2008visualizing}. In Fig.~\\ref{fig:add_study}~(b) and Fig.~\\ref{fig:add_study}~(c), we compare the features learned by TTT++~\\cite{liu2021ttt++} and TTAC~(Ours). We observe a better separation between classes by TTAC, implying improved classification accuracy.\n\n\n\n\\noindent\\textbf{Source-Free Test-Time Training}. \nTTT aims to adapt the model to target domain data through simultaneous training and sequential inference. It has been demonstrated that some light-weight information from the source domain, e.g. statistics, can greatly improve the efficacy of TTT. Nevertheless, under a stricter scenario where source domain information is completely unavailable, TTAC can still exploit classifier prototypes to facilitate anchored clustering. Specifically, we normalize the category-wise classifier weight vectors with the norm of the corresponding target domain cluster centers to obtain prototypes. Then, we build the source domain mixture of Gaussians by taking the prototypes as means with a fixed covariance matrix. The results on CIFAR10-C are presented in Tab.~\\ref{tab:source_blind}.
It is clear that even without any statistical information from the source domain, TTAC still outperforms all competing methods.\n\n\n\n\\vspace{-0.2cm}\n\\section{Conclusion}\nTest-time training~(TTT) tackles the realistic challenges of deploying domain adaptation on-the-fly. In this work, we are first motivated by the inconsistent evaluation protocols for TTT and propose two key criteria, namely modifying the source training objective and sequential inference, to categorize existing methods into four TTT protocols. Under the most realistic protocol, i.e. sequential test-time training (sTTT), we develop a test-time anchored clustering (TTAC) approach to align target domain features to the source ones. Unlike batchnorm and classifier prototype updates, anchored clustering allows all network parameters to be trainable, thus demonstrating stronger test-time training ability. We further propose pseudo label filtering and an iterative update method to improve anchored clustering and reduce the memory footprint, respectively.
Experiments on five datasets verified the effectiveness of TTAC under sTTT as well as other TTT protocols.\n\n\n{\n\\small\n\\bibliographystyle{plain}\n
ess}}\n\\newcommand\\supp{\\operatorname{supp}}\n\\newcommand\\Supp{\\operatorname{Supp}}\n\\newcommand\\Sup{\\operatorname{Sup}}\n\\newcommand\\Sym{\\operatorname{Sym}}\n\\newcommand\\tr{\\operatorname{tr}}\n\\newcommand\\Tr{\\operatorname{Tr}}\n\\newcommand\\Tor{\\operatorname{Tor}}\n\\newcommand\\Var{\\operatorname{Var}}\n\\newcommand\\Vol{\\operatorname{Vol}}\n\n\\newcommand\\oa{{\\overline{a}}}\n\\newcommand\\oA{{\\overline{A}}}\n\\newcommand\\ob{{\\overline{b}}}\n\\newcommand\\oB{{\\overline{B}}}\n\\newcommand\\oc{{\\overline{c}}}\n\\newcommand\\oC{{\\overline{C}}}\n\\newcommand\\oD{{\\overline{D}}}\n\\newcommand\\od{{\\overline{d}}}\n\\newcommand\\oE{{\\overline{E}}}\n\\renewcommand\\oe{{\\overline{e}}}\n\\newcommand\\of{{\\overline{f}}}\n\\newcommand\\oF{{\\overline{F}}}\n\\newcommand\\og{{\\overline{g}}}\n\\newcommand\\oG{{\\overline{G}}}\n\\newcommand\\oh{{\\overline{h}}}\n\\newcommand\\oH{{\\overline{H}}}\n\\newcommand\\oI{{\\overline{I}}}\n\\newcommand\\oj{{\\overline{j}}}\n\\newcommand\\oJ{{\\overline{J}}}\n\\newcommand\\ok{{\\overline{k}}}\n\\newcommand\\oK{{\\overline{K}}}\n\\newcommand\\oL{{\\overline{L}}}\n\\newcommand\\om{{\\overline{m}}}\n\\newcommand\\oM{{\\overline{M}}}\n\\newcommand\\oN{{\\overline{N}}}\n\\newcommand\\oO{{\\overline{O}}}\n\\newcommand\\oo{{\\overline{o}}}\n\\newcommand\\op{{\\overline{p}}}\n\\newcommand\\oP{{\\overline{P}}}\n\\newcommand\\oq{{\\overline{q}}}\n\\newcommand\\oQ{{\\overline{Q}}}\n\\newcommand\\OR{{\\overline{r}}}\n\\newcommand\\oS{{\\overline{S}}}\n\\newcommand\\os{{\\overline{s}}}\n\\newcommand\\ot{{\\overline{t}}}\n\\newcommand\\oT{{\\overline{T}}}\n\\newcommand\\ou{{\\overline{u}}}\n\\newcommand\\oU{{\\overline{U}}}\n\\newcommand\\ov{{\\overline{v}}}\n\\newcommand\\oV{{\\overline{V}}}\n\\newcommand\\ow{{\\overline{w}}}\n\\newcommand\\oW{{\\overline{W}}}\n\\newcommand\\ox{{\\overline{x}}}\n\\newcommand\\oX{{\\overline{X}}}\n\\newcommand\\oy{{\\overline{y}}}\n\\newcommand\\oY{{\\overline{Y}}}\n\\newcommand\\oz{{\
\overline{z}}}\n\\newcommand\\oZ{{\\overline{Z}}}\n\n\\newcommand\\oalp{{\\overline{\\alpha}}}\n\\newcommand\\obet{{\\overline{\\bet}}}\n\\newcommand\\odel{{\\overline{\\del}}}\n\\newcommand\\oDel{{\\overline{\\Del}}}\n\\newcommand\\ocup{{\\overline{\\cup}}}\n\\newcommand\\ovarphi{{\\overline{\\varphi}}}\n\\newcommand\\ochi{{\\overline{\\chi}}}\n\\newcommand\\oeps{{\\overline{\\eps}}}\n\\newcommand\\oeta{{\\overline{\\eta}}}\n\\newcommand\\ogam{{\\overline{\\gam}}}\n\\newcommand\\okap{{\\overline{\\kap}}}\n\\newcommand\\olam{{\\overline{\\lambda}}}\n\\newcommand\\oLam{{\\overline{\\Lambda}}}\n\\newcommand\\omu{{\\overline{\\mu}}}\n\\newcommand\\onu{{\\overline{\\nu}}}\n\\newcommand\\oOme{{\\overline{\\Ome}}}\n\\newcommand\\ophi{\\overline{\\phi}}\n\\newcommand\\oPhi{{\\overline{\\Phi}}}\n\\newcommand\\opi{{\\overline{\\pi}}}\n\\newcommand\\oPsi{{\\overline{\\Psi}}}\n\\newcommand\\opsi{{\\overline{\\psi}}}\n\\newcommand\\orho{{\\overline{\\rho}}}\n\\newcommand\\osig{{\\overline{\\sig}}}\n\\newcommand\\otau{{\\overline{\\tau}}}\n\\newcommand\\otet{{\\overline{\\theta}}}\n\\newcommand\\oxi{{\\overline{\\xi}}}\n\\newcommand\\oome{\\overline{\\ome}}\n\\newcommand\\opart{{\\overline{\\partial}}}\n\n\n\\newcommand\\ua{{\\underline{a}}}\n\\newcommand\\ub{{\\underline{b}}}\n\\newcommand\\uc{{\\underline{c}}}\n\\newcommand\\uD{{\\underline{D}}}\n\\newcommand\\uk{{\\underline{k}}}\n\\newcommand\\ue{{\\underline{e}}}\n\\newcommand\\uj{{\\underline{j}}}\n\\newcommand\\ul{{\\underline{l}}}\n\\newcommand\\uL{{\\underline{L}}}\n\\newcommand\\uo{{\\underline{o}}}\n\\newcommand\\uO{{\\underline{O}}}\n\\newcommand\\uP{{\\underline{P}}}\n\\newcommand\\uQ{{\\underline{Q}}}\n\\newcommand\\um{{\\underline{m}}}\n\\newcommand\\uM{{\\underline{M}}}\n\\newcommand\\un{{\\underline{n}}}\n\\newcommand\\us{{\\underline{s}}}\n\\newcommand\\ut{{\\underline{t}}}\n\\newcommand\\uu{{\\underline{u}}}\n\\newcommand\\uv{{\\underline{v}}}\n\\newcommand\\uV{{\\underline{V}}}\n\\newcommand\\ux{{\\underline{
x}}}\n\\newcommand\\uX{{\\underline{X}}}\n\\newcommand\\uy{{\\underline{y}}}\n\\newcommand\\uz{{\\underline{z}}}\n\n\\newcommand\\ualp{{\\underline{\\alp}}}\n\\newcommand\\ubet{{\\underline{\\bet}}}\n\\newcommand\\uchi{{\\underline{\\chi}}}\n\\newcommand\\udel{{\\underline{\\del}}}\n\\newcommand\\uell{{\\underline{\\ell}}}\n\\newcommand\\ueps{{\\underline{\\eps}}}\n\\newcommand\\ueta{{\\underline{\\eta}}}\n\\newcommand\\uGam{{\\underline{\\Gamma}}}\n\\newcommand\\unu{{\\underline{\\nu}}}\n\\newcommand\\uome{{\\underline{\\omega}}}\n\\newcommand\\utet{{\\underline{\\tet}}}\n\\newcommand\\ulam{{\\underline{\\lam}}}\n\n\n\\newcommand\\hata{{\\widehat{a}}}\n\\newcommand\\hatA{{\\widehat{A}}}\n\\newcommand\\hatb{{\\widehat{b}}}\n\\newcommand\\hatc{{\\widehat{c}}}\n\\newcommand\\hatC{{\\widehat{C}}}\n\\newcommand\\hatB{{\\widehat{B}}}\n\\newcommand\\hatD{{\\widehat{D}}}\n\\newcommand\\hate{{\\widehat{e}}}\n\\newcommand\\hatE{{\\widehat{E}}}\n\\newcommand\\hatf{{\\widehat{f}}}\n\\newcommand\\hatF{{\\widehat{F}}}\n\\newcommand\\hatg{{\\widehat{g}}}\n\\newcommand\\hatG{{\\widehat{G}}}\n\\newcommand\\hath{{\\widehat{h}}}\n\\newcommand\\hatH{{\\widehat{H}}}\n\\newcommand\\hati{{\\hat{i}}}\n\\newcommand\\hatI{{\\hat{I}}}\n\\newcommand\\hatj{{\\widehat{j}}}\n\\newcommand\\hatJ{{\\widehat{J}}}\n\\newcommand\\hatk{{\\widehat{k}}}\n\\newcommand\\hatK{{\\widehat{K}}}\n\\newcommand\\hatL{{\\widehat{L}}}\n\\newcommand\\hatm{{\\widehat{m}}}\n\\newcommand\\hatM{{\\widehat{M}}}\n\\newcommand\\hatn{{\\widehat{n}}}\n\\newcommand\\hatN{{\\widehat{N}}}\n\\newcommand\\hatp{{\\widehat{p}}}\n\\newcommand\\hatP{{\\widehat{P}}}\n\\newcommand\\hatr{{\\widehat{r}}}\n\\newcommand\\hatR{{\\widehat{R}}}\n\\newcommand\\hatq{{\\widehat{q}}}\n\\newcommand\\hatQ{{\\widehat{Q}}}\n\\newcommand\\hatT{{\\widehat{T}}}\n\\newcommand\\hatu{{\\widehat{u}}}\n\\newcommand\\hatU{{\\widehat{U}}}\n\\newcommand\\hatV{{\\widehat{V}}}\n\\newcommand\\hatv{{\\widehat{v}}}\n\\newcommand\\hatw{{\\widehat{w}}}\n\\newcommand\\
hatW{{\\widehat{W}}}\n\\newcommand\\hatx{{\\widehat{x}}}\n\\newcommand\\hatX{{\\widehat{X}}}\n\\newcommand\\haty{{\\widehat{y}}}\n\\newcommand\\hatY{{\\widehat{Y}}}\n\\newcommand\\hatZ{{\\widehat{Z}}}\n\\newcommand\\hatz{{\\widehat{z}}}\n\n\\newcommand\\hatalp{{\\widehat{\\alpha}}}\n\\newcommand\\hatdel{{\\widehat{\\delta}}}\n\\newcommand\\hatDel{{\\widehat{\\Delta}}}\n\\newcommand\\hatbet{{\\widehat{\\beta}}}\n\\newcommand\\hateps{{\\hat{\\eps}}}\n\\newcommand\\hatgam{{\\widehat{\\gamma}}}\n\\newcommand\\hatGam{{\\widehat{\\Gamma}}}\n\\newcommand\\hatlam{{\\widehat{\\lambda}}}\n\\newcommand\\hatmu{{\\widehat{\\mu}}}\n\\newcommand\\hatnu{{\\widehat{\\nu}}}\n\\newcommand\\hatOme{{\\widehat{\\Ome}}}\n\\newcommand\\hatphi{{\\widehat{\\phi}}}\n\\newcommand\\hatPhi{{\\widehat{\\Phi}}}\n\\newcommand\\hatpi{{\\widehat{\\pi}}}\n\\newcommand\\hatpsi{{\\widehat{\\psi}}}\n\\newcommand\\hatPsi{{\\widehat{\\Psi}}}\n\\newcommand\\hatrho{{\\widehat{\\rho}}}\n\\newcommand\\hatsig{{\\widehat{\\sig}}}\n\\newcommand\\hatSig{{\\widehat{\\Sig}}}\n\\newcommand\\hattau{{\\widehat{\\tau}}}\n\\newcommand\\hattet{{\\widehat{\\theta}}}\n\\newcommand\\hatvarphi{{\\widehat{\\varphi}}}\n\\newcommand\\hatZZ{{\\widehat{\\ZZ}}}\n\n\n\n\\newcommand\\tilA{{\\widetilde{A}}}\n\\newcommand\\tila{{\\widetilde{a}}}\n\\newcommand\\tilB{{\\widetilde{B}}}\n\\newcommand\\tilb{{\\widetilde{b}}}\n\\newcommand\\tilc{{\\widetilde{c}}}\n\\newcommand\\tilC{{\\widetilde{C}}}\n\\newcommand\\tild{{\\widetilde{d}}}\n\\newcommand\\tilD{{\\widetilde{D}}}\n\\newcommand\\tilE{{\\widetilde{E}}}\n\\newcommand\\tilf{{\\widetilde{f}}}\n\\newcommand\\tilF{{\\widetilde{F}}}\n\\newcommand\\tilg{{\\widetilde{g}}}\n\\newcommand\\tilG{{\\widetilde{G}}}\n\\newcommand\\tilh{{\\widetilde{h}}}\n\\newcommand\\tilk{{\\widetilde{k}}}\n\\newcommand\\tilK{{\\widetilde{K}}}\n\\newcommand\\tilj{{\\widetilde{j}}}\n\\newcommand\\tilm{{\\widetilde{m}}}\n\\newcommand\\tilM{{\\widetilde{M}}}\n\\newcommand\\tilH{{\\widetilde{H}}}\n\\newcommand\\tilL
{{\\widetilde{L}}}\n\\newcommand\\tilN{{\\widetilde{N}}}\n\\newcommand\\tiln{{\\widetilde{n}}}\n\\newcommand\\tilO{{\\widetilde{O}}}\n\\newcommand\\tilP{{\\widetilde{P}}}\n\\newcommand\\tilp{{\\widetilde{p}}}\n\\newcommand\\tilq{{\\widetilde{q}}}\n\\newcommand\\tilQ{{\\widetilde{Q}}}\n\\newcommand\\tilR{{\\widetilde{R}}}\n\\newcommand\\tilr{{\\widetilde{r}}}\n\\newcommand\\tilS{{\\widetilde{S}}}\n\\newcommand\\tils{{\\widetilde{s}}}\n\\newcommand\\tilT{{\\widetilde{T}}}\n\\newcommand\\tilt{{\\widetilde{t}}}\n\\newcommand\\tilu{{\\widetilde{u}}}\n\\newcommand\\tilU{{\\widetilde{U}}}\n\\newcommand\\tilv{{\\widetilde{v}}}\n\\newcommand\\tilV{{\\widetilde{V}}}\n\\newcommand\\tilw{{\\widetilde{w}}}\n\\newcommand\\tilW{{\\widetilde{W}}}\n\\newcommand\\tilX{{\\widetilde{X}}}\n\\newcommand\\tilx{{\\widetilde{x}}}\n\\newcommand\\tily{{\\widetilde{y}}}\n\\newcommand\\tilY{{\\widetilde{Y}}}\n\\newcommand\\tilZ{{\\widetilde{Z}}}\n\\newcommand\\tilz{{\\widetilde{z}}}\n\n\\newcommand\\tilalp{{\\widetilde{\\alpha}}}\n\\newcommand\\tilbet{{\\widetilde{\\beta}}}\n\\newcommand\\tildel{{\\widetilde{\\delta}}}\n\\newcommand\\tilDel{{\\widetilde{\\Delta}}}\n\\newcommand\\tilchi{{\\widetilde{\\chi}}}\n\\newcommand\\tileta{{\\widetilde{\\eta}}}\n\\newcommand\\tilgam{{\\widetilde{\\gamma}}}\n\\newcommand\\tilGam{{\\widetilde{\\Gamma}}}\n\\newcommand\\tilome{{\\widetilde{\\ome}}}\n\\newcommand\\tillam{{\\widetilde{\\lam}}}\n\\newcommand\\tilmu{{\\widetilde{\\mu}}}\n\\newcommand\\tilphi{{\\widetilde{\\phi}}}\n\\newcommand\\tilpi{{\\widetilde{\\pi}}}\n\\newcommand\\tilpsi{{\\widetilde{\\psi}}}\n\\renewcommand\\tilome{{\\widetilde{\\ome}}}\n\\newcommand\\tilOme{{\\widetilde{\\Ome}}}\n\\newcommand\\tilPhi{{\\widetilde{\\Phi}}}\n\\newcommand\\tilQQ{{\\widetilde{\\QQ}}}\n\\newcommand\\tilrho{{\\widetilde{\\rho}}}\n\\newcommand\\tilsig{{\\widetilde{\\sig}}}\n\\newcommand\\tiltau{{\\widetilde{\\tau}}}\n\\newcommand\\tiltet{{\\widetilde{\\theta}}}\n\\newcommand\\tilvarphi{{\\widetilde{\\varphi}}}\n\
\newcommand\\tilxi{{\\widetilde{\\xi}}}\n\n\\newcommand{\\,\\hookrightarrow\\,}{\\,\\hookrightarrow\\,}\n\\newcommand{\\,\\longrightarrow\\,}{\\,\\longrightarrow\\,}\n\\newcommand{\\mapsto}{\\mapsto}\n\\newcommand{\\,\\,\\twoheadrightarrow\\,\\,}{\\,\\,\\twoheadrightarrow\\,\\,}\n\n\\newcommand\\twolongrightarrow{\\ \\hbox{$\\longrightarrow\\hskip -17pt\n\\longrightarrow$}\\ }\n\n\\newcommand\\x{\\times}\n\\newcommand\\ten{\\otimes}\n\n\\renewcommand{\\>}{\\rangle}\n\\newcommand{\\<}{\\langle}\n\n\n\\newcommand{\\rangle}{\\rangle}\n\\newcommand{\\langle}{\\langle}\n\n\\newcommand\\qlb{{\\overline \\QQ}_l}\n\\newcommand\\fr{\\text{Fr}}\n\\newcommand\\frw{\\fr_w}\n\\newcommand\\gln{{\\bf GL}(n)}\n\\newcommand\\wt{\\widetilde}\n\\newcommand\\chl{\\calK_{\\calL}}\n\\newcommand\\ch{\\text{ch}}\n\n\\newcommand{\\Phi\\hskip-7.5pt \\Phi}{\\Phi\\hskip-7.5pt \\Phi}\n\\renewcommand{\\oPhi}{{\\overline \\Phi}}\n\\renewcommand{\\gl}{{\\bf GL}}\n\\newcommand{{\\hat GL(n,F)}}{{\\hat GL(n,F)}}\n\\renewcommand{\\Id}{\\text{Id}}\n\\newcommand{\\operatorname{Irr}}{\\operatorname{Irr}}\n\\newcommand\\gd{{\\bfG}^{\\vee}}\n\\newcommand\\cic{C^{\\infty}_c}\n\\renewcommand\\Spec{\\operatorname{Spec}}\n\n\\newcommand\\tbY{{\\widetilde \\bfY}}\n\\newcommand\\tbf{{\\widetilde \\bff}}\n\\newcommand\\tbp{{\\widetilde \\bfp}}\n\\newcommand\\obY{{\\overline \\bfY}}\n\\newcommand\\obf{{\\overline \\bff}}\n\\newcommand\\obp{{\\overline \\bfp}}\n\\renewcommand\\oome{{\\overline \\ome}}\n\\newcommand\\tbphi{{\\widetilde \\Phi\\hskip-7.5pt \\Phi}}\n\\newcommand\\obPhi{{\\overline \\Phi\\hskip-7.5pt \\Phi}}\n\\newcommand\\tbG{{\\widetilde \\bfG}}\n\\newcommand\\obT{{\\overline \\bfT}_\\rho}\n\\newcommand\\arlam{{\\overset{\\to}\\lam}}\n\\newcommand\\tilLam{{\\widetilde \\Lam}}\n\\newcommand\\tbT{{\\widetilde 
\\bfT}}\n\n\n\\newcommand\\nc{\\newcommand}\n\\renewcommand{\\gd}{\\grg^{\\vee}}\n\\newcommand{\\alp^{\\vee}}{\\alp^{\\vee}}\n\\newcommand{G(\\calO)}{G(\\calO)}\n\\newcommand{M(\\calO)}{M(\\calO)}\n\\newcommand{\\rho^{\\vee}}{\\rho^{\\vee}}\n\\newcommand{{\\overline \\gg}}{{\\overline \\gg}}\n\\newcommand{\\ogg^{\\lam}}{{\\overline \\gg}^{\\lam}}\n\\newcommand{\\text{Perv}_{\\go}(\\gg)}{\\text{Perv}_{G(\\calO)}(\\gg)}\n\\newcommand{{\\operatorname{IC}}}{{\\operatorname{IC}}}\n\\newcommand{\\operatorname{Rep}}{\\operatorname{Rep}}\n\\newcommand{{\\widetilde \\theta}}{{\\widetilde \\theta}}\n\\newcommand\\du{\\check{\\ }}\n\\newcommand\\gr{\\operatorname{gr}}\n\n\\newcommand{\\overline{\\mathbb A}{}}{\\overline{\\mathbb A}{}}\n\n\\newcommand{{\\mathbf d}}{{\\mathbf d}}\n\n\n\\newcommand{{\\hat{\\mathfrak g}^{\\vee}}}{{\\hat{\\mathfrak g}^{\\vee}}}\n\\newcommand{{G\\check{\\ }}}{{G\\check{\\ }}}\n\\newcommand{{{\\mathfrak h}^{\\vee}}}{{{\\mathfrak h}^{\\vee}}}\n\n\\newcommand{\\overset{\\circ}{Ufl}{}}{\\overset{\\circ}{Ufl}{}}\n\\renewcommand{\\oF}{\\overset{\\circf}{\\mathfrak F}{}}\n\\newcommand{{\\stackrel{\\sim}{\\longrightarrow}}}{{\\stackrel{\\sim}{\\longrightarrow}}}\n\n\\nc\\aff{\\operatorname{aff}}\n\\nc\\oGr{\\overline{{\\mathsf {Gr}}}}\n\\nc\\Bun{\\operatorname{Bun}}\n\\nc\\hgrg{\\widehat{\\grg}}\n\\renewcommand\\Int{\\operatorname{Int}}\n\\nc\\bInt{\\overline{\\Int}}\n\\nc\\hatLam{\\widehat{\\Lam}}\n\\nc\\bmu{\\overline{\\mu}}\n\\nc\\bnu{\\overline{\\nu}}\n\\nc\\blambda{\\overline{\\lam}}\n\\nc\\btau{{{t}}}\n\\nc\\bseta{{\\boldsymbol{\\eta}}}\n\\renewcommand\\SL{\\operatorname{SL}}\n\\nc\\ocalW{\\overline{\\calW}}\n\\nc\\pos{\\operatorname{pos}}\n\\nc\\IH{\\operatorname{IH}}\n\\nc\\Rep{\\operatorname{Rep}}\n\\nc\\Gal{\\operatorname{Gal}}\n\\nc{\\tilGr}{\\widetilde{{\\mathsf 
{Gr}}}}\n\\renewcommand\\overset{\\circ}{Ufl}{}{\\operatorname{Maps}}\n\\nc\\Pic{\\operatorname{Pic}}\n\n\n\n\\emergencystretch=2cm\n\n\\nc{\\HC}{{\\mathcal{HC}}}\n\\nc{\\on}{\\operatorname}\n\\nc{{\\mathbb A}}{{\\mathbb{A}}}\n\\nc{{\\mathbb C}}{{\\mathbb{C}}}\n\\nc{\\BG}{{\\mathbb{G}}}\n\\nc{\\BM}{{\\mathbb{M}}}\n\\nc{\\BMt}{\\BM_\\tau}\n\\nc{{\\mathbb N}}{{\\mathbb{N}}}\n\\nc{\\BQ}{{\\mathbb{Q}}}\n\\nc{{\\mathbb P}}{{\\mathbb{P}}}\n\\nc{\\BR}{{\\mathbb{R}}}\n\\nc{{\\mathbb Z}}{{\\mathbb{Z}}}\n\\nc{\\BS}{{\\mathbb{S}}}\n\n\\nc{\\CA}{{\\mathcal{A}}}\n\\nc{\\CB}{{\\mathcal{B}}}\n\\nc{\\CalC}{{\\mathcal C}}\n\\nc{\\CalD}{{\\mathcal D}}\n\\nc{\\CE}{{\\mathcal{E}}}\n\\nc{{\\mathcal F}}{{\\mathcal{F}}}\n\\nc{\\CG}{{\\mathcal{G}}}\n\\nc{\\CH}{{\\mathcal{H}}}\n\\nc{\\CK}{{\\mathcal{K}}}\n\\nc{\\CL}{{\\mathcal{L}}}\n\\nc{\\CM}{{\\mathcal{M}}}\n\\nc{\\CMM}{{\\mathcal{M}^{\\operatorname{gen}}_\\hbar(-\\rho)}}\n\\nc{\\CN}{{\\mathcal{N}}}\n\\nc{\\CO}{{\\mathcal{O}}}\n\\nc{\\CP}{{\\mathcal{P}}}\n\\nc{\\CQ}{{\\mathcal{Q}}}\n\\nc{\\CR}{{\\mathcal{R}}}\n\\nc{\\CS}{{\\mathcal{S}}}\n\\nc{\\CT}{{\\mathcal{T}}}\n\\nc{\\CU}{{\\mathcal{U}}}\n\\nc{\\CV}{{\\mathcal{V}}}\n\\nc{\\CW}{{\\mathcal{W}}}\n\\nc{\\CX}{{\\mathcal{X}}}\n\\nc{\\CY}{{\\mathcal{Y}}}\n\\nc{\\CZ}{{\\mathcal{Z}}}\n\n\\nc{\\gen}{{\\operatorname{gen}}}\n\\nc{\\cM}{{\\check{\\mathcal M}}{}}\n\\nc{\\csM}{{\\check{\\mathcal A}}{}}\n\\nc{\\obM}{{^G\\!{\\mathsf M}}}\n\\nc{\\obMt}{\\obM_\\tau}\n\\nc{\\oCA}{{\\overset{\\circ}{\\mathcal A}}{}}\n\\nc{\\obA}{{\\overset{\\circ}{\\mathbf A}}{}}\n\\nc{\\ooM}{{\\overset{\\circ}{M}}{}}\n\\nc{\\GM}{{^G\\!{\\mathsf M}}}\n\\nc{\\osR}{{^G\\!{\\mathsf R}}}\n\\nc{\\GMt}{\\GM_\\tau}\n\\nc{\\osRt}{\\osR_\\tau}\n\\nc{\\vM}{{\\overset{\\bullet}{\\mathcal M}}{}}\n\\nc{\\nM}{{\\underset{\\bullet}{\\mathcal M}}{}}\n\\nc{\\obD}{{\\overset{\\circ}{\\mathbf D}}{}}\n\\nc{\\cp}{{\\overset{\\circ}{\\mathbf p}}{}}\n\\nc{\\ofZ}{{\\overset{\\circ}{\\mathfrak 
Z}}{}}\n\n\\nc{\\fa}{{\\mathfrak{a}}}\n\\nc{\\fb}{{\\mathfrak{b}}}\n\\nc{{\\mathfrak g}}{{\\mathfrak{g}}}\n\\nc{\\fgl}{{\\mathfrak{gl}}}\n\\nc{\\fh}{{\\mathfrak{h}}}\n\\nc{\\fri}{{\\mathfrak{i}}}\n\\nc{\\fj}{{\\mathfrak{j}}}\n\\nc{\\fm}{{\\mathfrak{m}}}\n\\nc{\\fn}{{\\mathfrak{n}}}\n\\nc{\\fu}{{\\mathfrak{u}}}\n\\nc{\\fp}{{\\mathfrak{p}}}\n\\nc{\\frr}{{\\mathfrak{r}}}\n\\nc{\\fs}{{\\mathfrak{s}}}\n\\nc{\\ft}{{\\mathfrak{t}}}\n\\nc{\\fT}{{\\mathfrak{T}}}\n\\nc{\\ofT}{{\\overline{\\mathfrak T}}}\n\\nc{\\ofS}{{\\overline{\\mathfrak S}}}\n\\nc{\\fsl}{{\\mathfrak{sl}}}\n\\nc{\\hsl}{{\\widehat{\\mathfrak{sl}}}}\n\\nc{\\hgl}{{\\widehat{\\mathfrak{gl}}}}\n\\nc{{\\hat{\\mathfrak g}}}{{\\widehat{\\mathfrak{g}}}}\n\\nc{\\chg}{{\\widehat{\\mathfrak{g}}}{}^\\vee}\n\\nc{\\hn}{{\\widehat{\\mathfrak{n}}}}\n\\nc{\\chn}{{\\widehat{\\mathfrak{n}}}{}^\\vee}\n\n\\nc{\\fA}{{\\mathfrak{A}}}\n\\nc{\\fB}{{\\mathfrak{B}}}\n\\nc{\\fD}{{\\mathfrak{D}}}\n\\nc{\\fE}{{\\mathfrak{E}}}\n\\nc{\\fF}{{\\mathfrak{F}}}\n\\nc{\\fG}{{\\mathfrak{G}}}\n\\nc{\\fI}{{\\mathfrak{I}}}\n\\nc{\\fJ}{{\\mathfrak{J}}}\n\\nc{\\fK}{{\\mathfrak{K}}}\n\\nc{\\fL}{{\\mathfrak{L}}}\n\\nc{\\fM}{{\\mathfrak{M}}}\n\\nc{\\fN}{{\\mathfrak{N}}}\n\\nc{\\frP}{{\\mathfrak{P}}}\n\\nc{\\fS}{{\\mathfrak 
S}}\n\\nc{\\fU}{{\\mathfrak{U}}}\n\\nc{\\fZ}{{\\mathfrak{Z}}}\n\n\\nc{\\bb}{{\\mathbf{b}}}\n\\nc{\\mathbf{c}}{{\\mathbf{c}}}\n\\nc{\\be}{{\\mathbf{e}}}\n\\nc{\\bj}{{\\mathbf{j}}}\n\\nc{\\bn}{{\\mathbf{n}}}\n\\nc{\\bp}{{\\mathbf{p}}}\n\\nc{\\bq}{{\\mathbf{q}}}\n\\nc{\\bv}{{\\mathbf{v}}}\n\\nc{\\bx}{{\\mathbf{x}}}\n\\nc{\\by}{{\\mathbf{y}}}\n\\nc{\\bw}{{\\mathbf{w}}}\n\\nc{\\bA}{{\\mathbf{A}}}\n\\nc{\\bB}{{\\mathbf{B}}}\n\\nc{\\bC}{{\\mathbf{C}}}\n\\nc{\\bK}{{\\mathbf{K}}}\n\\nc{\\bD}{{\\mathbf{D}}}\n\\nc{\\bH}{{\\mathbf{H}}}\n\\nc{\\bM}{{\\mathbf{M}}}\n\\nc{\\bN}{{\\mathbf{N}}}\n\\nc{\\bS}{{\\mathbf{S}}}\n\\nc{\\bT}{{\\mathbf{T}}}\n\\nc{\\bV}{{\\mathbf{V}}}\n\\nc{\\bW}{{\\mathbf{W}}}\n\\nc{\\bX}{{\\mathbf{X}}}\n\\nc{\\bP}{{\\mathbf{P}}}\n\\nc{\\bQ}{{\\mathbf{Q}}}\n\\nc{\\bZ}{{\\mathbf{Z}}}\n\n\\nc{\\sA}{{\\mathsf{A}}}\n\\nc{\\sB}{{\\mathsf{B}}}\n\\nc{\\sC}{{\\mathsf{C}}}\n\\nc{\\sD}{{\\mathsf{D}}}\n\\nc{\\sF}{{\\mathsf{F}}}\n\\nc{\\sK}{{\\mathsf{K}}}\n\\nc{\\sM}{{\\mathsf{M}}}\n\\nc{\\sO}{{\\mathsf{O}}}\n\\nc{\\sQ}{{\\mathsf{Q}}}\n\\nc{\\sP}{{\\mathsf{P}}}\n\\nc{\\sV}{{\\mathsf{V}}}\n\\nc{\\sW}{{\\mathsf{W}}}\n\\nc{\\sZ}{{\\mathsf{Z}}}\n\\nc{\\sfp}{{\\mathsf{p}}}\n\\nc{\\sr}{{\\mathsf{r}}}\n\\nc{\\sfb}{{\\mathsf{b}}}\n\\nc{\\sfc}{{\\mathsf{c}}}\n\\nc{\\sd}{{\\mathsf{d}}}\n\\nc{\\sg}{{\\mathsf{g}}}\n\\nc{\\sfl}{{\\mathsf{l}}}\n\n\\nc{\\BK}{{\\bar{K}}}\n\n\\nc{\\tA}{{\\widetilde{\\mathbf{A}}}}\n\\nc{\\tB}{{\\widetilde{\\mathcal{B}}}}\n\\nc{\\tg}{{\\widetilde{\\mathfrak{g}}}}\n\\nc{\\tG}{{\\widetilde{G}}}\n\\nc{\\TM}{{\\widetilde{\\mathbb{M}}}{}}\n\\nc{\\tO}{{\\widetilde{\\mathsf{O}}}{}}\n\\nc{\\tU}{{\\widetilde{\\mathfrak{U}}}{}}\n\\nc{\\TZ}{{\\tilde{Z}}}\n\\nc{\\tZ}{\\widetilde{Z}{}}\n\\nc{\\tx}{{\\tilde{x}}}\n\\nc{\\tbv}{{\\tilde{\\bv}}}\n\\nc{\\tfP}{{\\widetilde{\\mathfrak{P}}}{}}\n\\nc{\\tz}{{\\tilde{\\zeta}}}\n\\nc{\\tmu}{{\\tilde{\\mu}}}\n\n\\nc{\\td}{\\ddot{\\underline{d}}{}}\n\\nc{\\tzeta}{\\widetilde{\\zeta}{}}\n\\nc{\\hd}{{\\widehat{\\underline{d}}}}\n\\nc{\\
hG}{{\\widehat{G}}}\n\\nc{\\hBP}{\\widehat{\\mathbb P}{}}\n\\nc{\\hQ}{{\\widehat{Q}}}\n\\nc{\\UM}{{^U\\!{\\mathsf M}}}\n\\nc{\\hsR}{{^U\\!{\\mathsf R}}}\n\\nc{\\UMt}{\\UM_\\tau}\n\\nc{\\hsRt}{\\hsR_\\tau}\n\\nc{\\hfM}{\\widehat{\\mathfrak M}{}}\n\\nc{\\hbM}{{^U\\!{\\mathsf M}}}\n\\nc{\\hbMt}{\\hbM_\\tau}\n\\nc{\\hCP}{\\widehat{\\mathcal P}{}}\n\\nc{\\hCR}{\\widehat{\\mathcal R}{}}\n\\nc{\\hCS}{{\\widehat{\\mathcal S}}}\n\\nc{\\hfZ}{\\widehat{\\mathfrak Z}{}}\n\n\\nc{\\urho}{\\underline{\\rho}}\n\\nc{\\uB}{\\underline{B}}\n\\nc{\\uC}{{\\underline{\\mathbb{C}}}}\n\\nc{\\ui}{\\underline{i}}\n\\nc{\\ofP}{{\\overline{\\mathfrak{P}}}}\n\n\\nc{\\hrho}{{\\hat{\\rho}}}\n\n\\nc{\\unl}{\\underline}\n\\nc{\\ol}{\\overline}\n\\nc{\\one}{{\\mathbf{1}}}\n\\nc{\\two}{{\\mathbf{t}}}\n\n\\nc{\\Tot}{{\\mathop{\\operatorname{\\rm Tot}}}}\n\\nc{\\Hilb}{{\\mathop{\\operatorname{\\rm Hilb}}}}\n\\nc{\\CHom}{{\\mathop{\\operatorname{{\\mathcal{H}}\\it om}}}}\n\\nc{\\defi}{{\\mathop{\\operatorname{\\rm def}}}}\n\\nc{\\length}{{\\mathop{\\operatorname{\\rm length}}}}\n\n\\nc{\\Cliff}{{\\mathsf{Cliff}}}\n\\nc{\\Fl}{{\\mathsf{Fl}}}\n\\nc{\\Fib}{{\\mathsf{Fib}}}\n\\nc{\\Coh}{{\\mathsf{Coh}}}\n\\nc{\\FCoh}{{\\mathsf{FCoh}}}\n\n\n\\nc{\\cplus}{{\\mathbf{C}_+}}\n\\nc{\\cminus}{{\\mathbf{C}_-}}\n\\nc{\\cthree}{{\\mathbf{C}_*}}\n\\nc{\\Qbar}{{\\bar{Q}}}\n\n\\nc{\\bh}{{\\bar{h}}}\n\\nc{\\bOmega}{{\\overline{\\Omega}}}\n\\nc\\tGr{\\widetilde{{\\mathsf {Gr}}}}\n\n\\nc{\\seq}[1]{\\stackrel{#1}{\\sim}}\n\\nc\\ogu{\\overline{G\/U}}\n\\nc\\chlam{\\check{\\lam}}\n\n\\nc\\St{\\operatorname{St}}\n\n\\nc\\uS{\\underline{S}}\n\\nc\\QM{\\mathcal{QM}}\n\\nc\\BPt{{\\mathbb P}^2_\\tau}\n\\nc\\bpt{{\\mathbf{Q}}_\\tau}\n\\nc\\mm{\\mu_M}\n\\nc\\mg{\\mu_G}\n\\nc\\mt{\\mu_\\tet}\n\\nc\\ttt{(\\tet,\\tet')}\n\\nc\\Mtt{\\CM_\\tau^{\\ttt}}\n\\nc\\tnt{(\\tet^0,\\tet^1)}\n\\nc\\Mt{\\CM_\\tau^{\\tnt}}\n\\nc\\VE{V_\\bullet(E)}\n\\nc\\bu{\\bullet}\n\\nc\\WF{W_\\bu(F)}\n\\nc\\sk{\\enskip}\n\\nc\\dmn{(n-d(d-1)\/2, 2n-d^2+r, 
n-d(d+1)\/2)}\n\\nc\\Fix{\\GM^T}\n\\nc\\Fixt{\\GMt^T}\n\\newcommand\\ic{\\operatorname{IC}}\n\\nc\\cm{\\operatorname{CM}}\n\\nc\\oCM{{^D{\\mathsf M}}}\n\\nc\\icm{\\ic(\\oCM^n)}\n\\nc\\tlam{{\\bar\\lam}}\n\\nc\\Sch{{\\mathsf{Sch}}}\n\\nc\\pg{{p\\ccirc\\gamma}}\n\\newcommand\\cd{\\!\\cdot\\!}\n\\nc\\ccirc{{{}_{\\,{}^{^\\circ}}}}\n\\newcommand{{\\overline{\\mathsf {Sch}}}{}}{{\\overline{\\mathsf {Sch}}}{}}\n\\newcommand{{\\mathsf {Gr}}}{{\\mathsf {Gr}}}\n\\newcommand{{\\mathsf {Mat}}}{{\\mathsf {Mat}}}\n\\newcommand{_{\\sf{reg}}}{_{\\sf{reg}}}\n\\newcommand{{^D{\\mathsf M}}}{{^D{\\mathsf M}}}\n\n\\newcommand{\\mathsf{R}_\\tau}{\\mathsf{R}_\\tau}\n\\newcommand{{}^A\\mathsf{M}_\\tau}{{}^A\\mathsf{M}_\\tau}\n\\newcommand{{}^A\\mathsf{R}_\\tau}{{}^A\\mathsf{R}_\\tau}\n\n\\newcommand{\\mathop{\\mathrm{alt}}}{\\mathop{\\mathrm{alt}}}\n\n\n\\setcounter{tocdepth}{1}\n\\begin{document}\n\\title{Intersection cohomology of the Uhlenbeck compactification \nof the Calogero--Moser space}\n\\author{Michael Finkelberg, Victor Ginzburg, Andrei Ionov and Alexander\nKuznetsov}\n\n\\dedicatory{To Joseph Bernstein on his 70th birthday,\nwith gratitude and admiration}\n\n\n\n\\begin{abstract}\nWe study the natural Gieseker and Uhlenbeck compactifications of the rational\nCalogero--Moser phase space. \nThe Gieseker compactification is smooth and provides a small\nresolution of the Uhlenbeck compactification. We use the resolution\nto compute\nthe stalks of the IC-sheaf of the Uhlenbeck compactification.\n\\end{abstract}\n\\maketitle\n\n\\begin{flushright}\n{\\small{\\textit{I'd\nsay that if one can compute the Poincar\\'e polynomial\\\\\n for intersection \ncohomology without a computer then,\\\\ probably, there is a small\nresolution which gives it.}}\\\\\n(J. 
Bernstein)\\hskip 20mm\\hphantom{x}}\n\\end{flushright}\n\n\n\\sec{int}{Introduction}\n\n\\ssec{calmo}{The Calogero--Moser space}\n\nThe {\\sf Calogero--Moser space} $\\sM^n$~\\cite{kks} is the quotient modulo a free action\nof $\\PGL_n$ of the space of pairs of complex $n\\times n$-matrices $(X,Y)$ such that\n$[X,Y]-\\on{Id}$ has rank 1.\nThe Calogero--Moser space is a smooth connected affine algebraic variety of\ndimension $2n$~\\cite{W}.\n\n\n\n\n\\ssec{uhgi}{The Gieseker and Uhlenbeck compactifications}\nMore generally, for a parameter $\\tau\\in{\\mathbb C}^\\times$,\n we consider a\ngraded algebra $A^\\tau$ with generators\n$x,y,z$, of degree 1, and the following commutation relations \n\\begin{equation}\\label{A-relations}\n[x,z]=[y,z]=0,\\quad [x,y]=\\tau z^2.\n\\end {equation}\nThis algebra is a very special case of the Sklyanin algebras\nstudied in~\\cite{ns}, specifically, it corresponds to the case \nof a degenerate plane cubic curve equal\nto a triple line.\nWe set ${\\mathbb P}^2_\\tau=\\mathsf{Proj}(A^\\tau)$, a non-commutative $\\mathsf{Proj}$\nin the sense of~\\cite{AZ}, see also~\\cite{KKO}, and\nwrite $\\mathsf{coh}({\\mathbb P}^2_\\tau)=\\mathsf{qgr}(A^\\tau)$ for the corresponding\nabelian category $\\mathsf{coh}(\\BPt)$ of ``coherent\nsheaves''. Associated with an object\n$E\\in \\mathsf{coh}(\\BPt)$ there is a well defined triple\n$(r=\\rk E,\\,d=\\deg E,\\,n=c_2(E))$, of nonnegative integers,\nthe rank, the degree, and the\nsecond Chern class of $E$, respectively.\n\nGiven a triple $(r,d,n)$, where $r$ and $d$ are coprime,\nwe introduce two different moduli spaces, $\\GMt(r,d,n)$ and\n$\\UMt(r,d,n)$, of coherent sheaves on\n${\\mathbb P}^2_\\tau$. These moduli spaces are\ndefined by stability\nconditions. 
The moduli space\n$\\GMt(r,d,n)$,\n the {\\em Gieseker} moduli space, is defined using Gieseker stability.\n The moduli space $\\UMt(r,d,n)$, the {\\em Uhlenbeck} moduli space,\n is defined using Mumford stability.\nThese moduli spaces are projective varieties\nwhich provide two different compactifications of the\nmoduli space of locally free sheaves.\nThe variety $\\GMt(r,d,n)$ is a particular\ncase of moduli spaces studied in~\\cite{ns} (cf. also~\\cite{NV})\nin greater generality.\nThe variety $\\UMt(r,d,n)$ is more mysterious; it does not\nfit into the framework of~\\cite{ns} and it has not been considered\nthere. In fact, even in the commutative case,\na satisfactory construction of the Uhlenbeck compactification of \nthe moduli space of locally free sheaves on an arbitrary smooth \nsurface\nis still not known; cf. \\cite{BFG}.\nIn the case of interest here, that of the noncommutative \nsurface $\\BPt$, the variety $\\UMt(r,d,n)$ will be studied in Section~2. In particular, using an interpretation of our moduli spaces\nin terms of certain moduli spaces of quiver representations,\nwe construct a projective morphism \n$\\gamma_\\tau:\\ \\GMt(r,d,n)\\to \\UMt(r,d,n)$.\nThis morphism turns out to be a resolution of singularities,\nprovided $r$ and $d$ are coprime. 
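The rank-one commutator condition that cuts out the Calogero--Moser space in \S1.1 is easy to test directly. The following sketch is purely illustrative and not from the paper: the matrices form a hypothetical $n=2$ point of $\sM^n$, and the rescaling at the end is the map $(X,Y)\mapsto(\frac{1}{\tau}X,Y)$, run in reverse, relating $\sM^n$ to its $\tau$-deformed version.

```python
import numpy as np

# Hypothetical n = 2 representative of a point of the Calogero--Moser
# space: a pair (X, Y) of complex matrices with rank([X, Y] - Id) = 1
# (PGL_n acts by simultaneous conjugation; we only exhibit one pair).
X = np.array([[0.0, 0.0],
              [0.0, 1.0]])
Y = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

comm = X @ Y - Y @ X                 # the commutator [X, Y]
assert np.linalg.matrix_rank(comm - np.eye(2)) == 1

# Rescaling: if rank([X, Y] - Id) = 1, then (tau * X, Y) satisfies the
# tau-version rank([X', Y'] - tau * Id) = 1, illustrating the canonical
# isomorphism between the two descriptions.
tau = 3.0
X_tau, Y_tau = tau * X, Y
assert np.linalg.matrix_rank(X_tau @ Y_tau - Y_tau @ X_tau - tau * np.eye(2)) == 1
```

The choice of `X` and `Y` is arbitrary; any simultaneous conjugate of this pair works equally well.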
\n\nIn this paper, we will mostly be interested \nin the case where $r=1,\\ d=0$, and $\\tau\\neq 0$.\nThe moduli space of locally free sheaves $\\CE$ on $\\BPt$ such that\n$\\rk\\CE=1,\\ \\deg\\CE=0$, and $c_2(\\CE)=n$ has an ADHM type description.\nSpecifically, according to~\\cite{NS} and \\cite{KKO}, this moduli space is\nisomorphic to the variety $\\sM^n_\\tau$ defined as\na quotient of the space of pairs $(X,Y)$,\nof $n\\times n$-matrices such that\n$\\rk([X,Y]-\\tau\\on{Id})=1$, by the (free) action\nof the group $\\PGL_n$ by conjugation.\n Note that the rescaling map $(X,Y)\\mapsto(\\frac{1}{\\tau} X,Y)$\ngives a canonical isomorphism of $\\sM^n_\\tau$\nwith the Calogero--Moser space\n$\\sM^n$. Therefore, the varieties $\\GM^n_\\tau=\\GMt(1,0,n)$ and\n$\\UM^n_\\tau=\\UMt(1,0,n)$ provide two different compactifications of the\nCalogero--Moser space. Since $1$ and $0$ are coprime,\n the corresponding morphism $\\gamma_\\tau:\\ \\GM^n_\\tau\\to\\UM^n_\\tau$ is\n a resolution of singularities.\nMoreover, we show that this morphism is {\\em small} in the sense\nof Goresky--MacPherson. \n\n\nOne can allow the parameter $\\tau$ to vary in ${\\mathbb A}^1$.\nSimilarly to the above, one \nconstructs the family of Gieseker, resp. Uhlenbeck, compactifications\n$\\GM^n$,\nresp. $\\UM^n$,\nequipped with maps to $\\AA^1$ such that the fibers over the point $\\tau \\in \\AA^1 \\setminus \\{0\\}$\nare $\\GM^n_\\tau$, resp. $\\UM^n_\\tau$. Furthermore, we construct\na small resolution of singularities $\\gamma:\\ \\GM^n \\to \\UM^n$. In fact, over $\\AA^1 \\setminus \\{0\\}$\nthe maps $\\gamma_\\tau:\\ \\GM^n_\\tau\\to\\UM^n_\\tau$ are identified with the maps discussed above, \nwhile the fiber over $\\tau=0$ is\nthe Hilbert--Chow morphism \n$\\gamma_0:\\ \\on{Hilb}^n{\\mathbb P}^2\\to S^n{\\mathbb P}^2=({\\mathbb P}^2)^n\/\\fS_n$.\n\nIt is well known that $\\gamma_0$ is only {\\em semismall}. 
The reason for this\ndifference between $\\gamma_0$ and $\\gamma_\\tau,\\ \\tau\\ne0$, lies in the\ndifference between the stratifications of the commutative and noncommutative\nUhlenbeck compactifications, respectively. Namely, we have a distinguished (classical,\ncommutative) projective line subscheme ${\\mathbb P}^1\\subset{\\mathbb P}^2_\\tau$, and\nsimilarly,\nwe have ${\\mathbb P}^1\\subset{\\mathbb P}^2$ such that ${\\mathbb P}^2\\setminus{\\mathbb P}^1={\\mathbb A}^2$.\nThere is a stratification\n\\begin{equation*}\n\\UM^n_\\tau=\\bigsqcup_{0\\leq m\\leq n}\\sM^m_\\tau\\times S^{n-m}{\\mathbb P}^1,\\qquad\\text{for $\\tau\\ne0$},\n\\end{equation*}\nand $\\gamma_\\tau$ is an isomorphism over the open part $\\sM^n_\\tau$.\nSimilarly, we have a stratification\n\\begin{equation*}\n\\UM^n_0=S^n{\\mathbb P}^2=\\bigsqcup_{0\\leq m\\leq n}S^m{\\mathbb A}^2\\times S^{n-m}{\\mathbb P}^1,\n\\end{equation*}\nbut $\\gamma_0$ is {\\em not} an isomorphism over $S^n{\\mathbb A}^2$, only a semismall\nresolution of singularities.\n\nLet us remark that readers familiar with the classical Uhlenbeck\ncompactifications might expect another stratification, with strata of $\\UM^n_\\tau$ being\n$\\sM^m_\\tau\\times S^{n-m}{\\mathbb P}^2_\\tau$\n(the reason for the semismallness of the classical Gieseker resolution); however, ${\\mathbb P}^2_\\tau$\nis {\\em not} a classical scheme, so only its ``classical part'' ${\\mathbb P}^1$\nsurvives in the classical moduli scheme $\\UM^n_\\tau$, yielding the\nstratification of the previous paragraph.\n\nNote also that the Gieseker moduli spaces of~\\cite{ns} carry a natural\nPoisson structure. 
We expect it to descend to the Uhlenbeck compactification.\nHowever, even normality of $\UM^n_\tau$ seems to be a hard question, and it\nis beyond the scope of this paper.\n\n\newcommand{\operatorname{\mathbf{h}}}{\operatorname{\mathbf{h}}}\n\n\ssec{gormac}{The main Theorem}\n\n\nLet $\frP(n)$ denote the set of partitions of an integer $n\geq0$ and for an algebraic variety $T$ and a partition $\lambda = (\lambda_1,\dots,\lambda_l)$ put\n\begin{equation*}\label{diagonal-stratification}\nS_\lambda T = \{ \textstyle\sum \lambda_i P_i\ |\ P_1 \ne P_2 \ne \dots \ne P_l \in T \} \subset S^nT=T^n\/\fS_n,\n\end{equation*}\nso that \n$S^n T = \bigsqcup_{\lambda \in \frP(n)} S_\lambda T$\nis a stratification, which we call the {\sf diagonal stratification}. \nLet $\ic(\UM^n_\tau)$ be the IC sheaf (see~\cite{bbd}) \nof the Uhlenbeck compactification.\nOur main result is the computation of the stalks of the IC sheaf.\n\n\n\n\th{main}\nThe IC sheaf of the Uhlenbeck compactification is\nsmooth along the stratification\n\begin{equation*}\n\UM^n_\tau = \bigsqcup_{\substack{0 \le m \le n \\[.5ex] \lambda \in \frP(n-m)}} \left(\sM^m \times S_\lambda\PP^1\right).\n\end{equation*}\nFor $0 \le m \le n$ and $\lambda = (\lambda_1,\dots,\lambda_k) \in \frP(n-m)$,\nthe stalk of the sheaf $\ic(\UM^n_\tau)$ at a point of a stratum \n$\sM^m\times S_\lambda{\mathbb P}^1$ is isomorphic to\n\begin{equation}\label{heis}\n\bigotimes_{i=1}^k \left( \bigoplus_{\mu \in \frP(\lambda_i)} {\mathbb C}[2l(\mu)] \right)[2m]\n\end{equation}\nas a graded vector space.\n\eth\n\nThe proof employs the small resolution of the family\n$\gamma:\ \GM^n\to\UM^n$ and reduces the study of the fibers\nfor $\tau\ne0$ to the well known properties of the fibers of the Hilbert--Chow\nmorphism for $\tau=0$.
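The graded vector space \eqref{heis} is easy to tabulate: up to the overall shift $[2m]$, its Poincar\'e polynomial is the product over the parts $\lambda_i$ of the generating function of partitions of $\lambda_i$ counted by length. The following short script is our own illustration (the helpers `partitions` and `stalk_multiplicities` are not from the paper); e.g. for $\lambda=(3)$ the partitions $(3)$, $(2,1)$, $(1,1,1)$ contribute ${\mathbb C}[2]\oplus{\mathbb C}[4]\oplus{\mathbb C}[6]$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, largest=None):
    """All partitions of n, as weakly decreasing tuples."""
    if largest is None:
        largest = n
    if n == 0:
        return ((),)
    result = []
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            result.append((first,) + rest)
    return tuple(result)

def stalk_multiplicities(lam):
    """c[j] = multiplicity of C[2j] in the tensor product over the parts
    lambda_i of sum_{mu in P(lambda_i)} C[2 l(mu)] (the shift [2m] omitted)."""
    poly = [1]
    for part in lam:
        factor = [0] * (part + 1)
        for mu in partitions(part):
            factor[len(mu)] += 1          # one summand C[2 l(mu)] per mu
        conv = [0] * (len(poly) + len(factor) - 1)
        for u, cu in enumerate(poly):
            for v, cv in enumerate(factor):
                conv[u + v] += cu * cv
        poly = conv
    return poly

print(stalk_multiplicities((3,)))        # [0, 1, 1, 1]
print(sum(stalk_multiplicities((3, 2))))  # 6, i.e. p(3) * p(2)
```

In particular, the total dimension of the stalk is the product of the partition numbers $p(\lambda_i)$.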
\n\n\\begin{Rem} Given a complex semisimple simply connected group $G$ one can \nconsider the moduli\nspace of $G$-bundles on ${\\mathbb P}^2$ equipped with a trivialization\nat the infinite line ${\\mathbb P}^1\\sset{\\mathbb P}^2$.\nThere is also an Uhlenbeck space ${\\mathcal U}_G({\\mathbb A}^2)$ that contains\nthe above moduli space as a Zariski open subset\n(the variety ${\\mathcal U}_G({\\mathbb A}^2)$ is not proper in this setting).\nAssume that the group $G$ is almost simple,\nand let $G_{\\on{aff}}$ be the affinization of $G$, a Kac-Moody group\nsuch that ${\\mathfrak g} _{\\on{aff}}=\\textrm{Lie}(G_{\\on{aff}})$ is an affine Lie algebra.\nThen, the Uhlenbeck space ${\\mathcal U}_G({\\mathbb A}^2)$ may be viewed\nas a slice in the affine Grassmannian for the group $G_{\\on{aff}}$,\nthat is, in the double affine Grassmannian for $G$~\\cite{bf}.\nThe IC stalks of ${\\mathcal U}_G({\\mathbb A}^2)$ may be identified, in\naccordance with the predictions based on the geometric Satake\ncorrespondence, with certain graded versions\nof the weight spaces of the basic integrable representation of\n${\\mathfrak g}_{\\on{aff}}^\\vee$, the Langlands dual of the Lie algebra ${\\mathfrak g} _{\\on{aff}}$.\nIn the simply-laced case, the Dynkin diagram of the Lie algebra\n ${\\mathfrak g} _{\\on{aff}}$ is an affine Dynkin diagram of types\n $\\widetilde A,\\,\\widetilde D,\\,\\widetilde E$,\nand we have ${\\mathfrak g}_{\\on{aff}}^\\vee={\\mathfrak g}_{\\on{aff}}$.\n\n\nIt is often useful to view the graph with one vertex and one edge-loop\nat that vertex as a Dynkin diagram of type $\\widetilde A_0$.\nIt is known that the Kac-Moody Lie algebra associated with \n $\\widetilde A_0$ is the Heisenberg Lie algebra ${\\mathfrak H}$.\nBy definition, we have ${\\mathfrak H} := \\C\\delta\\ltimes\n\\widehat{\\C(\\!(t)\\!)}$,\nwhere $\\widehat{\\C(\\!(t)\\!)}$ is a central extension of the abelian Lie algebra\n$\\C(\\!(t)\\!)$ and $\\delta:=t\\frac{d}{dt}$, a derivation.\nThe Fock 
representation of ${\\mathfrak H}$ plays the role of the basic\nintegrable representation of an affine Lie algebra.\nThe tensor factors of the graded vector space in \\eqref{heis}\nmay be identified in a natural way with \ncertain weight spaces of the Fock space (the action of the derivation\n$\\delta$ \ngives a grading\non the Fock space).\nThis suggests, in view of the above, that\nour variety\n $\\UM^n_\\tau$ might play the role of a slice in\nsome kind of an affine Grassmannian for the Heisenberg group\nand Theorem \\ref{T:main} is\na manifestation of (a certain analogue of) the geometric Satake\ncorrespondence in the case of Dynkin diagram of type $\\widetilde A_0$.\n\\end{Rem} \n\n\n\n\n\\ssec{orga}{Organization of the paper}\nIn~\\refs{ch5} we study coherent sheaves on a noncommutative projective plane\nand the corresponding representations of a Kronecker-type quiver.\nWe introduce Gieseker and Mumford stabilities of sheaves and interpret them\nas stabilities of quiver representations. We construct the Gieseker and\nthe Uhlenbeck moduli spaces of sheaves as GIT moduli spaces of quiver\nrepresentations and a map $\\gamma$ between the moduli spaces as the map\ncoming from a variation of GIT quotients.\nIn~\\refs{ch6} we discuss the special case of sheaves of rank 1 and degree 0.\nIn this case the Gieseker and the Uhlenbeck moduli spaces are compactifications\nof the Calogero--Moser space. We investigate in detail the map $\\gamma$ between\nthe compactifications and compute the stalks of the IC sheaf on the Uhlenbeck compactification.\nIn the Appendix we provide proofs of some of the results of~\\refs{ch5}.\n\n\\medskip\n\n\\noindent\n{\\textbf{Notation.}}\\en Given a vector space $V$, we write $V^\\vee$ for\nthe dual vector space and $S^\\bullet V=\\oplus_{i\\geq 0}\\ S^i V$ for the\nSymmetric algebra of $V$.\n\n\\ssec{ackno}{Acknowledgments}\nThe study of M.F. and A.K. has been funded by the Russian Academic Excellence\nProject `5-100'. \nV.G. 
was supported in part by the NSF grant DMS-1303462.\nA.I. was supported by the grants NSh-5138.2014.1 and RFBR 15-01-09242.\nA.K. was partially supported by RFBR 14-01-00416, 15-01-02164, 15-51-50045 and NSh-2998.2014.1.\n\n\n\n\n\n\n\n\n\newcommand{{\widetilde{H}}}{{\widetilde{H}}}\n\n\n\n\sec{ch5}{Sheaves on the noncommutative plane $\BPt$ and quiver representations}\n\nTo construct the Gieseker and the Uhlenbeck compactifications of the \nCalogero-Moser space we use an interpretation of the latter as a moduli space \nof coherent sheaves on a noncommutative projective plane.\n\n\n\n\ssec{reco}{Sheaves on the noncommutative projective plane}\n\n\newcommand{\mathfrak{a}}{\mathfrak{a}}\n\nWe start with a slightly more invariant definition of the Calogero--Moser space.\nWe consider a symplectic vector space $H$ of dimension 2 with a symplectic form $\omega \in \Lambda^2H^\vee$,\na vector space $V$ of dimension $n$, a nonzero complex number $\tau \in \C^\times$,\nand consider the subvariety\n$\widetilde\sM_\tau(V) \subset \Hom(V,V\otimes H)$\ndefined by\n\begin{equation*}\n\widetilde\sM_\tau(V) = \{ \mathfrak{a} \in \Hom(V, V\otimes H)\ |\ \mathrm{rank}(\omega( \mathfrak{a} \circ \mathfrak{a}) - \tau\id_V) = 1 \},\n\end{equation*}\nwhere $\omega(\mathfrak{a}\circ \mathfrak{a})$ is defined as the composition\n$V \xrightarrow{\ \mathfrak{a}\ } V \otimes H \xrightarrow{\ \mathfrak{a} \otimes \id_H\ } V \otimes H \otimes H \xrightarrow{\ \id_V \otimes \omega\ } V$.\nThe group $\PGL(V)$ acts on $\widetilde\sM_\tau(V)$ by conjugation,\nand we define\n\begin{equation*}\n\sM_\tau(V) = \widetilde\sM_\tau(V)\/\PGL(V).\n\end{equation*}\nA choice of a symplectic basis in $H$ allows one to rewrite $\mathfrak{a}$ as a pair of operators $(X,Y)$;\nthen $\omega(\mathfrak{a}\circ \mathfrak{a})$ becomes $[X,Y]$, and so this definition\nagrees with the standard one.
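The identity $\omega(\mathfrak{a}\circ\mathfrak{a})=[X,Y]$ can be verified directly: writing $\mathfrak{a}(v)=Xv\otimes e_1+Yv\otimes e_2$ in a symplectic basis $e_1,e_2$ with $\omega(e_1,e_2)=1$, only the mixed tensor components survive the contraction with $\omega$. A quick numerical check (ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
X, Y = rng.standard_normal((n, n)), rng.standard_normal((n, n))

A = np.stack([X, Y])                         # a(v) = Xv (x) e1 + Yv (x) e2
omega = np.array([[0.0, 1.0], [-1.0, 0.0]])  # omega(e_g, e_h) in the basis e1, e2

# V --a--> V(x)H --a(x)id_H--> V(x)H(x)H --id_V(x)omega--> V:
# the composition equals sum_{g,h} omega(e_g, e_h) A_g A_h.
composed = np.einsum('gh,gab,hbc->ac', omega, A, A)

print(np.allclose(composed, X @ Y - Y @ X))   # True
```

The terms $e_1\otimes e_1$ and $e_2\otimes e_2$ are killed by $\omega$, leaving $XY-YX$.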
\n\n\nDenote ${\\widetilde{H}} := H \\oplus {\\mathbb C}$. We define a {\\sf twisted symmetric algebra of ${\\widetilde{H}}$} by\n\\begin{equation*}\nA^\\tau =\nS^\\bullet_\\tau{\\widetilde{H}} =\n{\\mathbb C}\\langle H \\oplus {\\mathbb C} z\\rangle\/\\langle [H,z]=0,\\ [h_1,h_2]=\\tau \\omega(h_1,h_2) z^2 \\rangle.\n\\end{equation*}\nChoosing a symplectic basis $x,y$ in $H$ the defining relations in\n$A^\\tau$\ntake the form \\eqref{A-relations}.\n\n\nThe algebra $A^\\tau$ is a graded noetherian algebra \nand we let\n\\begin{equation*}\n{\\mathbb P}^2_\\tau := \\mathsf{Proj}(A^\\tau)\n\\end{equation*}\nbe the non-commutative ``projective spectrum'' of \n$A^\\tau$ in the sense of~\\cite{AZ}. \nThe category of ``coherent sheaves'' on the non-commutative scheme\n${\\mathbb P}^2_\\tau$ is defined as\n$\\mathsf{coh}({\\mathbb P}^2_\\tau):=\\mathsf{qgr}(A^\\tau)$, a quotient of the\nabelian category\nof finitely generated graded $A^\\tau$-modules by the \nSerre subcategory of finite-dimensional modules.\nNote that the group $\\SL(H)$ acts on the algebra $A^\\tau$ by automorphisms.\nThe action on $A^\\tau$ induces an $\\SL(H)$-action \non the category of coherent sheaves $\\mathsf{coh}(\\BPt)$.\n\nAs it was shown in~\\cite{AZ,KKO,BGK,NS} and other papers, coherent sheaves\non such a noncommutative projective plane behave very similarly to those\non the usual (commutative) plane~${\\mathbb P}^2$. 
For instance, one can define\nthe cohomology spaces of sheaves, local $\underline{\Ext}$ sheaves,\nand the notions of torsion free and locally free sheaves; one has the sequence\n$\{\CO(i)\}_{i\in\ZZ}$ of ``line bundles'', one can prove Serre duality and\nconstruct the Beilinson spectral sequence.\n\nThe main differences from the commutative case are\n\begin{itemize}\n\item\nin general there is {\em no tensor product of sheaves} (due to the noncommutativity);\nhowever, one can tensor with $\CO(i)$ and thus define the twist functors $F \mapsto F(i)$\nsince sheaves $\CO(i)$ correspond to graded $A^\tau$-modules having a natural bimodule structure\n(alternatively, the twist functor can be thought of as the twist of the grading functor\nin the category of graded $A^\tau$-modules);\n\item\nthe dual of a sheaf on $\PP^2_\tau$ is a sheaf on\n$\mathsf{Proj}((A^\tau)^{\mathrm{opp}})$,\nthe ``opposite'' noncommutative projective plane;\nin fact, one has $\mathsf{Proj}((A^\tau)^{\mathrm{opp}})=\PP^2_{-\tau}$ since $(A^\tau)^{\mathrm{opp}} \cong A^{-\tau}$.\n\n\item\nthe noncommutative projective plane ${\mathbb P}^2_\tau$ {\em has fewer points} than the usual plane ${\mathbb P}^2$,\nand as a consequence the category $\mathsf{coh}({\mathbb P}^2_\tau)$ {\em has more locally free sheaves} than\n$\mathsf{coh}({\mathbb P}^2)$.\n\end{itemize}\n\n\n\n\n\n\nBelow, we summarize the results of \cite{AZ,BGK,KKO,NS} that we are\ngoing to\nuse later in the paper.\n\nBy \cite[Theorem 8.1(3)]{AZ},\nthe cohomology groups of the sheaves $\CO(i)$ are given by the\nfollowing formulas (similar to those in the commutative case):\n\n\begin{equation*}\nH^p(\BPt,\CO(i))=\n\begin{cases}\nA^\tau_i = S^i{\widetilde{H}}, &\text{if $p=0$ and $i\ge0$}\\\n(A^\tau_{-i-3})^\vee = S^{-i-3}{\widetilde{H}}^\vee, &\text{if $p=2$ and $i\le-3$}\\\n0 &\text{otherwise}\n\end{cases}\n\end{equation*}\n\nOne has a functorial Serre duality
isomorphism\n\\begin{equation*}\n\\Ext^i(E,F)\\cong\\Ext^{2-i}(F,E(-3))^\\vee.\n\\end{equation*}\n\nThe sheaves $(\\CO(-2),\\CO(-1),\\CO)$ form a full exceptional collection\nin the derived category and there is an associated Beilinson type spectral sequence. \nThe construction of the spectral sequence involves the sheaves $Q_0$,\n$Q_1$ and $Q_2$ on $\\BPt$ defined by\n\\begin{equation}\\label{defq}\nQ_0 = \\CO,\n\\qquad\n0 \\to \\CO \\xrightarrow{ (x,y,z) } \\CO(1) \\oplus \\CO(1) \\oplus \\CO(1) \\to Q_1 \\to 0,\n\\qquad\nQ_2 = \\CO(3).\n\\end{equation}\nSometimes another resolution for $Q_1$ is more convenient\n\\begin{equation}\\label{anotherq1}\n0 \\to Q_1 \\to \\CO(2) \\oplus \\CO(2) \\oplus \\CO(2) \\xrightarrow{ (x,y,z) } \\CO(3) \\to 0\n\\end{equation}\nWe remark that each of the two sequences above is a truncation of the Koszul complex.\n\nThe Beilinson spectral sequence \nhas the form\n\\begin{equation*}\nE^{-p,q}_1=\\Ext^q(Q_{p}(-p),E)\\otimes \\CO(-p)\\Longrightarrow E^i_\\infty=\\begin{cases} E, &\\text{for $i=0$}\\\\\n0, &\\text{otherwise}\n\\end{cases}\n\\end{equation*}\nwhere $p=0,1,2$. \nUsing the Beilinson spectral sequence one shows, cf. \\cite[\\S7.2]{BGK}, that\nany coherent sheaf $E$ on $\\BPt$ admits a resolution of the form\n\\eq{resol1}\n0\\to V'\\otimes \\CO(k-2)\\to V\\otimes \\CO(k-1)\\to V'' \\otimes \\CO(k)\\to E\\to0\n \\end{equation} \nfor some $k\\in{\\mathbb Z}$ and vector spaces $V',V,V''$.\n\nThe dual\n$\nE^* := \\underline{\\Hom}(E,\\CO)$,\nof any sheaf $E$ is a sheaf on the opposite plane $\\PP^2_{-\\tau}$.\nThe sheaf $E$ is called {\\itshape locally free} if\n$\\underline{\\Ext}^i(E,\\CO) = 0$ for $i > 0$. \n\nThe following statements are proved in \\cite[Proposition 2.0.4]{BGK}.\nFor any sheaf $E$, the sheaf $E^*$ is locally free, furthermore,\n$E$ is locally free if and only if its canonical map $E\\to E^{**}$\nis an isomorphism. 
The kernel of a morphism of locally free sheaves is always locally free.\n\nLet $\\CE$ be a locally free sheaf.\nWriting~\\refe{resol1} for $\\CE^*$ and dualizing, one deduces that\nany locally free sheaf $\\CE$ has a resolution of the form\n\\eq{resol2}\n0\\to\\CE\\to U' \\otimes \\CO(-k)\\to U \\otimes \\CO(1-k)\\to U'' \\otimes \\CO(2-k)\\to0.\n \\end{equation} \n\n\nA sheaf $E$ is called {\\itshape torsion free} if it can be embedded in a locally free sheaf.\nThis can be shown, e.g. using \\cite[Proposition\n2.0.6]{BGK}, to be equivalent to \nthe injectivity of the canonical map $E \\to E^{**}$.\n\n\n\n\n\n\nFor a coherent sheaf $E$ its {\\itshape Hilbert polynomial} is defined by\nthe usual formula\n\\begin{equation*}\nh_E(t)=\\sum_{i=0}^2(-1)^i\\dim H^i(\\BPt,E(t)).\n\\end{equation*}\nFor sheaves $\\CO(i)$ it is the same as in the commutative case\n$h_{\\CO(i)}(t)=(t+i+1)(t+i+2)\/2$. So, using~\\refe{resol1} one sees\nthat the Hilbert polynomial of any sheaf can be written as\n\\eq{david}\nh_E(t)=r(E)\\frac{(t+1)(t+2)}{2}+\\deg(E)\\frac{2t+3}{2}+\\frac{\\deg(E)^2}{2}-c_2(E)\n\\end{equation}\nfor some integers $r(E),\\deg(E)$ and $c_2(E)$ defined by this equality and\ncalled the rank, degree and second Chern class of $E$ respectively.\nIt is clear from the definition that the Hilbert polynomial as well as the rank\nand the degree are additive in exact sequences. 
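The invariants $r(E)$, $\deg(E)$ and $c_2(E)$ can be read off from the coefficients of the Hilbert polynomial by inverting the formula \refe{david}. The following sketch (our own helper, not from the paper) recovers $r=1$, $\deg=i$, $c_2=0$ for $\CO(i)$ from $h_{\CO(i)}(t)=(t+i+1)(t+i+2)/2$:

```python
import sympy as sp

t, i = sp.symbols('t i')

def invariants(h):
    """Read (r, deg, c2) off a rank > 0 Hilbert polynomial, inverting
    h(t) = r(t+1)(t+2)/2 + deg*(2t+3)/2 + (deg**2/2 - c2)."""
    a2, a1, a0 = sp.Poly(sp.expand(h), t).all_coeffs()
    r = 2*a2                              # leading coefficient is r/2
    d = a1 - sp.Rational(3, 2)*r          # t-coefficient is 3r/2 + deg
    c2 = r + sp.Rational(3, 2)*d + d**2/2 - a0
    return sp.simplify(r), sp.simplify(d), sp.simplify(c2)

print(invariants((t + i + 1)*(t + i + 2)/2))   # (1, i, 0)
```

In particular, the computation confirms that twisting does not change the rank, shifts the degree, and leaves $c_2(\CO(i))=0$.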
Further, one can check\nthat they behave naturally with respect to dualization:\n\begin{itemize}\n\item for any sheaf $E$ one has $r(E^*) = r(E)$;\n\item for a torsion free sheaf $E$ one also has $\deg(E^*) = - \deg(E)$;\n\item for a locally free sheaf $\CE$ one also has $c_2(\CE^*) = c_2(\CE)$.\n\end{itemize}\n\newcommand{\mathop{\mathrm{ch}}\nolimits}{\mathop{\mathrm{ch}}\nolimits}\n\nSometimes, instead of the second Chern class $c_2(E)$ it is more convenient to use\n\begin{equation*}\n\mathop{\mathrm{ch}}\nolimits_2(E) := \deg(E)^2\/2 - c_2(E)\n\end{equation*}\n(this can be thought of as the second coefficient of the Chern character).\nIts obvious advantage is additivity in exact sequences.\n\n\nFor any sheaf $E$ the rank $r(E)$ is nonnegative. If $E$ is torsion free and nonzero\nthen $r(E) > 0$; moreover, if $r(E) = 0$, then the degree $\deg(E)$ is nonnegative.\nThe sheaf $F$ is called an {\itshape Artin sheaf}\nif both the rank and degree of $F$ are equal to zero; equivalently, \n the Hilbert polynomial\nof $F$ is constant.
In this case,\n the integer $n := h_F =\n\mathop{\mathrm{ch}}\nolimits_2(F)$ is nonnegative and it is called the length of $F$.\n\n\n\n\n\n\nA special feature of the noncommutative plane $\BPt$\nis that it has fewer points than the commutative ${\mathbb P}^2$: all points of\n $\BPt$\nare contained, in a sense, in the projective line $\PP^1$ `at infinity'.\nIn more detail, note that we have $\Proj(S^\bullet(H)) = \PP(H^\vee) \cong \PP(H) =\n\PP^1$,\nwhere we identify\n$H^\vee = H$ via $\omega$.\nHeuristically, one may view the graded algebra morphism \n\begin{equation*}\nA^\tau\,\,\twoheadrightarrow\,\, A^\tau\/\langle z\rangle\cong S^\bullet(H) \cong {\mathbb C}[x,y]\n\end{equation*}\nas being induced by a `closed imbedding' $\PP^1\,\hookrightarrow\, \BPt$\nof the projective line `at infinity'.\nSpecifically, there is a pair of\nadjoint functors $i_*: \mathsf{coh}({\mathbb P}(H)) \to \mathsf{coh}(\BPt)$ and\n$i^*: \mathsf{coh}(\BPt) \to \mathsf{coh}({\mathbb P}(H))$.\nThe pushforward functor $i_*$ extends a graded $S^\bullet(H)$-module structure to a graded $A^\tau$-module structure\nby setting the action of $z$ to be zero.\nThe pullback functor $i^*$ takes a graded $A^\tau$-module $M$ to $M\/Mz$.\nThe projection $A^\tau\,\,\twoheadrightarrow\,\, S^\bullet(H)$ is clearly $\SL(H)$-equivariant, hence so are the functors $i_*$ and $i^*$.\nThe functor $i_*$ is exact. The functor $i^*$ is right exact, and it has\n a sequence of left derived functors $L_pi^*$, $p > 0$.
In fact\n\\begin{itemize}\n\\item for any sheaf $E$ one has $L_{>1}i^*E = 0$;\n\\item for a torsion free sheaf $E$ one also has $L_1i^*E = 0$;\n\\item for a locally free sheaf $\\CE$ the sheaf $i^*\\CE$ is also locally free.\n\\end{itemize}\n\n\nWe will use the following result.\n\n\\prop{morph2}(\\cite[Proposition 3.4.14]{BGK})\nFor any $\\tau\\ne0$ one has\n\n(1) If $i^*E=0$ then $E=0$.\n\n(2) If $\\phi\\in\\Hom(E,F)$ and $i^*\\phi$ is an epimorphism, then $\\phi$ is an epimorphism.\n\n(3) If $\\phi\\in\\Hom(E,F)$ and both $i^*\\phi$ and $L_1i^*\\phi$ are isomorphisms then $\\phi$ is an isomorphism.\n\n(4) If $\\phi\\in\\Hom(E,F), i^*\\phi$ is a monomorphism and $L_1i^*F=0$ then $\\phi$ is a monomorphism.\n\n(5) A sheaf $E$ is locally free iff $L_{>0}i^*E=0$ and $i^*E$ is locally free.\n\n\\end{propos}\n\n\n\n\nWe deduce the following properties of Artin sheaves:\n\n\\prop{Art0} Let $F$ be an Artin sheaf and $h_F(t)=n$.\n\n(1) For sufficiently general $h \\in H \\subset {\\widetilde{H}} = H^0(\\BPt,\\CO(1))$,\n the map $h\\colon F(-1)\\to F$, of\nright multiplication by $h$, is an isomorphism.\n\n(2) For any locally free sheaf $\\CE$ we have\n\\begin{equation*}\n\\dim\\Ext^m(\\CE,F)=\\begin{cases}\n0, &\\text{if $m>0$}\\\\\nnr(\\CE), &\\text{if $m=0$}.\n\\end{cases}\n\\end{equation*}\n\n\n(3) The sheaf $F$ has a filtration\n$0=F_0\\subset F_1\\subset\\ldots\\subset F_n=F$ with $F_k\/F_{k-1}=i_*\\CO_{P_k}$\nfor some points $P_1,\\dots,P_n \\in \\PP(H)$ on the line at infinity.\nIn particular, if $h_F(t)=1$ then $F\\cong i_*\\CO_P$\nfor some point $P\\in\\PP(H)$.\n\\end{propos}\n \\begin{proof} \n(1) Since both $i^*F$ and $L_1i^*F$ are torsion sheaves on $\\PP(H)$ the maps $i^*h$ and $L_1i^*h$ are isomorphisms for generic $h$.\nHence $h\\colon F(-1) \\to F$ is also an isomorphism for generic $h$ by \\refp{morph2}(3).\n\n(2) By~\\refe{resol2} it is enough to consider the case $\\CE = \\CO(p)$ for some $p \\in \\ZZ$.\nIn this case for $p \\ll 0$ the result is clear 
and for arbitrary $p$ it follows from (1).\n\n\n(3) The map $F\\to i_*i^*F$ is an epimorphism. On the other hand, $i^*F$ is a nontrivial sheaf on $\\PP(H)$,\nhence there is an epimorphism $i^*F\\to\\CO_P$ for some $P\\in\\PP(H)$. The composition gives an epimorphism $F\\to i_*\\CO_P$.\nIts kernel is an Artin sheaf on $\\BPt$ of length $n-1$ and we can apply induction in $n$.\n \\end{proof} \n\n\nLet $F$ be an Artin sheaf and take an arbitrary $p \\in \\ZZ$. Consider the canonical map\n\\begin{equation*}\nH^0(\\BPt,F(p)) \\otimes H \\to H^0(\\BPt,F(p+1))\n\\end{equation*}\ninduced by the embedding $H \\subset {\\widetilde{H}} = H^0(\\BPt,\\CO(1))$.\nLet $n = h_F$ be the length of $F$, so that both cohomology spaces above are $n$-dimensional.\nA component of the $n$-th wedge power of the above map is a map\n\\begin{equation*}\n\\det (H^0(\\BPt,F(p))) \\otimes S^n H \\to \\det(H^0(\\BPt,F(p+1))).\n\\end{equation*}\nIts partial dualization gives a map\n\\begin{equation}\\label{equation-support}\n\\det (H^0(\\BPt,F(p))) \\otimes \\det(H^0(\\BPt,F(p+1)))^\\vee \\to S^nH^\\vee.\n\\end{equation}\nWe consider the projectivization of the right hand side as the space of degree $n$ divisors on $\\PP(H)$\nand denote by\n\\begin{equation*}\n\\supp(F) \\in \\PP(S^nH^\\vee) = S^n\\PP(H^\\vee)\n\\end{equation*}\nthe image of the map. 
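The additivity of the map~\eqref{equation-support} under extensions reduces to a simple matrix fact: with respect to the filtration of \refp{Art0}(3), the multiplication by $h=ax+by$ is block upper triangular with diagonal entries $a\alpha_k+b\beta_k$ at the points $P_k=[\alpha_k:\beta_k]$, so its determinant factors as $\prod_k(a\alpha_k+b\beta_k)$ regardless of the extension data. A toy check of this (our own model of the situation, with hypothetical points):

```python
import sympy as sp

a, b = sp.symbols('a b')
points = [(1, 3), (2, -1), (0, 1)]      # hypothetical P_k = [alpha_k : beta_k]
n = len(points)

# Diagonal: action of h = a*x + b*y on the k-th graded piece i_* O_{P_k};
# strictly upper-triangular entries model arbitrary extension data.
M = sp.zeros(n, n)
for k, (al, be) in enumerate(points):
    M[k, k] = a*al + b*be
for k in range(n):
    for l in range(k + 1, n):
        M[k, l] = sp.Rational(k + l + 1, 2)

lhs = sp.expand(M.det())
rhs = sp.expand(sp.Mul(*[a*al + b*be for al, be in points]))
print(lhs - rhs)   # 0: the determinant only sees the diagonal
```

This is exactly why the divisor $\supp(F)$ only records the points $P_k$ with multiplicity.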
The next Lemma shows it is well defined.\n\n\lem{artin-support}\nFor any Artin sheaf $F$ of length $n$ the map~\eqref{equation-support} is injective,\n$\supp(F)$ is well defined and is independent of the choice of $p$.\nIf $0 \to F_1 \to F_2 \to F_3 \to 0$ is an exact sequence of Artin sheaves then\n\begin{equation*}\n\supp(F_2) = \supp(F_1) + \supp(F_3).\n\end{equation*}\nIf $0 \ne h \in H$ then $h:F(-1) \to F$ is an isomorphism if and only if $h \not\in \supp(F)$.\n\end{lemma}\n \begin{proof} \nClearly, evaluation of the image of~\eqref{equation-support} on $h \in H$ is the determinant of the map $H^0(\BPt,F(p)) \to H^0(\BPt,F(p+1))$ induced by $h$.\nWe know that for generic $h$ the map is an isomorphism, hence its determinant is nonzero. This means that the image of~\eqref{equation-support} is not\nidentically zero and proves the first claim of the Lemma. It also proves the ``only if'' part of the last claim. Moreover, the ``if'' part also follows\nfor Artin sheaves of length~1. The additivity of the support under extensions is evident (the determinant of a block upper triangular matrix is the product\nof the determinants of blocks). It follows that if $F_\bullet$ is a filtration on $F$ with $F_k\/F_{k-1} \cong i_*\CO_{P_k}$ then $\supp(F) = \sum P_k$.\nIn particular, $\supp(F)$ does not depend on the choice of the integer $p$. Finally, this observation and the additivity of the support also prove\nthe ``if'' part of the last claim in general.\n \end{proof} \n\n\n\n\n\n\n\n\n\nTo finish this introductory section, let us mention that one can consider families of sheaves on a noncommutative\nplane $\BPt$.
More precisely, for each affine scheme $S$ one can define a notion of a coherent\nsheaf on $S \times \BPt$ (see~\cite{ns}), which is the standard way to think about $S$-families of sheaves on $\BPt$.\nThis allows one to define moduli spaces of sheaves on $\BPt$ with appropriate stability conditions, which is the goal of this section.\n\nIn fact, a significant part of the results of this section is proved in a more general setting\n(i.e., for an arbitrary Artin--Schelter algebra instead of $A^\tau$) in~\cite{ns}; in particular,\nthe Gieseker moduli space we construct coincides with the moduli space of Nevins--Stafford.\nHowever, the case we consider is significantly simpler than the general case; this is the reason\nfor us to present most of the constructions here, while suppressing some proofs. The genuinely new content\nof the section is the definition, construction, and investigation of the Uhlenbeck moduli space.\nTo make its relation to the Gieseker moduli space clearer, we use a GIT construction of the latter\nmoduli space which is different from that of~\cite{ns}.\n\n\n\n\ssec{quive}{Coherent sheaves and quiver representations}\nLet $A^!_\tau$ be\nthe quadratic dual algebra of $A^\tau$.
From the quadratic relations \nfor $A^\\tau$ we deduce that $A^!_\\tau$ is isomorphic\nto a \n{\\sf twisted exterior algebra} $\\Lambda^\\bullet_\\tau({\\widetilde{H}}^\\vee)$ of \nthe vector space ${\\widetilde{H}}^\\vee = H^\\vee \\oplus {\\mathbb C}\\zeta$.\nSpecifically, writing\n $\\{-,-\\}$ for the anticommutator,\nwe have\n\\begin{equation*}\nA^!_\\tau \\cong\n\\Lambda^\\bullet_\\tau({\\widetilde{H}}^\\vee) =\n{\\mathbb C}\\langle H^\\vee \\oplus {\\mathbb C}\\zeta \\rangle\/\\langle \\{H^\\vee, H^\\vee\\} = \\{H^\\vee,\\zeta\\} = \\zeta^2 + \\tau\\omega = 0 \\rangle.\n\\end{equation*}\nThe group $\\SL(H)$ acts on~$A^!_\\tau$ by algebra automorphisms.\n\nChoosing a symplectic basis $\\xi,\\eta$ in $H^\\vee$ we can rewrite the\nabove as follows\n\\begin{equation*}\nA^!_\\tau = {\\mathbb C}\\langle \\xi,\\eta,\\zeta \\rangle\/\\langle \\xi^2=\\eta^2=\\eta\\xi+\\xi\\eta=\\zeta\\xi + \\xi\\zeta = \\eta\\zeta + \\zeta\\eta = \\zeta^2 + \\tau(\\xi\\eta - \\eta\\xi) = 0 \\rangle.\n\\end{equation*}\nHere, the grading on $A^!_\\tau$ corresponds to the grading $\\deg\\xi=\\deg\\eta=\\deg\\zeta=1$.\nLet $\\bpt$ be the following quiver:\n\\begin{equation*}\n\\xymatrix{\n*+[o][F]{1} \\ar[rrr]_{(A_\\tau^!)_1}\\ar@\/^1pc\/[rrrrrr]^{(A_\\tau^!)_2} &&&\n*+[o][F]{2} \\ar[rrr]_{(A_\\tau^!)_1} &&&\n*+[o][F]{3} }\n\\end{equation*}\nwith the spaces of arrows given by the components $(A_\\tau^!)_1$ and $(A_\\tau^!)_2$ of the dual algebra\nand the composition of arrows given by the multiplication $(A^!_\\tau)_1 \\otimes (A^!_\\tau)_1 \\to (A^!_\\tau)_2$ in $A^!_\\tau$.\nThe $\\SL(H)$ action on $A^!_\\tau$ induces an action on the quiver $\\bpt$,\non the category of its representations $\\Rep(\\bpt)$, and on its derived category $\\mathbf{D}(\\Rep(\\bpt))$.\n\n\\prop{derequ}\nThe functors between the bounded derived categories\n\\begin{align*}\n&\\mathbf{D}(\\mathsf{coh}(\\BPt)) \\to \\mathbf{D}(\\Rep(\\bpt)),\n&&\nE \\mapsto (\\Ext^\\bullet(Q_2(-1),E), \\Ext^\\bullet(Q_1,E), 
\\Ext^\\bullet(Q_0(1),E)),\\\\\n&\\mathbf{D}(\\Rep(\\bpt)) \\to \\mathbf{D}(\\mathsf{coh}(\\BPt)),\n&&\nR_\\bullet \\mapsto \\{ R_1\\otimes\\CO(-1) \\to R_2\\otimes\\CO \\to R_3\\otimes\\CO(1)\\}\n\\end{align*}\nare mutually inverse $\\SL(H)$-equivariant equivalences.\n\\end{propos}\n \\begin{proof} \nFollows from the fact that $(\\CO(-1),\\CO,\\CO(1))$ is a strong exceptional collection in $\\mathbf{D}(\\mathsf{coh}(\\BPt))$,\nand $(Q_2(-1),Q_1,Q_0(1))$ is its dual collection. The quiver $\\bpt$ is in fact\nthe quiver of morphisms of the latter sequence.\n \\end{proof} \n\nWe consider the restrictions of these functors to the abelian categories.\nGiven a representation $R_\\bullet = (R_1,R_2,R_3)$ of $\\bpt$ one constructs a complex of sheaves\n\\begin{equation}\\label{comrep}\n\\mathcal{C}(R_\\bullet) := \\{R_1\\otimes\\CO(-1) \\to R_2\\otimes \\CO \\to R_3\\otimes\\CO(1)\\}.\n\\end{equation}\nDenote by~$\\CH^i(R_\\bullet)$, $i =1,2,3$, its cohomology sheaves. Recall that a three-term complex\nis a {\\sf monad} if its cohomology at the first and last terms vanish.\n\nAnalogously, given a sheaf $E$ we consider a representation of $\\bpt$\n\\begin{equation}\\label{repcom}\nV_\\bullet(E) = (\\Ext^1(Q_2(-1),E), \\Ext^1(Q_1,E), \\Ext^1(Q_0(1),E)).\n\\end{equation}\nThis is equivalent to applying the functor of~\\refp{derequ} and then taking\nthe first cohomology in the derived category of quiver representations.\n\n\\lem{monad-equivalence}\nIf $\\mathcal{C}(R_\\bu)$ is a monad and $E = \\CH^2(\\mathcal{C}(R_\\bu))$ is its middle cohomology sheaf then $V_\\bu(E) \\cong R_\\bu$\nand $\\Ext^i(Q_p(1-p),E) = 0$ for $i \\ne 1$ and $p = 0,1,2$.\n\nVice versa, if $E$ is a coherent sheaf on $\\BPt$ and $\\Ext^i(Q_p(1-p),E) = 0$ for $i \\ne 1$ and $p = 0,1,2$\nthen $\\mathcal{C}(V_\\bu(E))$ is a monad and $\\CH^2(V_\\bu(E)) = E$.\n\\end{lemma}\n \\begin{proof} \nIf $\\mathcal{C}(R_\\bu)$ is a monad then $E = \\CH^2(\\mathcal{C}(R_\\bu))$ is isomorphic\nto the complex 
$\\mathcal{C}(R_\\bu)$ in the derived category $\\mathbf{D}(\\mathsf{coh}(\\BPt))$,\nhence the complex can be used to compute $\\Ext^i(Q_p(1-p),E)$.\nThe computation gives the required result.\n\nVice versa, under the conditions of the Lemma we have $V_\\bu(E)$ is the image of $E$ in $\\mathbf{D}(\\Rep(\\bpt))$\nunder the equivalence of~\\refp{derequ}, hence $\\mathcal{C}(V_\\bu(E)) \\cong E$ in $\\mathbf{D}(\\mathsf{coh}(\\BPt))$.\nThis means that the complex is a monad and its middle cohomology is $E$.\n \\end{proof} \n\n\n\n\n\n\n\n\n\n\\ssec{stabilo}{Stability of sheaves and quiver representations}\n\nThe notions of Gieseker and Mumford (semi)stability of coherent sheaves are standard in the commutative context.\nWe refer to~\\cite{HL} for more details and for proofs of standard\nfacts. These notions \nhave generalizations\nfor sheaves on $\\BPt$.\n\nGiven a sheaf $E$ on $\\BPt$ with $r(E)>0$,\nwe define its Mumford and Gieseker slopes as\n\\begin{align*}\n\\mm(E) & =\\frac{\\deg(E)}{r(E)}\\in\\BQ,\\\\\n\\mg(E) & =\\frac{h_E(t)}{r(E)}=\\frac{(t+1)(t+2)}{2}+\\mm(E)\\frac{2t+3}{2}+\\frac{\\deg(E)^2-2c_2(E)}{2r(E)}\\in\\BQ[t].\n\\end{align*}\nLet $p(t)$ and $q(t)$ be polynomials. We say that $p \\mu(F_{n-2}\/F_{n-1}) > \\dots > \\mu(F_0\/F_1)$.\n\n\\medskip\n\n\nTo check Gieseker stability (semistability) it is enough to consider only subsheaves $F \\subset E$ such that\n$E\/F$ is torsion free (in particular, $r(F) < r(E)$). So, the following is clear.\n\n\\lem{Q}\nAny torsion free sheaf of rank $1$ is Gieseker stable and Mumford stable.\n\\end{lemma}\n\n\nNote also that $\\mm(E)>\\mm(F)$ implies $\\mg(E)>\\mg(F)$ and $\\mg(E)\\ge\\mg(F)$\nimplies $\\mm(E)\\ge\\mm(F)$. 
It follows that Gieseker semistability implies\nMumford semistability, while Mumford stability for torsion free sheaves implies Gieseker stability.\nMoreover, if the rank and the degree of a torsion free sheaf are coprime then\nsemistability implies stability.\n\n\n\nThe following Lemma is standard.\n\lem{hom}\n\n(1) If $E$, $F$ are Mumford semistable sheaves with $F$ torsion free and $\mu_M(E)>\mu_M(F)$ then $\Hom(E,F)=0$.\n\n(2) If $E,F$ are Mumford stable sheaves, $E$ is locally free and $\mu_M(E) \ge \mu_M(F)$\nthen any nontrivial homomorphism $E\to F$ is an isomorphism.\n\end{lemma}\n\n\nThe notion of stability for a representation of a quiver depends on a choice of a polarization, see~\cite{K}.\nA {\sf polarization} in the case of the quiver $\bpt$ amounts to a triple $\tet=(\tet_1,\tet_2,\tet_3)$ of real numbers.\nThe {\sf $\tet$-slope} of a representation $R_\bullet = (R_1,R_2,R_3)$ of $\bpt$ is defined as\n\begin{equation*}\n\mt(R_\bullet) = \langle \tet, \dim R_\bu \rangle := \tet_1\dim R_1+\tet_2\dim R_2+\tet_3\dim R_3.\n\end{equation*}\n\n\defe{stability-reps}\nA representation $R_\bullet$ is $\tet$-stable\n(resp. $\tet$-semistable) if $\mt(R_\bullet)=0$ and for any\nsubrepresentation $R'_\bullet\subset R_\bullet$ such that $0\ne\nR'_\bullet\ne R_\bullet$ we have $\mt(R'_\bullet)>0$\n(resp. $\mt(R'_\bullet)\ge 0$). Representations $R_\bullet$ and\n$R'_\bullet$ are called $S$-equivalent with respect to $\tet$ if both of\nthem are $\tet$-semistable and have isomorphic composition factors in the\ncategory of $\tet$-semistable representations.\n\end{defeni}\n\n\n\n\nLet $\tet,\tet'$ be a pair of polarizations.
It is well known (e.g.~\\cite{DH})\nthat, for all sufficiently small and positive $\\eps\\in \\mathbb{R}$,\n stability, semistability and $S$-equivalence with respect to\n $\\tet+\\eps\\tet'$ does not depend on $\\eps$.\n\n\\defe{stability-reps-2}\nA representation $R_\\bullet$ is $\\ttt$-stable (resp. $\\ttt$-semistable) if $R_\\bullet$ is\n$(\\tet+\\eps\\tet')$-stable (resp. semistable) for sufficiently small positive\n$\\eps$.\n\\end{defeni}\n\nThere is an analogue of~\\refl{hom} for representations of the quiver $\\bpt$.\n\n\n\n\\ssec{sheaves-reps}{From sheaves to quiver representations}\nLet ${\\mathcal{TF}}_{r,d}$ be the The following result is essentially a combination of Lemma~6.4 and Theorem~5.6 from~\\cite{ns}.\nThe only new statement is the exactness claim. We provide a proof for\nthe\nreader's convenience.\n\\th{mon}\nLet $-r \\le d < r$.\nThen, the assignment $E\\mapsto V_\\bu(E)$ gives an exact functor from the\ncategory of Mumford semistable torsion free sheaves $E$\non $\\BPt$ such that $r(E)=r$ and $\\deg(E)=d$\nto the category of representations\nof the quiver $\\bpt$.\nFor such a sheaf $E$,\nthe representation $V_\\bu(E)$ gives a monad\n\\begin{equation}\\label{monad-of-e}\nV_1(E) \\otimes \\CO(-1) \\to V_2(E) \\otimes \\CO \\to V_3(E) \\otimes \\CO(1)\n\\end{equation}\nsuch that its cohomology at the middle term is isomorphic to\n$E$. Furthermore, we have $\\dim V_\\bullet(E) =\n(n-d(d-1)\/2,2n-d^2+r,n-d(d+1)\/2)$,\nwhere $n=c_2(E)$; in particular, $c_2(E) \\ge d(d+1)\/2$.\n\\eth\n \\begin{proof} \nFirst we note that all $Q_p$ are Mumford stable of slopes equal to $0$, $3\/2$ and $3$ respectively.\nIndeed, for $p = 0$ and $p = 2$ this follows from~\\refl{Q} and the definitions of $Q_p$.\nSo let $p = 1$. 
The sheaf $Q_1$ is locally free because by~\eqref{anotherq1} it is the kernel\nof a morphism of locally free sheaves, and moreover $r(Q_1) = 2$, $\deg(Q_1) = 3$.\nSo, it is enough to check that if $F \subset Q_1$ is a subsheaf of rank 1 with $Q_1\/F$ torsion free then $\deg(F) \le 1$.\nAssume $\deg(F) \ge 2$. As $Q_1\/F$ is torsion free, $F$ is locally free.\nSince $Q_1$ is a subsheaf in $\calO(2)^{\oplus3}$ there is a nontrivial homomorphism from $F$ to $\calO(2)$.\nOn the other hand both $F$ and $\calO(2)$ are Mumford stable by (1), $F$ is locally free, and\n$\mm(F)\ge2=\mm(\calO(2))$. Hence $F\simeq\CO(2)$ by \refl{hom}(2). But applying\nthe functor $\Hom(\calO(2),-)$ to~\eqref{anotherq1} we see that $\Hom(\calO(2),Q_1)=0$, a contradiction.\n\nThe stability just proved implies that\n\begin{equation*}\n\Hom(Q_0(1),E) = \Hom(Q_1,E) = \Hom(Q_2(-1),E) = 0.\n\end{equation*}\nIndeed, the slopes of the first arguments are $1$, $3\/2$, and $2$ respectively, while the slope\nof the second argument is $d\/r < 1$, so~\refl{hom}(1) applies. Analogously,\n\begin{equation*}\n\Hom(E(3),Q_0(1)) = \Hom(E(3),Q_1) = \Hom(E(3),Q_2(-1)) = 0\n\end{equation*}\nsince the slope of the first argument is $d\/r+3 > 2$.
By Serre duality we then have\n\begin{equation*}\n\Ext^2(Q_0(1),E) = \Ext^2(Q_1,E) = \Ext^2(Q_2(-1),E) = 0.\n\end{equation*}\nTherefore, \refl{monad-equivalence} applies to $E$ and shows that~\eqref{monad-of-e} is a monad\nand $E$ is its cohomology.\nThe dimensions of the spaces $V_p(E)$ are computed directly by using the formula~\refe{david} for the Hilbert polynomial of a sheaf.\nThe exactness of the functor $V_\bu$ is clear from its definition and the vanishing of the $\Hom$ and $\Ext^2$ spaces.\n \end{proof} \n\n\prop{artin-mon}\nThe functor $F\mapsto {\mathcal C}(R_\bullet(F))$\nyields, for an Artin sheaf $F$, a canonical exact sequence\n\begin{equation*}\n0 \to W_1(F) \otimes \CO(-1) \to W_2(F) \otimes \CO \to W_3(F) \otimes \CO(1) \to F \to 0.\n\end{equation*}\nThe resulting functor $W_\bu$ from the category of Artin sheaves on $\BPt$ to the category\nof representations of the quiver $\bpt$ is exact\nand we have $\dim W_\bullet(F) = (n,2n,n)$, where $n$ is the length of $F$.\n\end{propos}\n \begin{proof} \nThe proof is analogous to the proof of~\refl{monad-equivalence}. We apply the equivalence of~\refp{derequ} to the sheaf~$F$.\nBy~\refp{Art0}(2), applying the functor of~\refp{derequ} to $F$ we obtain nothing but the representation\n\begin{equation*}\nW_\bullet(F) = (\Hom(Q_2(-1),F),\Hom(Q_1,F),\Hom(Q_0(1),F))\n\end{equation*}\nand its dimension vector is $(n,2n,n)$.
Since the functor is an equivalence, it follows that\nthe complex $\mathcal{C}(W_\bu(F))$ is left exact and $\CH^3(\mathcal{C}(W_\bullet(F))) \cong F$, which amounts to the above exact sequence.\nExactness of the functor $W_\bu$ follows from the vanishing of $\Ext^1(Q_p(1-p),F)$ by~\refp{Art0}(2).\n \end{proof} \n\nIf $0 \ne h \in H$, $P \in \PP(H)$ is the corresponding point and $F = \CO_P$, then\n\begin{equation}\label{monad-point}\nW_\bullet(\CO_P) = \{ \C \xrightarrow{\ \left(\begin{smallmatrix} h \\ \zeta \end{smallmatrix}\right)\ } \C^2 \xrightarrow{\ (-\zeta,h)\ } \C \}.\n\end{equation}\n\n\n\n\n\n\ssec{relat stab}{From sheaf stability to quiver stability}\n\nIn this section we show that Gieseker and Mumford semistability correspond to semistability of quiver representations.\n\nFrom now on we fix a triple $(r,d,n)$ such that\n\begin{equation}\label{assumptions-r-d-n}\n0 \le d < r\n\qquad\text{and}\qquad\nn \ge d(d+1)\/2.\n\end{equation}\nPut\n\begin{equation}\label{alpha}\n\alp(r,d,n)=(n-d(d-1)\/2, 2n-d^2+r, n-d(d+1)\/2).\n\end{equation}\nAccording to~\reft{mon}, if $E$ is a Mumford semistable sheaf such that $r(E)=r$,\n$\deg(E)=d$ and $c_2(E)=n$ then $\dim V_\bullet(E) = \alp(r,d,n)$.\n\nWe choose the following pair of polarizations\n\begin{equation}\label{thetas}\n\begin{split}\n&\tet^0=(-r-d,d,r-d),\\\n&\tet^1=(2n-d^2+r,d^2-2n,2n-d^2+r).\n\end{split}\n\end{equation}\nNote that $\tet^0$ does not depend on $n$.
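Several pairing identities involving these polarizations are used repeatedly below. As a sanity check on the sign conventions in~\eqref{thetas}, the following minimal Python sketch (the helper names `alpha` and `pairing` are ad hoc, not notation from the paper) verifies, over a range of admissible triples $(r,d,n)$, that both polarizations pair to zero with $\alp(r,d,n)$, that $\langle\tet^0,u\rangle = r(u_3-u_1)+d(u_2-u_1-u_3)$, that the Artin dimension vector $(1,2,1)$ has $\tet^0$-slope $0$ and $\tet^1$-slope $2r$, and that the proper subrepresentations of $W_\bu(\CO_P)$ of dimensions $(0,0,1)$, $(0,1,1)$, $(0,2,1)$ have positive $\tet^0$-slope.

```python
# A minimal numerical sanity check (pure Python) of pairing identities
# used in the proofs below; `alpha` and `pairing` are ad hoc helpers.

def alpha(r, d, n):
    # dimension vector of V_bullet(E) for r(E) = r, deg(E) = d, c_2(E) = n
    return (n - d * (d - 1) // 2, 2 * n - d * d + r, n - d * (d + 1) // 2)

def pairing(theta, u):
    return sum(t * x for t, x in zip(theta, u))

for r in range(1, 8):
    for d in range(r):                       # standing assumption 0 <= d < r
        for n in range(d * (d + 1) // 2, d * (d + 1) // 2 + 6):
            theta0 = (-r - d, d, r - d)
            theta1 = (2 * n - d * d + r, d * d - 2 * n, 2 * n - d * d + r)
            a = alpha(r, d, n)
            # both polarizations are orthogonal to alpha(r,d,n)
            assert pairing(theta0, a) == 0 and pairing(theta1, a) == 0
            # expansion of the theta^0-slope used in the proof of part (1)
            for u in [(3, 5, 2), (1, 0, 4), (2, 2, 2)]:
                assert pairing(theta0, u) == r * (u[2] - u[0]) + d * (u[1] - u[0] - u[2])
            # Artin dimension vector (1,2,1): theta^0-slope 0, theta^1-slope 2r
            assert pairing(theta0, (1, 2, 1)) == 0
            assert pairing(theta1, (1, 2, 1)) == 2 * r
            # proper subrepresentations of W(O_P): positive theta^0-slope
            assert all(pairing(theta0, u) > 0 for u in [(0, 0, 1), (0, 1, 1), (0, 2, 1)])
print("all pairing identities verified")
```

Since the arguments below only use these identities for integral dimension vectors, an exhaustive small-range check of this kind is a reasonable consistency test, though of course not a proof.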
Note also that\n\begin{equation*}\n\langle \tet^0,\alp(r,d,n) \rangle = \langle \tet^1,\alp(r,d,n) \rangle = 0.\n\end{equation*}\nIn what follows we frequently consider $\tet^0$-stability and $\tnt$-stability of representations.\nIn fact the notion of $\tnt$-(semi)stability of a quiver representation is equivalent to the notion\nof (semi)stability of a Kronecker complex considered in \cite{ns}.\n\n\lem{kronecker-stability}\nLet $V_\bu$ be an $\alp(r,d,n)$-dimensional representation of the quiver $\bpt$ and let $\mathcal{C}(V_\bu)$\nbe the associated complex~\eqref{comrep}. Then $V_\bu$ is $\tnt$-(semi)stable if and only if $\mathcal{C}(V_\bu)$\nis (semi)stable in the sense of~\cite[Def.~6.8]{ns}.\n\end{lemma}\n \begin{proof} \nJust note that a subcomplex in $\mathcal{C}(V_\bu)$ always corresponds to a subrepresentation $U_\bu \subset V_\bu$,\nand the expression (1) from \cite[Definition~6.8]{ns} for $\mathcal{C}(U_\bu)$ equals $\mu_{\tet^0}(U_\bu)$,\nwhile the expression (2) equals $\mu_{\tet^1}(U_\bu)$.\n \end{proof} \n\nThe following crucial observation, relating stability for sheaves and for\nquiver representations, is due to Sergey Kuleshov. Parts (2) and (3) (as well as a part of~\refl{monadic-sheaf} below)\nare also proved in Proposition~6.20 of~\cite{ns}.\n\n\lem{eq}\nLet $E$ be a torsion free sheaf with $r(E)=r, \deg(E)=d, c_2(E)=n$ and let\n$V_\bullet(E)$ be the corresponding representation of the quiver $\bpt$. Then\n\n(1) if $E$ is Mumford semistable then $\VE$ is $\tet^0$-semistable;\n\n(2) if $E$ is Gieseker semistable then $\VE$ is $\tnt$-semistable;\n\n(3) if $E$ is Gieseker stable then $\VE$ is $\tnt$-stable.\n\end{lemma}\n\n \begin{proof} \n(1) Assume that $E$ is Mumford semistable. Let $U_\bullet$ be a subrepresentation\nin $\VE$ and $W_\bullet = V_\bullet(E) \/ U_\bullet$ be the quotient representation,\nand put $u_i=\dim U_i$.
Then we have a short exact sequence of complexes\n\begin{equation}\label{uvw}\n\vcenter{\xymatrix{\n0 \ar[r] &\nU_1 \otimes \CO(-1) \ar[r] \ar[d] &\nU_2 \otimes \CO \ar[r] \ar[d] &\nU_3 \otimes \CO(1) \ar[r] \ar[d] &\n0\n\\\n0 \ar[r] &\nV_1(E) \otimes \CO(-1) \ar[r] \ar[d] &\nV_2(E) \otimes \CO \ar[r] \ar[d] &\nV_3(E) \otimes \CO(1) \ar[r] \ar[d] &\n0\n\\\n0 \ar[r] &\nW_1 \otimes \CO(-1) \ar[r] &\nW_2 \otimes \CO \ar[r] &\nW_3 \otimes \CO(1) \ar[r] &\n0\n}}\n\end{equation}\nConsidering it as an exact triple (with respect to the vertical maps) of 3-term complexes\nand applying the Snake Lemma we obtain a long exact sequence\n\begin{equation*}\n0 \to\n\CH^1(U_\bullet) \to 0 \to \CH^1(W_\bullet) \to\n\CH^2(U_\bullet) \to E \to \CH^2(W_\bullet) \to\n\CH^3(U_\bullet) \to 0 \to \CH^3(W_\bullet) \to 0\n\end{equation*}\nof the cohomology sheaves of these complexes. In particular, we have $\CH^1(U_\bullet) = \CH^3(W_\bullet) = 0$.\nDenote\n\begin{equation*}\nr_i^U = r(\CH^i(U_\bullet)), \quad\nd_i^U = \deg(\CH^i(U_\bullet)), \quad\nr_i^W = r(\CH^i(W_\bullet)), \quad\nd_i^W = \deg(\CH^i(W_\bullet)).\n\end{equation*}\nLet $I$ be the image of the morphism $\CH^2(U_\bullet) \to E$ in the above sequence\nand let\n\begin{equation*}\nr_I=r(I), \quad d_I=\deg(I).\n\end{equation*}\nThen using additivity of rank and degree we can rewrite the slope of $U_\bullet$ as\n\begin{multline*}\n\mu_{\tet^0}(U_\bullet)\n = r(u_3-u_1)+d(u_2-u_1-u_3)\n =r(d_3^U - d_2^U)+d(r_2^U-r_3^U) \\\n =r(d_3^U - d_1^W - d_I)+d(r_1^W+r_I-r_3^U)\n=(rd_3^U - r_3^U d)+(r_1^W d-r d_1^W)+(r_Id-rd_I).\n\end{multline*}\nNow we will show that all three summands in the right-hand side are nonnegative.\n\nFirst, note that $\CH^3(U_\bullet)$ is a quotient of $U_3\otimes\CO(1)$.\nThe latter sheaf is semistable by \refl{Q}(1) and\nwe have $\mm(U_3\otimes\calO(1))=1$.
Therefore, $d_3^U \\ge r_3^U$ and hence\n\\begin{equation*}\nrd_3^U-r_3^Ud \\ge r_3^U(r-d)\\ge0.\n\\end{equation*}\nNote that the inequality $rd_3^U-r_3^Ud \\ge0$ is strict unless $r_3^U=d_3^U=0$,\nthat is unless $\\CH^3(U_\\bullet)$ is an Artin sheaf .\n\nFurther, note that $\\CH^1(W_\\bullet)$ is a subsheaf of $W_1\\otimes\\CO(-1).$\nThe latter sheaf is semistable and one has $\\mm(W_1\\otimes\\CO(-1))=-1$. Therefore,\n$d_1^W \\le -r_1^W \\le 0$ and hence\n\\begin{equation*}\nr_1^W d-r d_1^W \\ge dr_1^W \\ge 0.\n\\end{equation*}\nNote that this inequality is strict unless $r_1^W = d_1^W = 0$, that is unless\n$\\CH^1(W_\\bullet)$ is an Artin sheaf. But since it is a subsheaf in $W_1\\otimes\\CO(-1)$\nit is torsion free, hence this is equivalent to $\\CH^1(W_\\bullet) = 0$.\n\n\nFinally, $I$ is a subsheaf of the Mumford semistable torsion free sheaf $E$.\nHence either $I=0$, or else $r_I>0$ and $\\mm(I)\\le\\mm(E)$. In both cases we have\n$$r_Id-d_Ir\\ge0.$$\nNote that this inequality is strict unless $I=0$ or $\\mm(I)=\\mm(E).$\n\nCombining all these inequalities we see that any subrepresentation in $\\VE$\nhas a non-negative $\\tet^0$-slope. Thus $\\VE$ is $\\tet^0$-semistable.\n\n(2) Assume that the sheaf $E$ is Gieseker semistable but $V_\\bullet(E)$ is not $(\\tet^0,\\tet^1)$-semistable.\nLet $U_\\bullet \\subset V_\\bullet(E)$ be a destabilizing subrepresentation. Since\n$E$ is automatically Mumford semistable and hence $V_\\bullet(E)$ is $\\tet^0$-semistable by (1),\nwe conclude that $\\mu_{\\tet^0}(U_\\bullet) = 0$. 
Then, as we observed in the above argument,\n$\CH^3(U_\bullet)$ is an Artin sheaf, $\CH^1(W_\bullet) = 0$ and\nhence $\CH^2(U_\bullet) = I$ is a subsheaf in $E$.\n\nLet $c = \mathop{\mathrm{ch}}\nolimits_2(E) = d^2\/2 - n$ and $c_i^U = \mathop{\mathrm{ch}}\nolimits_2(\CH^i(U_\bullet))$.\nThen\n\begin{multline*}\n\mu_{\tet^1}(U_\bullet)=(r-2c)u_1+2cu_2+(r-2c)u_3=r(u_1+u_3)+2c(u_2-u_1-u_3) \\\n=2r(c_3^U-c_2^U) + 2c(r_2^U-r_3^U)\n=2(rc_3^U-r_3^U c)+2(r_2^Uc-r c_2^U).\n\end{multline*}\nSince $\CH^3(U_\bullet)$ is an Artin sheaf we have $r_3^U = 0$ and $c_3^U \ge 0$,\nhence\n\begin{equation*}\nrc_3^U-r_3^U c = rc_3^U \ge 0.\n\end{equation*}\nOn the other hand, $\CH^2(U_\bullet) = I$ and hence\neither $\CH^2(U_\bullet) = 0$, or $\mm(\CH^2(U_\bullet)) = \mm(E)$.\nIn both cases\n\begin{equation*}\nr_2^Uc-r c_2^U = rr_2^U(\mg(E) - \mg(\CH^2(U_\bullet))) \ge 0\n\end{equation*}\nbecause $E$ is Gieseker semistable.\nThus $\mu_{\tet^1}(U_\bullet) \ge0$, which contradicts the assumption that $U_\bullet$\ndestabilizes $V_\bullet(E)$.\n\n(3) In the notation of (2), assume that $\mu_{\tet^0}(U_\bu) = \mu_{\tet^1}(U_\bu) = 0$.\nThen $c_3^U=0$, hence $\CH^3(U_\bullet)=0$. Moreover,\neither $\CH^2(U_\bullet)=0$, or $\mg(\CH^2(U_\bullet))=\mg(E)$, and so $\CH^2(U_\bullet)=E$\nsince $E$ is Gieseker stable. In the first case the first line of~\eqref{uvw} is exact,\nhence $U_\bullet=0$ by~\refp{derequ}. In the second case the first line of~\eqref{uvw} is a resolution of $E$,\nhence $U_\bullet=\VE$.\n \end{proof} \n\n\lem{cfart}\nLet $F$ be an Artin sheaf. Then $W_\bu(F)$ is $\tet^0$-semistable. If the length of $F$ is equal to $1$ then $W_\bu(F)$ is $\tet^0$-stable.\n\end{lemma}\n \begin{proof} \nSince by~\refp{Art0}(3) any Artin sheaf is an extension of the structure sheaves of points and the functor $W_\bullet$\nis exact, it is enough to verify the $\tet^0$-stability of $W_\bullet(\CO_P)$.
The latter is clear from the explicit\nform~\\eqref{monad-point} of the monad --- it is easy to see that the only nontrivial subrepresentations of $W_\\bu(\\CO_P)$\nhave dimension $(0,0,1)$, $(0,1,1)$ and $(0,2,1)$, and their $\\tet^0$-slope is clearly positive with our assumptions on $d$ and $r$.\n \\end{proof} \n\n\n\n\\ssec{quiver-sheaf}{From quiver stability to sheaf stability}\n\nIn this section we show that stable representations of the quiver, in their turn,\ngive rise to stable sheaves.\n\n\\defe{red}\nA representation $V_\\bullet$ is called:\n\\begin{itemize}\n\\item\n{\\sf Artin}, if $\\CH^1(V_\\bu) = \\CH^2(V_\\bu) = 0$ and $\\CH^3(V_\\bu)$ is an Artin sheaf;\n\\item\n{\\sf monadic}, if $\\CH^1(V_\\bullet) = \\CH^3(V_\\bullet) = 0$;\n\\item\n{\\sf supermonadic}, if both $V_\\bu$ and $V_\\bu^\\vee$ are monadic.\n\\end{itemize}\n\\end{defeni}\n\n\\lem{smlf}\nA monadic representation $V_\\bullet$ is supermonadic iff $\\CH^2(V_\\bullet)$ is locally free.\n\\end{lemma}\n \\begin{proof} \nSince $V_\\bullet$ is monadic, the complex $\\mathcal{C}(V_\\bullet)$ is isomorphic to $\\CH^2(V_\\bullet)$ in the derived category $\\mathbf{D}(\\mathsf{coh}(\\BPt))$.\nTherefore the complex $\\mathcal{C}(V_\\bullet^\\vee) = \\mathcal{C}(V_\\bullet)^*$ is isomorphic to the derived dual of $\\CH^2(V_\\bu)$. In other words,\n$\\CH^i(\\mathcal{C}(V^\\vee_\\bu)) \\cong \\underline{\\Ext}^{i-2}(\\CH^2(V_\\bullet),\\CO)$. 
So, $V_\\bullet$ is supermonadic iff\n$\\underline{\\Ext}^i(\\CH^2(V_\\bullet),\\CO) = 0$ for $i \\ne 0$, i.e., iff $\\CH^2(V_\\bullet)$ is locally free.\n \\end{proof} \n\n\\lem{monadic-sheaf}\nLet $V_\\bu$ be a monadic representation of $\\bpt$ and let $E = \\CH^2(V_\\bu)$.\nIf $V_\\bu$ is $\\tet^0$-semistable then $E$ is Mumford semistable and\nif $V_\\bu$ is $\\tnt$-semistable then $E$ is Gieseker semistable.\nIn both cases $V_\\bu = V_\\bu(E)$.\n\\end{lemma}\n \\begin{proof} \nAssume $E$ is not Mumford semistable and consider its Harder--Narasimhan filtration.\nBreaking it up at slope $\\mu_M(E)$ we can represent $E$ as an extension\n\\begin{equation*}\n0 \\to E' \\to E \\to E'' \\to 0,\n\\end{equation*}\nsuch that the slopes of all quotients in the Harder--Narasimhan filtration of $E'$ (resp.\\ $E''$) are greater than (resp.\\ less than or equal to) $\\mu_M(E)$.\nLet $(r',d',n')$\nbe the rank, the degree and the second Chern class of $E'$.\nNote that both $E'$ and $E''$ are the cohomology sheaves of the monads $V_\\bu(E')$ and $V_\\bu(E'')$ respectively.\nIndeed, for $E'$ the argument of~\\reft{mon} shows that $\\Ext^2(Q_p(1-p),E') = 0$.\nOn the other hand, since $E'$ is a subsheaf in $E$ we have $\\Hom(Q_p(1-p),E') \\subset \\Hom(Q_p(1-p),E) = 0$.\nAnalogously, for $E''$ the argument of~\\reft{mon} gives the vanishing of $\\Hom$'s, while the surjectivity\nof the map from $E$ gives the vanishing of $\\Ext^2$. It follows that we have an exact sequence of monads\n\\begin{equation*}\n0 \\to V_\\bu(E') \\to V_\\bu \\to V_\\bu(E'') \\to 0.\n\\end{equation*}\nFinally, note that\n\\begin{equation*}\n\\langle \\tet^0, \\alpha(r',d',n') \\rangle = r'd - rd' = rr'\\left(\\frac{d}{r} - \\frac{d'}{r'}\\right) = rr'(\\mu_M(E) - \\mu_M(E')) < 0.\n\\end{equation*}\nHence the subrepresentation $V_\\bu(E') \\subset V_\\bu$ violates\nthe $\\tet^0$-semistability of $V_\\bu$. 
This proves the first part.\n\nIf $E$ is Mumford semistable but not Gieseker semistable, we again take $E'$ to be the part of the Harder--Narasimhan\nfiltration of $E$ with slopes greater than $\mu_G(E)$. Then $\langle \tet^0, \alpha(r',d',n') \rangle = 0$ but\n\begin{equation*}\n\langle \tet^1, \alpha(r',d',n') \rangle = r'(d^2-2n) - r(d'^2 - 2n') = 2rr'\left(\frac{d^2-2n}{2r} - \frac{d'^2-2n'}{2r'}\right) < 0.\n\end{equation*}\nHence the subrepresentation $V_\bu(E') \subset V_\bu$ violates the $\tnt$-semistability of $V_\bu$. This proves the second part.\n\nFinally, we have $V_\bu = V_\bu(E)$ by~\refl{monad-equivalence}.\n \end{proof} \n\n\prop{tet0-ss}\nLet $V_\bu$ be a $\tet^0$-semistable representation of $\bpt$ of dimension $\alpha(r,d,n)$.\nThen $V_\bu$ is S-equivalent to a direct sum $U_\bu \oplus W_\bu$, where $U_\bu$ is supermonadic of dimension $\alpha(r,d,n-k)$\nand $W_\bu$ is Artin of dimension $(k,2k,k)$ for some $0 \le k \le n$.\n\end{propos}\n\n \begin{proof} \nAssume that $\CH^3(V_\bu) \ne 0$. Then $i^*(\CH^3(V_\bullet)) \ne 0$ by~\refp{morph2}(1), hence\nthere is a surjective morphism $\CH^3(V_\bullet) \to i_*\CO_P$ for some point $P \in {\mathbb P}(H)$.\nSince $\CH^3(V_\bullet)$ is the top cohomology of the complex $\mathcal{C}(V_\bullet)$, there is\na canonical morphism $\mathcal{C}(V_\bullet) \to \CH^3(V_\bullet)$.\nComposing these morphisms we get a nontrivial morphism $\mathcal{C}(V_\bullet) \to \CO_P$.\nBy \refp{derequ} this corresponds to a nontrivial morphism $V_\bullet \to W_\bullet(\CO_P)$.\nSince $V_\bu$ is $\tet^0$-semistable and $W_\bu(\CO_P)$ is $\tet^0$-stable, the morphism is surjective.\nTaking $V'_\bu$ to be the kernel of the morphism, we see that $V'_\bu$ is $\tet^0$-semistable and $V_\bu$ is S-equivalent to\n$V'_\bu \oplus W_\bu(\CO_P)$.
The dimension of $V'_\\bu$ is strictly less than that of $V_\\bu$,\nso iterating the construction we reduce to the case when $\\CH^3(V_\\bu) = 0$.\n\n\n\nAssume now that $\\CH^3(V_\\bu) = 0$, but $\\CH^3(V^\\vee_\\bu) \\ne 0$. Then applying the same argument to $V^\\vee_\\bu$\nwe obtain an injection $W_\\bu(\\CO_P) \\to V_\\bu$. Taking $V'_\\bu$ to be the cokernel of this morphism,\nwe see that $V'_\\bu$ is $\\tet^0$-semistable and $V_\\bu$ is S-equivalent to $V'_\\bu \\oplus W_\\bu(\\CO_P)$.\nIterating the construction we reduce to the case when $\\CH^3(V_\\bu) = \\CH^3(V^\\vee_\\bu) = 0$.\n\nFinally, assume that $\\CH^3(V_\\bu) = 0$ and $\\CH^3(V^\\vee_\\bu) = 0$. Let us show that $V_\\bu$ is supermonadic.\nIndeed, if ${\\mathcal F} := \\CH^1(V_\\bu) \\ne 0$ then ${\\mathcal F}$ is a locally free sheaf and we have a left exact sequence\n$0 \\to {\\mathcal F} \\to V_1 \\otimes \\CO(-1) \\to V_2 \\otimes \\CO$. After dualization we get a complex\n\\begin{equation*}\nV_2^\\vee \\otimes \\CO \\to V_3^\\vee \\otimes \\CO(1) \\to {\\mathcal F}^*\n\\end{equation*}\nin which the second arrow is nontrivial (since after second dualization it gives back the embedding ${\\mathcal F} \\to V_1\\otimes\\CO(-1)$).\nThis means that $\\CH^3(V^\\vee_\\bu) \\ne 0$, thus contradicting the assumption.\nTherefore $\\CH^1(V_\\bu) = 0$.\nAnalogous argument with $V_\\bu$ replaced by $V^\\vee_\\bu$ shows that $\\CH^1(V^\\vee_\\bu) = 0$, so $V_\\bu$ is indeed supermonadic.\n\nAs at each step of the above procedure the dimension of the representation\nhas decreased by $(1,2,1)$,\nthe dimension of the supermonadic part we end up with is equal to\n\\begin{equation*}\n\\alpha(r,d,n) - (k,2k,k) = \\alpha(r,d,n-k).\n\\end{equation*}\nSince all the components of a dimension vector are nonnegative, we have $n-k \\ge d(d+1)\/2$, in particular $k \\le n$.\n \\end{proof} \nNote that we have\n\\[(n,2n,n) = \\alpha(0,0,n).\\]\n\\cor{tet-ss-artin}\nLet $W_\\bu$ be a $\\tet^0$-semistable representation of 
$\bpt$ of dimension $(n,2n,n)$. Then $W_\bu$ is an Artin representation.\n\end{corol}\n \begin{proof} \nApplying~\refp{tet0-ss} we see that $W_\bu$ is S-equivalent to a sum of\nan Artin representation and a supermonadic representation $U_\bu$ of dimension $\alpha(0,0,n-k)$ for some $k$.\nBy definition and~\refl{smlf} the corresponding complex $\mathcal{C}(U_\bu)$ is a monad and its middle cohomology\nis a locally free sheaf of rank $0$. Thus the cohomology is zero and the complex $\mathcal{C}(U_\bu)$ is acyclic.\nBy~\refp{derequ} this means that $U_\bu = 0$, so there is no supermonadic part. It follows that $W_\bu$\nis S-equivalent to an Artin representation. Consequently, $\CH^1(W_\bu) = \CH^2(W_\bu) = 0$,\nand $\CH^3(W_\bu)$ is an iterated extension of Artin sheaves. Hence it is an Artin sheaf itself,\nand so $W_\bu$ is also an Artin representation.\n \end{proof} \n\n\nThe first part of the following result can be found in Lemma~6.14 of \cite{ns}.\n\n\prop{tnt-ss}\nLet $V_\bu$ be a $\tnt$-semistable representation of $\bpt$ of dimension $\alpha(r,d,n)$ with $0 \le d < r$.\nThen $V_\bu$ fits into a short exact sequence\n\begin{equation*}\n0 \to W_\bu \to V_\bu \to U_\bu \to 0,\n\end{equation*}\nwhere $U_\bu$ is supermonadic and $W_\bu$ is Artin.\nMoreover, $V_\bu = V_\bu(E)$, where $E$ is a Gieseker semistable sheaf of rank $r$, degree $d$ and $c_2 = n$,\n$U_\bu = V_\bu(E^{**})$, and $W_\bu = W_\bu(E^{**}\/E)$.\n\end{propos}\n\n \begin{proof} \nThe argument of~\refp{tet0-ss} shows that there is a filtration on $V_\bu$ in which there are several factors\nwhich are Artin representations of dimension $(1,2,1)$ and one supermonadic factor of dimension $\alpha(r,d,n-k)$\nfor some $0 \le k \le n$. But\n\begin{equation*}\n\langle \tet^1, (1,2,1) \rangle = 2r > 0,\n\end{equation*}\nhence Artin factors can appear only before the supermonadic factor.
This proves that the filtration\ngives the required exact sequence.\n\nApplying the functor $\mathcal{C}$ to the exact sequence and taking into account $\CH^2(W_\bu) = \CH^3(U_\bu) = 0$,\nwe get the long exact sequence of cohomology sheaves\n\begin{equation*}\n0 \to \CH^2(V_\bu) \to \CH^2(U_\bu) \to \CH^3(W_\bu) \to \CH^3(V_\bu) \to 0.\n\end{equation*}\nIf $\CH^3(V_\bu) \ne 0$ then the argument of the proof of~\refp{tet0-ss} shows that\nthere is a surjection $V_\bu \to W_\bu(\CO_P)$ which, as we observed, contradicts the $\tnt$-semistability of $V_\bu$.\nThus $V_\bu$ is monadic. Denote $E = \CH^2(V_\bu)$, $\CE = \CH^2(U_\bu)$ and $F = \CH^3(W_\bu)$. Then the above sequence\ncan be rewritten as\n\begin{equation*}\n0 \to E \to \CE \to F \to 0.\n\end{equation*}\nNote that $\CE$ is locally free by~\refl{smlf}, and $F$ is Artin since $W_\bu$ is. Dualizing the sequence and taking into account\nthat $\underline{\Hom}(F,\CO) = \underline{\Ext}^1(F,\CO) = 0$ since $F$ is Artin,\nwe deduce that $E^* = \CE^*$. Therefore $E^{**} = \CE^{**} = \CE$ since $\CE$ is locally free and the map $E \to \CE = E^{**}$\nis the canonical embedding. Thus $E$ is torsion free and $F \cong E^{**}\/E$.
Moreover, by~\\refp{derequ}\nit follows that $V_\\bu = V_\\bu(E)$ and $U_\\bu = V_\\bu(E^{**})$, while $W_\\bu = W_\\bu(E^{**}\/E)$.\n\nWe finish by noting that $E$ is Gieseker semistable by~\\refl{monadic-sheaf}.\n \\end{proof} \n\n\n\n\\cor{coprime-stable}\nIf $r$ and $d$ are coprime then a $\\tet^0$-semistable representation is $\\tnt$-semistable if and only if it has no Artin quotients,\ni.e., if it is monadic.\n\\end{corol}\n \\begin{proof} \nFirst let us show that if $r$ and $d$ are coprime and $V_\\bu$ is a supermonadic $\\tet^0$-semistable representation then it is $\\tet^0$-stable.\nIndeed, $V_\\bu = V_\\bu(E)$ for a Mumford semistable sheaf $E$ by~\\refl{monadic-sheaf}, and moreover, the sheaf $E$ is locally free\nby~\\refl{smlf}. So, if $0 \\to V'_\\bu \\to V_\\bu \\to V''_\\bu \\to 0$ is an exact sequence of representations with both $V'_\\bu$ and $V''_\\bu$ nonzero and\n$\\mu_{\\tet^0}(V'_\\bu) = \\mu_{\\tet^0}(V''_\\bu) = 0$ then by~\\refl{eq} we have an exact sequence\n\\begin{equation*}\n0 \\to \\CH^2(V'_\\bu) \\to E \\to \\CH^2(V''_\\bu) \\to \\CH^3(V'_\\bu) \\to 0,\n\\end{equation*}\nand moreover, $V''_\\bu$ is monadic, $\\mu_M(\\CH^2(V''_\\bu)) = \\mu_M(E) = d\/r$, and $\\CH^3(V'_\\bu)$ is Artin.\nBut since $r$ and $d$ are coprime either the rank and the degree of $\\CH^2(V''_\\bu)$ are zero, or equal to $r$ and $d$ respectively.\nThe first is impossible since then $F := \\CH^2(V''_\\bu)$ is an Artin sheaf and $V''_p = \\Ext^1(Q_p(1-p),F) = 0$, so $V''_\\bu = 0$.\nThe second is impossible since then the rank of $\\CH^2(V'_\\bu)$ is zero, and as $\\CH^2(V'_\\bu)$ being a subsheaf in $E$\nis torsion free, it should be zero. Thus $V'_\\bu$ is a nonzero Artin subrepresentation in $V_\\bu$ which means that $V^\\vee_\\bu$\nhas a nonzero Artin quotient representation and hence cannot be monadic.\n\nNow let $V_\\bu$ be an arbitrary $\\tet^0$-semistable but not $\\tnt$-semistable representation. 
Consider all composition factors of $V_\bu$\nin the category of $\tet^0$-semistable representations. By~\refp{tet0-ss} and the above argument these are Artin representations and\nthe supermonadic part of $V_\bu$. Among those, the Artin representations have positive $\tet^1$-slope, hence the supermonadic part\nis the only composition factor with negative $\tet^1$-slope. So, the only way $V_\bu$ can fail to be $\tnt$-semistable\nis if $V_\bu$ has an Artin quotient.\n \end{proof} \n\n\n\n\n\n\ssec{moduli-spaces}{Moduli spaces}\n\nLet $\CM^\tet_\tau(r_1,r_2,r_3)$ denote the moduli space of $\tet$-semistable\n$(r_1,r_2,r_3)$-dimensional representations of the quiver $\bpt$, as\ndefined by King~\cite{K}.\nIt is a coarse moduli space for families of $\tet$-semistable representations\nof the quiver $\bpt$ of dimension $(r_1,r_2,r_3)$. In particular, its closed points\nare in bijection with S-equivalence classes of $\tet$-semistable representations.\n\nFor rational $\tet$ there is an explicit GIT construction of the moduli space. One starts with {\sf the representation space} of $\bpt$:\n\begin{equation}\label{rt-embedded}\n\mathsf{R}_\tau(r_1,r_2,r_3) \subset \Hom(\C^{r_1}\otimes (A_\tau^!)_1,\C^{r_2}) \times \Hom(\C^{r_2}\otimes (A_\tau^!)_1,\C^{r_3}),\n\end{equation}\nconsisting of those pairs of maps $f:\C^{r_1} \otimes (A_\tau^!)_1 \to \C^{r_2}$, $g:\C^{r_2} \otimes (A_\tau^!)_1 \to \C^{r_3}$ such that the composition\n$g\circ (f \otimes \id):\C^{r_1} \otimes (A_\tau^!)_1 \otimes (A_\tau^!)_1 \to \C^{r_3}$ factors through the multiplication map\n$\C^{r_1} \otimes (A_\tau^!)_1 \otimes (A_\tau^!)_1 \to \C^{r_1} \otimes (A_\tau^!)_2$.
Clearly, \\eqref{rt-embedded} is a Zarisky closed subset\nin an affine space.\nThe group\n\\begin{equation*}\n\\GL(r_1,r_2,r_3)=\\GL(r_1)\\times \\GL(r_2)\\times \\GL(r_3)\n\\end{equation*}\nacts naturally on $\\mathsf{R}_\\tau(r_1,r_2,r_3)$.\nGiven a rational polarization $\\tet$, in the trivial\nbundle,\nlet $\\C[\\mathsf{R}_\\tau(r_1,r_2,r_3)]^{\\GL(r_1,r_2,r_3),p\\tet}$\nbe the vector space of polynomial $\\GL(r_1,r_2,r_3)$-semiinvariants of\nweight $p\\tet$ (this space is declaired to be zero unless\n$p\\tet$ is an integral weight).\nOne defines an associated GIT quotient by\n\\begin{equation*}\n\\mathsf{R}_\\tau(r_1,r_2,r_3) \/\\!\/_\\tet \\GL(r_1,r_2,r_3) := \\Proj\\left(\\bigoplus_{p = 0}^\\infty \\C[\\mathsf{R}_\\tau(r_1,r_2,r_3)]^{\\GL(r_1,r_2,r_3),p\\tet}\\right).\n\\end{equation*}\nThen,\naccording to ~\\cite{K}, one has \n$\\CM^\\tet_\\tau(r_1,r_2,r_3) \\cong \\mathsf{R}_\\tau(r_1,r_2,r_3) \/\\!\/_\\tet\n\\GL(r_1,r_2,r_3)$.\nFurther,\nit turns out that the space of all polarizations $\\tet$ has a chamber structure\nand the moduli space $\\CM^\\tet_\\tau(r_1,r_2,r_3)$ depends only on the chamber in which $\\tet$ sits.\nThis allows to define $\\CM^\\tet_\\tau$ for arbitrary (real) polarization $\\tet$ by taking rational $\\tet'$ in the same chamber as $\\tet$\nand setting $\\CM^\\tet_\\tau(r_1,r_2,r_3) := \\CM^{\\tet'}_\\tau(r_1,r_2,r_3)$.\n\nAnalogously one constructs a coarse moduli space $\\Mtt(r_1,r_2,r_3)$ for a pair of polarizations $(\\tet,\\tet')$ by taking\nan arbitrary polarization in the chamber containing $\\tet + \\eps\\tet'$ for all sufficiently small and positive $\\eps$.\n\nIt has been shown in \\cite{K} that the moduli space of\nsemistable representations of any quiver that has no\n oriented cycles is a projective variety. It follows,\n since\nthe quiver $\\bpt$ has no oriented cycles,\nthat each of the above moduli spaces $\\CM^\\tet_\\tau(r_1,r_2,r_3)$\nis a projective variety. 
This variety comes equipped\nwith a natural $\SL(H)$-action.\nFinally, we remark that if the dimension vector\n$(r_1,r_2,r_3)$ is primitive, i.e., indivisible, then\n$\CM^\tet_\tau(r_1,r_2,r_3)$ is a {\em fine} moduli space.\n\medskip\n\n\n\n\n\n\n\n\n\n\n\n\n\nc{\reduced}{{\mathrm{red}}}\n\nBelow we discuss moduli spaces of several classes of representations of the quiver $\bpt$.\nFirst, recall that by~\refc{tet-ss-artin} any $\tet^0$-semistable representation of dimension $(n,2n,n)$\nis an Artin representation. So, we refer to the corresponding moduli space as {\em the moduli space of Artin representations}\nand denote it by ${}^A\mathsf{M}_\tau(n,2n,n)$. Thus we have\n\begin{equation*}\n{}^A\mathsf{M}_\tau(n,2n,n) := \CM^{\tet^0}_\tau(n,2n,n) = \mathsf{R}_\tau(n,2n,n)\/\!\/_{\tet^0} \GL(n,2n,n).\n\end{equation*}\nThe moduli space of Artin representations is highly non-reduced. In what follows, however, we will\nonly need a description of the underlying reduced scheme, which we denote by ${}^A\mathsf{M}_\tau(n,2n,n)_\reduced$.\nThe proof of the following proposition can be found in the Appendix.\n\n\prop{artin-moduli-k}\nThe map $W_\bu \mapsto \supp(\CH^3(W_\bu))$ gives an $\SL(H)$-equivariant isomorphism\n\begin{equation*}\n{}^A\mathsf{M}_\tau(n,2n,n)_\reduced \cong S^n(\PP(H)).\n\end{equation*}\n\end{propos}\n\n\n\n\nThe dimension vector $(n,2n,n)$ considered here is a special case of the vector $\alpha(r,d,n)$ for $r = d = 0$.\nWe also consider a general moduli space of $\tet^0$-semistable $\alpha(r,d,n)$-dimensional representations\nof $\bpt$, which we call {\sf the Uhlenbeck moduli space} of sheaves on $\BPt$\n(the reasons behind this choice of name will become clear later).
We denote it\n\begin{align*}\n\UMt(r,d,n) := \CM^{\tet^0}_\tau(\alp(r,d,n)) = \mathsf{R}_\tau(\alp(r,d,n))\/\!\/_{\tet^0} \GL(\alp(r,d,n)).\n\end{align*}\nThe last moduli space we consider is the moduli space of $\tnt$-semistable $\alpha(r,d,n)$-dimensional representations\n\begin{align*}\n\GMt(r,d,n) := \Mt(\alp(r,d,n)) = \mathsf{R}_\tau(\alpha(r,d,n))\/\!\/_{\tnt}\GL(\alpha(r,d,n)).\n\end{align*}\nWe call this moduli space {\sf the Gieseker moduli space} of sheaves on $\BPt$. The reason for this is the following\n\n\prop{nevins}\nThe Gieseker moduli space $\GMt(r,d,n)$ is isomorphic to the moduli space of Gieseker\nsemistable sheaves on $\BPt$ constructed in~\cite{ns}. Moreover, the open subset\nof $\GMt(r,d,n)$ of $\tnt$-stable representations corresponds, via the\nisomorphism,\nto the open set of\nGieseker stable sheaves on $\BPt$.\n\end{propos}\n \begin{proof} \nThis follows immediately from~\refl{kronecker-stability}, as the functor of $\tnt$-(semi)stable representations of the quiver $\bpt$\nis isomorphic to the functor of (semi)stable Kronecker complexes considered in~\cite{ns}.\n \end{proof} \n\n\cor{sm}\nIf $\mathsf{gcd}(r,d,n)=1$ then $\GMt(r,d,n)$ is a fine moduli space;\nmoreover, this moduli space is smooth.\n\end{corol}\n \begin{proof} \nBy~\cite[Prop.~7.15]{ns} the moduli space of semistable Kronecker complexes is fine.
As the functor of semistable Kronecker complexes\nis isomorphic to the functor of $\tnt$-semistable representations of $\bpt$, we conclude that $\GMt(r,d,n)$ is also a fine moduli space.\nMoreover, from $\mathsf{gcd}(r,d,n) = 1$ it follows that all $\tnt$-semistable representations of the quiver are $\tnt$-stable,\nhence all Gieseker semistable sheaves are stable, and the smoothness of the moduli space then follows from~\cite[Thm.~8.1]{ns}.\n \end{proof} \n\n\n\n\n\n\n\n\n\ssec{stratifications}{Stratifications}\n\nRecall that by~\refp{tnt-ss} any $\tnt$-semistable representation $V_\bu$ can be written as $V_\bu(E)$\nfor a Gieseker semistable sheaf $E$.\nThis gives a decomposition of the moduli space $\GMt(r,d,n)$ into pieces by the length of $E^{**}\/E$.\nIt will be shown in the Appendix at the end of the paper that\nthis decomposition is, in fact, an algebraic stratification.\n\nThe proofs of the other results stated in this subsection\nare also deferred to Section~4.\n\n\lem{stratification}\nThe Gieseker moduli space $\GMt(r,d,n)$ is naturally stratified by locally closed $\SL(H)$-invariant subsets\n\begin{equation*}\n\GMt(r,d,n) = \bigsqcup_{0\le k\le n} \GMt^k(r,d,n),\n\end{equation*}\nwhere the stratum $\GMt^k(r,d,n)$\ncorresponds to the locus of Gieseker semistable sheaves $E$ on $\BPt$ with $c_2(E^{**}\/E)=k$.\n\end{lemma}\n\nIn particular, the open stratum $\GMt^0(r,d,n) \subset \GMt(r,d,n)$ parameterizes locally free Gieseker semistable sheaves.\n\nThere is also an analogous stratification of the Uhlenbeck moduli space.\n\n\lem{stratification-uhlenbeck}\nThe Uhlenbeck moduli space $\UMt(r,d,n)$ is naturally stratified by locally closed $\SL(H)$-invariant subsets\n\begin{equation*}\n\UMt(r,d,n) = \bigsqcup_{0\le k\le n} \UMt^k(r,d,n),\n\end{equation*}\nwhere the stratum $\UMt^k(r,d,n)$\ncorresponds to the locus of Mumford semistable sheaves $E$ on $\BPt$ with $c_2(E^{**}\/E)=k$.\n\end{lemma}\n\n\n\n\n\n\n\n\nThe
{\\em natural} stratifications of the Gieseker and the Uhlenbeck moduli spaces have highly nonreduced\nstrata. The reason for that is the nonreducedness of the moduli space of Artin sheaves, or going one step\ndeeper, the nonreducedness of the scheme of ``commutative points'' of $\\BPt$. However, as we have only\ntopological applications in mind, the nilpotents in the structure sheaves of the strata are irrelevant\nfor our purposes, so for this reason we replace each stratum of both moduli spaces by the reduced\nscheme underlying the corresponding natural stratum. \nThus, from now on we will abuse the notation and write\n$\\GMt^k(r,d,n)$, resp. $\\UMt^k(r,d,n)$, for the stratum of the\ncorresponding stratification equipped with {\\textsf{reduced scheme structure}}.\n\n\n\nIt follows from standard results of geometric invariant theory\n(see~\\cite{DH}) that there is a canonical $\\SL(H)$-equivariant \nprojective morphism\n\\begin{equation*}\n\\gam_\\tau\\colon\\GMt(r,d,n)\\to\\UMt(r,d,n),\n\\end{equation*}\nresulting from a specialization of $\\tnt$-semistability\nto $\\tet^0$-semistability.\n\n\\begin{Rem} We do not know how to define the morphism $\\gam_\\tau$ in\n terms\nof coherent sheaves on $\\BPt$, without using identifications of \nmoduli spaces of coherent sheaves with the corresponding moduli spaces\nof\nquiver representations.\n\\end{Rem}\n\n\nThe main result of this section establishes a compatibility between the constructed stratifications\nof the Gieseker and Uhlenbeck moduli spaces and describes the relation between the strata.\n\n\n\n\\th{str}\n(1) The map $\\gamma_\\tau:\\GMt(r,d,n) \\to \\UMt(r,d,n)$ is compatible with the stratifications, i.e.,\n$\\gamma_\\tau(\\GMt^k(r,d,n)) \\subset \\UMt^k(r,d,n)$.\n\nIn the case where $\\tau\\neq 0$ and the integers $r$ and $d$ are coprime, \nthe Gieseker compactification is smooth and the following holds:\n\\vskip 2pt \n\n(2) The open set $\\GMt^0(r,d,n)$ is the locus of Gieseker stable\nsupermonadic
representations; furthermore, this open set\ncorresponds, via the isomorphism of Proposition \\ref{P:nevins},\nto the\nlocus of locally free Gieseker stable sheaves on $\\BPt$.\nMoreover, the\nmap $\\gamma_\\tau$ yields an isomorphism $\\GMt^0(r,d,n){\\stackrel{\\sim}{\\longrightarrow}}\n\\UMt^0(r,d,n)$.\n\n\n\n(3) For any\n$k > 0$ one has an $\\SL(H)$-equivariant isomorphism\n\\begin{equation*}\n\\UMt^k(r,d,n) \\cong \\GMt^0(r,d,n-k) \\times {}^A\\mathsf{M}_\\tau(k,2k,k)_\\reduced \\cong \\GMt^0(r,d,n-k) \\times S^k{\\mathbb P}(H).\n\\end{equation*}\nUsing this isomorphism, for $E\\in\n\\GMt^k(r,d,n)$ we have\n\\begin{equation*}\n\\gamma_\\tau(E) = (E^{**},\\supp(E^{**}\/E)).\n\\end{equation*}\nIn particular, \nthe fiber of $\\gamma_\\tau$ over a point $(\\CE,D) \\in \\GMt^0(r,d,n-k) \\times S^k{\\mathbb P}^1 \\subset \\UMt(r,d,n)$\nis the underlying reduced scheme for the moduli space of subsheaves $E \\subset \\CE$ with $\\supp(\\CE\/E) = D$.\n\\eth\n\n\n\\begin{Rem} The relation between the moduli spaces $\\GMt(r,d,n)$ and $\\UMt(r,d,n)$\nis completely analogous to that of the Gieseker and Uhlenbeck compactifications of the moduli\nspaces of vector bundles on commutative algebraic surfaces (this justifies\nthe use of these names in our situation).\nAn important difference between the commutative and noncommutative cases\nis in the dimensions of the strata: in the commutative case the second factors in the product\nexpression for Uhlenbeck strata are symmetric powers of the surface, while in the noncommutative\ncase we only have a symmetric power of a curve.\nThis fact will play a crucial role in subsequent sections.\n\\end{Rem} \n\n\n\n\n\n\n\n\n\\sec{ch6}{Rank $1$ sheaves and the Calogero--Moser space}\n\nIn this section we study the Gieseker and the Uhlenbeck moduli spaces of rank 1 and degree 0\ntorsion free sheaves on $\\BPt$.\n\n\\ssec{calmos}{The compactifications}\n\n\n\n\n\n\nTo unburden the notation we write\n$$\n\\obMt^n=\\GMt(1,0,n),\\quad
\\hbMt^n=\\UMt(1,0,n), \\quad\n\\obMt^{n,k}=\\GMt^k(1,0,n), \\quad\n\\hbMt^{n,k}=\\UMt^k(1,0,n).\n$$\nIt is well known, cf. \\cite[Prop.~8.13]{ns}, that the open strata $\\GMt^{n,0} \\subset \\GMt^n$ and\n$\\UMt^{n,0} \\subset \\UMt^n$ can be identified with the Calogero--Moser\nspace.\nThus, the varieties $\\obMt^n$ and $\\hbMt^n$ provide\ntwo {\\em different} compactifications of the Calogero--Moser\nspace,\nto be called\n{\\sf the Gieseker and the Uhlenbeck compactifications}, respectively.\nFurthermore, since the variety $\\obMt^n$ is smooth, the morphism $\\gamma_\\tau \\colon \\obMt^n \\to \\hbMt^n$ is a resolution of singularities.\nLater on, we will use this resolution to compute the stalks of the IC sheaf of the Uhlenbeck compactification.\n\n\n\n\\th{isom}\nFor $\\tau \\ne 0$ we have $\\SL(H)$-equivariant isomorphisms\n\\begin{equation*}\n\\GMt^{n,0} = \\UMt^{n,0} \\cong \\sM_\\tau^n.\n\\end{equation*}\n\\eth\n \\begin{proof} \nThe first isomorphism follows from~\\refp{nevins} and~\\cite[Prop.~8.13]{ns}. The second\nis a consequence of the first and~\\reft{str}(2).\n \\end{proof} \n\n\n\n\nIn \\refss{moduli-spaces}, we have introduced a contraction map $\\gamma_\\tau\\colon\\obMt^n\\to\\hbMt^n$.\nBy~\\reft{str} it sends a torsion free sheaf $E$ to $(E^{**},\\supp(E^{**}\/E))$,\nand for any $0\\leq k\\leq n$, we have\n\\begin{equation*}\n\\gamma_\\tau(\\obMt^{n,k})\\ =\\ \\hbMt^{n,k}\\ =\\ \\sM_\\tau^{n-k}\\x S^k{\\mathbb P}(H).\n\\end{equation*}\nBelow, we are going to describe the fibers of the map $\\gamma_\\tau$.\nChoose $0 \\ne h \\in H$ and let\n\\begin{equation*}\n{\\mathbb A}^1_h := {\\mathbb P}(H) \\smallsetminus \\{h\\}.\n\\end{equation*}\nThus, ${\\mathbb A}^1_h$ is an affine line and, for any $n\\geq 0$, the set\n$S^n{\\mathbb A}^1_h$ is Zariski open and dense in $S^n{\\mathbb P}(H)$.\nIt is clear that these sets for all $h \\in H$ form an open covering of $S^n\\PP(H)$.\n\nConsider the zeroth Calogero--Moser space $\\sM_\\tau^0$.
Clearly, this is just a point,\nand under the isomorphism of~\\reft{isom} it corresponds to the trivial line bundle $\\CO_{\\BPt}$.\nFor each nonzero vector $h \\in H$ consider the open subset\n$\\{\\CO\\} \\times S^n{\\mathbb A}^1_h = \\sM^0_\\tau \\times S^n{\\mathbb A}^1_h \\subset \\sM^0_\\tau \\times S^n\\PP(H) = \\hbMt^{n,n}$\nand its preimage under the map $\\gamma_\\tau:\\obMt^{n,n} \\to \\hbMt^{n,n}$:\n\\begin{equation}\\label{equation-bnh}\nB^n_h =\n\\gamma_\\tau^{-1}(\\{\\calO\\}\\x S^n{\\mathbb A}^1_h).\n\\end{equation}\nAnalogously, we can take an arbitrary locally free sheaf $\\CE$ of rank 1 and degree 0, consider the locally closed subset\n$\\{\\CE\\} \\times S^k{\\mathbb A}^1_h \\subset \\sM^m_\\tau \\times S^k{\\mathbb A}^1_h \\subset \\sM^m_\\tau \\times S^k\\PP(H) = \\hbMt^{m+k,k}$,\nand consider its preimage under the map $\\gamma_\\tau:\\obMt^{m+k,k} \\to \\hbMt^{m+k,k}$.\n\n\n\\prop{dep}\nFor any locally free sheaf $\\calE$ of rank $1$, $\\calE\\in\\sM_\\tau^{m}$,\nthere is an isomorphism\n\\begin{equation*}\n\\gamma_\\tau^{-1}(\\{\\calE\\}\\x S^k{\\mathbb A}^1_h)\\cong B^k_h.\n\\end{equation*}\n\\end{propos}\n \\begin{proof} \nThere is an integer $p \\in \\ZZ$ and two maps\n\\begin{equation*}\n\\phi:\\CO(-p) \\to \\CE\n\\qquad\\text{and}\\qquad\n\\phi':\\CE \\to \\CO(p)\n\\end{equation*}\nsuch that $i^*(\\phi) = h^p$ and $i^*(\\phi') = h^p$.
Indeed, take $p$ sufficiently large to have\n\\begin{equation*}\n\\Ext^1(\\CE,\\CO(p-1)) = \\Ext^1(\\CO(-p),\\CE(-1)) = 0\n\\end{equation*}\nand define $\\phi$ and $\\phi'$ as lifts of the compositions in the next two diagrams:\n\\begin{equation*}\n\\xymatrix@C=1em{\n&& \\CO(-p) \\ar[r] \\ar@{..>}[d]_{\\phi} & i_*i^*\\CO(-p) \\ar[d]^{h^p} \\\\\n0 \\ar[r] & \\CE(-1) \\ar[r] & \\CE \\ar[r] & i_*i^*\\CE \\ar[r] & 0\n}\n\\qquad\n\\xymatrix@C=1.5em{\n&& \\CE \\ar[r] \\ar@{..>}[d]_{\\phi'} & i_*i^*\\CE \\ar[d]^{h^p} \\\\\n0 \\ar[r] & \\CO(p-1) \\ar[r] & \\CO(p) \\ar[r] & i_*i^*\\CO(p) \\ar[r] & 0\n}\n\\end{equation*}\n\n\nThe maps $\\phi$ and $\\phi'$ give morphisms between the moduli spaces of surjections\n$\\CE \\twoheadrightarrow F$ and $\\CO \\twoheadrightarrow F$ onto Artin sheaves $F$\nof length $k$ with $\\supp(F) \\subset {\\mathbb A}^1_h$:\n\\begin{equation*}\n(f:\\CE \\twoheadrightarrow F) \\mapsto (f \\circ \\phi(p):\\CO \\twoheadrightarrow F)\n\\qquad\\text{and}\\qquad\n(g:\\CO \\twoheadrightarrow F) \\mapsto (g(p) \\circ \\phi':\\CE \\twoheadrightarrow F),\n\\end{equation*}\nwhere in both cases we identify $F$ with $F(p)$ via $h^p$. It is straightforward to check\nthat these maps are mutually inverse. As the preimages $\\gamma_\\tau^{-1}(\\CE\\times S^k{\\mathbb A}^1_h)$ and\n$B^k_h = \\gamma_\\tau^{-1}(\\CO\\times S^k{\\mathbb A}^1_h)$ are identified by~\\reft{str}(3) with the reduced schemes\nunderlying these moduli spaces, the constructed maps provide an isomorphism between them as well.\n \\end{proof} \n\n\nThe spaces $B^n_h$ come with a natural map $\\gamma_\\tau:B^n_h \\to S^n{\\mathbb A}^1_h$.\nIn fact they enjoy the following factorization property.
Define\nthe open subset $(S^{k_1}{\\mathbb A}^1\\times S^{k_2}{\\mathbb A}^1)_{{\\on{disj}}} \\subset S^{k_1}{\\mathbb A}^1\\times S^{k_2}{\\mathbb A}^1$ as\n\\begin{equation}\\label{disjoint-divisors}\n(S^{k_1}{\\mathbb A}^1\\times S^{k_2}{\\mathbb A}^1)_{{\\on{disj}}} = \\{ (D_1,D_2) \\in S^{k_1}{\\mathbb A}^1 \\times S^{k_2}{\\mathbb A}^1 \\ |\\ \\supp(D_1) \\cap \\supp(D_2) = \\emptyset \\}\n\\end{equation}\nand\n\\begin{equation*}\n(B_h^{k_1} \\times B_h^{k_2})_{{\\on{disj}}} := (\\gamma_\\tau \\times \\gamma_\\tau)^{-1}((S^{k_1}\\AA_h^1 \\times S^{k_2}\\AA_h^1)_{{\\on{disj}}}) \\subset B_h^{k_1} \\times B_h^{k_2}.\n\\end{equation*}\n\n\\prop{factorization-b}\nThe collection of spaces $B^n_h$ has a factorization property, i.e.\\ there is a collection of maps\n\\begin{equation*}\n\\psi_{k_1,k_2}:(B_h^{k_1}\\times B_h^{k_2})_{{\\on{disj}}} \\to B_h^{k_1+k_2}\n\\end{equation*}\nfor all positive integers $k_1,k_2$ which has the following properties: \n\\begin{itemize}\n\\item (associativity) $\\psi_{k_1+k_2,k_3} \\circ (\\psi_{k_1,k_2} \\times \\id) = \\psi_{k_1,k_2+k_3} \\circ (\\id \\times \\psi_{k_2,k_3})$ for all $k_1,k_2,k_3$;\n\\item (commutativity) the maps $\\psi_{k,k} \\colon (B_h^k\\times B_h^k)_{{\\on{disj}}} \\to B_h^{2k}$ commute with the transposition of the factors on the source for all $k$;\n\\item (compatibility with the addition) the following diagram is Cartesian\n\\begin{equation*}\n\\xymatrix{\n(B_h^{k_1}\\times B_h^{k_2})_{{\\on{disj}}} \\ar[rr]^-{\\psi_{k_1,k_2}} \\ar[d]_{\\gamma_\\tau\\times\\gamma_\\tau} && B_h^{k_1+k_2} \\ar[d]^{\\gamma_\\tau} \\\\\n(S^{k_1}{\\mathbb A}^1\\times S^{k_2}{\\mathbb A}^1)_{{\\on{disj}}} \\ar[rr]^-{a_{k_1,k_2}} \t&& S^{k_1+k_2}{\\mathbb A}^1\n}\n\\end{equation*}\nwhere the bottom arrow is the addition morphism: $(D_1,D_2) \\mapsto D_1 + D_2$.\n\\end{itemize}\n\n\\end{propos}\n \\begin{proof} \nA point of $(B^{k_1}_h \\times B^{k_2}_h)_{{\\on{disj}}}$ can be represented by a pair of Artin sheaves $F_1,F_2$ of
length $k_1$ and $k_2$ respectively\nwith epimorphisms $\\CO \\twoheadrightarrow F_1$ and $\\CO \\twoheadrightarrow F_2$. \nConsider the sum $F = F_1 \\oplus F_2$ and the map $\\CO \\to F$ given by the sum of the two above maps. Let us show it is surjective.\nBy~\\refp{morph2}(2) it is enough to check that the map $\\CO_{\\PP(H)} \\to i^*F = i^*F_1 \\oplus i^*F_2$ is surjective.\nBut as the supports of the sheaves $i^*F_1$ and $i^*F_2$ are disjoint, this is equivalent to the surjectivity\nof each of the maps $\\CO_{\\PP(H)} \\to i^*F_1$ and $\\CO_{\\PP(H)} \\to i^*F_2$ which we have again by~\\refp{morph2}(2).\nThis means that the sheaf $F$ with the constructed epimorphism $\\CO \\twoheadrightarrow F$ gives a point of $B^{k_1+k_2}_h$, and thus a morphism\n\\begin{equation*}\n\\psi_{k_1,k_2} \\colon (B^{k_1}_h \\times B^{k_2}_h)_{{\\on{disj}}} \\to B^{k_1+k_2}_h\n\\end{equation*}\nis defined. Let us show it is a factorization. Indeed, the associativity and the commutativity properties\nare evident, so it remains to check the compatibility with the addition, i.e.\\ that the corresponding diagram is Cartesian.\nThe commutativity of the diagram follows from~\\refl{artin-support}, so it remains to note that if $F$ is an Artin sheaf \nof length $k_1+k_2$ such that $\\supp(F) = D_1 + D_2$ with disjoint divisors $D_1 \\in S^{k_1}\\AA^1_h$ and $D_2 \\in S^{k_2}\\AA^1_h$, \nthen $F$ has a unique representation as a direct sum $F = F_1 \\oplus F_2$ with $\\supp(F_1) = D_1$\nand $\\supp(F_2) = D_2$ (this follows easily from~\\refp{Art0}). \n \\end{proof} \n\n\nThe variety $B_h^k$ has a nice linear algebra description.\nFix a vector space $V$ of dimension~$k$.
Let\n\\eq{rel}\n\\widetilde{B}{}^k_h = \\{ (Y,Z,v)\\in \\End(V)\\times \\End(V)\\times V\\ |\\ [Y,Z]=\\tau Z^3\\ \\text{and $v$ is cyclic} \\}.\n \\end{equation} \nHere we say that a vector $v$ is {\\sf cyclic} for a pair of matrices $(Y,Z)$ if\nthere is no proper vector subspace $V' \\subset V$ that contains $v$ and is both $Y$-stable and $Z$-stable.\nOne has a natural $\\GL(V)$-action on $\\widetilde{B}{}^k_h$\ngiven by $g:\\ (Y,Z,v)\\mapsto (gYg^{-1}, gZg^{-1}, gv)$.\n\n\n\\th{26}\nThe action of $\\GL(V)$ on $\\widetilde{B}{}^k_h$ is free and\n\\begin{equation*}\nB^k_h \\cong (\\widetilde{B}{}^k_h\/\\GL(V))_\\reduced.\n\\end{equation*}\nUnder this isomorphism the map $\\gamma_\\tau:B^k_h \\to S^k{\\mathbb A}^1_h$ is induced by the map $\\widetilde{B}^k_h \\to S^k{\\mathbb A}^1_h$\nwhich takes $(Y,Z,v)$ to $\\Spec(Y)$.\n\\eth\n\n \\begin{proof} \nAssume that $g\\in \\GL(V)$ acts trivially on a triple $(Y,Z,v)$. Let\n$V^g\\subset V$ be the space of invariants of $g$. Then $v\\in V^g$ and\n$Y(V^g)\\subset V^g,\\ Z(V^g)\\subset V^g$, hence $V^g=V$ as $v$ is cyclic, and so $g=1$.\n\n\nNow consider the moduli space of surjections $\\CO \\twoheadrightarrow F$ with $F$ an Artin sheaf\nof length $k$ with $\\supp(F) \\subset {\\mathbb A}^1_h$. Let us show it is isomorphic to the quotient\n$\\widetilde{B}{}^k_h\/\\GL(V)$. Then passing to the underlying reduced schemes will prove the Theorem.\n\n\nChoose symplectic coordinates $x,y$ in $H$ such that the point $h \\in \\PP(H)$ is given by the equation $x = 0$.\nLet $(Y,Z,v)$ be a point of $\\widetilde{B}{}^k_h$.\nConsider the graded vector space $V[x] := V \\otimes {\\mathbb C}[x]$ with $\\deg x = 1$, with $x$ acting by multiplication, and with the action of $y$ and $z$ defined by\n\\begin{equation*}\ny = xY -\\tau x^2Z^2\\partial_x,\n\\qquad\nz = xZ.\n\\end{equation*}\nThe commutation relation $[x,z] = 0$ is clear.
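As an independent numerical sanity check of this construction (not part of the proof), one can realize $x$, $y$, $z$ as matrices on the truncation $V \otimes {\mathbb C}[x]/x^N$ and test all three commutation relations of $A^\tau$ directly. The sketch below uses the pair $Z = t$, $Y = u + \tau t^3\partial_t$ on $V = {\mathbb C}[t]/t^k$, which solves $[Y,Z] = \tau Z^3$; the sizes $k$, $N$ and the values of $\tau$, $u$ are arbitrary sample choices of ours.

```python
import numpy as np

k, N = 4, 7        # dim V and truncation order in x (sample sizes)
tau, u = 0.9, 0.3  # sample values of the parameters

# A solution of [Y, Z] = tau * Z^3 on V = C[t]/t^k:
# Z = multiplication by t, Y = u + tau * t^3 * d/dt
Z = np.diag(np.ones(k - 1), -1)      # t : t^j -> t^{j+1}
Y = u * np.eye(k)
for j in range(k - 2):
    Y[j + 2, j] += tau * j           # t^3 d/dt : t^j -> j * t^{j+2}
assert np.allclose(Y @ Z - Z @ Y, tau * np.linalg.matrix_power(Z, 3))

# Operators x, z = xZ, y = xY - tau * x^2 Z^2 d/dx on V ⊗ C[x]/x^N
S = np.diag(np.ones(N - 1), -1)      # multiplication by x on C[x]/x^N
D = np.diag(np.arange(N, dtype=float))   # the grading operator x * d/dx
X = np.kron(np.eye(k), S)
Zop = np.kron(Z, S)
Yop = np.kron(Y, S) - tau * np.kron(Z @ Z, S @ D)

comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(X, Zop), 0)                # [x, z] = 0
assert np.allclose(comm(Yop, Zop), 0)              # [y, z] = 0
assert np.allclose(comm(X, Yop), tau * Zop @ Zop)  # [x, y] = tau * z^2
```

All assertions hold exactly: every operator raises the $x$-degree by one, so truncating at $x^N$ kills both sides of each commutator in the same degrees.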
Moreover, we have\n\\begin{equation*}\n[y,z] = [xY -\\tau x^2Z^2\\partial_x, xZ] = x^2[Y,Z] - \\tau x^2Z^2[\\partial_x,x]Z = \\tau x^2Z^3 - \\tau x^2Z^3 = 0\n\\end{equation*}\nand\n\\begin{equation*}\n[x,y] = [x, xY -\\tau x^2Z^2\\partial_x] = - \\tau x^2Z^2[x,\\partial_x] = \\tau x^2Z^2 = \\tau z^2.\n\\end{equation*}\nThis shows that $V[x]$ is a graded $A^\\tau$-module. Let $F$ be the corresponding coherent sheaf on $\\BPt$.\nBy definition the map $x:V[x] \\to V[x]$ is injective with finite dimensional cokernel, hence the map $x:F \\to F(1)$\nis an isomorphism. In particular, the Hilbert polynomial $h_F(t)$ is constant, hence $F$ is an Artin sheaf\nwith $\\supp(F) \\subset {\\mathbb A}^1_h$ by~\\refl{artin-support}. Moreover, the length of $F$ is equal to $\\dim V = k$\nand the vector $v \\in V \\subset V[x]$ gives a morphism of $A^\\tau$-modules $A^\\tau \\to V[x]$. By cyclicity\nassumption the map is surjective in all components of sufficiently large degree, hence the corresponding\nmorphism of sheaves $\\CO \\to F$ is surjective.\nNote that the construction is $\\GL(V)$-invariant.\n\nVice versa, let $\\CO \\twoheadrightarrow F$ be a surjection with $F$ an Artin sheaf of length $k$\nand $\\supp(F) \\subset {\\mathbb A}^1_h$. Choose an isomorphism $V \\cong H^0(\\BPt,F)$. Note that\nby~\\refl{artin-support} the map $x:F \\to F(1)$ is an isomorphism, hence it also\ninduces an isomorphism on the spaces of global sections $x \\colon H^0(\\BPt,F) \\xrightarrow{\\ \\sim\\ } H^0(\\BPt,F(1))$.\nWe denote by $x^{-1}$ the inverse isomorphism. We put $Y = x^{-1}y$, $Z = x^{-1}z$\nconsidered as endomorphisms of $V = H^0(\\BPt,F)$. Finally, we take $v$ to be the image\nof $1 \\in H^0(\\BPt,\\CO)$ in $V$ under the map $\\CO \\twoheadrightarrow F$.\n\nLet us show that~\\refe{rel} holds. 
First, we have $y = xY$, $z = xZ$ which gives relations\n\\begin{equation*}\nx^2Z = xZx,\n\\qquad\nxYxZ = xZxY,\n\\qquad\\text{and}\\qquad\nx^2Y - xYx = \\tau xZxZ.\n\\end{equation*}\nIt follows that\n\\begin{equation*}\nx^2(YZ - ZY) =\nx^2YZ - x^2ZY =\nxYxZ + \\tau xZxZ^2 - xZxY =\n\\tau x^2Z^3.\n\\end{equation*}\nIn the second equality we used the third relation,\nand in the third equality we used the first two relations.\nAs $x$ is an isomorphism, we deduce $[Y,Z] = \\tau Z^3$. So, it remains to show that $v$ is cyclic. For this\ntake an arbitrary subspace $V' \\subset V$ containing $v$ and closed under the action of $Y$ and $Z$.\nThe triple $(Y,Z,v)$ in the vector space $V'$ then gives an Artin sheaf $F'$ and a surjection $\\CO \\twoheadrightarrow F'$.\nThe embedding $V' \\subset V$ gives an embedding of sheaves $F' \\hookrightarrow F$ and it is clear that the original map\n$\\CO \\twoheadrightarrow F$ factors as $\\CO \\twoheadrightarrow F' \\hookrightarrow F$. It follows that $F' = F$\nand hence $V' = H^0(\\BPt,F') = H^0(\\BPt,F) = V$.\n\nThe two constructions are clearly mutually inverse and thus prove an isomorphism of moduli spaces\nand hence the first part of the Theorem.\nFor the second part it remains to show that $\\supp(F) = \\Spec(Y)$. But this is clear since\nthe action of the coordinate $y$ from $H^0(\\BPt,F) = V$ to $H^0(\\BPt,F(1)) = V$ is given by the operator $Y$.\n \\end{proof} \n\nWe use the identification $B^k_h \\cong\\ (\\widetilde{B}{}^k_h\/\\GL(V))_\\reduced$\nto investigate the properties of $B^k_h$.\n\n\\lem{nil}\nIf a pair $(Y,Z)$ satisfies \\refe{rel} then $Z$ is nilpotent.\n\\end{lemma}\n \\begin{proof} \nNote that $[Y,Z^p]=p\\tau Z^{p+2}$ for $p \\ge 3$. Since $\\tau\\ne0$, it follows that\n$\\Tr Z^{p+2}=0$ for any $p \\ge 3$. 
Hence the power sums of the eigenvalues of $Z$ of all sufficiently high degrees vanish, so all eigenvalues are zero and $Z$ is nilpotent.\n \\end{proof} \n\n\\lem{ex}\nFor any nilpotent $Z$ there exist $Y$ and $v$ such that~\\refe{rel} holds.\n\\end{lemma}\n \\begin{proof} \nFirst, for any $u \\in {\\mathbb C}$ take\n\\begin{equation*}\nV = \\C[t]\/t^k,\\qquad\nY = u + \\tau t^3\\partial_t,\\quad\nZ = t,\\quad\nv = 1.\n\\end{equation*}\nClearly, \\refe{rel} holds,\nso we have an example in the case when $Z$ is a single Jordan block.\nNote that $\\Spec(Y) = ku \\in S^k{\\mathbb A}^1_h$.\n\nFor arbitrary nilpotent $Z$ the Jordan decomposition of $Z$ is a direct sum\ndecomposition $Z = Z_1 \\oplus \\dots \\oplus Z_m$ with blocks of size $k_1, \\dots, k_m$.\nChoosing $m$ distinct complex numbers $u_1,\\dots,u_m$ we construct triples $(Y_i,Z_i,v_i)$\non $V_i = {\\mathbb C}[t]\/t^{k_i}$ such that $\\supp(Y_i,Z_i,v_i) = k_iu_i$. The factorization property of~\\refp{factorization-b}\nthen shows that the direct sum $(\\oplus Y_i,\\oplus Z_i,\\oplus v_i)$ is a point of $B^{k_1+\\dots+k_m}_h$.\n \\end{proof} \n\n\n\nWe will use a natural one-to-one correspondence $\\lam\\mapsto O_\\lam$\nbetween partitions of $k$ and the nilpotent conjugacy classes in\n$\\operatorname{End}(V)$, provided by Jordan normal form.\nLet $B_h^\\lam\\subset B^k_h$ denote the set of all triples $(Y,Z,v)$ satisfying \\refe{rel} with $Z\\in O_\\lam$.\n\n\\th{b-components}\nWe have a decomposition into a union of connected components\n\\begin{equation*}\nB^k_h=\\coprod_{\\lam \\in \\frP(k)} B^\\lam_h.\n\\end{equation*}\nThe component $B^\\lam_h$ is smooth, connected and $k$-dimensional.\n\\eth\n \\begin{proof} \nLet $O=O_\\lam$ be a nilpotent orbit of the group $\\GL(V)$. Let\n\\begin{equation*}\nN^*_O\\End(V)=\\{(Y,Z)\\ |\\ Z\\in O,\\ [Y,Z]=0\\}\\subset\\End(V) \\times \\End(V)\n\\end{equation*}\nbe the conormal bundle of $O$.
Let\n\\begin{equation*}\n_\\tau\\!N^*_O\\End(V)=\\{(Y,Z)\\ |\\ Z\\in O,\\ [Y,Z]=\\tau Z^3\\}.\n\\end{equation*}\nThen according to~\\refl{ex} the space $(_\\tau\\!N^*_O\\End(V))_\\reduced$ is a\n$\\GL(V)$-equivariant $N^*_O\\End(V)$-torsor over~$O$.\nIn particular $(_\\tau\\!N^*_O\\End(V))_\\reduced$ is smooth and $k^2$-dimensional.\n\n\nIt is clear that for any pair $(Y,Z)$ the set of cyclic $v\\in V$ is open.\nTherefore,\n\\begin{equation*}\n\\widetilde{B}^k_h \\subset \\coprod_{\\lam \\in \\frP(k)} \\left(_\\tau\\!N^*_{O_\\lam}\\End(V)\n\\times V \\right)\n\\end{equation*}\nis an open subset.\nMoreover, by~\\refl{ex} it has a nonempty intersection with every component above.\nThe theorem now follows from~\\reft{26}.\n \\end{proof} \n\n\n\n\\cor{dim}\nConsider the map $\\gamma_\\tau:B^k_h \\to S^k{\\mathbb A}^1_h$.\nThe set of points $D \\in S^k{\\mathbb A}^1_h$ such that $\\dim \\gamma_\\tau^{-1}(D) \\ge m$\nhas codimension at least $m$ in $S^k{\\mathbb A}^1_h$, and is empty for $m \\ge k$.\n\\end{corol}\n \\begin{proof} \nAs $B^k_h$ is equidimensional of dimension $k$ by~\\reft{b-components}, it is enough to show that no component of $B^k_h$\nis contained in the fiber of $\\gamma_\\tau:B^k_h \\to S^k{\\mathbb A}^1_h$. For this note that the map $\\gamma_\\tau$\nis equivariant with respect to the action of the group $\\GG_a \\subset \\SL(H)$, the unipotent radical of the parabolic\nwhich fixes $h \\in H$, and that its action on ${\\mathbb A}^1_h$\nis free.\n \\end{proof} \n\n\n\\th{sml}\nThe map $\\gamma_\\tau\\colon\\GMt^n\\to\\UMt^n$ is small.\n\\eth\n \\begin{proof} \nLet $(\\UMt^n)_m \\subset \\UMt^n$ be the set of points over which the fiber of $\\gamma_\\tau$ has dimension~$m$.\nTake any $0 \\ne h \\in H$. By~\\refp{dep} for any $(\\CE,D) \\in \\sM_\\tau^{n-k} \\times S^k{\\mathbb A}^1_h$ the fiber $\\gamma_\\tau^{-1}(\\CE,D)$\nis isomorphic to the fiber of the map $\\gamma_\\tau:B^k_h \\to S^k{\\mathbb A}^1_h$ over $D$. 
In particular, by~\\refc{dim}\nthe codimension of the set $(\\UMt^n)_m \\cap (\\sM_\\tau^{n-k} \\times S^k{\\mathbb A}^1_h)$ in $\\sM_\\tau^{n-k} \\times S^k{\\mathbb A}^1_h$\nis at least $m$, and moreover $k > m$. Therefore\n\\begin{multline*}\n\\dim ((\\UMt^n)_m \\cap (\\sM_\\tau^{n-k} \\times S^k{\\mathbb A}^1_h)) \\le\n\\dim (\\sM_\\tau^{n-k} \\times S^k{\\mathbb A}^1_h) - m = \\\\\n2(n-k) + k - m = 2n - k - m < 2n - 2m.\n\\end{multline*}\nSince the sets $\\sM_\\tau^{n-k} \\times S^k{\\mathbb A}^1_h$ (for varying $h$) form an open covering of the stratum $\\sM_\\tau^{n-k} \\times S^k{\\mathbb P}(H) = \\UMt^{n,k}$\nof the stratification of $\\UMt^{n}$, the result follows.\n \\end{proof} \n\n\\ssec{defo}{Deformation of $\\GMt^n$ and $\\UMt^n$}\n\nThe goal of this subsection is to show that the Gieseker and the Uhlenbeck compactifications\nform a family over $\\AA^1$ (with coordinate $\\tau$) and check that the former is smooth.\nTo be more precise, consider the following graded algebra:\n\\begin{align*}\n\\sA =& {\\mathbb C}\\langle x,y,z,\\btau\\rangle\\Bigl.\\Bigr\/\n\\Bigl\\langle[x,z]=[y,z]=[\\btau,x]=[\\btau,y]=[\\btau,z]=0,[x,y]=\\btau z^2\\Bigr\\rangle, \\\\\n& \\deg x=\\deg y=\\deg z=1,\\ \\deg\\btau=0.\n\\end{align*}\nAs $\\btau$ is central of degree 0, this is an algebra over ${\\mathbb C}[\\btau]$. In particular, we can specialize $\\btau$\nto any complex number $\\tau$, which gives back the algebra $A^\\tau$ we considered before.\n\nAnalogously, we consider the Koszul dual of $\\sA$ over ${\\mathbb C}[\\btau]$:\n\\begin{align*}\n\\sA^! =& {\\mathbb C}\\langle \\xi,\\eta,\\zeta,\\btau \\rangle\/\\langle \\xi^2=\\eta^2=\\eta\\xi+\\xi\\eta=\\zeta\\xi + \\xi\\zeta = \\eta\\zeta + \\zeta\\eta = \\zeta^2 + \\btau(\\xi\\eta - \\eta\\xi) = 0 \\rangle,\\\\\n& \\deg \\xi=\\deg \\eta = \\deg \\zeta = 1,\\ \\deg\\btau=0.\n\\end{align*}\nThis is a graded ${\\mathbb C}[\\btau]$-algebra.
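As an aside, the dimensions of the low-degree components of a specialization $\sA^!\otimes_{{\mathbb C}[\btau]}{\mathbb C}$ can be confirmed by elementary linear algebra (a sketch; the monomial ordering and the sample value $1.3$ of $\btau$ are our own choices): for $W = \langle\xi,\eta,\zeta\rangle$ and the span $R \subset W^{\otimes 2}$ of the six quadratic relations, the degree-2 and degree-3 components of the quadratic algebra are $W^{\otimes 2}/R$ and $W^{\otimes 3}/(R\otimes W + W\otimes R)$.

```python
import numpy as np

tau = 1.3  # a sample (generic) specialization of btau

def e(i, j):
    # basis vector of W⊗W for the monomial g_i g_j, with (xi, eta, zeta) = (0, 1, 2)
    v = np.zeros(9)
    v[3 * i + j] = 1.0
    return v

# the six quadratic relations of A^! (specialized at btau = tau)
R = np.array([
    e(0, 0),                              # xi^2
    e(1, 1),                              # eta^2
    e(1, 0) + e(0, 1),                    # eta*xi + xi*eta
    e(2, 0) + e(0, 2),                    # zeta*xi + xi*zeta
    e(1, 2) + e(2, 1),                    # eta*zeta + zeta*eta
    e(2, 2) + tau * (e(0, 1) - e(1, 0)),  # zeta^2 + tau*(xi*eta - eta*xi)
])

dim_A2 = 9 - np.linalg.matrix_rank(R)   # dim of W⊗W / R
W = np.eye(3)
span3 = np.array([np.kron(r, w) for r in R for w in W] +
                 [np.kron(w, r) for r in R for w in W])   # R⊗W + W⊗R in W^⊗3
dim_A3 = 27 - np.linalg.matrix_rank(span3)
assert (dim_A2, dim_A3) == (3, 1)
```

Together with dimension $1$ in degree $0$ and $3$ in degree $1$, this recovers the graded dimensions $1,3,3,1$ at a generic value of the parameter.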
Note that each of its graded components $\\sA^!_0$, $\\sA^!_1$, $\\sA^!_2$, $\\sA^!_3$\nis a free ${\\mathbb C}[\\btau]$-module of finite rank (equal to $1$, $3$, $3$, and $1$ respectively).\n\n\nFurther, we consider the quiver $\\bQ$ over ${\\mathbb C}[\\btau]$ defined as\n\\begin{equation*}\n\\xymatrix{\n*+[o][F]{1} \\ar[rrr]_{\\sA^!_1}\\ar@\/^1pc\/[rrrrrr]^{\\sA^!_2} &&&\n*+[o][F]{2} \\ar[rrr]_{\\sA^!_1} &&&\n*+[o][F]{3} }\n\\end{equation*}\n(analogously to the quiver $\\bpt$), and its representations in the category of ${\\mathbb C}[\\btau]$-modules.\nBy definition such a representation is the data of three ${\\mathbb C}[\\btau]$-modules $(\\bV_1,\\bV_2,\\bV_3)$\nand two morphisms of ${\\mathbb C}[\\btau]$-modules $\\bV_1 \\otimes_{{\\mathbb C}[\\btau]} \\sA^!_1 \\to \\bV_2$ and\n$\\bV_2 \\otimes_{{\\mathbb C}[\\btau]} \\sA^!_2 \\to \\bV_3$ such that the composition\n$\\bV_1 \\otimes_{{\\mathbb C}[\\btau]} \\sA^!_1 \\otimes_{{\\mathbb C}[\\btau]} \\sA^!_1 \\to \\bV_2 \\otimes_{{\\mathbb C}[\\btau]} \\sA^!_1 \\to \\bV_3$\nfactors through $\\bV_1 \\otimes_{{\\mathbb C}[\\btau]} \\sA^!_2 \\to \\bV_3$.\n\nAssuming each of $\\bV_i$ is a free ${\\mathbb C}[\\btau]$-module of finite rank, the space\n\\begin{equation*}\n\\Hom_{{\\mathbb C}[\\btau]}(\\bV_1 \\otimes_{{\\mathbb C}[\\btau]} \\sA^!_1, \\bV_2) \\oplus \\Hom_{{\\mathbb C}[\\btau]}(\\bV_2 \\otimes_{{\\mathbb C}[\\btau]} \\sA^!_1, \\bV_3)\n\\end{equation*}\nis also a free ${\\mathbb C}[\\btau]$-module of finite rank. We consider the associated vector bundle over $\\Spec({\\mathbb C}[\\btau]) = \\AA^1$\nand its total space $\\Tot(\\Hom_{{\\mathbb C}[\\btau]}(\\bV_1 \\otimes_{{\\mathbb C}[\\btau]} \\sA^!_1, \\bV_2) \\oplus \\Hom_{{\\mathbb C}[\\btau]}(\\bV_2 \\otimes_{{\\mathbb C}[\\btau]} \\sA^!_1, \\bV_3))$\nwhich is fibered over $\\AA^1$ with fiber an affine space. 
The above factorization condition defines a Zariski closed subspace\n\\begin{equation*}\n\\Rep_\\bQ(\\bV_\\bu) \\subset \\Tot(\\Hom_{{\\mathbb C}[\\btau]}(\\bV_1 \\otimes_{{\\mathbb C}[\\btau]} \\sA^!_1, \\bV_2) \\oplus \\Hom_{{\\mathbb C}[\\btau]}(\\bV_2 \\otimes_{{\\mathbb C}[\\btau]} \\sA^!_1, \\bV_3))\n\\end{equation*}\nparameterizing all representations of the quiver $\\bQ$ in $\\bV_\\bu$. By definition this is an affine variety over $\\AA^1$.\n\n\n\n\nNow we take a relatively prime triple $(r,d,n)$ such that~\\eqref{assumptions-r-d-n} holds,\nconsider a triple of free ${\\mathbb C}[\\btau]$-modules $(\\bV_1,\\bV_2,\\bV_3)$ of ranks given\nby the dimension vector $\\alpha(r,d,n)$ of~\\eqref{alpha}, and put\n\\begin{equation*}\n\\Rep_\\bQ(\\alp(r,d,n)) = \\Rep_\\bQ(\\bV_\\bu).\n\\end{equation*}\nThe group $\\GL(\\alp(r,d,n))$ acts naturally on the space $\\Rep_\\bQ(\\alp(r,d,n))$\nalong the fibers of the projection $\\Rep_\\bQ(\\alp(r,d,n)) \\to \\AA^1$.\nAny rational polarization $\\tet$ (in the sense of~\\refss{stabilo}) linearizes this action and thus\ngives rise to the GIT quotient\n\\begin{equation*}\n\\CM^\\tet_\\bQ(\\alpha(r,d,n)) = \\Rep_\\bQ(\\alp(r,d,n)) \/\\!\/_\\tet \\GL(\\alp(r,d,n)).\n\\end{equation*}\nBy construction it comes with a map $\\CM^\\tet_\\bQ(\\alpha(r,d,n)) \\to \\AA^1$,\nand clearly its fiber over a point $\\tau \\in \\AA^1$ identifies with the moduli space\n$\\CM^\\tet_\\tau(\\alpha(r,d,n))$.\n\nUsing this construction for $\\tet = \\tet^0$ and for $\\tet = \\tet^0 + \\varepsilon\\tet^1$ we construct\nthe relative versions of the Gieseker and the Uhlenbeck compactifications\n\\begin{equation*}\n\\GM(r,d,n) = \\CM^{\\tnt}_\\bQ(\\alpha(r,d,n))\n\\qquad\\text{and}\\qquad\n\\UM(r,d,n) = \\CM^{\\tet^0}_\\bQ(\\alpha(r,d,n)).\n\\end{equation*}\nBy standard GIT we have a contraction $\\gamma:\\GM(r,d,n) \\to \\UM(r,d,n)$ commuting with the morphisms to $\\AA^1$.\n\n\n\n\n\n\\prop{smd}\nIf $\\operatorname{gcd}(r,d,n)=1$ then the map $\\GM(r,d,n)
\\to \\AA^1$ is smooth and projective.\nIn particular, $\\GM(r,d,n)$ is a smooth variety.\n\\end{propos}\n \\begin{proof} \nBy~\\refl{kronecker-stability} the moduli space $\\GM(r,d,n)$ coincides with the moduli space of Gieseker semistable\nsheaves of rank $r$, degree $d$ and second Chern class $n$ for the family $\\sA$ of Artin--Schelter algebras over ${\\mathbb C}[\\btau]$\nconstructed in~\\cite{ns}. The smoothness and the projectivity of the latter is proved in Theorem~8.1 of {\\em loc.\\ cit.}\n \\end{proof} \n\n\nWhen considering the case $r = 1$, $d = 0$ we abbreviate $\\GM(1,0,n)$ to just $\\GM^n$ and $\\UM(1,0,n)$ to $\\UM^n$.\n\n\n\n\\ssec{fixed}{Fixed points}\n\nWe choose a torus $T \\subset \\SL(H)$ and consider its action on the Calogero--Moser space\nand its Gieseker and Uhlenbeck compactifications. The stratifications and the map $\\gam$\nare $\\SL(H)$-equivariant and hence $T$-equivariant as well. We aim at a description\nof the set of $T$-fixed points on $\\UMt^n$. Recall first what is known about the $T$-fixed locus of $\\sM^n_\\tau$.\n\n\\lem{fixed-cm}(\\cite[Proposition~6.11]{W})\nFor $\\tau \\ne 0$ the set of $T$-fixed points in $\\sM^n_\\tau$ is in a natural bijection\nwith the set $\\frP(n)$ of partitions of $n$.\n\\end{lemma}\n\n\nWe denote by $\\mathbf{c}^n_\\lambda \\in \\sM^n_\\tau$ the $T$-fixed point corresponding to a partition $\\lambda \\in \\frP(n)$.\nIn particular, $\\mathbf{c}^0 \\in \\sM^0_\\tau$ is the unique point (it is automatically $T$-fixed).\nLet also $P_0,P_\\infty \\in \\PP(H)$ be the $T$-fixed points on the line $\\PP(H)$.\n\n\\lem{fixed-uhlenbeck}\nFor any $\\tau \\ne 0$ the set of $T$-fixed points in $\\UMt^n$ is finite. 
Moreover,\n\\begin{equation*}\n(\\UMt^n)^T = \\{ (\\mathbf{c}^m_\\lambda, k_0 P_0 + k_\\infty P_\\infty) \\in \\sM^m_\\tau \\times S^{n-m}{\\mathbb P}^1 \\mid \\lambda\\in\\frP(m),\\ k_0+k_\\infty=n-m\\}.\n\\end{equation*}\n\\end{lemma}\n \\begin{proof} \nIt is enough to describe $T$-fixed points on each of the strata $\\sM^m_\\tau \\times S^{n-m}\\PP(H)$ of the stratification of $\\UMt^n$.\nAs the product decomposition is $\\SL(H)$-invariant, it is enough to describe fixed points on each factor.\nOn the first factor we use~\\refl{fixed-cm}, and on $S^{n-m}\\PP(H)$ a description of fixed points is evident.\n \\end{proof} \n\nRecall that a $T$-fixed point $P$ is called {\\sf attracting} if all the weights of the $T$-action\non the tangent space at the point $P$ are positive.\n\n\\lem{attracting-uhlenbeck}\nThe Uhlenbeck compactification $\\UMt^n$ of the Calogero--Moser space has a unique attracting $T$-fixed\npoint $(\\mathbf{c}^0,nP_0) \\in \\sM^0_\\tau\\times S^n\\PP(H) \\subset \\UMt^n$.\n\\end{lemma}\n \\begin{proof} \nSince $\\UMt^n$ is a projective variety, the $T$-action on it must have at least one attracting point.\nOn the other hand, $\\sM^n_\\tau$ is a symplectic manifold, and the $T$-action preserves\nthe symplectic structure~\\cite{kks}, hence for $m > 0$ the weights of $T$ on the tangent\nspaces at points $\\mathbf{c}^m_\\lambda$ are pairwise opposite, and thus for $m > 0$ the $T$-fixed\npoints $(\\mathbf{c}^m_\\lambda,k_0 P_0 + k_\\infty P_\\infty)$ are not attracting. Therefore, each\nattracting point of the $T$-action on $\\UMt^n$ lies in $\\sM^0\\times S^n\\PP(H) = S^n\\PP(H)$.\nAs it should also be an attracting point for the $T$-action on $S^n\\PP(H)$, it must\ncoincide with $(\\mathbf{c}^0,nP_0)$.\n \\end{proof} \n\n\n\n\n\n\\ssec{comp}{The IC sheaf of the Uhlenbeck compactification}\n\n\nIn this section we will prove~\\reft{main}.
\nThe statement of the Theorem and the arguments we use are purely topological.\nWe refer to~\\cite{bbd} for the notion of IC sheaf and the general machinery.\n\n\n\n\nWe start by computing the stalks of the IC-sheaf at the deepest stratum of \nthe Uhlenbeck stratification.\nSince for $n=0$ the Calogero--Moser space $\\sM^0_\\tau$ is just a point, by~\\reft{str}(2) we have\n$S^n\\PP(H) = \\sM^0_\\tau \\times S^n\\PP(H) \\subset \\UMt^n$.\nRecall also the diagonal stratification~\\eqref{diagonal-stratification} of $S^n\\PP(H)$\nand its deepest stratum $S_{(n)}\\PP(H) \\subset S^n\\PP(H)$.\n\n\n\n\n\n\n\\prop{one stalk}\nFor any $P \\in \\PP(H)$ the stalk of the sheaf $\\ic(\\UMt^n)$ at the point $(\\CO,nP)$ of the stratum $\\sM^0_\\tau \\times S_{(n)}\\PP(H) \\subset \\UMt^n$ is\nisomorphic to\n\\eq{chow}\n\\ic(\\UMt^n)_{(\\CO,nP)} = \\bigoplus_{\\mu\\in\\frP(n)}{\\mathbb C}[2l(\\mu)].\n\\end{equation}\n\\end{propos}\n \\begin{proof} \nLet $T \\subset \\SL(H)$ be a torus such that $P = P_0$ is the attracting point for the action of~$T$ on $\\PP(H)$.\nThe computation\nis based on the following ``deformation diagram'':\n\\begin{equation*}\n\\xymatrix{\n\\GM^n_0\\ar[d]^<>(0.5){\\gamma_0}\\ar@{^{(}->}[rr]^<>(0.5){\\tilde\\varsigma} &&\n\\GM^n\\ar[d]^<>(0.5){\\gamma} &&\n\\GM^n_\\bseta\\ar[d]^<>(0.5){\\gamma_\\bseta}\\ar@{_{(}->}[ll]\n\\\\\n\\UM^n_0\\ar[d]\\ar@{^{(}->}[rr]^<>(0.5){\\varsigma} &&\n\\UM^n\\ar[d]^<>(0.5){p} &&\n\\UM^n_\\bseta\\ar[d]\\ar@{_{(}->}[ll]\n\\\\\n\\{0\\}\\ar@{^{(}->}[rr] &&\n{\\mathbb A}^1\\ar@\/^\/[u]^<>(0.5){\\sigma} &&\n{\\mathbb A}^1\\sminus\\{0\\}\\ar@{_{(}->}[ll]\n}\n\\end{equation*}\nHere the middle column is the deformation family over $\\AA^1$ of~\\refp{smd}\nwith $p$ being the structure map. The left column is the fiber over the point $0 \\in \\AA^1$,\nwhile the right column is the base change to $\\AA^1 \\setminus \\{0\\} \\subset \\AA^1$.
Finally,\nthe map $\\sigma:\\AA^1 \\to \\UM^n$ is defined as follows.\n\nFor any $\\tau \\ne 0$ we put $\\sigma(\\tau) = n\\cdot P \\in \\UMt^n \\subset \\UM^n$.\nClearly, this is a regular map $\\AA^1 \\setminus \\{0\\} \\to \\UM^n$. By properness\nof $\\UM^n$ over $\\AA^1$ it extends to a map $\\sigma:\\AA^1 \\to \\UM^n$. Note that\n$\\sigma$ is a section of the map $p$. Indeed, this is clear over $\\AA^1 \\setminus\\{0\\}$\nby the definition of $\\sigma$, and over $0$ it is true by continuity.\n\n\n\n\nLet $\\C^\\times_{\\leq1} \\subset {\\mathbb C}^\\times = T$ be the sub-semigroup\nformed by the complex numbers with absolute value $\\leq1$, and\nlet $F = \\sigma({\\mathbb A}^1) \\sset \\UM^n$ be the image of the section $\\sigma$.\nFurther, let $\\CU\\subset \\UM^n$ be a small open neighborhood (in the analytic topology)\nof the point $\\sigma(0)$. Without loss of generality,\nwe may choose the set $\\CU$ to be $\\C^\\times_{\\leq1}$-stable.\nNote that $F$ is the attracting connected component\nof $(\\UM^n)^T$, by \\refl{attracting-uhlenbeck}.\nTherefore, shrinking $\\CU$ further, if necessary,\none may assume in addition that we have $\\CU^T=F\\cap \\CU$.\nThe action of $\\C^\\times_{\\leq1}$ preserves the fibers of $p:\\ \\CU\\to{\\mathbb A}^1$,\nand contracts $\\CU$ to the section $F=\\sigma({\\mathbb A}^1)$. According\nto~\\cite[Lemma~6]{Br},\nfor any $\\C^\\times$-equivariant complex ${\\mathcal F}$ of constructible sheaves on~$\\UM^n$,\nthe natural morphism $\\sigma^*{\\mathcal F}\\to p_*({\\mathcal F}|_\\CU)$ is an isomorphism.\nIn other words, for any $\\tau\\in{\\mathbb A}^1$, there is a natural isomorphism\n\\eq{is0}\nH^\\bullet(\\CU\\cap \\UMt^n, \\ {\\mathcal F})\\cong {\\mathcal F}|_{\\sigma(\\tau)}.\n \\end{equation} \n\n\n\n\nNext, let $\\psi_p$, resp. $\\psi_\\pg$, denote the nearby cycles\nfunctor~\\cite[8.6]{KS}\nwith respect to the function $p$, resp. $\\pg$. 
Since the morphism $\\pg$\nis smooth, we have\n$\\psi_\\pg(\\unl\\C{}_{\\GM^n_\\bseta})=\\unl\\C{}_{\\GM^n_0}$.\nTherefore, using the proper base change for\nnearby cycles (see e.g.~\\cite[Exercise~VIII.15]{KS}) we obtain\n\\begin{equation*}\n(\\gamma_0)_*\\unl\\C{}_{\\GM^n_0}=(\\gamma_0)_*(\\psi_\\pg(\\unl\\C{}_{\\GM^n_\\bseta}))\n=\\psi_p((\\gamma_\\bseta)_*\\unl\\C{}_{\\GM^n_\\bseta}).\n\\end{equation*}\nThe map $\\gamma_\\bseta: \\GM^n_\\bseta\\to \\UM^n_\\bseta$\n is a small and proper morphism.\nHence, we have an isomorphism ${\\operatorname{IC}}(\\UM^n_\\bseta) \\cong (\\gamma_\\bseta)_*\\unl\\C{}_{\\GM^n_\\bseta}[2n]$.\nCombining the above isomorphisms and\ntaking stalks at the point $\\sigma(0)$ yields\n\\eq{is1}\n\\big((\\gamma_0)_*\\unl\\C{}_{\\GM_0^n}\\big)|_{\\sigma(0)}\\cong\n\\big(\\psi_p((\\gamma_\\bseta)_*\\unl\\C{}_{\\GM^n_\\bseta})\\big)|_{\\sigma(0)}\n\\cong\\big(\\psi_p({\\operatorname{IC}}(\\UM^n_\\bseta))\\big)|_{\\sigma(0)}[-2n].\n \\end{equation} \n\nFurther, by the definition\nof the functor $\\psi_p$, for a sufficiently small open set $\\CU$\nas above and for any $\\tau\\neq 0$\nwith a sufficiently small absolute value\none has\n\\begin{equation*}\n\\big(\\psi_p({\\operatorname{IC}}(\\UM^n_\\bseta))\\big)|_{\\sigma(0)}\n\\cong H^\\bullet\\big(\\CU\\cap p^{-1}(\\tau),\\ {\\operatorname{IC}}(\\UM^n_\\bseta)\\big)\n\\cong H^\\bullet\\big(\\CU\\cap\\UMt^n,\\ {\\operatorname{IC}}(\\UMt^n)\\big).\n\\end{equation*}\n\n\n\nThus, comparing the LHS and the RHS in \\refe{is1},\nwe obtain\n\\begin{align}H^\\bullet(\\gamma_0^{-1}(\\sigma(0)))[2n]\\cong\n\\big((\\gamma_0)_*\\unl\\C{}_{\\GM_0^n}\\big)|_{\\sigma(0)}[2n]&\\cong\n\\big(\\psi_p({\\operatorname{IC}}(\\UM^n_\\bseta))\\big)|_{\\sigma(0)}\\label{is4}\\\\\n&\\cong\nH^\\bullet\\big(\\CU\\cap\\UMt^n,\\ {\\operatorname{IC}}(\\UMt^n)\\big)\n\\cong{\\operatorname{IC}}(\\UMt^n)|_{\\sigma(\\tau)},\\nonumber\n\\end{align}\nwhere the last isomorphism is a special case of \\refe{is0}\nfor ${\\mathcal 
F}={\\operatorname{IC}}(\\UMt^n)$.\n\nTo complete the proof, we observe that the\nfiber $\\gamma_0^{-1}(\\sigma(0))$ is the ``central fiber''\nof the Hilbert--Chow morphism $\\Hilb^n({\\mathbb P}^2)\\to S^n{\\mathbb P}^2$.\nIn other words, the variety $\\gamma_0^{-1}(\\sigma(0))$\nis nothing but $\\Hilb^n_0({\\mathbb A}^2)$, the punctual Hilbert scheme of\n$n$ infinitesimally close points in ${\\mathbb A}^2$.\nThe Betti numbers of the punctual Hilbert scheme\nare well known, cf. e.g.~\\cite{N}. Specifically,\nall odd Betti numbers vanish and one\nhas the formula\n\\[\\dim H^{2k-2}(\\Hilb^n_0({\\mathbb A}^2))=\\#\\{\\mu\\in \\frP(n)\\mid l(\\mu)=k\\}.\\]\nIt follows from (\\ref{is4}) that, for\n$\\tau$ sufficiently small, the dimensions of the cohomology\ngroups of the stalk ${\\operatorname{IC}}(\\UMt^n)|_{\\sigma(\\tau)}$\nare given by the same formula. This is equivalent to the\nstatement of the proposition.\n \\end{proof} \n\n\n\n\nNow~\\reft{main} follows from~\\refp{one stalk} and the factorization\nproperty of~\\refp{factorization-b}. 
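As a quick illustration of the stalk formula of~\\refp{one stalk} (a sanity check only, not needed for the argument), take $n=3$: the partitions of $3$ are $(3)$, $(2,1)$ and $(1,1,1)$, of lengths $1$, $2$ and $3$, so the formula gives\n\\begin{equation*}\n\\ic(\\UMt^3)_{(\\CO,3P)} \\cong {\\mathbb C}[2]\\oplus{\\mathbb C}[4]\\oplus{\\mathbb C}[6],\n\\end{equation*}\nin agreement with the Betti numbers $\\dim H^0 = \\dim H^2 = \\dim H^4 = 1$ of the punctual Hilbert scheme $\\Hilb^3_0({\\mathbb A}^2)$.\n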
In effect, due to $\\SL(H)$-equivariance,\nit suffices to find the stalks of ${\\operatorname{IC}}(\\UMt^n)$ at $S_\\lambda{\\mathbb A}^1_h\\subset S_\\lambda\\PP(H)$.\nGiven a point $\\CE \\in \\sM^m_\\tau$ and $D\\in S_\\lambda{\\mathbb A}^1_h$, due to the smallness of $\\gamma_\\tau$,\n\\begin{equation*}\n{\\operatorname{IC}}(\\UMt^n)_{(\\CE,D)} = H^\\bullet(\\gamma_\\tau^{-1}(\\{\\CE\\}\\times D),{\\mathbb C}).\n\\end{equation*}\nFurther, by~\\refp{dep} we have\n\\begin{equation*}\nH^\\bullet(\\gamma_\\tau^{-1}(\\{\\CE\\}\\times D),{\\mathbb C}) \\cong H^\\bullet(\\gamma_\\tau^{-1}(\\{\\CO\\}\\times D),{\\mathbb C}).\n\\end{equation*}\nNow if $D = \\sum k_iP_i$ with pairwise distinct points $P_i \\in {\\mathbb A}^1_h$,\nthen according to~\\refp{factorization-b},\n$\\gamma_\\tau^{-1}(\\{\\CO\\}\\times D)\\cong \\prod_i\\gamma_\\tau^{-1}(\\{\\CO\\}\\times k_iP_i)$,\nand hence\n\\begin{equation*}\nH^\\bullet(\\gamma_\\tau^{-1}(\\{\\CO\\}\\times D),{\\mathbb C})\\cong\n\\bigotimes_i H^\\bullet(\\gamma_\\tau^{-1}(\\{\\CO\\}\\times k_iP_i),{\\mathbb C}).\n\\end{equation*}\nDue to the smallness of $\\gamma_\\tau$,\n$H^\\bullet(\\gamma_\\tau^{-1}(\\{\\CO\\}\\times k_iP_i),{\\mathbb C})={\\operatorname{IC}}(\\UMt^{k_i})_{(\\CO,k_iP_i)}$,\nand the latter stalk is known from~\\refp{one stalk}. This completes the proof of~\\reft{main}. 
\\qed\n\n\n\n\n\n\n\n\n\n\n\\sec{app}{Appendix}\n\nIn this Appendix we collect the proofs of some results from section~\\refs{ch5}.\nThroughout, we assume that $\\tau\\neq 0$.\n\\ssec{artin-proofs}{Moduli spaces of Artin sheaves}\n\nDenote by $\\PP^1_3$ the third infinitesimal\nneighborhood of the line at infinity $\\PP(H)$ in $\\BPt$,\ni.e.\\ the projective spectrum of the commutative graded algebra $\\C[x,y,z]\/z^3$.\n\n\n\\lem{artin-moduli}\nThe moduli space ${}^A\\mathsf{M}_\\tau(1,2,1)$ is a fine moduli space.\nIt is isomorphic to the third infinitesimal neighborhood of a line on a plane:\n${}^A\\mathsf{M}_\\tau(1,2,1) \\cong \\PP^1_3$.\n\\end{lemma}\n \\begin{proof} \nThe data of a $(1,2,1)$-dimensional representation of $\\bpt$ amounts to two maps\n\\begin{equation*}\n\\C \\xrightarrow{f} \\C^2\\otimes A^\\tau_1\n\\qquad\\text{and}\\qquad\n\\C^2 \\xrightarrow{g} \\C\\otimes A^\\tau_1\n\\end{equation*}\nsubject to the condition that the composition\n\\begin{equation*}\n\\C \\xrightarrow{f} \\C^2\\otimes A^\\tau_1 \\xrightarrow{g\\otimes 1} \\C \\otimes A^\\tau_1\\otimes A^\\tau_1 \\to \\C \\otimes A^\\tau_2\n\\end{equation*}\nis zero. In other words, the condition can be rewritten as\n\\begin{equation*}\n\\sigma := (g\\otimes 1)(f(1)) \\in K := \\Ker(A^\\tau_1\\otimes A^\\tau_1 \\to A^\\tau_2).\n\\end{equation*}\nThe $\\tet^0$-semistability is equivalent to the injectivity of the maps $f^T,g \\colon {\\mathbb C}^2 \\to {\\mathbb C}\\otimes A^\\tau_1$. This means\nthat the element $\\sigma$, considered as an element of $A^\\tau_1\\otimes A^\\tau_1 = \\Hom((A^\\tau_1)^*,A^\\tau_1)$, has rank~2\n(the $\\C^2$-component of the representation is then just the image of $\\sigma$). Thus the moduli space is nothing\nbut the degeneration scheme of the morphism\n\\begin{equation*}\n(A^\\tau_1)^* \\otimes \\CO_{\\PP(K)}(-1) \\to A^\\tau_1 \\otimes \\CO_{\\PP(K)}\n\\end{equation*}\non $\\PP(K)$. 
As $K \\subset A^\\tau_1 \\otimes A^\\tau_1$ can be written as\n\\begin{equation*}\nK = \\{ u(y\\otimes z - z\\otimes y) + v(x\\otimes z - z\\otimes x) + w(x\\otimes y - y\\otimes x - \\tau z\\otimes z) \\mid u,v,w \\in {\\mathbb C} \\},\n\\end{equation*}\nthe above morphism is given by the matrix\n\\begin{equation}\\label{matrix-u-v-w}\n\\left(\n\\begin{matrix}\n0 & w & v \\\\\n-w & 0 & u \\\\\n-v & -u & -\\tau w\n\\end{matrix}\n\\right)\n\\end{equation}\nand the degeneration condition is given by its determinant which is equal to\n\\begin{equation*}\n\\det\\left(\n\\begin{matrix}\n0 & w & v \\\\\n-w & 0 & u \\\\\n-v & -u & -\\tau w\n\\end{matrix}\n\\right) = -\\tau w^3.\n\\end{equation*}\nThis means that the moduli space is the subscheme of $\\PP(K)$ given by the equation $w^3 = 0$, i.e.\\\nthe third infinitesimal neighborhood $\\PP^1_3$ of the line $\\PP^1 = \\{w = 0\\}$ in the plane $\\PP^2$.\n\nTo show that the moduli space is fine we should construct a universal family. For this we restrict the map\n$(A^\\tau_1)^* \\otimes \\CO_{\\PP(K)}(-1) \\to A^\\tau_1 \\otimes \\CO_{\\PP(K)}$\nto $M := \\PP^1_3$. This is a morphism of constant rank $2$ (the rank does not drop to 1 since among the 2-by-2 minors of the matrix~\\eqref{matrix-u-v-w} one easily finds $u^2$, $v^2$, and $w^2$),\nhence its image is a rank $2$ vector bundle~$\\CV_2$. It comes equipped\nwith a surjective map $(A^\\tau_1)^* \\otimes \\CO_{M}(-1) \\to \\CV_2$ and an injective map\n$\\CV_2 \\to A^\\tau_1\\otimes \\CO_M$. 
Clearly these two maps provide $(\\CO_M(-1),\\CV_2,\\CO_M)$ with the structure of a family of representations of the quiver~$\\bpt$.\nThe above arguments show that it is a universal family.\n \\end{proof} \n\n\n\n\n\n\nNow we give a description of the reduced structure of the space ${}^A\\mathsf{M}_\\tau(k,2k,k)$ for $k > 1$.\n\n \\begin{proof} [Proof of~\\refp{artin-moduli-k}]\nConsider the subset ${}^A\\mathsf{R}_\\tau^{\\tet^0}(k,2k,k) \\subset {}^A\\mathsf{R}_\\tau(k,2k,k)$ of all $\\tet^0$-semistable $(k,2k,k)$-dimensional representations of $\\bpt$\nand let $\\CW_\\bullet$ be the universal representation of the quiver $\\bpt$ over it. Let ${\\mathcal F}$ be the universal\nsheaf on the product ${}^A\\mathsf{R}_\\tau^{\\tet^0}(k,2k,k) \\times \\BPt$, i.e., the sheaf defined by the exact sequence\n\\begin{equation*}\n0 \\to \\CW_1 \\boxtimes \\CO(-1) \\to \\CW_2 \\boxtimes \\CO \\to \\CW_3 \\boxtimes \\CO(1) \\to {\\mathcal F} \\to 0.\n\\end{equation*}\nThen the support map defined in~\\refl{artin-support} gives a map $\\supp:{}^A\\mathsf{R}_\\tau^{\\tet^0}(k,2k,k) \\to S^k\\PP^1$.\nThe map is clearly $\\GL(k,2k,k)$-equivariant, hence descends to a map from the moduli space\n\\begin{equation*}\n\\supp:{}^A\\mathsf{M}_\\tau(k,2k,k) \\to S^k\\PP^1.\n\\end{equation*}\nOn the other hand, we clearly have an embedding which takes a $k$-tuple of $(1,2,1)$-dimensional Artin representations\n$W^1_\\bullet$, $W^2_\\bullet$, \\dots, $W^k_\\bullet$ to their direct sum\n\\begin{equation*}\n({}^A\\mathsf{R}_\\tau(1,2,1))^k \\to {}^A\\mathsf{R}_\\tau(k,2k,k),\n\\qquad\n(W^1_\\bullet, W^2_\\bullet, \\dots, W^k_\\bullet) \\mapsto W^1_\\bullet \\oplus W^2_\\bullet \\oplus \\dots \\oplus W^k_\\bullet.\n\\end{equation*}\nThis map is equivariant with respect to the action of the group $\\GL(1,2,1)^k \\rtimes \\fS_k$ on the source,\nsuch that the $i$-th factor $\\GL(1,2,1)$ acts naturally on the $i$-th factor of $({}^A\\mathsf{R}_\\tau(1,2,1))^k$ and $\\fS_k$ permutes the factors,\nand the action 
on the target is given by\na natural embedding $\\GL(1,2,1)^k \\rtimes \\fS_k \\subset \\GL(k,2k,k)$. The $\\Proj$ construction of the GIT quotient implies that the map\ninduces a morphism of the GIT quotients\n\\begin{equation*}\n({}^A\\mathsf{R}_\\tau(1,2,1))^k \/\\!\/_{\\tet^0} (\\GL(1,2,1)^k \\rtimes \\fS_k) \\to {}^A\\mathsf{R}_\\tau(k,2k,k) \/\\!\/_{\\tet^0} \\GL(k,2k,k).\n\\end{equation*}\nThe quotient on the right is just the moduli space ${}^A\\mathsf{M}_\\tau(k,2k,k)$. The quotient on the left\ncan be identified with $({}^A\\mathsf{M}_\\tau(1,2,1))^k \/ \\fS_k$,\nso it is isomorphic to $S^k(\\PP^1_3)$ by~\\refl{artin-moduli}.\nRestricting to the reduced subscheme we obtain a map\n\\begin{equation*}\n\\Sigma \\colon S^k\\PP^1 = S^k(\\PP^1_3)_\\reduced \\to {}^A\\mathsf{M}_\\tau(k,2k,k).\n\\end{equation*}\nWe are going to show that the constructed maps $\\supp$ and $\\Sigma$ induce isomorphisms between $S^k\\PP^1$ and\nthe reduced moduli space ${}^A\\mathsf{M}_\\tau(k,2k,k)_{\\mathrm{red}}$.\n\nFor this we note that the maps give bijections between the sets of closed points of $S^k\\PP^1$ and ${}^A\\mathsf{M}_\\tau(k,2k,k)$,\nsince by~\\refp{Art0}(3) any Artin sheaf is S-equivalent to a direct sum of structure sheaves for a unique collection\nof points (which are given back by the support map). Note also that both $S^k\\PP^1$ and ${}^A\\mathsf{M}_\\tau(k,2k,k)$ are projective\nvarieties, hence the map $\\Sigma$ is proper. Finally, $S^k\\PP^1 \\cong \\PP^k$ is normal.\n\nSo, it is enough to show that any proper regular map from a reduced normal scheme to a reduced scheme inducing\na bijection on the sets of closed points is an isomorphism. 
Locally, we just have an integral (due to properness)\nextension of rings with the bottom ring being integrally closed (by normality), hence it is an isomorphism.\n \\end{proof} \n\n\n\n\\newcommand{{}^G\\Rt}{{}^G\\mathsf{R}_\\tau}\n\\newcommand{{}^U\\Rt}{{}^U\\mathsf{R}_\\tau}\n\n\\ssec{statifications-proofs}{Stratifications}\n\nHere we construct the required stratifications of the Gieseker and Uhlenbeck moduli spaces.\n\n \\begin{proof} [Proof of~\\refl{stratification}]\nLet ${}^G\\Rt := \\osRt^{\\tnt}(\\alpha(r,d,n)) \\subset \\osRt(\\alpha(r,d,n))$ be the open subset of\n$\\tnt$-semistable $\\alpha(r,d,n)$-dimensional representations of $\\bpt$.\nLet $\\CV_\\bu$ be the universal family of representations over ${}^G\\Rt$. Consider the universal monad\n\\begin{equation*}\n\\CV_1 \\boxtimes \\CO(-1) \\to \\CV_2 \\boxtimes \\CO \\to \\CV_3 \\boxtimes \\CO(1)\n\\end{equation*}\non ${}^G\\Rt \\times \\BPt$ and denote its cohomology sheaf by $E$. For each point $s \\in {}^G\\Rt$\nwe denote by $E_s$ the restriction of $E$ to $\\{s\\} \\times \\BPt$. 
Note that this is just\nthe cohomology sheaf of the monad $\\CV_{1s}\\otimes \\CO(-1) \\to \\CV_{2s} \\otimes \\CO \\to \\CV_{3s}\\otimes\\CO(1)$.\nIn particular, the sheaf $E$ is flat over ${}^G\\Rt$.\n\nConsider also the dual monad on ${}^G\\Rt\\times\\BPt$\n\\begin{equation*}\n\\CV_3^\\vee\\boxtimes\\CO(-1) \\to \\CV_2^\\vee \\boxtimes\\CO \\to \\CV_1^\\vee \\boxtimes\\CO(1)\n\\end{equation*}\nand let ${\\mathcal F}$ be the cokernel of the last map\n\\begin{equation*}\n{\\mathcal F} := \\Coker( \\CV_2^\\vee \\boxtimes\\CO \\to \\CV_1^\\vee \\boxtimes\\CO(1) ).\n\\end{equation*}\nFor each point $s \\in {}^G\\Rt$ we have\n\\begin{equation*}\n{\\mathcal F}_s \\cong \\Coker( \\CV_{2s}^\\vee \\otimes\\CO \\to \\CV_{1s}^\\vee \\otimes\\CO(1) ) \\cong \\underline{\\Ext}^1(E_s,\\CO) \\cong \\underline{\\Ext}^2(E_s^{**}\/E_s,\\CO).\n\\end{equation*}\nThus it is an Artin sheaf, but its length may vary from point to point.\nConsider the flattening stratification of ${}^G\\Rt$ for ${\\mathcal F}$:\n\\begin{equation*}\n{}^G\\Rt = {}^G\\Rt^{\\ge 0} \\supset {}^G\\Rt^{\\ge 1} \\supset {}^G\\Rt^{\\ge 2} \\supset \\dots \\supset {}^G\\Rt^{\\ge n} \\supset {}^G\\Rt^{\\ge n+1} = \\emptyset,\n\\end{equation*}\nwhere ${}^G\\Rt^{\\ge k}$ is the subscheme of points $s \\in {}^G\\Rt$ where the length of ${\\mathcal F}_s$ is at least $k$.\nThis stratification is $\\GL(\\alpha(r,d,n))$-invariant, so it gives a stratification\nof the GIT quotient ${}^G\\Rt\/\\!\/_{\\tnt}\\GL(\\alpha(r,d,n))$, i.e., of the Gieseker moduli space $\\GMt(r,d,n)$.\nFinally, we replace each stratum by its underlying reduced subscheme.\n \\end{proof} \n\n\nBelow we will need the following result on universal families.\n\n\\prop{family-on-strata}\nLet $\\CV_\\bu$ be the universal family of $\\tnt$-semistable $\\alpha(r,d,n)$-dimensional representations\nof $\\bpt$ over ${}^G\\Rt^k := {}^G\\Rt^{\\ge k} \\setminus {}^G\\Rt^{\\ge k+1}$.\nThen there is a natural exact sequence\n\\begin{equation*}\n0 \\to \\CW_\\bu \\to \\CV_\\bu \\to 
\\CU_\\bu \\to 0\n\\end{equation*}\nof families of representations over ${}^G\\Rt^k$, where $\\CW_\\bu$ is a family of Artin representations\nof dimension $(k,2k,k)$ and $\\CU_\\bu$ is a family of supermonadic $\\tnt$-semistable representations.\n\\end{propos}\n \\begin{proof} \nWe freely use the notation introduced in the proof of~\\refl{stratification}.\nBy assumption ${\\mathcal F}$ is a flat (over~${}^G\\Rt^k$)\nfamily of Artin sheaves of length $k$.\nConsider its Beilinson resolution\n\\begin{equation*}\n\\CW'_1 \\boxtimes \\CO(-1) \\to \\CW'_2 \\boxtimes \\CO \\to \\CW'_3 \\boxtimes \\CO(1).\n\\end{equation*}\nBy flatness of ${\\mathcal F}$ we know that $\\CW'_1$, $\\CW'_2$, $\\CW'_3$ are vector bundles of ranks $k$, $2k$, and $k$ respectively.\nBy functoriality of the Beilinson resolution there is a morphism of resolutions\n\\begin{equation*}\n\\xymatrix{\n\\CV_3^\\vee \\boxtimes\\CO(-1) \\ar[r] \\ar[d] & \\CV_2^\\vee \\boxtimes\\CO \\ar[r] \\ar[d] & \\CV_1^\\vee \\boxtimes\\CO(1) \\ar[d] \\\\\n\\CW'_1 \\boxtimes\\CO(-1) \\ar[r] & \\CW'_2 \\boxtimes\\CO \\ar[r] & \\CW'_3 \\boxtimes\\CO(1)\n}\n\\end{equation*}\nNote that the induced morphisms $\\CV_i^\\vee \\to \\CW'_{4-i}$ of vector bundles on ${}^G\\Rt^k$ are surjective.\nIndeed, this can be verified pointwise, i.e.\\ just for one representation instead of a family. In this case note that both\n$\\CV^\\vee_\\bu$ and $\\CW'_\\bu$ are $\\tet^0$-semistable, hence so is the image of the map. But any $\\tet^0$-semistable\nsubrepresentation of an Artin representation is also Artin. 
If ${\\mathcal F}' \\subset {\\mathcal F}$ is the corresponding Artin sheaf then it follows\nthat the map $\\CV^\\vee_1 \\to {\\mathcal F}$ factors through ${\\mathcal F}'$ which by definition of ${\\mathcal F}$ implies ${\\mathcal F}' = {\\mathcal F}$.\n\n\nLet $\\CW_i = (\\CW'_{4-i})^\\vee$ and $\\CU_i = \\Ker(\\CV_i^\\vee \\to \\CW'_{4-i})^\\vee$, so that we have an exact sequence of monads\n\\begin{equation*}\n\\xymatrix{\n\\CW_1 \\boxtimes\\CO(-1) \\ar[r] \\ar[d] & \\CW_2 \\boxtimes\\CO \\ar[r] \\ar[d] & \\CW_3 \\boxtimes\\CO(1) \\ar[d] \\\\\n\\CV_1 \\boxtimes\\CO(-1) \\ar[r] \\ar[d] & \\CV_2 \\boxtimes\\CO \\ar[r] \\ar[d] & \\CV_3 \\boxtimes\\CO(1) \\ar[d] \\\\\n\\CU_1 \\boxtimes\\CO(-1) \\ar[r] & \\CU_2 \\boxtimes\\CO \\ar[r] & \\CU_3 \\boxtimes\\CO(1)\n}\n\\end{equation*}\nBy construction, the family $\\CU_\\bu$ is $\\tnt$-semistable and supermonadic, so this exact sequence\nis the one we need.\n \\end{proof} \n\nAnd now we are ready to construct the stratification of the Uhlenbeck moduli space.\nThe situation here is a bit more complicated than in the Gieseker case, since Artin\nrepresentations can appear both as subrepresentations and as quotient representations\nof a $\\tet^0$-semistable representation. So, we perform a two-step construction, first\ndealing with the latter, and then with the former.\n\n\n \\begin{proof} [Proof of~\\refl{stratification-uhlenbeck}]\nLet ${}^U\\Rt := \\hsRt^{\\tet^0}(\\alpha(r,d,n)) \\subset \\hsRt(\\alpha(r,d,n))$ be the open subset of\n$\\tet^0$-semistable $\\alpha(r,d,n)$-dimensional representations of $\\bpt$. Let $\\CV_\\bu$ be\nthe universal family of representations over ${}^U\\Rt$.\nConsider the family of sheaves ${\\mathcal F}' := \\Coker(\\CV_2 \\boxtimes \\CO \\to \\CV_3 \\boxtimes \\CO(1))$ over ${}^U\\Rt \\times \\BPt$.\nNote that these are Artin sheaves of length at most $n$. 
Let\n\\begin{equation*}\n{}^U\\Rt = {}^U\\Rt^{\\ge 0,\\bu} \\supset {}^U\\Rt^{\\ge 1,\\bu} \\supset {}^U\\Rt^{\\ge 2,\\bu} \\supset \\dots \\supset {}^U\\Rt^{\\ge n,\\bu} \\supset {}^U\\Rt^{\\ge n+1,\\bu} = \\emptyset,\n\\end{equation*}\nbe the flattening stratification for the sheaf ${\\mathcal F}'$. Restricting the family $\\CV_\\bu$ to each stratum ${}^U\\Rt^{k,\\bu} = {}^U\\Rt^{\\ge k,\\bu} \\setminus {}^U\\Rt^{\\ge k+1,\\bu}$\nand repeating the arguments of~\\refp{family-on-strata} (without dualization), we construct on ${}^U\\Rt^{k,\\bu}$ a natural exact sequence\n\\begin{equation*}\n0 \\to \\CV'_\\bu \\to \\CV_\\bu \\to \\CW_\\bu \\to 0\n\\end{equation*}\nof representations with $\\CW_\\bu$ being a $(k,2k,k)$-dimensional family of Artin representations, and\n$\\CV'_\\bu$ being a $\\tnt$-semistable family of $\\alpha(r,d,n-k)$-dimensional representations of the quiver $\\bpt$.\nApplying to the latter family the arguments of the proof of~\\refl{stratification} we obtain a natural stratification\n\\begin{equation*}\n{}^U\\Rt^{k,\\bu} = {}^U\\Rt^{k,\\ge 0} \\supset {}^U\\Rt^{k,\\ge 1} \\supset {}^U\\Rt^{k,\\ge 2} \\supset \\dots \\supset {}^U\\Rt^{k,\\ge n-k} \\supset {}^U\\Rt^{k,\\ge n+1-k} = \\emptyset,\n\\end{equation*}\nby the length of the Artin sheaf $\\Coker((\\CV'_2)^\\vee \\boxtimes\\CO \\to (\\CV'_1)^\\vee \\boxtimes\\CO(1))$, and moreover,\non the stratum ${}^U\\Rt^{k,l} := {}^U\\Rt^{k,\\ge l} \\setminus {}^U\\Rt^{k,\\ge l+1}$ an exact sequence of representations\n\\begin{equation*}\n0 \\to \\CW'_\\bu \\to \\CV'_\\bu \\to \\CU_\\bu \\to 0\n\\end{equation*}\nwith $\\CW'_\\bu$ being an $(l,2l,l)$-dimensional family of Artin representations, and\n$\\CU_\\bu$ being a $\\tnt$-semistable family of $\\alpha(r,d,n-k-l)$-dimensional supermonadic representations.\n\nIt is clear that the subsets\n\\begin{equation*}\n{}^U\\Rt^{\\ge m} := \\bigsqcup_{k + l \\ge m} {}^U\\Rt^{k,l} \\subset {}^U\\Rt\n\\end{equation*}\nare $\\GL(\\alpha(r,d,n))$-invariant closed 
subsets,\nhence they induce a stratification of the GIT quotient ${}^U\\Rt\/\\!\/_{\\tet^0}\\GL(\\alpha(r,d,n))$, i.e., of the Uhlenbeck moduli space $\\UMt(r,d,n)$.\nAgain, we finish by replacing each stratum with its reduced underlying scheme.\n \\end{proof} \n\n\n\\rem{another-stratification}\nWe could start by splitting off Artin subrepresentations first (and define in this way closed subsets ${}^U\\Rt^{\\bullet,\\ge l} \\subset {}^U\\Rt$)\nand then continue with splitting off Artin quotient representations. Note that this will give a {\\em different}\\\/ two-index\nstratification of the space ${}^U\\Rt$, but the resulting total stratification will be the same.\n\\end{remark}\n\n\\rem{urt-grt}\nWe always have an embedding ${}^G\\Rt^k \\subset {}^U\\Rt^{0,k}$. Moreover, if $r$ and $d$ are coprime then this inclusion becomes an equality\n\\begin{equation*}\n{}^G\\Rt^k = {}^U\\Rt^{0,k}.\n\\end{equation*}\nIndeed, the union of ${}^U\\Rt^{0,k}$ over all $k$ parameterizes all monadic $\\tet^0$-semistable representations,\nand by~\\refc{coprime-stable} these are precisely all $\\tnt$-semistable representations.\n\\end{remark}\n\nBefore turning to the proof of the main Theorem, we need one more result.\nWe will use the notation of the proof of~\\refl{stratification-uhlenbeck}.\nConsider the stratum ${}^U\\Rt^{k,0}$ of the space ${}^U\\Rt$. 
We checked in the proof\nof~\\refl{stratification-uhlenbeck} that the universal representation over it\nfits into an exact sequence\n\\begin{equation}\\label{uvw-sequence}\n0 \\to \\CU_\\bu \\to \\CV_\\bu \\to \\CW_\\bu \\to 0\n\\end{equation}\nwith $\\CU_\\bu$ being supermonadic and $\\CW_\\bu$ being Artin.\n\n\\lem{sequence-splits}\nThe subset ${}^U\\Rt^{k,\\mathrm{split}} \\subset {}^U\\Rt^k$ of all points over which the exact sequence~\\eqref{uvw-sequence} splits is closed.\n\\end{lemma}\n \\begin{proof} \nIndeed, ${}^U\\Rt^{k,\\mathrm{split}} = {}^U\\Rt^k \\cap {}^U\\Rt^{\\ge k,\\bullet} \\cap {}^U\\Rt^{\\bullet,\\ge k}$\nand both ${}^U\\Rt^{\\ge k,\\bullet}$ and ${}^U\\Rt^{\\bullet,\\ge k}$ are closed subsets in ${}^U\\Rt$ by their construction.\n \\end{proof} \n\n\n\\ssec{}{Proof of~\\reft{str}}\n\n\n(1) This follows immediately from the inclusion ${}^G\\Rt^k \\subset {}^U\\Rt^{0,k}$.\n\n(2) The equality $\\UMt^0(r,d,n) = \\GMt^0(r,d,n)$ for $r$ and $d$ coprime follows immediately\nfrom the equality ${}^G\\Rt^0 = {}^U\\Rt^{0,0}$, since ${}^U\\Rt^{0,0}$ is the only stratum of ${}^U\\Rt$ contributing to $\\UMt^0(r,d,n)$.\n\n\nFurther, consider the subset ${}^U\\Rt^{k,\\mathrm{split}} \\subset {}^U\\Rt^k$.\nBy~\\refl{sequence-splits} it is closed.\nIt is also $\\GL(\\alpha(r,d,n))$-invariant. 
Finally, each point in ${}^U\\Rt^k$ is S-equivalent to a point in ${}^U\\Rt^{k,\\mathrm{split}}$.\nTherefore\n\\begin{equation*}\n\\UMt^k(r,d,n) = {}^U\\Rt^k\/\\!\/_{\\tet^0} \\GL(\\alpha(r,d,n)) = {}^U\\Rt^{k,\\mathrm{split}}\/\\!\/_{\\tet^0} \\GL(\\alpha(r,d,n)).\n\\end{equation*}\nSo, it is enough to find a direct product decomposition for the right-hand side of this equality.\nFor this recall that the universal family of representations restricted to ${}^U\\Rt^{k,\\mathrm{split}}$\nsplits canonically as a direct sum\n\\begin{equation*}\n\\CV_\\bu = \\CU_\\bu \\oplus \\CW_\\bu\n\\end{equation*}\nof a supermonadic $\\tnt$-semistable $\\alpha(r,d,n-k)$-dimensional representation and an Artin representation of dimension $(k,2k,k)$.\nThis decomposition gives a $\\GL(\\alpha(r,d,n))$-equivariant morphism to the homogeneous space\n\\begin{equation*}\n{}^U\\Rt^{k,\\mathrm{split}} \\to \\GL(\\alpha(r,d,n)) \/ \\big(\\GL(\\alpha(r,d,n-k)) \\times \\GL(k,2k,k)\\big)\n\\end{equation*}\nsuch that the fiber over a point is the product $\\hsRt^0(r,d,n-k) \\times {}^A\\mathsf{R}_\\tau(k,2k,k)$. It follows that\n\\begin{multline*}\n{}^U\\Rt^{k,\\mathrm{split}} \/\\!\/_{\\tet^0} \\GL(\\alpha(r,d,n))\n\\\\ \\cong \\big(\\hsRt^0(r,d,n-k) \\times {}^A\\mathsf{R}_\\tau(k,2k,k)\\big) \/\\!\/_{\\tet^0} \\big(\\GL(\\alpha(r,d,n-k)) \\times \\GL(k,2k,k)\\big)\n\\\\ \\cong \\big(\\hsRt^0(r,d,n-k) \/\\!\/_{\\tet^0} \\GL(\\alpha(r,d,n-k))\\big) \\times \\big({}^A\\mathsf{R}_\\tau(k,2k,k) \/\\!\/_{\\tet^0} \\GL(k,2k,k)\\big)\n\\\\ \\cong \\UMt^0(r,d,n-k) \\times {}^A\\mathsf{M}_\\tau(k,2k,k).\n\\end{multline*}\nThis together with~\\refp{artin-moduli-k} proves part (2) of the Theorem.\n\n(3) The split representation in the closure of the $\\GL(\\alpha(r,d,n))$-orbit of a $\\tnt$-semistable representation $V_\\bu(E)$\nis just the direct sum of the supermonadic quotient and the Artin subrepresentation of $V_\\bu$. 
By~\\refp{tnt-ss}\nthese correspond to the sheaves $E^{**}$ and $E^{**}\/E$ respectively, hence the claim.\n\\qed\n